Paul Gilster's Blog

February 28, 2020

Exploring the Contact Paradox

Keith Cooper is a familiar face on Centauri Dreams, both through his own essays and the dialogues he and I have engaged in on interstellar topics. Keith is the editor of Astronomy Now and the author of both The Contact Paradox: Challenging Assumptions in the Search for Extraterrestrial Intelligence (Bloomsbury Sigma), and Origins of the Universe: The Cosmic Microwave Background and the Search for Quantum Gravity (Icon Books) to be published later this year. The Contact Paradox is a richly detailed examination of the history and core concepts of SETI, inspiring a new set of conversations, of which this is the first. With the recent expansion of the search through Breakthrough Listen, where does SETI stand both in terms of its likelihood of success and its perception among the general public?



Paul Gilster

Keith, we’re 60 years into SETI and no contact yet, though there are a few tantalizing things like the Wow! signal to hold our attention. Given that you have just given us an exhaustive study of the field and mined its philosophical implications, what’s your take on how this lack of results is playing with the general public? Are we more or less ready today than we were in the days of Project Ozma to receive news of a true contact signal?


And despite what we saw in the film Contact, do you think the resultant clamor would be as widespread and insistent? Because to me, one of the great paradoxes about the whole idea of contact is that the public seems to get fired up for the idea in film and books, but relatively uninterested in the actual work that’s going on. Or am I misjudging this?



Keith Cooper


What a lot of people don’t realise is just how big space is. Our Galaxy is home to somewhere between 100 billion and 200 billion stars. Yet, until Yuri Milner’s $100 million Breakthrough Listen project, we had looked and listened, in detail, at about a thousand of those stars. And when I say listened closely, I mean we pointed a telescope at each of those stars for half an hour or so. Even Breakthrough Listen, which will survey a million stars in detail, finds the odds stacked against it. Let’s imagine there are 10,000 technological species in our Galaxy. That sounds like a lot, but on average we’d have to search between 10 million and 20 million stars just to find one of those species.
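The arithmetic behind that estimate is just division, but it's worth seeing laid out. A back-of-the-envelope sketch using the illustrative figures quoted above (these are hypothetical numbers, not measurements):

```python
# Back-of-the-envelope: average number of stars to search per technological
# species, assuming civilizations are spread evenly through the Galaxy.
# (Figures are the illustrative ones quoted above, not measurements.)

stars_low = 100e9   # lower estimate of stars in the Milky Way
stars_high = 200e9  # upper estimate
species = 10_000    # hypothetical number of technological species

per_species_low = stars_low / species    # 10 million stars
per_species_high = stars_high / species  # 20 million stars

print(f"Stars to search per species: {per_species_low:,.0f} to {per_species_high:,.0f}")
# → Stars to search per species: 10,000,000 to 20,000,000
```

Against that denominator, even Breakthrough Listen's million-star survey samples only a few percent of the stars we would expect to check before stumbling on a single inhabited system.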


And remember, we’re only listening for a short time. If they’re not transmitting during that time frame, then we won’t detect them, at least not with a radio telescope. Coupled with the fact that incidental radio leakage will be much harder to detect than we thought, then it’s little wonder that we’ve not found anyone out there yet. Of course, the public doesn’t see these nuances – they just see that we’ve been searching for 60 years and all we’ve found is negative or null results. So I’m not surprised that the public are often uninspired by SETI.



Some of this dissatisfaction might stem from the assumptions made in the early days of SETI, when it was assumed that ETI would be blasting out messages through powerful beacons that would be pretty obvious and easy to detect. Clearly, that doesn’t seem to be the case. Maybe that’s because they’re not out there, or maybe it’s because the pure, selfless altruism required to build such a huge, energy-hungry transmitter to beam messages to unknown species is not very common in nature. Certainly on Earth, in the animal kingdom, altruism usually operates either on the basis of protecting one’s kin, or via quid pro quo, neither of which lends itself to encouraging interstellar communication.


So I think we – that is, both the public and the SETI scientific community – need to readjust our expectations a little bit.


Are we ready to receive a contact signal? I suspect that we think we are, but that’s different from truly being ready. Of course, it depends upon a number of variables, such as the nature of the contact, whether we can understand the message if one is sent, and whether the senders are located close in space to us or on the other side of the Galaxy. A signal detected from thousands of light years away, whose message content we can’t decode, will have much less impact than one from, say, 20 or 30 light years away, whose content we can decode and whose senders we might even begin to communicate with on a regular basis.



Paul Gilster

I’ll go further than that. To me, the optimum SETI signal to receive first would be one from an ancient civilization, maybe one way toward galactic center, which by virtue of its extreme distance would make for a non-threatening experience. Or at least it would if we quickly went to work on expanding public understanding of the size of the Galaxy and the Universe itself, as you point out. An even more ancient signal from a different galaxy would be better still, as even the most rabid conspiracy theorist would have little sense of immediate threat.


I suppose the best scenario of all would be a detection that demonstrated other intelligent life somewhere far away in the cosmos, and then a century or so for humanity to digest the idea, working it not only into popular culture, but also into philosophy, art, so that it becomes a given in our school textbooks (or whatever we’ll use in the future in place of school textbooks). Then, if we’re going to receive a signal from a relatively nearby system, let it come after this period of acclimatization.


Great idea, right? As if we could script what happens when we’re talking about something as unknowable as SETI contact. I don’t even think we’d have to have a message we could decode at first, because the important thing would be the simple recognition of the fact that other civilizations are out there. On that score, maybe Dysonian SETI turns the trick with the demonstration of a technology at work around another star. The fact of its existence is what we have to get into our basic assumptions about the universe. I used to assume this would be easy and come soon, and while I do understand about all those stars out there, I’m still a bit puzzled that we haven’t turned up something. I’d call that no more than a personal bias, but there it is.



Image: The Parkes 64m radio telescope in Parkes, New South Wales, Australia with the Milky Way overhead. Breakthrough Listen is now conducting a survey of the Milky Way galactic plane over 1.2 to 1.5 GHz and a targeted search of approximately 1000 nearby stars over the frequency range 0.7 to 4 GHz. Credit: Wikimedia Commons / Daniel John Reardon.



Keith Cooper

It’s the greatest puzzle that there is. Radio SETI approaches things from the assumption that ET just sat at home belting out radio signals, and yet, as we know, the Universe is so old that ET has had ample time to reach us, or to build some kind of Dysonian artefact, or to do something to make their presence more obvious. And over the years we’ve all drawn our own conclusions as to why this does not seem to be the case – maybe they are here but hidden, watching us like we’re in some kind of cosmic zoo. Or maybe interstellar travel and building megastructures are more difficult than we envision. Perhaps they are all dead, or technological intelligence is rare, or they were never out there in the first place. We just don’t know. All we can do is look.


I think science fiction has also trained us to expect alien life to be out there – and I don’t mean that as a criticism of the genre. Indeed, in The Contact Paradox, I often use science fiction as allegory, largely because that’s where discussions about what form alien life may take and what might happen during contact have already taken place. So let me ask you this, Paul: From all the sf that you’ve read, are there any particular stories that stand out as a warning about the subtleties of contact?



Paul Gilster

I suppose my favorite of all the ‘first contact through SETI’ stories is James Gunn’s The Listeners (1972). Here we have multiple narrators working a text that is laden with interesting quotations. Gunn’s narrative methods go all the way back to Dos Passos and anticipate John Brunner (think Stand on Zanzibar, for example). It’s fascinating methodology, but beyond that, the tumult that greets the decoding of an image from Capella transforms into acceptance as we learn more about a culture that seems to be dying and await what may be the reply to a message humanity had finally decided to send in response. So The Listeners isn’t really a warning as much as an exploration of this tangled issue in all its complexity.


Of course, if we widen the topic to go beyond SETI and treat other forms of contact, I love what Stanislaw Lem did with Solaris (1961). A sentient ocean! I also have to say that I found David Brin’s Existence (2012) compelling. Here competing messages are delivered by something akin to Bracewell probes, reactivated after long dormancy. Which one do you believe, and how do you resolve deeply contradictory information? Very interesting stuff! I mean, how do we respond if we get a message, and then a second one saying “Don’t pay any attention to that first message?”


What are some of your choices? I could go on for a bit about favorite science fiction but I’d like to hear from you. I assume Sagan’s Contact (1985) is on your list, but how about dazzling ‘artifact’ contact, as in the Strugatsky brothers’ Roadside Picnic (1972)? And how do we fit in Cixin Liu’s The Three-Body Problem (2008)? At first glance, I thought we were talking about Alpha Centauri, but the novel shows no familiarity with the actual Centauri system, while still being evocative and exotic. Here the consequences of contact are deeply disturbing.



Keith Cooper

I wish I were as well read as you are, Paul! I did read The Three-Body Problem, but it didn’t strike a chord with me, which is a shame. For artefact contact, however, I have to mention the Arthur C. Clarke classic, Rendezvous with Rama (1973). One of the things I liked about that story is that it removed us from the purpose of Rama. We just happened to be bystanders, oblivious to Rama’s true intent and destination (at least until the sequel novels).


Clarke’s story feels relevant to SETI today, in which embracing the search for ‘technosignatures’ has allowed researchers to consider wider forms of detection than just radio signals. In particular, we’ve seen more speculation about finding alien spacecraft in our own Solar System – see Avi Loeb pondering whether 1I/‘Oumuamua was a spacecraft (I don’t think it was), or Jim Benford’s paper about looking for lurkers.


I’ve got mixed feelings about this. On the one hand, although it’s speculative and I really don’t expect us to find anything, I see no reason why we shouldn’t look for probes in the Solar System, just in case, and it would be done in a scientific manner. On the other hand, it sets SETI on a collision course with ufology, and I’d be interested to see how that would play out in the media and with the public.


It could also change how we think about contact. Communication over many light years via radio waves or optical signals is one thing, but if the SETI community agrees that it’s possible that there could be a probe in our Solar System, then that would bring things into the arena of direct contact. As a species, I don’t think we’re ready to produce a coherent response to a radio signal, and we are certainly not ready for direct contact.


Contact raises ethical dilemmas. There’s the obvious stuff, such as who has the right to speak for Earth, and indeed whether we should respond at all, or stay silent. I think there are other issues though. There may be information content in the detected signal, for example a message containing details of new technology, or new science, or new cultural artefacts.


However, we live in a world in which resources are not shared equally. Would the information contained within the signal be shared to the whole world, or will governments covet that information? If the technological secrets learned from the signal could change the world, for good or ill, who should we trust to manage those secrets?


These issues become amplified if contact is direct, such as finding one of Benford’s lurkers. Would we all agree that the probe should have its own sovereignty and keep our distance? Or would one or more nations or organisations seek to capture the probe for their own ends? How could we disseminate what we learn from the probe so that it benefits all humankind? And what if the probe doesn’t want to be captured, and defends itself?


My frustration with SETI is that we devote our efforts to trying to make contact, but then shun any serious discussion of what could happen during contact. The search and the discussion should be happening in tandem, so that we are ready should SETI find success, and I’m frankly puzzled that we don’t really do this. Paul, do you have any insight into why this might be?



Paul Gilster

You’ve got me. You and I are on a slightly different page when it comes to METI, for example (Messaging to Extraterrestrial Intelligence). But we both agree that while we search for possible evidence of ETI, we should be having this broad discussion about the implications of success. And if we’re talking about actually sending a signal without any knowledge whatsoever of what might be out there, then that discussion really should take priority, as far as I’m concerned. I’d be much more willing to accept the idea of sending signals if we came to an international consensus on the goal of METI and its possible consequences.


As to why we don’t do this, I hear a lot of things. Most people from the METI side argue that the cat is already out of the bag anyway, with various private attempts to send signals proliferating, and the assumption that ever more sophisticated technology will allow everyone from university scientists to the kid in the basement to send signals whenever they want. I can’t argue with that. But I don’t think the fact that we have sent messages means we should give up on the idea of discussing why we’re doing it and why it may or may not be a sound idea. I’m not convinced anyway that any signals yet sent have the likelihood of being received at interstellar distances.


But let’s leave METI alone for a moment. On the general matter of SETI and implications of receiving a signal or finding ETI in astronomical data, I think we’re a bit schizophrenic. When I talk about ‘we,’ I mean western societies, as I have no insights into how other traditions now view the implications of such knowledge. But in the post-Enlightenment tradition of places like my country and yours, contacting ETI is on one level accepted (I think this can be demonstrated in recent polling) while at the same time it is viewed as a mere plot device in movies.


This isn’t skepticism, because that implies an effort to analyze the issue. This is just a holdover of old paradigms. Changing them might take a silver disc touching down and Michael Rennie strolling out. On the day that happens, the world really would stand still.


Let’s add in the fact that we’re short-sighted in terms of working for results beyond the next dividend check (or episode of a favorite show). With long-term thinking in such perilously short supply (and let’s acknowledge the Long Now Foundation’s heroic efforts at changing this), we have trouble thinking about how societies change over time with the influx of new knowledge.


Our own experience says that superior technologies arriving in places without warning can lead to calamity, whether intentional or not, which in and of itself should be a lesson as we ponder signals from the stars. A long view of civilization would recognize how fragile its assumptions can be when faced with sudden intervention, as any 500-year-old Aztec might remind us.



Image: A 17th century CE oil painting depicting the Spanish Conquistadores led by Hernan Cortes besieging the Aztec capital of Tenochtitlan in 1519 CE. (Jay I. Kislak Collection).


Keith, what’s your take on the ‘cat out of the bag’ argument with regard to METI? It seems to me to ignore the real prospect that we can change policy and shape behavior if we find it counterproductive, instead focusing on human powerlessness to control our impulses. Don’t we on the species level have agency here? How naive do you think I am on this topic?



Keith Cooper

That is the ‘contact paradox’ in a nutshell, isn’t it? This idea that we’re actively reaching out to ETI, yet we can’t agree on whether it’s safe to do so or not. That’s the purpose of my book, to try and put the discussion regarding contact in front of a wider audience.


In The Contact Paradox, I’m trying not to tell people what they should think about contact, although of course I give my own opinions on the matter. What I am asking is that people take the time to think more carefully about this issue, and about our assumptions, by embarking on the broader debate.


Readers of Centauri Dreams might point out that they have that very debate in the comments section of this website on a frequent basis. And while that’s true to an extent, I think the debate, whether on this site or among researchers at conferences or even in the pages of science fiction, has barely scratched the surface. There are so many nuances and details to examine, so many assumptions to challenge, and it’s all too easy to slip back into the will they/won’t they invade discussion, which to me is a total straw-man argument.


To compound this, while the few reviews that The Contact Paradox has received so far have been nice, I am seeing a misunderstanding arise in those reviews that once again brings the debate back down to the question of whether ETI will be hostile or not. Yet the point I am making in the book is that even if ETI is benign, contact could potentially still go badly, through misunderstandings, or through the introduction of disruptive technology or culture.


Let me give you a hypothetical example based on a science-fiction technology. Imagine we made contact with ETI, and they saw the problems we face on Earth currently, such as poverty, disease and climate change. So they give us some of their technology – a replicator, like that in Star Trek, capable of making anything from the raw materials of atoms. Let’s also assume that the quandaries that I mentioned earlier, about who takes possession of that technology and whether they hoard it, don’t apply. Instead, for the purpose of this argument, let’s assume that soon enough the technology is patented by a company on Earth and rolled out into society to the point that replicators become as common a sight in people’s homes as microwave ovens.


Just imagine what that could do! There would be no need for people to starve or suffer from drought – the replicators could make all the food and water we’d ever need. Medicine could be created on the spot, helping people in less wealthy countries who can’t ordinarily get access to life-saving drugs. And by taking away the need for industry and farming, we’d cut down our carbon emissions drastically. So all good, right?


But let’s flip the coin and look at the other side. All those people all across the world who work in manufacturing and farming would suddenly be out of a job, and with people wanting for nothing, the economy would crash completely, and international trade would become non-existent – after all, why import cocoa beans when you can just make them in your replicator at home? We’d have a sudden obesity crisis, because when faced with an abundance of resources, history tells us that it is often human nature to take too much. We’d see a drugs epidemic like never before, and people with malicious intent would be able to replicate weapons out of thin air. Readers could probably imagine other disruptive consequences of such a technology.


It’s only a thought experiment, but it’s a useful allegory showing that there are pros and cons to the consequences of contact. What we as a society have to do is decide whether the pros outweigh the cons, and to be prepared for the disruptive consequences. We can get some idea of what to expect by looking at contact between different societies on Earth throughout history. Instead of the replicator, consider historical contact events where gunpowder, or fast food, or religion, or the combustion engine have been given to societies that lacked them. What were the consequences in those situations?


This is the discussion that we’re not currently having when we do METI. There’s no risk assessment, just a bunch of ill-thought-out assumptions masquerading as a rationale for attempting contact before we’re ready.


There’s still time though. ETI would really have to be scrutinising us closely to detect our leakage or deliberate signals so far, and if they’re doing that then they would surely already know we are here. So I don’t think the ‘cat is out of the bag’ just yet, which means there is still time to have this discussion, and more importantly to prepare. Because long-term I don’t think we should stay silent, although I do think we need to be cautious, and learn what is out there first, and get ready for it, before we raise our voice. And if it turns out that no one is out there, then we’ve not wasted our time, because I think this discussion can teach us much about ourselves too.



Paul Gilster

We’re on the same wavelength there, Keith. I’m not against the idea of communicating with ETI if we receive a signal, but only within the context you suggest, which means thinking long and hard about what we want to do, making a decision based on international consultation, and realizing that any such contact would have ramifications that have to be carefully considered. On balance, we might just decide to stay silent until we gathered further information.


I do think many people have simply not considered this realistically. I was talking to a friend the other day whose reaction was typical. He had been asking me about SETI from a layman’s perspective, and I was telling him a bit about current efforts like Breakthrough Listen. But when I added that we needed to be cautious about how we responded, if we responded, to any reception, he was incredulous, then thoughtful. “I’ve just never thought about that,” he said. “I guess it just seems like science fiction. But of course I realize it isn’t.”


So we’re right back to paradox. If we have knowledge of the size of the galaxy — indeed, of the visible cosmos — why do we not see more public understanding of the implications? I think people could absorb the idea of a SETI reception without huge disruption, but it will force a cultural shift that turns what had been fiction into the realm of possibility.


But maybe we should now identify the broad context within which this shift can occur. In the beginning of your book, Keith, you say this: “Understanding altruism may ultimately be the single most significant factor in our quest to make contact with other intelligent life in the Universe.”


I think this is exactly right, and the next time we talk, I’d like us to dig into why this statement is true, and its ramifications for how we deal with not only extraterrestrial contact but our own civilization. Along with this, let’s get into that thorny question of ‘deep time’ and how our species sees itself in the cosmos.





February 26, 2020

G 9-40b: Confirming a Planet Candidate

M-class dwarfs within 100 light years are highly sought after objects these days, given that any transiting worlds around such stars will present unusually useful opportunities for atmospheric analysis. That’s because these stars are small, allowing large transit depth — in other words, a great deal of the star’s light is blocked by the planet. Studying a star’s light as it filters through a planetary atmosphere — transmission spectroscopy — can tell us much about the chemical constituents involved. We’ll soon extend that with space-based direct imaging.
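The advantage of small stars follows directly from the geometry: the transit depth scales as the square of the planet-to-star radius ratio. A quick sketch with illustrative values of my own choosing (a Neptune-sized planet around a 0.3-solar-radius M dwarf versus a Sun-like star):

```python
# Transit depth ≈ (R_planet / R_star)^2 — the same planet blocks a much
# larger fraction of a small star's disk.
R_SUN_IN_EARTH_RADII = 109.2  # approximate

def transit_depth_ppm(r_planet_earth, r_star_sun):
    """Fractional dimming in parts per million for a central transit."""
    r_star_earth = r_star_sun * R_SUN_IN_EARTH_RADII
    return (r_planet_earth / r_star_earth) ** 2 * 1e6

neptune = 3.88  # Neptune's radius in Earth radii
print(f"Sun-like host: {transit_depth_ppm(neptune, 1.0):,.0f} ppm")
print(f"M-dwarf host:  {transit_depth_ppm(neptune, 0.3):,.0f} ppm")
# The M-dwarf transit is (1/0.3)^2 ≈ 11x deeper — hence the interest
# in nearby red dwarfs as transmission spectroscopy targets.
```

The stellar radius and planet size here are round numbers for illustration, not the measured parameters of any particular system.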


While the discoveries we’re making today are exciting in their own right, bear in mind that we’re also building the catalog of objects that next generation ground telescopes (the extremely large, or ELT, instruments on the way) and their space-based cousins can examine in far greater depth. And it’s also true that we are tuning up our methods for making sure that our planet candidates are real and not products of data contamination.


Thus a planet called G 9-40b orbiting its red dwarf host about 90 light years out is significant not so much for the planet itself but for the methods used to confirm it. Probably the size of Neptune or somewhat smaller, G 9-40b is a world first noted by Kepler (in its K2 phase) as the candidate planet made transits of the star every six days. Confirmation that this is an actual planet has been achieved through three instruments. The first is the Habitable-zone Planet Finder (HPF), a spectrograph developed at Penn State that has been installed on the 10m Hobby-Eberly Telescope at McDonald Observatory in Texas.


HPF provides high precision Doppler readings in the infrared, allowing astronomers to exclude possible signals that might have mimicked a transiting world — we now know that G 9-40b is not a close stellar or substellar binary companion. HPF is distinguished by its spectral calibration using a laser frequency comb built by scientists at the National Institute of Standards and Technology and the University of Colorado. The instrument was able to achieve high precision in its radial velocity study of this planet while also observing the world’s transits across the star.


A post on the Habitable Zone Planet Finder blog notes that the brightness of the host star (given its proximity) and the large transit depth of the planet makes G 9-40b “…one of the most favorable sub-Neptune-sized planets orbiting an M-dwarf for transmission spectroscopy with the James Webb Space Telescope (JWST) in the future…”


But the thing to note about this work is the collaborative nature of the validation process, putting different techniques into play. High contrast adaptive optics imaging at Lick Observatory showed no stellar companions near the target, helping researchers confirm that the transits detected in the K2 mission were indeed coming from the star G 9-40. The Apache Point observations using high-precision diffuser-assisted photometry (see the blog entry for details on this technique) produced a transit plot that agreed with the K2 observations and allowed the team to tighten the timing of the transit. The Apache Point observations grew out of lead author Guðmundur Stefánsson’s doctoral work at Penn State. Says Stefánsson:


“G 9-40b is amongst the top twenty closest transiting planets known, which makes this discovery really exciting. Further, due to its large transit depth, G 9-40b is an excellent candidate exoplanet to study its atmospheric composition with future space telescopes.”



Image: Drawn from the HPF blog. Caption: Precise radial velocities from HPF (left) on the 10m Hobby-Eberly Telescope (right) allowed us to place an upper limit on the mass of the planet of 12 Earth masses. We hope to get a further precise mass constraint by continuing to observe G 9-40 in the future. Image credit: Figure 11a from the paper (left), Gudmundur Stefansson (right).


Near-infrared radial velocities from HPF enabled the 12 Earth-mass upper limit, and tightening it through future work will allow the composition of the planet to be constrained. All of this is by way of feeding a space-based instrument like the James Webb Space Telescope with the data it will need to study the planet’s atmosphere. In such ways do we pool the results of our instruments, with HPF continuing its survey of the nearest low-mass stars in search of other planets in the Sun’s immediate neighborhood.
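To see why a planet like this is within a precision spectrograph's reach at all, the standard radial-velocity semi-amplitude formula gives a rough estimate. The stellar mass and period below are illustrative round numbers I've chosen (the paper's measured values differ somewhat), not figures from this post:

```python
# Rough RV semi-amplitude for a circular orbit with M_planet << M_star:
# K ≈ 28.4329 m/s * (Mp/M_Jup) * (P/1 yr)^(-1/3) * (M_star/M_Sun)^(-2/3)
EARTH_MASSES_PER_JUPITER = 317.8

def rv_semi_amplitude(mp_earth, period_days, mstar_sun):
    """Stellar reflex velocity amplitude in m/s (circular orbit, sin i = 1)."""
    mp_jup = mp_earth / EARTH_MASSES_PER_JUPITER
    period_yr = period_days / 365.25
    return 28.4329 * mp_jup * period_yr ** (-1 / 3) * mstar_sun ** (-2 / 3)

# A 12 Earth-mass planet on a 6-day orbit of a ~0.3 solar-mass M dwarf:
k = rv_semi_amplitude(12, 6, 0.3)
print(f"K ≈ {k:.1f} m/s")  # on the order of 10 m/s
```

A signal of this size is comfortably above the few-m/s noise floor of modern spectrographs, which is why HPF could rule out the much larger wobbles a stellar or substellar companion would have produced.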


The paper is Stefansson et al., “A Sub-Neptune-sized Planet Transiting the M2.5 Dwarf G 9-40: Validation with the Habitable-zone Planet Finder,” Astronomical Journal Vol. 159, No. 3 (12 February 2020). Abstract / preprint.





February 24, 2020

How NASA Approaches Deep Space Missions

Centauri Dreams reader Charley Howard recently wrote to ask about how NASA goes about setting its mission priorities and analyzing mission concepts like potential orbiter missions to the ice giants. It’s such a good question that I floated it past Ashley Baldwin, who is immersed in the evolution of deep space missions and moves easily within the NASA structure to extract relevant information. Dr. Baldwin had recently commented on ice giant mission analysis by the Outer Planets Assessment Group. But what is this group, and where does it fit within the NASA hierarchy? Here is Ashley’s explanation of this along with links to excellent sources of information on the various mission concepts under analysis for various targets, and a bit of trenchant commentary.


By Ashley Baldwin



Each of the relevant NASA advisory groups has its own page on the NASA site with archives stuffed full of great presentations. The most germane to our discussion here is the Outer Planets Assessment Group (OPAG). My own focus has been on the products OPAG and the other PAGs produce, though OPAG produces the most elegant presentations with interesting subject matter. Product more than process is my focus, along with politics with a little ‘p’ within the NASA administration, and ‘high’ politics with a big P.


There are a number of such “advisory groups” feeding into NASA through its Planetary Science Advisory Committee (PAC), some of them of direct interest to Centauri Dreams readers:


Exoplanet Exploration Program Analysis Group (ExoPAG);


Mars Exploration Program Analysis Group (MEPAG);


Venus Exploration Analysis Group (VEXAG);


Lunar Exploration Analysis Group (LEAG);


Small Bodies Assessment Group (SBAG)


The relative influence of these groups doubtless waxes and wanes over time, with Mars in the ascendancy for a long time and Venus in inferior conjunction for ages. Most were formed in 2004, with the exoplanet group unfortunately a year later (see * below for my thoughts on why and how this happened).


These groups are essentially panels of relevant experts/academics — astronomers, astrophysicists, geophysicists, planetary scientists, astronautical engineers, astrobiologists etc — from within the various NASA centers (JPL, Glenn, Goddard et al.), along with universities and related institutions. The chairpersons are elected and serve a term of three years. James Kasting, for instance, chaired the exoplanetary advisory group ExoPAG during the first decade of this century.


Each group has two to three full member meetings per year which are open to the public. They have set agendas and take the form of plenary sessions discussing presentations – all of which are made available in the meeting archives, which over the years tell the story of what is being prioritised as well as offering a great deal on planetary science. There are also more frequent policy committee meetings, some of which I have attended via Skype. The PAGs also work in collaboration with other space agencies, the European Space Agency (ESA) and Japan Aerospace Exploration Agency (JAXA) in particular. This all creates technological advice that informs and is informed by NASA policy, which is in turn informed politically, as you would imagine. All of this leads to the missions under consideration, such as Europa Clipper, the Space Launch System (SLS), the James Webb Space Telescope (JWST), the International Space Station (ISS) and the planning for future manned Lunar/Martian landings.


NASA can task the advisory groups to produce work relating to particular areas, such as ice giant missions, and to contribute towards the Decadal studies via a report due in March of 2023. On the Decadals: The National Research Council (NRC) conducts studies that provide a science community consensus on key questions being examined by NASA and other agencies. The broadest of these studies in NASA’s areas of research are the Decadal surveys.


So once each decade NASA and its partners ask the NRC to project 10 or more years into the future in order to prioritize research areas and develop mission concepts needed to make the relevant observations. You can find links to the most recent Decadal surveys here.


There is obviously jostling and internal competition for each group to get its priorities as high up the Decadal priority list as possible, bearing in mind that there are similar and equally competitive pyramids of lobbying for astrophysics, earth science and heliophysics.


Each PAG is encouraged to have its members submit, both individually and collectively, 'white papers' championing research areas they feel are relevant. That amounts to thousands of papers, so no wonder serious and time-consuming collation is needed to produce the final document. This time around it will be Mars sample return versus the ice giants vying for the all-important top spot (anything less and you are unlikely to receive a once-a-decade flagship mission).


The Planetary Science Advisory Committee, in turn, advises the central NASA Advisory Council (NAC), whose members are appointed at the discretion of the NASA administrator and advise the administrator directly on all science matters within NASA's purview. NAC was formed from the merger of several related groups in 1977, though its origins predate NASA's formation in 1958.


The Discovery (small) and New Frontiers (medium) planetary science programmes (with "flagship" missions like Clipper effectively being "large," occurring generally once per decade) each run on a five-year cycle, with one New Frontiers mission picked each round and up to two Discovery missions chosen. Selection follows short-listing from all the concepts submitted in response to an "announcement of opportunity," the formal NASA application process. The Discovery and New Frontiers programmes are staggered, as are the missions chosen under them, with the aim of having a mission launch roughly every 24 months, presumably to help spread out operational costs.


Both Discovery and New Frontiers come with a set budget cap: $850-1,000 million for New Frontiers and $500 million for Discovery. On top of this, however, they receive a free launcher (from a preselected list) and some or all operational costs for the duration of the primary mission (which without extensions is about two years for a Discovery mission like InSight and three to four years for a New Frontiers mission). There is also varying government furnished equipment (GFE) on offer, consisting of equipment, special tooling, or special test equipment.


Sometimes other costly technology is included, such as multi-mission radioisotope thermoelectric generators (MMRTGs). Two have been slotted this time around for Discovery, which is very unusual, as MMRTGs are at a premium and generally limited to New Frontiers missions or bigger. There were three on offer for last year's New Frontiers round, but as Dragonfly to Titan needs only one, two were left over, and they have a limited shelf life.


This Discovery round has also broken with former policy insofar as ALL operations costs are being covered, including those outside the mission proper (i.e. while in transit to the target), thus removing the cost penalty for missions with long transit times, like Trident to Triton. Even in hibernation, systems engineering costs and the maintenance of a science team add up to several million dollars per year. A big clue to the priorities of NASA's Planetary Science Division? I hope so!


The Explorer programme is the Astrophysics Division's parallel process, run in similar fashion with one Medium Explorer and one Small Explorer (budget cap $170 million) picked every five years, each programme again staggered to push out a mission about every two and a half years. There is some talk of the next Decadal study creating a funded "Probe" programme. Such programmes are generally only conceptual, but there is talk of a $1 billion budget for some sort of astrophysics mission, hopefully exoplanet-related. No more than gossip at this point, though.


* And here is the ExoPAG bone of contention I mentioned above. Kepler was selected as a Discovery mission in 2003, prior to the formation of ExoPAG, and the rest of the planetary science groups went ballistic. This led to NASA excluding exoplanet missions from future Discovery and New Frontiers rounds. Despite the tremendous success of Kepler, this limited ExoPAG to analogous but smaller Astrophysics Explorer funding: small- and medium-class, PI-led astrophysics missions, as well as astrophysics missions of opportunity.


Imagine what could have been produced if, for instance, ESA's ARIEL (or EChO) transit telescope had been developed in conjunction with a New Frontiers budget instead of an Astrophysics Explorer one. The Medium Explorer budget reaches $200 million plus; New Frontiers gets up to $850-1,000 million.




Published on February 24, 2020 09:27

February 21, 2020

Juno: Looking Deep into Jupiter’s Atmosphere

We’re learning more about the composition of Jupiter’s atmosphere, and in particular, the amount of water therein, as a result of data from the Juno mission. The data come in the 1.25 to 22 GHz range from Juno’s microwave radiometer (MWR), depicting the deep atmosphere in the equatorial region. Here, water (considered in terms of its component oxygen and hydrogen) makes up about 0.25 percent of the molecules in Jupiter’s atmosphere, almost three times the percentage found in the Sun. All of this gets intriguing when compared to the results from Galileo.


You’ll recall that the Galileo probe descended into the Jovian atmosphere back in 1995, sending back spectrometer measurements of the amount of water it found down to almost 120 kilometers, where atmospheric pressure reached 320 pounds per square inch (22 bar). Unlike Juno, Galileo showed that Jupiter might be dry compared to the Sun — there was in fact ten times less water than expected — but it also found water content increasing even as it reached its greatest depth, an oddity given the assumption that mixing in the atmosphere would create a constant water content. Did Galileo run into some kind of meteorological anomaly?


A new paper in Nature Astronomy looks at the matter as part of its analysis of the Juno results, which also depict an atmosphere not well mixed:


The findings of the Galileo probe were puzzling because they showed that where ammonia and hydrogen sulfide become uniformly mixed occurs at a level much deeper (~10 bar) than what was predicted by an equilibrium thermochemical model. The concentration of water was subsolar and still increasing at 22 bar, where radio contact with the probe was lost, although the concentrations of nitrogen and sulfur stabilized at ~3 times solar at ~10 bar. The depletion of water was proposed to be caused by meteorological effects at the probe location. The observed water abundance was assumed not to represent the global mean water abundance on Jupiter, which is an important quantity that distinguishes planetary formation models and affects atmospheric thermal structure.


Now Juno has found water content greater than what Galileo measured. But the fact that Galileo showed a water concentration still increasing when the probe could no longer send data makes its results inconclusive. The matter is important for those interested in planet formation because, as the likely first planet to form, Jupiter would have contained the great bulk of the gas and dust that did not go into the composition of the Sun. Planet formation models are thus keyed to factors like the amount of water the young planet would have assimilated. Scott Bolton, Juno principal investigator at the Southwest Research Institute in San Antonio, comments:


“Just when we think we have things figured out, Jupiter reminds us how much we still have to learn. Juno’s surprise discovery that the atmosphere was not well mixed even well below the cloud tops is a puzzle that we are still trying to figure out. No one would have guessed that water might be so variable across the planet.”



Image: The JunoCam imager aboard NASA’s Juno spacecraft captured this image of Jupiter’s southern equatorial region on Sept. 1, 2017. The image is oriented so Jupiter’s poles (not visible) run left-to-right of frame. Credit: NASA/JPL-Caltech/SwRI/MSSS/Kevin M. Gill.


The research team, led by Cheng Li (JPL/Caltech), used data from Juno's first eight science flybys, focusing on the equatorial region first because the atmosphere appears to be better mixed there than in other regions. Juno's microwave radiometer can measure the absorption of microwave radiation by water at multiple depths simultaneously, allowing it to collect data from deeper in the atmosphere than Galileo did, down to pressures of about 480 psi (33 bar). The next move will be to compare this with other regions, giving us a picture of water abundance as Juno coverage extends into Jupiter's northern hemisphere. Of particular interest will be what Juno finds at the planet's poles.
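As a quick sanity check on the pressure figures quoted for the two probes (the 22 bar/320 psi Galileo depth and the 33 bar/480 psi Juno depth come from the article; the conversion factor 1 bar ≈ 14.504 psi is standard), a minimal sketch:

```python
# Verify the psi/bar pressure figures quoted for the Galileo and Juno probes.
# Standard conversion factor: 1 bar = 14.5038 psi.
PSI_PER_BAR = 14.5038

def bar_to_psi(bar):
    """Convert a pressure in bar to pounds per square inch."""
    return bar * PSI_PER_BAR

# Galileo lost radio contact at ~22 bar; the article quotes ~320 psi.
print(round(bar_to_psi(22)))   # 319

# Juno's microwave radiometer probes down to ~33 bar, quoted as ~480 psi.
print(round(bar_to_psi(33)))   # 479
```

Both quoted values check out to within rounding.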



From the paper:


We have shown that the structure of Jupiter’s EZ [equatorial zone] is steady, relatively uniform vertically and close to a moist adiabat [a region where heat does not enter or leave the system]; from this we have derived its water abundance. The thermal structure outside of the equator is still ambiguous owing to the non-uniform distribution of ammonia gas, for which we do not know the physical origin. Deriving the thermal structure outside of the equator in the future not only hints about the water abundance on Jupiter at other latitudes but also places constraints on the atmospheric circulation model for giant planets in the Solar System and beyond.


Image: Thick white clouds are present in this JunoCam image of Jupiter’s equatorial zone. These clouds complicate the interpretation of infrared measurements of water. At microwave frequencies, the same clouds are transparent, allowing Juno’s Microwave Radiometer to measure water deep into Jupiter’s atmosphere. The image was acquired during Juno’s flyby of the gas giant on Dec. 16, 2017. Credit: NASA/JPL-Caltech/SwRI/MSSS/Kevin M. Gill.


The authors add that Juno has already revealed a deep atmosphere that is surprisingly variable as a function of latitude, highlighting the need to tread cautiously before making any assumptions about the planet's overall water abundance. Extending these observations to other regions of the planet will be useful because oxygen is the most common element in Jupiter's atmosphere after hydrogen and helium, and water ice may thus have been the primary condensable in the protoplanetary disk. Consider this a deep probe into planet formation.


The paper is Li et al., “The water abundance in Jupiter’s equatorial zone,” Nature Astronomy 10 February 2020 (abstract).




Published on February 21, 2020 04:55

February 19, 2020

Trident: Firming up the Triton Flyby

It’s not a Triton orbiter, or even a Neptune orbiter, but Trident is still an exciting mission: a Triton flyby that would take a close look at the active resurfacing underway on this remarkable moon. Trident has recently been selected by NASA’s Discovery Program as one of four science investigations, of which one or two will be chosen at the end of the study period for development and launch in the 2020s.


These are nine-month studies, and they include, speaking of young and constantly changing surfaces, the Io Volcanic Observer (IVO). The other two missions are the Venus Emissivity, Radio Science, InSAR, Topography, and Spectroscopy (VERITAS) mission, and DAVINCI+ (Deep Atmosphere Venus Investigation of Noble gases, Chemistry, and Imaging Plus).


Each of these studies will receive $3 million to bring its concepts to fruition, concluding with a Concept Study Report, at which point we’ll get word on the one or two that have made it to further development and flight. The NASA Discovery program has been in place since 1992, dedicated to supporting smaller missions with lower cost and shorter development times than the larger flagship missions. That these missions can have serious clout is obvious from some of the past selections: Kepler, Dawn, Deep Impact, MESSENGER, Stardust and NEAR.


Active missions at the moment include Lunar Reconnaissance Orbiter and InSight, but we leave the inner system with Lucy, a Discovery mission visiting a main belt asteroid as well as six Jupiter trojans, and Psyche, which will explore the unusual metal asteroid 16 Psyche. Discovery missions have a $500 million cost cap excluding the launch vehicle, operations, data analysis and partner contributions. The next step up in size is New Frontiers, now with a $1 billion cost cap; here we can mention New Horizons, OSIRIS-REx and Juno as well as Dragonfly.



I assume that New Horizons’ success at Pluto/Charon helped Trident along, showing how much good science can be collected from a flyby. Triton makes for a target of high interest because of its atmosphere and erupting plumes, along with the potential for an interior ocean. The goal of Trident is to characterize the processes at work while mapping a large swath of Triton and learning whether in fact the putative ocean beneath the surface exists. A mid-2020s launch takes advantage of a rare and efficient gravity assist alignment to make the mission feasible. Louise Prockter, director of the Lunar and Planetary Institute in Houston, is principal investigator.


Image: Dr. Louise Prockter, program director for the Universities Space Research Association, as well as director of the Lunar and Planetary Institute, is now principal investigator for Trident. Credit: USRA.


We can thank Voyager 2 for providing our only close-up images of Triton, which was revealed to be a place where explosive venting blows dark material from beneath the ice into the air, material which falls back onto the surface to create new features. The terrain is varied and notable for the striking ‘cantaloupe’ pattern covering large areas. With its distinctive retrograde rotation, orbiting opposite to Neptune’s rotation, and high inclination orbit, Triton may well be an object captured from the Kuiper Belt, in an orbit where tidal forces likely lead to interior heating that could maintain an ocean. What we learn here could inform our understanding not just of KBOs, but also giant moons like Titan and Europa, and smaller ocean worlds like Enceladus.


This would be a flyby with abundant opportunities for data collection, as this precis from the 2019 Lunar and Planetary Science Conference makes clear:


An active-redundant operational sequence ensures unique observations during an eclipse of Triton – and another of Neptune itself – and includes redundant data collection throughout the flyby… High-resolution imaging and broad-spectrum IR imaging spectroscopy, together with high-capacity onboard storage, allow near-full-body mapping over the course of one Triton orbit… Trident passes through Triton’s thin atmosphere, within 500 km of the surface, sampling its ionosphere with a plasma spectrometer and performing magnetic induction measurements to verify the existence of an extant ocean. Trident’s passage through a total eclipse allows observations through two atmospheric radio occultations for mapping electron and neutral atmospheric density, Neptune-shine illuminated eclipse imaging for change detection since the 1989 Voyager 2 flyby, and high-phase angle atmospheric imaging for mapping haze layers and plumes.



Image: Global color mosaic of Triton, taken in 1989 by Voyager 2 during its flyby of the Neptune system. Color was synthesized by combining high-resolution images taken through orange, violet, and ultraviolet filters; these images were displayed as red, green, and blue images and combined to create this color version. With a radius of 1,350 kilometers (839 mi), about 22% smaller than Earth’s moon, Triton is by far the largest satellite of Neptune. It is one of only three objects in the Solar System known to have a nitrogen-dominated atmosphere (the others are Earth and Saturn’s giant moon, Titan). Triton has the coldest surface known anywhere in the Solar System (38 K, about -391 degrees Fahrenheit); it is so cold that most of Triton’s nitrogen is condensed as frost, making it the only satellite in the Solar System known to have a surface made mainly of nitrogen ice. The pinkish deposits constitute a vast south polar cap believed to contain methane ice, which would have reacted under sunlight to form pink or red compounds. The dark streaks overlying these pink ices are believed to be an icy and perhaps carbonaceous dust deposited from huge geyser-like plumes, some of which were found to be active during the Voyager 2 flyby. The bluish-green band visible in this image extends all the way around Triton near the equator; it may consist of relatively fresh nitrogen frost deposits. The greenish areas include what is called the cantaloupe terrain, whose origin is unknown, and a set of “cryovolcanic” landscapes apparently produced by icy-cold liquids (now frozen) erupted from Triton’s interior.

Credit: NASA/JPL/USGS.


If it flies, Trident would launch in 2026 and reach Triton in 2038, using gravity assists at Venus, the Earth and, finally, Jupiter for a final course deflection toward Neptune. The current thinking is to bring the spacecraft, which would weigh about twice New Horizons’ 478 kg, within 500 kilometers of Triton, a close pass indeed compared to New Horizons’ 12,500-kilometer pass by Pluto, and close enough for the spacecraft to sample Triton’s ionosphere and conduct the magnetic induction measurements needed to confirm or refute the existence of its ocean. As this mission firms up, we’ll be keeping a close eye on its prospects in the outer system. Remember, too, the 2017 workshop in Houston examining a possible Pluto orbiter, still a long way from being anything more than a concept, but interesting enough to make the pulse race.
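Pulling the numbers in this paragraph together gives a quick sense of scale. All values come from the text itself; the spacecraft mass is the article's rough factor-of-two estimate against New Horizons, not an official specification:

```python
# Back-of-the-envelope arithmetic for the proposed Trident flyby,
# using figures quoted in the article.
launch_year, arrival_year = 2026, 2038
cruise_years = arrival_year - launch_year
print(cruise_years)            # 12-year cruise via Venus, Earth and Jupiter

new_horizons_mass_kg = 478
trident_mass_kg = 2 * new_horizons_mass_kg  # "about twice New Horizons"
print(trident_mass_kg)         # 956 kg (rough estimate)

# Closest-approach comparison: Triton pass vs. New Horizons at Pluto.
print(12500 / 500)             # Trident would pass ~25x closer
```

That 25-fold difference in closest approach is what makes the in-situ ionosphere sampling and magnetic induction measurements possible at all.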


My friend Ashley Baldwin, who sent along some good references re Trident, also noted that Trident’s trajectory is such that the gravity assist around Jupiter could, at 1.24 Jupiter radii, provide a close flyby of Io. Interesting in terms of the competing Io Volcanic Observer entry.




Published on February 19, 2020 12:03

February 18, 2020

A New Look at the ‘Pale Blue Dot’

The 30th anniversary of the famous ‘Pale Blue Dot’ image of Earth, which took place on February 14, is an appropriate occasion for the newly updated image below, which brings the latest methods to bear on the data Voyager 1 presented us. Our planet takes up less than a single pixel and for that reason is not fully resolved. The rays of sunlight due to scattering within the camera optics intersect with Earth, reminding us that from Voyager’s position 6 billion kilometers from home, the Earth/Sun separation was only a matter of a few degrees.



Image: For the 30th anniversary of one of the most iconic images taken by NASA’s Voyager mission, a new version of the image known as “the Pale Blue Dot.” Planet Earth is visible as a bright speck within the sunbeam just right of center and appears softly blue, as in the original version published in 1990. This updated version uses modern image-processing software and techniques to revisit the well-known Voyager view while attempting to respect the original data and intent of those who planned the images. Credit: NASA/JPL-Caltech.


What we’re looking at is a color composite that combines images taken through green, blue, and violet spectral filters with the Voyager 1 Narrow-Angle Camera. Thirty-four minutes after this work was done, Voyager 1 powered off its cameras, there being no targets for future flybys; in any case, controllers needed to conserve power for what would become the Voyager Interstellar Mission, an undertaking that is still alive. The image also reminds us that taking pictures of the inner planets earlier in the mission would have risked damaging the cameras, given their proximity to the Sun.


And here is the original, with JPL caption from 1996:



Image: This narrow-angle color image of the Earth, dubbed ‘Pale Blue Dot’, is a part of the first ever ‘portrait’ of the solar system taken by Voyager 1. The spacecraft acquired a total of 60 frames for a mosaic of the solar system from a distance of more than 4 billion miles from Earth and about 32 degrees above the ecliptic. From Voyager’s great distance Earth is a mere point of light, less than the size of a picture element even in the narrow-angle camera. Earth was a crescent only 0.12 pixel in size. Coincidentally, Earth lies right in the center of one of the scattered light rays resulting from taking the image so close to the sun. This blown-up image of the Earth was taken through three color filters — violet, blue and green — and recombined to produce the color image. The background features in the image are artifacts resulting from the magnification. Credit: NASA/JPL.


A series of 60 images went into making what the mission team called a Family Portrait of the Solar System, a sequence that captured six planets as well as the Sun. For readers of Centauri Dreams, I doubt I have to wax poetic here, as most of you have your own thoughts, remembering the first time you saw this breathtaking image and tried to fit it into your own perspective on the cosmos. It was Carl Sagan who came up with the idea for the image, and it’s fitting that the Carl Sagan Institute’s Lisa Kaltenegger (Cornell) should comment on it:


“The Pale Blue Dot image shows our world as both breathtakingly beautiful and fragile, urging us to take care of our home. We are living in an amazing time, where for the first time ever we have the technical means to spot worlds orbiting other stars. Could one of them be another pale blue dot, harboring life? That is what we are trying to find out at the Carl Sagan Institute.”



Image: This simulated view, made using NASA’s Eyes on the Solar System app, approximates Voyager 1’s perspective when it took its final series of images known as the “Family Portrait of the Solar System,” including the “Pale Blue Dot” image. Credit: NASA/JPL-Caltech.


Ed Stone, Voyager project scientist, called this final use of the spacecraft’s lenses ‘last light,’ in contrast to the initial imaging by a telescope, which is known in the trade as ‘first light.’ It’s remarkable to consider today that at the time, the idea of Voyager’s look back at the Solar System was dismissed by some as a stunt, and it’s worth remembering that there were those in the Voyager design days who advocated that the spacecraft carry no cameras at all.


Jim Bell writes about the matter in his book The Interstellar Age:


Fortunately, after the Neptune encounter, top NASA officials such as associate administrator for science Len Fisk and administrator Richard Truly shared Carl Sagan’s vision of the historic, aesthetic value of the solar-system family portrait. Ed Stone was also a strong supporter of the idea. He recalls a dinner at Caltech organized by Sagan and The Planetary Society just before the Voyager Neptune flyby in 1989, during which he, Sagan, Fisk, and Voyager project manager Norm Haynes talked about what it would take to make ‘the picture of the century’ happen. By this point in time it was essentially a budgetary issue, as Voyager’s funding was set to ramp down steeply right after Neptune. Happily, Fisk and Truly interceded to make sure the people and resources were made available for this one last Voyager mosaic…


We might not have had the ‘Pale Blue Dot’ image at all were it not for this intervention, a reminder that decisions which seem small at the time can take on great significance in the context of history. It’s only right to close with Sagan’s famous words on the image, from his book Pale Blue Dot: A Vision of the Human Future in Space (1994):


“Look again at that dot. That’s here. That’s home. That’s us. On it, everyone you love, everyone you know, everyone you ever heard of, every human being who ever was, lived out their lives. The aggregate of our joy and suffering, thousands of confident religions, ideologies and economic doctrines, every hunter and forager, every hero and coward, every creator and destroyer of civilization, every king and peasant, every young couple in love, every mother and father, hopeful child, inventor and explorer, every teacher of morals, every corrupt politician, every ‘superstar,’ every ‘supreme leader,’ every saint and sinner in the history of our species lived there – on a mote of dust, suspended in a sunbeam.”




Published on February 18, 2020 09:25

February 14, 2020

Boundary Conditions for Emergent Complexity Longevity

We usually think about habitability in terms of liquid water on the surface, which is the common definition of the term ‘habitable zone.’ But even in our own system, we have great interest in places where this is not the case (e.g. Europa). In today’s essay, Nick Nielsen begins with the development of complex life in terms not just of a habitable zone but of what some scientists are calling an ‘abiogenesis zone.’ The implications trigger SETI speculation, particularly in systems whose host star is nearing the end of its life on the main sequence. Are there analogies between habitable zones and the conditions that can lead not just to life but to civilization? These boundary conditions offer a new direction for SETI theorists to explore.


by J. N. Nielsen



Recently a paper of some interest was posted to arXiv: “There’s No Place Like Home (in Our Own Solar System): Searching for ET Near White Dwarfs,” by John Gertz. (Gertz has several other interesting papers on arXiv that are worth looking at.) Here is the abstract of the paper in its entirety:


The preponderance of white dwarfs in the Milky Way were formed from the remnants of stars of the same or somewhat higher mass as the Sun, i.e., from G-stars. We know that life can exist around G-stars. Any technologically advanced civilization residing within the habitable zone of a G-star will face grave peril when its star transitions from the main sequence and successively enters sub-giant, red giant, planetary nebula, and white dwarf stages. In fact, if the civilization takes no action it will face certain extinction. The two alternatives to passive extinction are (a) migrate away from the parent star in order to colonize another star system, or (b) find a viable solution within one’s own solar system. It is argued in this paper that migration of an entire biological population or even a small part of a population is virtually impossible, but in any event, far more difficult than remaining in one’s home solar system where the problem of continued survival can best be solved. This leads to the conclusion that sub-giants, red giants, planetary nebula, and white dwarfs are the best possible candidate targets for SETI observations. Search strategies are suggested.


There are a number of interesting ideas in the above. The first thing that strikes me about this is that it exemplifies what I call the SETI paradigm: interstellar travel is either impossible or so difficult that SETI is the only possibility for contact with other civilizations. [1]


The SETI paradigm is worth noting in this context because Gertz is considering these matters on a multi-billion year time scale, i.e., a cosmological scale of time, and not the scale of time at which we usually measure civilization. Taking our own case of civilization as normative, if terrestrial civilization endures through the red giant and white dwarf stages of our star, that means our civilization will endure for billions of years, and in those billions of years (in the Gertz scenario) we will not develop any of the technology that would allow us to make the journey to other stars, including those other stars that will come within less than a light year of our own star with some frequency over cosmological scales of time. [2] We will, however, according to this scenario, develop technologies that would allow us to migrate to other parts of our own planetary system. I find that this contrast in technological achievement makes unrealistic demands upon credulity, but this is merely tangential to what I want to talk about in relation to this paper.


What most interests me about the scenario contemplated in this paper is its applicability to forms of emergent complexity other than human civilization. What I mean by “other forms of emergent complexity” is what I now call emergent complexity pluralism, which I present in my upcoming paper “Peer Complexity during the Stelliferous Era.” The paper isn’t out yet, but you can see a video of my presentation in Milan in July 2019: Peer Complexity during the Stelliferous Era, Life in the Universe: Big History, SETI and the Future of Humankind, IBHA & INAF-IASF MI Symposium. (Write to me if you’d like a copy of the paper.) In brief, we aren’t the only kind of complexity that may arise in the universe.


The simplest case of an alternative emergent complexity, and the case most familiar to us, is to think of Gertz’s scenario in terms of life without the further emergent complexities that have come to supervene upon human activity, chiefly civilization. In the case of a planet like Earth, possessed of a biosphere that has endured for billions of years and which has produced complex forms of life, one could expect to see exactly what Gertz attributes to technological civilizations, though biology alone could be sufficient to account for these developments. However — and this is a big however — the conditions must be “just right” for this to happen. In other words, something like the Goldilocks conditions of the “Goldilocks Zone” (the circumstellar habitable zone, or CHZ) must obtain, though in a more generalized form, so that each form of emergent complexity may have its own distinctive boundary conditions.


A further distinction should be introduced at this point. The boundary conditions of the emergence of complexity (whether of life, or civilization, or something else yet) may be distinct from the boundary conditions for the further development of complexity, and especially for developments that involve further complexity emerging from a given complexity, in the way that consciousness and intelligence emerged from life on Earth, and civilization emerged in turn from consciousness and intelligence. This distinction has been captured in origins of life research by the distinction between the habitability zone (the CHZ, in its conventional use) and the abiogenesis zone. The former is the region around a star where biology is possible, whereas the latter is the region in which biology can arise.


In a 2018 paper, The origin of RNA precursors on exoplanets, by Paul B. Rimmer, Jianfeng Xu, Samantha J. Thompson, Ed Gillen, John D. Sutherland, and Didier Queloz, this distinction between conditions for the genesis of life and conditions for the development and furtherance of life is made, and the two sets of boundary conditions are shown to overlap, but not to precisely coincide:


“The abiogenesis zone we define need not overlap the liquid water habitable zone. The liquid water habitable zone identifies those planets that are a sufficient distance from their host star for liquid water to exist stably over a large fraction of their surfaces. In the scenario we consider, the building blocks of life could have been accumulated very rapidly compared to geological time scales, in a local transient environment, for which liquid water could be present outside the liquid water habitable zone. The local and transient occurrences of these building blocks would almost certainly be undetectable. The liquid water habitable zone helpfully identifies where life could be sufficiently abundant to be detectable.” [3]


The idea implicit in defining an abiogenesis zone distinct from a habitable zone can be extrapolated to other forms of complexity: boundary conditions of emergence may be distinct from boundary conditions for development and longevity; the conditions for the emergence of civilization may be distinct from the conditions for the longevity of civilization. But let us return to the scenario of life maintaining itself within its planetary system without the assistance of intelligence or technology.



Image: This is Figure 4 from the Rimmer et al. paper. Caption: A period-effective temperature diagram of confirmed exoplanets within the liquid water habitable zone (and Earth), taken from a catalog (1, 42, 43), along with the TRAPPIST-1 planets (3) and LHS 1140b (4). The “abiogenesis zone” indicates where the stellar UV flux is large enough to result in a 50% yield of the photochemical product. The red region shows the propagated experimental error. The liquid water habitable zone [from (44, 45)] is also shown. Credit: Rimmer et al.


Whereas the CHZ is usually defined in terms of a region of space around a star clement for life as we know it, the boundary conditions for alternative emergent complexities will be optimal relative to the emergent complexity in question. That is to say, the wider we construe “habitability” (i.e., the more diverse kinds of emergent complexity that might inhabit a planet or planetary system) the more CHZs there will be, as each form of emergent complexity will have boundary conditions distinctive to itself.


In a planetary system with a large number of rocky worlds spaced relatively close together, these worlds could serve as “stepping stones” for enhanced lithopanspermia. [4] At each stage in the life of the parent star, life would be distributed among the available planets, flourishing into a planetary-scale biosphere on the world with the most clement conditions. When the star began to swell into a red giant, the inner planets would become inhospitable to life, but life could then migrate outward to the cooler planets. And then, when the star cooled down again, life could once again planet-hop nearer to the now-cooler star.


We do not yet know if the boundary conditions for emergent complexity longevity obtain within our own solar system. Is Mars close enough that life, going extinct on Earth, could make the transition to this cooler world, and possibly also further out to the moons of the gas giants? In The Jovian Oceans [5] I suggested that, as the sun grows into a red giant, the outer regions of the solar system will become warmer and the subsurface oceans of some of the moons of Jupiter and Saturn may thaw out and become watermoons (in contradistinction to waterworlds). These regions of our solar system may be clement to life when Earth is no longer habitable, but if life cannot make the journey to these worlds, they may as well not exist at all. We still have a billion years for sufficiently hardy microorganisms to evolve, and for collisions with large bodies to blast microorganisms off the surface of Earth and into trajectories that would eventually result in their impacting on Mars. The chances for this strike me as marginal, but over a billion years we cannot exclude marginal scenarios.
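The outward migration of the clement zone as a star brightens can be put in rough numbers. The sketch below is my own illustration, not drawn from Gertz or the Rimmer paper: it scales solar-calibrated habitable zone flux limits as distance ∝ √L, and the Seff values are representative figures from habitable zone models, approximate only.

```python
import math

SEFF_INNER = 1.107  # representative runaway-greenhouse flux limit (solar units)
SEFF_OUTER = 0.356  # representative maximum-greenhouse flux limit

def chz_edges_au(luminosity_lsun):
    """Rough inner/outer CHZ edges (AU) for a star of the given luminosity,
    scaling the solar-calibrated flux limits as d ~ sqrt(L / Seff)."""
    inner = math.sqrt(luminosity_lsun / SEFF_INNER)
    outer = math.sqrt(luminosity_lsun / SEFF_OUTER)
    return inner, outer

# The Sun today vs. a red giant a thousand times more luminous:
print(chz_edges_au(1.0))     # ~ (0.95, 1.68) AU
print(chz_edges_au(1000.0))  # ~ (30, 53) AU, out among today's giant planets
```

For a star a thousand times the Sun’s luminosity, the zone sweeps out to tens of AU, into the realm of the present giant planets, which is exactly the regime the “watermoons” scenario depends on.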


As I have noted in Life: from Sea to Land to Space, the expansion of life from Earth into space (like the expansion of life from the oceans onto land) will open up a vastly greater number of niches to life than could exist on any one planet, so that the opportunities for adaptive radiation are increased by orders of magnitude. But this expansive scenario for life in space is contingent upon the proper boundary conditions obtaining; life must expand into an optimal environment in order for it to experience optimal expansion and adaptive radiation. [6] And as the boundary conditions for the emergence of emergent complexity may be distinct from the boundary conditions for the longevity of emergent complexity, emergent complexity (like a biosphere) may flourish and die on one planet without the opportunity to exploit the potential of other niches. [7]


There are also distinctive boundary conditions for the longevity of civilization. If a civilization is to employ technological means to extend its longevity, whether through journeying to other stars, or, according to Gertz’s scenario, shifting itself within its home planetary system (“sheltering in place”), then the conditions must first be right for life to arise, then for civilization to supervene upon life, and finally for civilization to pass beyond its planetary origins by technological means. These boundary conditions might include, for example, an adequate supply of fossil fuels for the civilization to make its original transition to industrialization, and, later, sufficient titanium resources to build spacecraft, and sufficient fissionables to supply nuclear power or to operate nuclear rockets.


It takes a “just right” planetary system for a technological civilization to make a spacefaring breakout from its homeworld: just as being a space-capable civilization is a necessary condition for spacefaring breakout, so reaching an initial threshold of technological maturity under favorable boundary conditions is a necessary condition for becoming a spacefaring civilization in the first place. It also takes a “just right” stellar neighborhood for a spacefaring civilization to make an interstellar breakout from its home system. The boundary conditions for interstellar civilization change over cosmological scales of time, because stars change their relationships to each other within the galaxy, but there will always be regions of the galaxy with more favorable conditions and regions with less favorable conditions.


As I have noted in other contexts, technology is a means to an end, and usually not an end in itself, so that there is a certain fungibility in the use of technologies: if the resources are unavailable for a particular technology, they may be available for some other technology that can serve in a similar capacity. A marginal technology in favorable boundary conditions, or a superior technology in unfavorable boundary conditions, might do the trick either way. However, there are limits to technological fungibility. The boundary conditions for the longevity of technological civilizations set these limits.


Notes


[1] I have written about the SETI paradigm in my Centauri Dreams post Stagnant Supercivilizations and Interstellar Travel, inter alia.


[2] I discussed interstellar travel by waiting for other planetary systems to pass near our own in the aforementioned Stagnant Supercivilizations and Interstellar Travel.


[3] “The origin of RNA precursors on exoplanets,” by Paul B. Rimmer, Jianfeng Xu, Samantha J. Thompson, Ed Gillen, John D. Sutherland, and Didier Queloz, Science Advances, 01 Aug 2018: Vol. 4, no. 8, DOI: 10.1126/sciadv.aar3302


[4] Cf. two papers on this, “Enhanced interplanetary panspermia in the TRAPPIST-1 system” by Manasvi Lingam and Abraham Loeb, and “Fast litho-panspermia in the habitable zone of the TRAPPIST-1 system”, by Sebastiaan Krijt, Timothy J. Bowling, Richard J. Lyons, and Fred J. Ciesla, and my post Emergent Complexity in Multi-Planetary Ecosystems.


[5] This post also noted two papers, then recent, on habitability zones around post-main sequence stars, “Habitable Zones Of Post-Main Sequence Stars” by Ramses M. Ramirez, et al., and “Habitability of Super-Earth Planets around Other Suns: Models including Red Giant Branch Evolution” by W. von Bloh, M. Cuntz, K.-P. Schroeder, C. Bounama, and S. Franck, both of which are relevant to Gertz’s argument.


[6] René Heller has introduced the concept of superhabitable worlds, i.e., worlds more clement for life than Earth, thus optimal for life (cf., e.g., “Superhabitable Worlds”, by René Heller and John Armstrong), which suggests a similar implicit distinction between merely habitable planetary systems and superhabitable planetary systems, merely habitable galaxies and superhabitable galaxies, and so on.


[7] Freeman Dyson argued for the value of life that can adapt to conditions distinct from the planetary endemism that characterizes life as we know it: “…planets compare unfavourably with other places as habitats. Planets have many disadvantages. For any form of life adapted to living in an atmosphere, they are very difficult to escape from. For any form of life adapted to living in vacuum they are death-traps, like open wells full of water for a human child. And they have a more fundamental defect: their mass is almost entirely inaccessible to creatures living on their surface.” (Dyson, F. J. 2003. “Looking for life in unlikely places: reasons why planets may not be the best places to look for life.” International Journal of Astrobiology, 2(2), 103–110) Dyson’s reasons for favoring life independent of planets do not alter the fact that a lot of interesting chemistry occurs on planets that does not occur elsewhere, because other environments do not have large-scale geomorphological processes; however, Dyson’s observations do point to the selective value of life that can adapt to habitats without planets.




Published on February 14, 2020 07:13

February 12, 2020

A Nearby ‘Planet’ in Formation

330 light years from the Sun is the infant planet 2MASS 1155-7919 b, recently discovered in Gaia data by a team from the Rochester Institute of Technology. It’s a useful world to have in our catalog because we have no newborn massive planet closer to Earth than this one. Circling a star in the Epsilon Chamaeleontis Association, 2MASS 1155-7919 b is thought to be no more than 5 million years old, orbiting its host at roughly 600 times the Earth/Sun distance. A stellar association like Epsilon Chamaeleontis is a loose cluster, with stars that have a common origin but are no longer gravitationally bound as they move in rough proximity through space.


RIT graduate student Annie Dickson-Vandervelde is lead author on the discovery paper:


“The dim, cool object we found is very young and only 10 times the mass of Jupiter, which means we are likely looking at an infant planet, perhaps still in the midst of formation. Though lots of other planets have been discovered through the Kepler mission and other missions like it, almost all of those are ‘old’ planets. This is also only the fourth or fifth example of a giant planet so far from its ‘parent’ star, and theorists are struggling to explain how they formed or ended up there.”



Image: Artist’s conception of a massive planet orbiting a cool, young star. In the case of the system discovered by RIT astronomers, the planet is 10 times more massive than Jupiter, and the orbit of the planet around its host star is nearly 600 times that of Earth around the sun. NASA/JPL-Caltech/R. Hurt (SSC-Caltech).


So a star a thousand times younger than the Sun has produced a giant planet far enough from its star to challenge our models of gas giant formation. But is it actually a planet?


From the paper:


The origins of systems involving such wide-separation substellar objects are presently the subject of vigorous debate (Rodet et al. 2019, and references therein). Given that 2MASS 1155-7919 b is quite possibly the youngest massive planet within ~100 pc—i.e., closer to Earth than the aforementioned massive young planets, as well as nearby star-forming clouds—this object is richly deserving of followup spectroscopy and imaging aimed at confirming its spectral type, age, and luminosity, in order to better understand its nature and origin.


For all its unusual interest, the paper reminds us, 2MASS J1155-7919 b joins other wide-separation systems, among them HD 106906 b, 1RXS 1609 b, CT Cha b, and DENIS1538-1038. That the latter two are thought to be brown dwarf candidates highlights the idea that such objects may form like low-mass stars, although the physical processes at work in their accretion and development are poorly understood. Here I’m going to switch to a different paper, this one on DENIS1538−1038 (Nguyen-Thanh et al., citation below).


Our discovery of the 1-Myr old BD [brown dwarf] that exhibits sporadic accretion with low accretion rates supports a possible scenario for BD formation…where low-mass accretion rates at very early stages (possibly with high outflow mass-loss rate-to-mass-accretion-rate ratios) prevent VLM [very low mass] cores from accreting enough gas to become stars, and thus these cores would end up as BDs.


The authors of the 2MASS J1155-7919 b discovery paper point out that their putative gas giant is only slightly below the boundary between brown dwarfs and massive planets, making the above brown dwarf formation scenario a possibility. In either case, we have much to learn about widely separated objects in the same system when we find them this early in their evolution. The intellectual ferment in this area is exciting to watch.
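The boundary the authors invoke is conventionally drawn at the deuterium-burning limit, roughly 13 Jupiter masses, a contested convention rather than a sharp physical line. A toy classifier of my own devising makes the point that a ~10 Jupiter-mass object sits just on the planetary side of it:

```python
D_BURN_LIMIT_MJUP = 13.0   # conventional deuterium-burning threshold (debated)
H_BURN_LIMIT_MJUP = 80.0   # rough hydrogen-burning (stellar) threshold

def classify(mass_mjup):
    """Toy mass-only classification. Real classification also weighs
    formation history, which is exactly what is uncertain here."""
    if mass_mjup < D_BURN_LIMIT_MJUP:
        return "planet"
    if mass_mjup < H_BURN_LIMIT_MJUP:
        return "brown dwarf"
    return "star"

print(classify(10.0))  # 2MASS 1155-7919 b at ~10 Jupiter masses: "planet"
```

At ten Jupiter masses, a modest upward revision of the mass estimate from follow-up spectroscopy could push the object across the line, which is why the brown dwarf formation scenario stays in play.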


The paper is Dickson-Vandervelde et al., “Identification of the Youngest Known Substellar Object within ~100 pc,” Research Notes of the AAS Vol. 4, No. 2 (7 February 2020). Full text. The Nguyen-Thanh et al paper is “Sporadic and intense accretion in a 1 Myr-old brown dwarf candidate,” in process at Astronomy & Astrophysics (preprint).




Published on February 12, 2020 09:02

February 11, 2020

A Heliophysics Gateway to Deep Space

Are missions to the Sun particularly relevant to our interstellar ambitions? At the current state of our technology, the answer is yes. Consider Solar Cruiser, a planned NASA solar sail mission that could maintain non-Keplerian orbits, allowing it to investigate the Sun’s high latitudes. And throw in the European Space Agency-led Solar Orbiter, which left our planet early Monday (UTC) on a United Launch Alliance Atlas V rocket, lifting off from Launch Complex 41 at Cape Canaveral Air Force Station in Florida. Herewith the gorgeous arc of ascent:



Image: Launch of the ESA/NASA Solar Orbiter mission to study the Sun from Cape Canaveral Air Force Station in Florida on Feb. 9, 2020. Credit: Jared Frankle.


Missions to the Sun allow us to explore conditions close to a star and, significantly, deep in its gravity well, where interesting things can happen. When we discuss propelling a sail beyond the heliosphere, the irony is that an Oberth maneuver, performed at a few solar radii, brings chemical propulsion online at perihelion to extract the maximum push from the burn. So in propulsive terms, we go to the Sun in order to be flung from the Sun at the highest possible speed. If we want to get beyond the heliosphere fast with today’s tools, the Sun is a major factor.
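The payoff of burning deep in the gravity well is easy to quantify. In this sketch, my own illustration rather than any mission design, a craft on a parabolic solar dive applies the same burn at different perihelion distances; because a parabolic orbit moves at local escape speed at perihelion, the hyperbolic excess works out to v_inf = sqrt((v_esc + Δv)² − v_esc²):

```python
import math

MU_SUN = 1.327e20      # solar gravitational parameter, m^3/s^2
R_SUN = 6.957e8        # solar radius, m

def oberth_v_inf(perihelion_radii, dv):
    """Hyperbolic excess speed (km/s) from a burn of dv (m/s) applied at
    the perihelion of a parabolic solar dive, perihelion given in solar
    radii. At perihelion of a parabolic orbit the craft moves at local
    escape speed, so v_inf = sqrt((v_esc + dv)^2 - v_esc^2)."""
    v_esc = math.sqrt(2 * MU_SUN / (perihelion_radii * R_SUN))
    return math.sqrt((v_esc + dv) ** 2 - v_esc ** 2) / 1000.0

# The same 2 km/s burn applied near 1 AU vs. at a few solar radii:
print(oberth_v_inf(215, 2000))  # burn far out: ~13 km/s of excess speed
print(oberth_v_inf(5, 2000))    # burn at 5 solar radii: ~33 km/s
```

The identical burn yields well over twice the hyperbolic excess when applied at five solar radii rather than near Earth’s orbit: the Oberth effect in a nutshell, and the reason a ‘sundiver’ trajectory demands a close solar pass.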


Solar Orbiter is not, of course, designed around interstellar matters, but the synchronicity here works well for us. The more data about conditions near the Sun, the better for what we will want to do in the future. Günther Hasinger is the European Space Agency’s director of science:


“As humans, we have always been familiar with the importance of the Sun to life on Earth, observing it and investigating how it works in detail, but we have also long known it has the potential to disrupt everyday life should we be in the firing line of a powerful solar storm. By the end of our Solar Orbiter mission, we will know more about the hidden force responsible for the Sun’s changing behavior and its influence on our home planet than ever before.”


And, I would add, we’ll know a great deal more about how spacecraft operate inside Mercury’s orbit. Moreover, think about all the interesting maneuvers that have to take place to make this happen. Three gravity assists come into play as Solar Orbiter goes for the Sun, two of them past Venus in late 2020 and August of 2021, and one past Earth in November of 2021. The first close pass of the Sun will be in 2022, at about a third of an AU, with the gravity of Venus being used to push Solar Orbiter up out of the ecliptic plane. Ulysses, launched in 1990, achieved an inclined orbit, but Solar Orbiter will carry cameras allowing us to directly image the Sun’s poles, a role for which Ulysses was not equipped. The spacecraft is to reach an inclination of 17 degrees above and below the solar equator.


Solar Cruiser and Solar Orbiter have much to teach us about interstellar possibilities, as does, for that matter, the continuing Parker Solar Probe mission. Along the way we learn, in addition to the significant science return about the Sun itself, how spacecraft cope with the solar wind and the temperatures of passage near the Sun. We learn about heat shielding and how to minimize what is needed so as to maximize payload. Solar Orbiter will face temperatures of up to 500 °C, enduring some 13 times the solar heating experienced by satellites in Earth orbit.
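That 500 °C figure squares with a back-of-the-envelope estimate. The sketch below is my own, under crude assumptions (a sun-facing flat plate that absorbs and re-radiates from one face, with no shielding), but it shows how steeply the thermal load climbs as perihelion shrinks:

```python
SOLAR_CONST = 1361.0   # solar flux at 1 AU, W/m^2
SIGMA = 5.670e-8       # Stefan-Boltzmann constant, W/m^2/K^4

def plate_temp_c(d_au, absorptivity=1.0, emissivity=1.0):
    """Equilibrium temperature (deg C) of a sun-facing flat plate that
    absorbs sunlight on one face and radiates from that same face --
    a crude upper-bound stand-in for an unprotected sun-facing surface."""
    flux = absorptivity * SOLAR_CONST / d_au ** 2
    return (flux / (emissivity * SIGMA)) ** 0.25 - 273.15

print(round(plate_temp_c(1.0)))   # ~120 C in Earth orbit
print(round(plate_temp_c(0.28)))  # ~471 C near Solar Orbiter's perihelion
```

Note too that the inverse-square flux at 0.28 AU is 1/0.28² ≈ 13 times the flux at 1 AU, the same factor quoted above.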


So if we’re thinking deep space today, we should also be thinking about heliophysics. Our best bet at getting a successor to the Voyager missions well beyond the heliosphere, and at significantly higher speeds than Voyager 1, is a close solar pass and propulsive kick that will demand deep knowledge of conditions at perihelion. Solar Orbiter’s 10 scientific instruments will measure electric and magnetic fields, passing particles and waves, solar atmospheric conditions and the outflow of material.


All these are factors as we contemplate the close approaches that will fling solar sails into the Kuiper Belt. In not many years, we could build a ‘sundiver’ mission that would make for great heliophysics as well as data from deep space — two missions in one.




Published on February 11, 2020 06:23

February 6, 2020

Modeling Circulation at Pluto’s Heart

The dataflow from New Horizons has been abundant enough that we are now drilling down to atmospheric models that may explain the dwarf planet’s topography. Mention topography on Pluto and the first thing that leaps to mind is Tombaugh Regio, and a new paper in the Journal of Geophysical Research Planets actually takes us into its role in the formation of regional weather patterns on the icy orb. For at the heart of Pluto’s weather appears to be the terrain often called Pluto’s ‘heart,’ from the distinctive shape it imposes upon the landscape.


Pluto’s atmosphere, 100,000 times thinner than ours, is primarily nitrogen, with but small amounts of carbon monoxide and methane. Tombaugh Regio is covered by nitrogen ice, which warms during the day, turning to vapor that condenses in Pluto’s night to once again form ice. The researchers, led by Tanguy Bertrand, an astrophysicist and planetary scientist at NASA’s Ames Research Center in California, liken the process to a heartbeat that drives nitrogen winds around Pluto. And here we run into a distinct oddity.


For the paper suggests that this cycle actually produces retro-rotation in Pluto’s atmosphere, so that it circulates in a direction opposite to the dwarf planet’s spin. Air moving past the surface shifts not just heat but grains of ice and haze particles, resulting in the dark wind streaks and plains found across the north and northwestern regions of Tombaugh Regio. Here’s Bertrand on the matter:


“This highlights the fact that Pluto’s atmosphere and winds – even if the density of the atmosphere is very low – can impact the surface. Before New Horizons, everyone thought Pluto was going to be a netball – completely flat, almost no diversity. But it’s completely different. It has a lot of different landscapes and we are trying to understand what’s going on there.”



Image: Four images from NASA’s New Horizons’ Long Range Reconnaissance Imager (LORRI) were combined with color data from the Ralph instrument to create this global view of Pluto. Credit: NASA/Johns Hopkins University Applied Physics Laboratory/Southwest Research Institute.


We learn that most of the nitrogen ice on Pluto is found in Tombaugh Regio, where the 3-kilometer-deep basin called Sputnik Planitia (the left lobe of the ‘heart’) contains a 1,000-kilometer ice sheet. Next to this, in the right lobe of the ‘heart,’ we find nitrogen-rich glaciers and highlands. The range of landscapes was one of the most intriguing things found when the first close-up images from New Horizons began to come in. Few had expected to find Pluto an active, geologically interesting world.


The team’s work proceeded through the use of simulations of the nitrogen cycle with a weather forecast model that could be analyzed to study the movement of wind across the surface. The paper discusses the methods at work here:


We used the GCM [Global Climate Model] to investigate different surface-atmosphere interactions involving the near-surface winds, such as the effect of the conductive heat flux from the atmosphere, the erosion of the ice, and the transport of ice grains and dark materials. We find that the cumulative effect of these mechanisms could induce significant contrasts in ice sublimation rate and color, and could explain the formation of the bright and dark plains in Sputnik Planitia.


Winds above 4 kilometers in altitude blow to the west, opposite to the dwarf planet’s eastern spin, for most of its year, triggered as nitrogen vaporizes in the north of Tombaugh Regio and freezes out again in the south. Along the western boundary of Sputnik Planitia, a current of fast-moving air hugs the surface: atmospheric nitrogen condensing back into ice traps cold air inside the basin, and the circulating air strengthens as it moves through the high cliffs in the area. This western boundary current is strong and clearly shaped by the specific terrain it flows across.


As to retro-rotation, it’s unusual, although perhaps not unique. From the paper:


This retro-rotation of Pluto’s atmosphere is a unique circulation regime in the Solar system, except maybe on Triton, where pole-to-pole transport of N2 could also lead to a similar regime. We find that the retro-rotation is maintained during most of Pluto’s year. It could be responsible for many longitudinal asymmetries and geological features observed on Pluto’s surface, such as the depletion of Bladed Terrains at eastern longitudes and the formation of bright pits in eastern Tombaugh regio, although this remains to be explored. Our work confirms that despite a frozen surface and a tenuous atmosphere, Pluto’s climate is remarkably active.


We can only imagine how different Pluto’s surface would look with different wind patterns as we consider Sputnik Planitia and contrast the dark plains and wind streaks to its west. What a curious place Pluto turns out to be, and what a role Sputnik Planitia plays. Says Bertrand:


“Sputnik Planitia may be as important for Pluto’s climate as the ocean is for Earth’s climate. If you remove Sputnik Planitia – if you remove the heart of Pluto – you won’t have the same circulation.”


The New Horizons dataset will be producing papers for many years to come. Oh for a similar mine of information about Triton!


The paper is Bertrand et al., “Pluto’s beating heart regulates the atmospheric circulation: results from high resolution and multi‐year numerical climate simulations,” Journal of Geophysical Research Planets 04 February 2020 (abstract).




Published on February 06, 2020 08:11
