Paul Gilster's Blog, page 110

March 22, 2018

A Prehistoric Close Pass

Given the vast distances of interstellar space, you wouldn’t think there would be much chance of stars colliding. But it’s conceivable that so-called ‘blue straggler’ stars are the remnants of just such an event. A large blue straggler contains far more hydrogen than smaller stars around it, and burns at higher temperatures, with a correspondingly shorter life. When you find a blue straggler inside an ancient globular cluster, it’s natural to ask: How did this star emerge?


Packing stars as tightly as globular clusters must produce the occasional collision, and in fact astrophysicist Michael Shara (then at the American Museum of Natural History) has estimated there may be as many as several hundred collisions per hour somewhere in the universe. We would never be aware of most of these, but we could expect a collision every 10,000 years or so within one of the Milky Way’s globular clusters. In fact, the globular cluster NGC 6397 shows evidence for what may have been a three-star collision, the result of an outside star moving into a binary system and eventually coalescing (see Two Stars Collide: A New Star Is Born).


In any case, our own Solar System's history offers occasion to reflect on relatively close passes between other stars and the Sun. WISE J072003.20−084651.2 is the designation for a star more mercifully known as Scholz's Star, discovered in 2013 in the southern constellation Monoceros. It took a scant two years for Eric Mamajek and co-researchers to report that Scholz's Star passed through the Oort Cloud some 70,000 years ago.


It was evident that Scholz’s Star showed little tangential velocity. But which way was it moving? Mamajek discussed the matter in a 2015 news release:


“Most stars this nearby show much larger tangential motion. The small tangential motion and proximity initially indicated that the star was most likely either moving towards a future close encounter with the solar system, or it had ‘recently’ come close to the solar system and was moving away. Sure enough, the radial velocity measurements were consistent with it running away from the Sun’s vicinity – and we realized it must have had a close flyby in the past.”


Now 20 light years away, the star is the subject of new work on Solar System orbits. For a close stellar pass can leave traces that linger. A team led by Carlos and Raúl de la Fuente Marcos (Complutense University of Madrid), working with Sverre J. Aarseth of the University of Cambridge, has created numerical simulations to analyze the positions of some 340 objects on hyperbolic orbits. The idea is to work out the radiants, positions in the sky from which these objects appear to come. You can see why this paper caught my eye given our recent discussion of ‘Oumuamua and how we might calculate future such arrivals.


If the objects on hyperbolic orbits are moving toward us from the Oort Cloud, a reasonable assumption, we would expect them to be more or less evenly distributed in the sky. Instead, the paper identifies what the authors call “a statistically significant accumulation of radiants,” an over-density that projects in the direction of Gemini. This, in turn, fits with the system's encounter with Scholz's Star 70,000 years ago. From the paper:


It is difficult to attribute to mere chance the near coincidence in terms of timing and position in the sky between the most recent known stellar fly-by and the statistically significant overdensity visible in Figs 3 and 4. It is unclear whether other clusterings present may have the same origin or be the result of other, not yet-documented, stellar fly-bys or perhaps interactions with one or more unseen perturbers orbiting the Sun well beyond Neptune…


And later:


The overdensity of high-speed radiants appears to be consistent in terms of location and time constraints with the latest known stellar fly-by, that of Scholz’s star.


I’ll send you to the paper for the actual figures — they won’t reproduce well here.


Scholz's Star is a binary system, a red dwarf orbited by a brown dwarf, and it is likely that there was a time when our ancestors could see it in the sky. But only barely — Eric Mamajek has pointed out that even at its closest approach, the apparent magnitude would have been in the range of 11.4, five magnitudes fainter than what the naked eye can see, even in the pristine skies of Paleolithic Earth. What might have been visible were flares from the M-dwarf, short-lived transient events, fleeting but noticeable.
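

That estimate is easy to sanity-check with the distance modulus m = M + 5 log10(d / 10 pc). In the minimal sketch below, the absolute magnitude (M_V of roughly 19.4 for the M9.5 primary) and the ~52,000 AU closest-approach distance are approximate literature values I have supplied, not figures from this post:

import math

def apparent_mag(abs_mag, distance_pc):
    # Distance modulus: m = M + 5 * log10(d / 10 pc)
    return abs_mag + 5 * math.log10(distance_pc / 10.0)

M_V = 19.4                   # assumed absolute magnitude of the M9.5 primary
d_pc = 52_000 / 206_265      # ~52,000 AU closest approach, converted to parsecs

m = apparent_mag(M_V, d_pc)
print(f"apparent magnitude at closest approach ~ {m:.1f}")        # ~11.4
print(f"deficit vs naked-eye limit (mag 6) ~ {m - 6.0:.1f} mag")  # ~5 magnitudes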



Image: At a time when modern humans were beginning to leave Africa and the Neanderthals were living on our planet, Scholz’s star approached to within less than a light-year. It may have been briefly visible during flare events on the M9.5 primary. Credit: José A. Peñas/SINC


Here’s a graph that Mamajek published on Twitter in 2015.



So we have the evidence of disrupted trajectories to back up the finding that Scholz's Star made a close pass. ‘Oumuamua, incidentally, is not implicated in any of this. Its radiant is in the constellation Lyra, meaning it is not part of the over-density observed by the de la Fuente Marcos team. On the matter of deep space interlopers, though, it's interesting that the paper names eight hyperbolic comets as being good candidates to have an interstellar origin.


The paper is C. de la Fuente Marcos, R. de la Fuente Marcos, S. J. Aarseth. “Where the Solar system meets the solar neighbourhood: patterns in the distribution of radiants of observed hyperbolic minor bodies,” MNRAS Letters, 2018 (preprint). The Mamajek paper is Mamajek et al., “The Closest Known Flyby of a Star to the Solar System,” Astrophysical Journal Letters 800 (2015), L17 (preprint).



March 21, 2018

Mission to an Interstellar Asteroid

On the matter of interstellar visitors, bear in mind that our friend ‘Oumuamua, the subject of yesterday's post, was discovered at the University of Hawaii's Institute for Astronomy, using the Pan-STARRS telescope. The Panoramic Survey Telescope and Rapid Response System is located at Haleakala Observatory on Maui, where it has proven adept at finding new asteroids, comets and variable stars. Consider ‘Oumuamua a bonus and, according to a new paper from Greg Laughlin and Darryl Seligman (Yale University), a type of object we'll be seeing again.


Pan-STARRS may find objects like this every few years, but we’ll get a bigger payoff in terms of interstellar wanderers with the Large Synoptic Survey Telescope (LSST), now under construction at Cerro Pachón (Chile). Laughlin and Seligman think that this instrument will up the discovery rate as high as several per year, allowing us to see ‘Oumuamua in context, and also, perhaps, setting up the possibility of an intercept mission with a kinetic impactor.


More on that in a moment. But first, it’s interesting to see theories about its place of origin springing up in the brief interval since ‘Oumuamua’s passage. One of the stars of the Carina/Columba association (165-275 light years from Earth) is suggested, as is the double star system HD 200325. One recent survey of more than 200,000 nearby stars could find no conclusive evidence, but does suggest that 820,000 years ago, ‘Oumuamua encountered Gliese 876. There is even a possibility, which Laughlin and Seligman dismiss, that the object originally was ejected from our own Solar System and has now had a new encounter with it.


But back to a potential mission to ‘Oumuamua. What the authors have in mind is a kinetic impactor, which has the great advantage of producing a debris plume that we could examine with a spectroscope. We have a history of comet exploration dating back to Comet Giacobini-Zinner in 1985 (International Sun-Earth Explorer), the flurry of missions — Giotto, Vega 1, Vega 2, Sakigake, Suisei — that investigated Comet Halley in 1986, the Deep Space 1 mission at Comet Borrelly (2001), and Stardust at Comet Wild 2. And then, of course, there is Deep Impact, a kinetic impactor that struck Comet Tempel 1, and the European Space Agency's highly successful Rosetta at Comet 67P/Churyumov-Gerasimenko.


Even now we have the OSIRIS-REx mission en route to the asteroid Bennu on a sample return mission. Thus a mission to an object from outside the Solar System seems feasible, though the challenges are obvious. As the paper notes:


Such a mission would face a number of challenges, including (1) the large heliocentric velocities of objects on hyperbolic trajectories, and (2) the lack of substantial time following the discovery of the target object for mission planning and execution, and (3) uncertainty in targeting during final approach. It is worth noting that when ‘Oumuamua was detected and announced in late October 2017, it had already passed its periastron location (which occurred on 9 September, 2017), and indeed, was already more than 1 AU from the Sun.


You may recall Andreas Hein and a team from the Initiative for Interstellar Studies, who have explored potential rendezvous missions to ’Oumuamua (see Project Lyra: Sending a Spacecraft to 1I/’Oumuamua). Laughlin and Seligman consider their mission complementary to that of Hein and team, assessing how to investigate an interstellar object using chemical propulsion. Like Deep Impact, the actual impactor would be accompanied by a companion flyby probe that would examine the results spectroscopically. The feasibility of such a mission depends on having sufficient lead-time to launch the interceptor to the incoming object on a hyperbolic orbit.


Lead time is considered here in terms of the expected arrival directions and speeds of such objects. The authors do this by assuming a kinematic distribution similar to Population I stars, bearing in mind that the number of interstellar asteroids may be as much as 10¹⁶ times higher than the number of stars. The paper samples the distribution of such asteroids in a cube of 10 AU around the Sun, pinpointing where they would be detectable and for how long. Such knowledge would allow us to determine optimal interception trajectories.
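

The flavor of that exercise can be captured in a toy Monte Carlo: draw object velocities from a Population I-like Gaussian, subtract the Sun's motion toward the solar apex, and tabulate radiants (the directions the objects appear to come from). In this sketch the dispersion and apex speed are representative textbook values I have assumed, not the paper's inputs; the point is simply why radiants pile up toward the apex:

import numpy as np

rng = np.random.default_rng(42)

sigma = 20.0        # km/s per-axis velocity dispersion (assumed)
v_apex = 18.0       # km/s solar motion toward the apex (assumed)

n = 100_000
v = rng.normal(0.0, sigma, size=(n, 3))
v[:, 0] -= v_apex   # subtract solar motion: mean flow is toward -x

# An object's radiant is the direction it appears to come from: -v_hat
radiant = -v / np.linalg.norm(v, axis=1, keepdims=True)

# Count radiants within 30 degrees of the apex (+x direction)
cap = radiant[:, 0] > np.cos(np.radians(30.0))
print(f"radiants within 30 deg of apex: {cap.mean():.1%}")
print(f"isotropic expectation:          {(1 - np.cos(np.radians(30.0))) / 2:.1%}")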



Image: This is Figure 3 from the paper. Caption: A sky map showing the probability that a future interstellar asteroid will approach the Solar System on a trajectory parallel to that direction. The darker colors indicate a higher probability. The axes denote degrees from a heliocentric point of view and the ecliptic is plotted in black. The sky positions of the constellations Serpens and Lepus, which are close in proximity to the Solar apex and anti-apex respectively, are plotted for context. The black circle indicates the sky location that ‘Oumuamua entered our Solar System, consistent with the prediction that the majority of these objects will approach with velocities parallel to the galactic apex. Credit: Laughlin & Seligman.


I send you to the paper for the specifics, but do note this with regard to ‘Oumuamua. When it was discovered, the object was three weeks beyond periastron passage, making reaching it problematic. Putting their trajectory analysis methods to work on ‘Oumuamua, the authors find that with an earlier detection, interception of ‘Oumuamua would not have been out of the question. Future interstellar asteroids could be reached given early detection and favorable trajectories — in fact, the authors conclude that wait times for mission opportunities should be in the range of 10 years, once we have the LSST (scheduled for first light in 2021) available.


And these interesting specifics on a potential mission:


The SpaceX Falcon Heavy quotes a payload capability to Mars of 16,800 kg, which we conservatively use for the payload constraint to L1. The Deep Impact mission to Tempel I had an impactor weighing ∼ 400 kg and a scientific package weighing ∼ 600 kg (A’Hearn et al. 2005). Due to the uncertainty of the position of the ISO [interstellar object], it seems appropriate to use ∼ 16 impactors, with a total weight of 400 kg. The mission program is greatly assisted by the expected 40 km/s velocity of impact with the hyperbolic ISO. Assuming that the remainder of the payload consists of fuel and oxidants, to account for the oxidants and efficiency of the rocket, we allow ∼ 1200 kg of fuel (with specific energy similar to compressed hydrogen) to produce the ∆V. Equating the kinetic energy to the energy produced by the fuel, we calculate that a maximum ∆V ∼ 15 km/s should be attainable, to impart the same amount of kinetic energy (per impact) as the Deep Impact Tempel I interception did.



Image: This is Figure 7 from the paper. Caption: Trajectory of the minimum-∆V interception mission sent on July 25th 2017, which had a flight time of 83.38 days. The trajectories for ‘Oumuamua, the Earth, and the rocket are plotted in red, blue and grey respectively in four day intervals in the smaller circles, while the larger circles are plotted in 28 day intervals. The arrows indicate the positions in space of ‘Oumuamua and the rocket on the launch and interception dates, 7/25/2017 and 10/16/2017. Projections in the X-Y, Y-Z and X-Z planes are shown in the left, right upper, and right lower panel respectively. Credit: Laughlin & Seligman.


Sixteen impactors to an interstellar asteroid on a manageable trajectory, with the promise of spectroscopic analysis to equal what we have achieved with previous cometary missions. With a discovery rate ramping up to several per year once LSST is available, we should have targets to work with in the 2020s, helping us learn whether what we know of ‘Oumuamua is indicative of the population of these objects. The close-up study of remnants of planetary formation around other stars is now becoming possible, provided we know where to look and when to launch.


The Laughlin & Seligman paper is “The Feasibility and Benefits of In Situ Exploration of ‘Oumuamua-like Objects,” accepted at the Astronomical Journal and available as a preprint.



March 20, 2018

A Binary Origin for ‘Oumuamua?

The fleeting interstellar visitor we call ‘Oumuamua is back in the news, an object whose fascination burns bright given its status as a visitor from another star system. Just what kind of system is the subject of a letter newly published in Monthly Notices of the Royal Astronomical Society, in which Alan Jackson and colleagues argue that the star-crossed wanderer is most likely the offspring of a binary stellar system, these being far more likely to eject rocky objects. Our first confirmed interstellar asteroid just grows in interest.


Jackson (University of Toronto – Scarborough) is quoted in this news release from the Royal Astronomical Society as saying that the odds didn’t favor the first interstellar object detected in our system being an asteroid. Comets are more likely to be spotted, and our system is more efficient at ejecting comets than asteroids. But ‘Oumuamua is what we got, and its eccentricity of 1.2 and 30 km/sec speed pegged its orbit as hyperbolic, clearly not bound by the Sun’s gravity.
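

That unbound character can be read directly off the orbit. For a hyperbolic trajectory, the excess speed far from the Sun is v_inf = sqrt(GM_sun (e − 1) / q), with q the perihelion distance. A minimal check in Python, taking e = 1.2 from the text and a perihelion of about 0.255 AU from published orbit solutions (the latter is my input, not the post's):

import math

GM_SUN = 1.32712440018e20   # m^3/s^2, solar gravitational parameter
AU = 1.495978707e11         # m

e = 1.2            # eccentricity quoted in the post
q = 0.255 * AU     # perihelion distance (published value; an assumption here)

# Hyperbolic orbit: semi-major axis a = q / (1 - e) < 0, and
# v_inf^2 = -GM/a = GM * (e - 1) / q
v_inf = math.sqrt(GM_SUN * (e - 1.0) / q)
print(f"v_inf ~ {v_inf / 1000.0:.1f} km/s")   # ~26 km/s: decisively unbound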



Image: Artist’s impression of ‘Oumuamua. Credit: ESO / M. Kornmesser.


How much do we know about what our Solar System can eject? For this, I turn for a moment to Greg Laughlin and Konstantin Batygin, who make this case in “On the Consequences of the Detection of an Interstellar Asteroid” (citation below):


Our own solar system has contributed many volatile-rich planetesimals to the galaxy. Specifically, within the framework of the so-called Nice model of early solar system evolution, (Tsiganis et al. 2005; Levison et al. 2008), a transient period of dynamical instability is triggered in response to interactions between the giant planets and a primordial disk comprising ∼ 30M⊕. In numerical realizations, nearly all of this material is expelled into the interstellar medium as the instability unfolds, leaving behind today’s severely mass-depleted Kuiper belt. Given the universality of N -body evolution, one can speculate that similar sequences of events are a common feature of planetary system evolution.


Jackson and team are frank in acknowledging that with only a single interstellar object to work with, we have to assume huge uncertainties in the constraints we apply to the mass of material typically ejected from planetary systems. That point hardly needs belaboring, but we press on with the data we have to work with, keeping in mind how much play there is in our estimates.


The case for binary systems and ejected material runs like this. ‘Oumuamua shows no evident activity, making the case that it is a rocky object shorn of volatiles, and hence one that was ejected from inside its parent star’s snowline. For a star of solar mass to eject an object from within its snowline requires a companion object with a mass greater than Saturn. But our radial velocity surveys show a low occurrence rate of giant planets (~10 percent) with orbital periods of 100 to 400 days. Here the authors cite the Laughlin/Batygin paper above, which argues that ‘Oumuamua, if it is indeed rocky, implies that extrasolar asteroid belts are massive.


Giant planets inside the snowline are relatively uncommon, but binary systems are abundant, and are known to be efficient at ejecting material. The authors draw the following conclusion:


…we expect that at most 10% of Sun-like single stars will host a planet capable of efficiently ejecting material interior to the ice line. Laughlin & Batygin (2017) and Raymond et al. (2017) thus argue that if 1I/‘Oumuamua is indeed rocky, then typical extrasolar asteroid belts must be unusually massive. Similarly, recent results from micro-lensing surveys (e.g. Suzuki et al. 2016; Mroz et al. 2017) suggest that giant planets at larger separations are also not common….While giant planets are relatively uncommon, tight binary systems are abundant (Duchene & Kraus 2013), and are extremely efficient at ejecting material (Smullen et al. 2016). They may therefore represent a dominant source of interstellar small bodies.


Jackson and team conducted 2000 N-body simulations to study close encounters and ejections, finding that the fraction of rocky or devolatilised material ejected by binaries is 36 percent — the ratio of icy to rocky objects is roughly 2:1. Moreover, these simulations show that the population of icy interstellar material comes primarily from low mass stars, while the population of rocky material is dominated by intermediate mass stars.


The best guess for ‘Oumuamua: A hot, high mass binary system ejecting rocky material during the formation era of its planets. As to the ejection process itself, the paper comments:


Physically, our picture is one of planetesimals migrating inwards during the early phases of planet formation, in the presence of a protoplanetary disk. Holman & Wiegert (1999) showed that any material in circumbinary orbit migrating inward will become unstable on short timescales once it passes a stability boundary a_c,out, for which they provide an empirical fit to results from N-body simulations (their equation 3). This critical distance is a function of the binary mass ratio and eccentricity and ranges from around 2 to 4 times the binary separation…
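

The empirical fit mentioned in that passage (Holman & Wiegert 1999, their equation 3) is simple enough to evaluate directly. A sketch using the published coefficients as commonly quoted; any transcription slip is mine:

def critical_ratio(e, mu):
    # Holman & Wiegert (1999) eq. 3: innermost stable circumbinary orbit,
    # in units of the binary semi-major axis. mu = m2 / (m1 + m2).
    return (1.60 + 5.10 * e - 2.22 * e**2 + 4.12 * mu
            - 4.27 * e * mu - 5.09 * mu**2 + 4.61 * e**2 * mu**2)

# Equal-mass binaries, circular through moderately eccentric:
for e in (0.0, 0.25, 0.5):
    print(f"e = {e:.2f}: a_c,out ~ {critical_ratio(e, 0.5):.2f} binary separations")
# Spans roughly 2 to 4 binary separations, as the quoted passage says.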


Thus we have inward planetesimal migration followed by ejection from the binary system when the object passes the stability boundary. The authors’ models show that more than 75 percent of interstellar bodies originate from binary stars, a number that is even higher for rocky objects.


Even if a typical circumbinary only ejects as much material as the Solar system we would still expect close binaries to be the source of more than three quarters of interstellar bodies due to the relatively low abundance of single star systems with giant planets like the Solar system. Whereas in the Solar system the ejected material is overwhelmingly icy, we expect that around 36% of binaries may predominantly eject material that is rocky or substantially devolatilised, leading to similar expectations for the abundance of rocky/devolatilised bodies in the interstellar population.


“The same way we use comets to better understand planet formation in our own Solar System,” says Jackson, “maybe this curious object can tell us more about how planets form in other systems.” Of course it will take more than one such object to do the job, but we’re learning that future detections of interstellar objects are likely as estimates of their occurrence rise.


The paper is Jackson et al., “Ejection of rocky and icy material from binary star systems: Implications for the origin and composition of 1I/‘Oumuamua,” Monthly Notices of the Royal Astronomical Society 19 March 2018 (abstract). The Laughlin/Batygin paper is “On the Consequences of the Detection of an Interstellar Asteroid,” submitted to Research Notes of the AAS (abstract).



March 19, 2018

A Changing Landscape at Ceres

Ceres turns out to be a livelier place than we might have imagined. Continuing analysis of data from the Dawn spacecraft is showing us an object where surface changes evidently caused by temperature variations induced by the dwarf planet’s orbit are readily visible even in short time frames. Two new papers on the Dawn data are now out in Science Advances, suggesting variations in the amount of surface ice as well as newly exposed crustal material.


Andrea Raponi (Institute of Astrophysics and Planetary Science, Rome) led a team that discovered changes at Juling Crater, demonstrating an increase in ice on the northern wall of the 20-kilometer wide crater between April and October of 2016. Calling this ‘the first detection of change on the surface of Ceres,’ Raponi went on to say:


“The combination of Ceres moving closer to the sun in its orbit, along with seasonal change, triggers the release of water vapor from the subsurface, which then condenses on the cold crater wall. This causes an increase in the amount of exposed ice. The warming might also cause landslides on the crater walls that expose fresh ice patches.”



Image: This view from NASA’s Dawn mission shows where ice has been detected in the northern wall of Ceres’ Juling Crater, which is in almost permanent shadow. Dawn acquired the picture with its framing camera on Aug. 30, 2016, and it was processed with the help of NASA Ames Stereo Pipeline (ASP), to estimate the slope of the cliff. Credit: NASA/JPL-Caltech/UCLA/MPS/DLR/IDA/ASI/INAF.


Ceres was moving closer to perihelion in the period following these observations, so we would expect the temperature of areas in shadow to be increasing. Sublimating water ice that had previously accumulated on the cold walls of the crater would be a natural result, and while acknowledging other options — such as falls exposing water ice — the researchers favor a cyclical trend similar to cycles of water ice seen, for example, at Comet 67P/Churyumov–Gerasimenko. From the paper:


The linear relationship between ice abundance and solar flux… supports the possibility of solar flux as the main factor responsible for the observed increase. The water ice abundance on the wall is probably not constantly increasing over a longer time range. More likely, we are observing only part of a seasonal cycle of water sublimation and condensation, in which the observed increase should be followed by a decrease.



Image: This view from NASA’s Dawn mission shows the floor of Ceres’ Juling Crater. The crater floor shows evidence of the flow of ice and rock, similar to rock glaciers in Earth’s polar regions. Dawn acquired the picture with its framing camera on Aug. 30, 2016. Credit: NASA/JPL-Caltech/UCLA/MPS/DLR/IDA/ASI/INAF.


A short-term variation in surface water ice shows us an active body, a result that meshes with further observations from Dawn’s visible and infrared mapping spectrometer (VIR) showing variability in Ceres’ crust and the likelihood of newly exposed material. The work, led by Giacomo Carrozzo of the Institute of Astrophysics and Planetary Science, identifies twelve sites rich in sodium carbonates and takes a tight look at several areas where water is present as part of the carbonate structure.


Although carbonates had previously been found on Ceres, this is the first identification of hydrated carbonate there. Water ice is not stable over long time periods on the surface of Ceres unless hidden in shadow, and hydrated carbonate would dehydrate over timescales of a few million years, which means, says Carrozzo, that these sites were exposed by recent activity on the surface.



Image: This view from NASA’s Dawn mission shows Ceres’ tallest mountain, Ahuna Mons, 4 kilometers high and 17 kilometers wide. This is one of the few sites on Ceres at which a significant amount of sodium carbonate has been found, shown in green and red colors in the lower right image. The top and lower left images were collected by Dawn’s framing camera. The top image is a 3D view reconstructed with the help of topography data. Credit: NASA/JPL-Caltech/UCLA/MPS/DLR/IDA/ASI/INAF.


Taken together, the water ice findings and the presence of hydrated sodium carbonates speak to the geological and chemical activity that continues on Ceres. As the paper notes:


The different chemical forms of the sodium carbonate, their fresh appearance, morphological settings, and the uneven distribution on Ceres indicate that the formation, exposure, dehydration, and destruction processes of carbonates are recurrent and continuous in recent geological time, implying a still-evolving body and modern processes involving fluid water.


The papers are Carrozzo et al., “Nature, formation, and distribution of carbonates on Ceres,” Science Advances Vol. 4, No. 3 (14 March 2018) e1701645 (abstract); and Raponi et al., “Variations in the amount of water ice on Ceres’ surface suggest a seasonal water cycle,” Science Advances Vol. 4, No. 2 (14 March 2018) eaao3757 (abstract).



March 16, 2018

Red Dwarfs: Their Impact on Biosignatures

We're in the midst of a significant period defining the biosignatures life can produce and determining how we might identify them. Centauri Dreams regular Alex Tolley today looks at a paper offering a unique contribution to this effort. The work of Sarah Rugheimer and Lisa Kaltenegger, it examines how exoplanet spectra change for different types of host star and different epochs of planetary evolution. As Alex points out, the effects are profound, especially given the fact that red dwarfs will be our testbed for biosignature detection as we probe planetary atmospheres during transits around nearby stars. How stellar class affects our analysis will shape our strategies, especially as we probe early Earth atmosphere equivalents. What will we find, for example, at TRAPPIST-1?


By Alex Tolley



As the search for life on exoplanets ramps up, the question arises as to which types of stars represent the best targets. Based on distribution, M-Dwarfs are very attractive, as they represent 3/4 of all stars in our galaxy. Their long lifetimes offer abundant opportunities for life to evolve, and to resist extinction as their stars increase in luminosity. On Earth, terrestrial life might last another billion years before the Sun's increasing luminosity forces atmospheric CO2 below the level photosynthesis requires for plants to survive. All but lithophilic life might be extinct within 1.5 billion years. An additional advantage for astronomers is that spectra of exoplanet atmospheres will be easier to distinguish around low luminosity stars. [6, 7]


From a purely numbers game, M-Dwarfs are most attractive targets:


“Temperate terrestrial planets transiting M-dwarf stars are often touted as the poor-astronomer’s Earth analog, since they are easier to detect and characterize than a true Earth twin. Based on what we currently know, however, M-Dwarf planets are the most common habitable worlds“ [1]



Image: Gliese 581 from a planet in its HZ. Credit: David Hardy.


That M-Dwarf rocky worlds may be the most common habitable worlds is due to:


“1. rocky planets are much more common in the temperate zones of M-Dwarfs (…) than in the temperate zones of Sun-like Stars (…)


2. small stars are more common than big stars (…)


3. the tidally-locked nature of these planets is not a challenge to climate and may double the width of the habitable zone (…)


4. the red stellar radiation results in a weaker ice-albedo feedback

and hence stabler climate (…), and (…)


5. the slow main sequence evolution of M-Dwarfs means that a geological thermostat is not strictly necessary to maintain habitable conditions for billions of years (…). Studying temperate terrestrial planets around M-Dwarfs is our best shot at understanding habitability writ large.” [1]


There are negatives for life around M-Dwarfs too. The closeness of the habitable zone (HZ) to the star results in tidal locking, which may impact the stability of the atmosphere, and exposes these worlds to intense flares that may strip their atmospheres. However, these negatives for habitability and hence life may be compensated by the ubiquity of such worlds and the relative ease of studying them remotely. For lithophilic life, surface conditions largely can be ignored.


After the lifeless Hadean, the Archean and Proterozoic eons had life that was purely prokaryotic. During this time photosynthesis evolved, eventually resulting in an atmosphere with O2 and very little CO2 and CH4. This phase of life's history covers the long period when Earth's atmosphere changed from a largely reducing one of N2, CO2, and some CH4, to one that became oxidizing. The Phanerozoic, starting around 500 mya, encompasses the period when O2 pressures increased to today's level and terrestrial, multicellular life blossomed in diversity.


If Earth’s history is any guide, life in our galaxy will be mostly unicellular bacteria, living in a reducing atmosphere. If that is a correct hypothesis, then most life in the galaxy will be non-photosynthetic, perhaps with biologies similar to the Archaea. A biosignature of such microbial life will still require looking for a disequilibrium in gases, mainly CO2 and CH4, rather than O2 and CH4 [2, 3]. Archaea include the extremophiles living in a diverse array of environments, including the lithosphere. Such organisms may well survive the harsher conditions of a tidally locked world, especially regarding the impact of flares.


The question then arises: if we look for a biosignature around stars of different spectral types, will the star's type have an impact on the planet's atmosphere, its detectable spectral markers, and any potential biosignatures?


This question is examined in a paper by Rugheimer and Kaltenegger [5]. The authors modeled the spectra of atmospheres to simulate Earth-like worlds – rocky worlds large enough to hold an atmosphere and presumably with a mix of ocean and continents, rather than water worlds – orbiting in the HZ of different star types F, G, K and M. Their simulations cover the state of evolution of those worlds as if they were an Earth relocated to other stars, so that the spectra for different gas mixtures could be modeled.


The light of an M-Dwarf is shifted so that the UV component is much diminished. This affects the reactions of the gases in the atmosphere. Photolysis is reduced, reducing the loss of H2O, which in turn, as a greenhouse gas, warms the surface more than a hotter star would. CH4 in particular is not lost and may even accumulate in runaway fashion in some cases. The increase in H2O increases the cloud cover in the troposphere, which in turn increases the planet's albedo. The increased IR component of the M-Dwarf's output raises the surface temperature as well and may further increase cloud formation.


The photolysis of water and the oxidation of CH4 are shown below. Both require UV, so the weak UV flux of an M-Dwarf results in reduced loss of H2O and CH4 on its exoplanets.


H2O + hν (λ < 200 nm) → H + OH

CH4 + OH → CH3 + H2O


Similarly, UV is required to split O2 allowing O3 formation.


O2 + hν (λ < 240 nm) → O + O

O + O2 → O3
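

To see why these channels shut down around an M-Dwarf, compare how much of a star's photospheric output emerges shortward of the 200 nm and 240 nm thresholds. The sketch below integrates idealized Planck blackbody curves for a 5800 K Sun-like star and a 3000 K M-Dwarf; real M-Dwarfs add chromospheric UV and flares, so this illustrates only the photospheric trend:

import numpy as np

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck(wl, T):
    # Spectral radiance B_lambda for a blackbody at temperature T
    return (2 * H * C**2 / wl**5) / np.expm1(H * C / (wl * KB * T))

def fraction_below(cutoff_nm, T):
    # Fraction of total output emitted shortward of the cutoff wavelength.
    # Grid spans 10 nm to 20 um; emission outside that range is negligible here.
    wl = np.linspace(10e-9, 20e-6, 400_000)
    b = planck(wl, T)
    return b[wl <= cutoff_nm * 1e-9].sum() / b.sum()   # uniform grid: spacing cancels

for T, label in [(5800, "G star"), (3000, "M-Dwarf")]:
    print(f"{label} ({T} K): below 200 nm: {fraction_below(200, T):.1e}, "
          f"below 240 nm: {fraction_below(240, T):.1e}")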


Previously, Kaltenegger [4] had modeled the atmospheres of Earth-like worlds around different stars and constructed synthetic spectra to determine the visibility of different biosignature gases in the visible and near-infrared.


Following on, Rugheimer et al modeled gases for 4 different periods – 3.9, 2.0, 0.8 and 0 Ga – for the 4 star types. The initial gas mixes are shown in Table 1.



Table 1. Gas mixing ratios for 4 eons. N2 not shown.


Because the stars age at different rates, the periods are standardized to Earth. As M-Dwarfs age far more slowly than our sun, the different luminosities are modeled as if their planets are further out from their star earlier in its history to simulate the lower luminosity.


The result of the simulations shows that some markers will be difficult to observe under different spectral types of stars.


The impact of the star type is shown in Figure 1. Temperature and 5 gases are profiled with altitude. The M-Dwarfs show clear differences from the hotter star types. Of particular note are the higher H2O and CH4 atmosphere ratios, particularly at higher altitudes.



Figure 1. Planetary temperature vs. altitude profiles and mixing ratio profiles for H2O, O3, CH4, OH, and N2O (left to right) for a planet orbiting the grid of FGKM stellar models with a prebiotic atmosphere corresponding to 3.9 Ga (first row), the early rise of oxygen at 2.0 Ga (second row), the start of multicellular life on Earth at 0.8 Ga (third row), and the modern atmosphere (fourth row). Source: Rugheimer & Kaltenegger 2017 [5]


Figure 2 shows the simulated spectra for the star types. Because of the loss of shorter wavelengths with M-Dwarf stars, the O2 signatures are largely lost. This means that even should a planet around an M-Dwarf evolve photosynthesis and create an oxidizing atmosphere, this may not be detectable around such a world.



Figure 2. Disk-integrated VIS/NIR spectra at a resolution of 800 at the TOA for an Earth-like planet for the grid of stellar and geological epoch models assuming 60% Earth-analogue cloud coverage. For individual features highlighting the O2, O3, and H2O/CH4 bands in the VIS spectrum. Source: Rugheimer & Kaltenegger 2017 [5] [TOA = Top of the atmosphere – AMT]


In contrast, the strong markers for CO2 and CH4 are well represented in the spectrum for M type stars. This creates a complication for a biosignature for early life comparable to the Archean and early Proterozoic periods on Earth. An atmosphere of CO2 and CH4 assumes that the CH4 is due to methanogens being the dominant source of CH4, far outstripping geologic sources. On the Hadean Earth, CH4 outgassing should be rapidly eliminated by UV. During the Archean, the biogenic production of CH4 maintains the CH4 and therefore the disequilibrium biosignature. But on an M-Dwarf world, this CH4 photolysis is largely absent, resulting in a CO2/CH4 biosignature that is a false positive.


If photosynthesis evolves, the O2 signal can be detected at the longer wavelength of 760 nm, but only if there is no cloud cover, as shown in figure 3. For an M-Dwarf planet, clouds mask the O2 signal, and we expect more cloud cover due to the increased H2O on such worlds.



Figure 3. Disk-integrated spectra (R = 800) of the O2 feature at 0.76 µm for clear sky in relative reflectivity (left) and the detectable reflected emergent flux for clear sky (middle) and 60% cloud cover (right). Source: Rugheimer & Kaltenegger 2017 [5]. Note the loss of detectable O2 feature for M-type stars – AMT


Fortunately, ozone (O3) can be detected strongly in the IR around 9500 nm, so we can hope to detect photosynthetic life once the O2 partial pressure increases. Figure 4 shows that the O3 signature can be detected in the Proterozoic and the Phanerozoic, but not in the Archean.



Figure 4. Smoothed, disk-integrated IR spectra at the TOA for an Earth-like planet for the grid of stellar and geological epoch models assuming 60% Earth-analogue cloud coverage. For individual features highlighting the O3, H2O/CH4, and CO2 bands in the IR spectrum see Figs. 9, 10, and 11, respectively. Source: Rugheimer & Kaltenegger 2017 [5]


While current instruments cannot resolve spectra in sufficient detail to detect the needed signatures of gases, the authors conclude:


“These spectra can be a useful input to design instruments and to optimize the observation strategy for direct imaging or secondary eclipse observations with EELT or JWST as well as other future mission design concepts such as LUVOIR/HDST.”


To conclude, the type of star complicates biosignature detection, especially the co-presence of CO2 and CH4 in the Archean and early Proterozoic eons that dominate the history of life on Earth. Not only is the star's light shifted, hiding shorter wavelength signals, but the light itself impacts the equilibrium composition of atmospheric gases, which can lead to biosignature ambiguity.


The ubiquity of M-Dwarf stars, together with the longevity of low-O2 atmospheres (given how long photosynthesis took to evolve on Earth and the further delay before the atmosphere built up its O2 partial pressure), favors M-Dwarfs as targets in the search for early life. But the potential for false positives in eons equivalent to the Archaean and early Proterozoic complicates the search for life on these worlds using the biosignatures we would expect for worlds around Sol-like stars. There is still work to be done to resolve these issues.


References

1. N. B. Cowan et al., “Characterizing Transiting Planet Atmospheres through 2025,” 2015 PASP 127 311. DOI: https://doi.org/10.1086/680855


2. Tolley, A, “Detecting Early Life on Exoplanets”, 02/23/2018. https://www.centauri-dreams.org/2018/02/23/detecting-early-life-on-exoplanets/


3. Krissansen-Totton et al “Disequilibrium biosignatures over Earth history and implications for detecting exoplanet life” 2018 Science Advances Vol. 4, no. 1. DOI: 10.1126/sciadv.aao5747


4. Kaltenegger et al., “Spectral Evolution of an Earth-like Planet,” The Astrophysical Journal, 658:598–616, 2007 March 20 (abstract).


5. Rugheimer, Kaltenegger “Spectra of Earth-like Planets Through Geological Evolution Around FGKM Stars”, The Astrophysical Journal 854(1). DOI: 10.3847/1538-4357/aaa47a


6. Burrows, A. S., “Spectra as windows into exoplanet atmospheres,” 2014, PNAS, 111, 12601 (abstract)


7. Ehrenreich D “Transmission spectra of exoplanet atmospheres” 2011 http://www-astro.physik.tu-berlin.de/plato-2011/talks/PLATO_SC2011_S03T06_Ehrenreich.pdf



March 15, 2018

Maxing Out Kepler

What happens to a spacecraft at the end of its mission depends on where it's located. We sent Galileo into Jupiter on September 21, 2003 not so much to gather data as because the spacecraft had not been sterilized before launch. A crash into one of the Galilean moons could potentially have compromised our future searches for life there, but a plunge into Jupiter's atmosphere eliminated the problem.


Cassini met a similar fate at Saturn, and in both cases, the need to keep a fuel reserve available for that final maneuver was paramount. Now we face a different kind of problem with Kepler, a doughty spacecraft that has more than lived up to its promise despite numerous setbacks, but one that is getting perilously low on fuel. With no nearby world to compromise, Kepler’s challenge is to keep enough fuel in reserve to maximize its scientific potential before its thrusters fail, thus making it impossible for the spacecraft to be aimed at Earth for data transfer.


Kepler, in an Earth-trailing orbit 151 million kilometers from Earth, is expected to run its fuel tank dry within a few months, according to this news release from NASA Ames. The balancing act for its final observing run will be to reserve as much fuel as needed to aim the spacecraft, while gathering as much data as possible before the final maneuver takes place. Timing this will involve keeping a close eye on the fuel tank's pressure and the performance of the Kepler thrusters, looking for signs that the end is near.



Image: K2 at work, in this image from NASA Ames.


Meanwhile, as we await the April launch of the Transiting Exoplanet Survey Satellite (TESS), we can reflect on Kepler’s longevity. The failure of its second reaction wheel ended the primary mission in 2013, but as we’ve discussed here on many occasions, the use of photon momentum to maintain its pointing meant that the craft could be reborn as K2, an extended mission that shifted its field of view to different portions of the sky on a roughly three-month basis.


As the mission team had assumed that Kepler was capable of about 10 of these observing campaigns, the fact that the mission is now on its 17th is another Kepler surprise. The current campaign, entered this month, will presumably be its last, but if we’ve learned anything about this spacecraft, it’s that we shouldn’t count it out. Let’s see how long the fuel will last.



March 14, 2018

Antimatter: The Heat Problem


My family has had a closer call with ALS than I would ever have wished for, so the news of Stephen Hawking’s death stays with me as I write this morning. I want to finish up my thoughts on antimatter from the last few days, but I have to preface that by noting how stunning Hawking’s non-scientific accomplishment was. In my family’s case, the ALS diagnosis turned out to be mistaken, but there was no doubt about Hawking’s affliction. How on Earth did he live so long with an illness that should have taken him mere years after it was identified?


Hawking’s name will, of course, continue to resonate in these pages — he was simply too major a figure not to be a continuing part of our discussions. With that in mind, and in a ruminative mood anyway, let me turn back to the 1950s, as I did yesterday in our look at Eugen Sänger’s attempt to create the design for an antimatter rocket. Because even as Sänger labored over the idea, one he had been pursuing since the 1930s, Les Shepherd was looking at the antimatter prospect, and coming up with aspects of the problem not previously identified.


Getting a Starship Up to Speed

Shepherd isn’t as well known as he should be to the public, but within the aerospace community he is something of a legend. A specialist in nuclear fusion, his activities within the International Academy of Astronautics (he was a founder) and the International Astronautical Federation (he was its president) were legion, but this morning I turn to “Interstellar Flight,” a Shepherd paper from 1952. This was published just a year before Sänger explained his antimatter rocket ideas to the 4th International Astronautical Congress in Zurich, later published in Space-Flight Problems (1953).



Remember that neither of these scientists knew about the antiproton as anything other than a theoretical construct, which meant that a ‘photon rocket’ in the Sänger mode just wasn’t going to work. But Shepherd saw that even if it could be made to function, antimatter propulsion ran into other difficulties. Producing and storing antimatter were known problems even then, but it was Shepherd who saw that “The most serious factor restricting journeys to the stars, indeed, is not likely to be the limitation on velocity but rather limitation on acceleration.”


This stems from the fact that the matter/antimatter annihilation is so mind-bogglingly powerful. Let me quote Shepherd on this, as the problem is serious:


…a photon rocket accelerating at 1 g would require to dissipate power in the exhaust beam at the fantastic rate of 3 million Megawatts/tonne. If we suppose that the photons take the form of black-body radiation and that there is 1 sq metre of radiating surface available per tonne of vehicle mass then we can obtain the necessary surface temperature from the Stefan-Boltzmann law…


Shepherd worked this out as:


5.7 × 10⁻⁸ T⁴ = 3 × 10¹² watts/metre²


with T expressed in degrees Kelvin. So the crux of the problem is that we are producing an emitting surface with a temperature in the range of 100,000 K. The problem with huge temperatures is that we have to find some way of dissipating them. We’d like to get our rocket operating at 1 g acceleration so we could tour the galaxy, using relativistic time dilation to send a crew to the galactic center, for example, within a human lifetime. But we have to dispose of waste heat from the extraordinarily hot emitting surfaces of our spacecraft, because with numbers like these, even the most efficient engine is still going to produce waste heat.
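

Shepherd's arithmetic is easy to reproduce: invert the Stefan-Boltzmann law for the quoted power density to get the required surface temperature. A minimal check:

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

flux = 3.0e12            # W/m^2: Shepherd's 3 million MW through 1 m^2 per tonne
T = (flux / SIGMA) ** 0.25
print(f"required emitting temperature ~ {T:,.0f} K")   # ~85,000 K, of order 10^5 K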



Image: What I liked about the ‘Venture Star’ from James Cameron’s film Avatar was that the design included radiators, clearly visible in this image. How often have we seen the heat problem addressed in any Hollywood offering? Nice work.


Now we can look at Robert Frisbee’s design — an antimatter ’beamed-core’ starship forced by its nature to be thousands of kilometers long and, compared to its length, incredibly thin. Frisbee’s craft assumes, as I mentioned, a beamed-core design, with pions from the annihilation of protons and antiprotons being shaped into a stream of thrust by a magnetic nozzle; i.e., a superconducting magnet. The spacecraft has to be protected against the gamma rays produced in the annihilation process and it needs radiators to bleed off all the heat generated by the engine.


We also need system radiators for the refrigeration systems. Never forget that we’re storing antimatter within a fraction of a degree of absolute zero (-273 C), then levitating it using a magnetic field that takes advantage of the paramagnetism of frozen hydrogen. Thus:


…the width of the main radiator is fixed by the diameter of the superconductor magnet loop. This results in a very long main radiator (e.g., hundreds of km in length), but it does serve to minimize the radiation and dust shields by keeping the overall vehicle long and thin.


Frisbee wryly notes the need to consider the propellant feed in systems like this. After all, we’re trying to send antimatter pellets magnetically down a tube at least hundreds of kilometers long. The pellets are frozen at 1 K, but we’re doing this in an environment where our propellant feed is sitting next to a 1500 K radiator! Frisbee tries to get around this by converting the anti-hydrogen into antiprotons, feeding these down to the engine in the form of a particle beam.


Frisbee’s 40 light-year mission with a duration of 100 years is set up as a four-stage antimatter rocket massing millions of tons, with radiator length for the first stage climbing as high as 7500 kilometers, and computed radiator lengths for the later stages still in the hundreds of kilometers. Frisbee points out that the 123,000 TW of first-stage engine ‘jet’ power demands the dumping of 207,000 TW of 200 MeV gamma rays. Radiator technology will need an extreme upgrade.


And to drop just briefly back to antimatter production, check this out:


The full 4-stage vehicle requires a total antiproton propellant load of 39,300,000 MT. The annihilation (MC²) energy of this much antimatter (plus an equal amount of matter) corresponds to ~17.7 million years of current Human energy output. At current production efficiencies (10⁻⁹), the energy required to produce the antiprotons corresponds to ~17.7 quadrillion [10¹⁵] years of current Human energy output. For comparison, this is “only” 590 years of the total energy output of sun. Even at the maximum predicted energy efficiency of antiproton production (0.01%), we would need 177 billion years of current Human energy output for production. In terms of production rate, we only need about 4×10²¹ times the current annual antiproton production rate.


Impossible to build, I’m sure. But papers like these are immensely useful. They illustrate the consequences of taking known theory into the realm of engineering to see what is demanded. We need to know where the showstoppers are to continue exploring, hoping that at some point we find ways to mitigate them. Frisbee’s paper is available online, and repays a close reading. We could use the mind of a future Hawking to attack such intractable problems.


The Les Shepherd paper cited above is “Interstellar Flight,” JBIS, Vol. 11, 149-167, July 1952. The Frisbee paper is “How to Build an Antimatter Rocket for Interstellar Missions,” 39th AIAA/ASME/SAE/ASEE Joint Propulsion Conference and Exhibit, 20-23 July 2003 (full text).



Stephen Hawking (1942-2018)

The Tau Zero Foundation expresses its deepest sympathies to the family, friends and colleagues of Stephen Hawking. His death is a loss to the world, to our scientific communities, and to all who value courage in the face of extreme odds.



March 13, 2018

Harnessing Antimatter for Propulsion

Antimatter’s staggering energy potential always catches the eye, as I mentioned in yesterday’s post. The problem is how to harness it. Eugen Sänger’s ‘photon rocket’ was an attempt to do just that, but the concept was flawed because when he was developing it early in the 1950s, the only form of antimatter known was the positron, the antimatter equivalent of the electron. The antiproton would not be confirmed until 1955. A Sänger photon rocket would rely on the annihilation of positrons and electrons, and therein lies a problem.



Sänger wanted to jack up his rocket’s exhaust velocity to the speed of light, creating a specific impulse of a mind-boggling 3 × 10⁷ seconds. Specific impulse is a broad measure of engine efficiency, so that the higher the specific impulse, the more thrust for a given amount of propellant. Antimatter annihilation could create the exhaust velocity he needed by producing gamma rays, but positron/electron annihilation was essentially a gamma ray bomb, pumping out gamma rays in random directions.
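

That mind-boggling figure is simply the speed of light divided by standard gravity, the usual conversion between exhaust velocity and specific impulse. A one-line check:

C = 2.99792458e8   # m/s: exhaust velocity of an ideal photon rocket
G0 = 9.80665       # m/s^2: standard gravity, used to define specific impulse

print(f"Isp = c / g0 = {C / G0:.2e} seconds")   # ~3.06e7 s, Sänger's 3 x 10^7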


Image: Austrian rocket scientist Eugen Sänger, whose early work on antimatter rockets identified the problems with positron/electron annihilation for propulsion.


What Sänger needed was thrust. His idea of an ‘electron gas’ to channel the gamma rays his photon rocket would produce never bore fruit; in fact, Adam Crowl has pointed out in these pages that the 0.511 MeV gamma rays generated in the antimatter annihilation would demand an electron gas involving densities seen only in white dwarf stars (see Re-thinking the Antimatter Rocket). No wonder Sänger was forced to abandon the idea.


The discovery of the antiproton opened up a different range of possibilities. When protons and antiprotons annihilate each other, they produce gamma rays and, usefully, particles called pi-mesons, or pions. I’m drawing on Greg Matloff’s The Starflight Handbook (Wiley, 1989) in citing the breakdown: Each proton/antiproton annihilation produces an average of 1.5 positively charged pions, 1.5 negatively charged pions and 2 neutral pions.


Note the charge. We can use this to deflect some of these pions, because while the neutral ones decay almost immediately into gamma rays, the charged pions take a bit longer before decaying into muons and neutrinos. In this interval, Robert Forward saw, we can use a magnetic nozzle created through superconducting coils to shape a charged pion exhaust stream. The charged pions will decay, but by the time they do, they will be far behind the rocket. We thus have useful momentum from this fleeting interaction or, as Matloff points out, we could also use the pions to heat an inert propellant — hydrogen, water, methane — to produce a channeled thrust.
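

Using the average multiplicities just cited, and making the crude assumption that the ~1876 MeV released per annihilation is shared roughly equally among the five pions (real spectra are broader), we can estimate how much of the energy ends up in the steerable charged component:

M_PROTON_MEV = 938.272              # proton / antiproton rest energy, MeV

e_total = 2 * M_PROTON_MEV          # ~1876 MeV released per annihilation
n_charged, n_neutral = 3.0, 2.0     # 1.5 pi+ plus 1.5 pi-, vs 2 pi0, per Matloff

# Crude equal-sharing assumption across the five pions
charged_fraction = n_charged / (n_charged + n_neutral)
print(f"steerable (charged pion) share ~ {charged_fraction:.0%}, "
      f"about {charged_fraction * e_total:.0f} MeV of {e_total:.0f} MeV")
# The neutral pions decay almost at once to gamma rays no nozzle can deflect.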



But while we now have a theoretical way to produce thrust with an antimatter reaction, we still have nowhere near the specific impulse Sänger hoped for, because our ‘beamed core’ antimatter rocket can’t harness the neutral pions produced by the matter/antimatter annihilation. My friend Giovanni Vulpetti analyzed the problem in the 1980s, concluding that we can expect a pion rocket to achieve a specific impulse equivalent to 0.58c. He summed the matter up in a paper in the Journal of the British Interplanetary Society in 1999:


In the case of proton-antiproton, annihilation generates photons, massive leptons and mesons that decay by chain; some of their final products are neutrinos. In addition, a considerable fraction of the high-energy photons cannot be utilised as jet energy. Both carry off about one third of the initial hadronic mass. Thus, it is not possible to control such amount of energy.


Image: Italian physicist Giovanni Vulpetti, a major figure in antimatter studies through papers in Acta Astronautica, JBIS and elsewhere.


We’re also plagued by inefficiencies in the magnetic nozzle, a further limitation on exhaust velocity. But we do have, in the pion rocket, a way to produce thrust if we can get around antimatter’s other problems.


In the comments to yesterday’s post, several readers asked about creating anti-hydrogen (a positron orbiting an antiproton), a feat that has already been accomplished at CERN. In fact, Gerald Jackson and Steve Howe (Hbar Technologies) created an unusual storage solution for anti-hydrogen in their ‘antimatter sail’ concept for NIAC, which you can see described in their final NIAC report. In more recent work, Jackson has suggested the possibility of using anti-lithium rather than anti-hydrogen.


The idea is to store the frozen anti-hydrogen in a chip much like the integrated circuit chips we use every day in our electronic devices. A series of tunnels on the chip (think of the etching techniques we already use with electronics) lead to periodic wells where the anti-hydrogen pellets are stored, with voltage changes moving them from one well to another. The result is a portable storage bottle, drawing on methods Robert Millikan and Harvey Fletcher used in the early 20th Century to measure the charge of the electron.


The paramagnetism of frozen anti-hydrogen makes this possible, paramagnetism being the weak attraction of certain materials to an externally applied magnetic field. Innovative approaches like these are changing the way we look at antimatter storage. Let me quote Adam Crowl, from the Centauri Dreams essay I cited earlier:


The old concept of storing [antimatter] as plasma is presently seen as too power intensive and too low in density. Newer understanding of the stability of frozen hydrogen and its paramagnetic properties has led to the concept of magnetically levitating snowballs of anti-hydrogen at the phenomenally low 0.01 K. This should mean a near-zero vapour pressure and minimal losses to annihilation of the frozen antimatter.


But out of this comes work like that of JPL’s Robert Frisbee, who has produced an antimatter rocket design that is thousands of kilometers long, the result of the need to store antimatter as well as to maximize the surface area of the radiators needed to keep the craft functional. In Frisbee’s craft, antimatter is stored within a fraction of a degree of absolute zero (-273 C) and then levitated in a magnetic field. Imagine the refrigeration demands on the spacecraft in sustaining antimatter storage while also incorporating radiators to channel off waste heat.



Image: An antimatter rocket as examined by Robert Frisbee. This is Figure 6 from the paper cited below. Caption: Conceptual Systems for an Antimatter Propulsion System.


Radiators? I’m running out of space this morning, so we’ll return to antimatter tomorrow, when I want to acknowledge Les Shepherd’s early contributions to the antimatter rocket concept.


The paper by Giovanni Vulpetti I quoted above is “Problems and Perspectives in Interstellar Exploration,” JBIS Vol. 52, No. 9/10, available on Vulpetti’s website. For Frisbee’s work, see for example “How to Build an Antimatter Rocket for Interstellar Missions,” 39th AIAA/ASME/SAE/ASEE Joint Propulsion Conference and Exhibit, 20-23 July 2003 (full text).



March 12, 2018

Antimatter in Motion

Antimatter will never lose its allure when we’re talking about interstellar propulsion, even if the breakthroughs needed to harness it are legion. After all, a kilogram of antimatter, annihilating itself in contact with normal matter, yields roughly ten billion times the amount of energy released when a kilogram of TNT explodes. Per kilogram of fuel, we’re talking about 1,000 times more energy than nuclear fission, and 100 times the energy available through nuclear fusion.
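

Those comparisons fall straight out of E = mc², counting the annihilating matter and antimatter together as the fuel mass. The TNT, fission and fusion specific energies below are standard ballpark values I have supplied, not figures from the post:

C = 2.99792458e8   # m/s

# Annihilation releases the full rest energy of the fuel, counting the
# matter and antimatter partners together: E/m = c^2 per kg of fuel.
e_annihilation = C**2              # ~9.0e16 J/kg

# Ballpark specific energies (assumed literature values)
e_tnt = 4.184e6                    # J/kg
e_fission = 8.2e13                 # J/kg, complete fission of U-235
e_fusion = 3.4e14                  # J/kg, D-T fusion

print(f"vs TNT:     {e_annihilation / e_tnt:.1e}")      # ~2e10, 'roughly ten billion'
print(f"vs fission: {e_annihilation / e_fission:.0f}")  # ~1,100, the quoted 1,000x
print(f"vs fusion:  {e_annihilation / e_fusion:.0f}")   # ~260, of order the quoted 100x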


Or we could put this into terms more suited for space. A single gram of antimatter, according to Frank Close’s book Antimatter (Oxford, 2010), could through its annihilation produce as much energy as the fuel from the tanks of two dozen Space Shuttles.


The catalog of energy comparisons could go on, each as marvelous as the last, but the reality is that antimatter is not only extremely difficult to produce in any quantity but even more challenging to store. Cram enough positrons or antiprotons into a magnetic bottle and the repulsive forces between them overcome the containing fields, creating a leak that in turn destroys the antimatter. How to store antimatter for propulsion remains a huge problem.


Here’s Close on the issue:


…`like charges repel’, so in order to contain the electric charge in a gram of pure antiprotons or of positrons, you would have to build a force field so powerful that were you to disrupt it, the explosive force as the charged particles flew apart would exceed anything that would have resulted from their annihilation.


As with so many issues regarding deep space, though, we tackle these things one step at a time. Thus recent news out of CERN draws my attention this morning. Bear in mind that between CERN and Fermilab we’re still talking about antimatter production levels that essentially have enough energy to light a single electric bulb for no more than a few minutes. But assuming we find ways to increase our production, perhaps through harvesting of naturally occurring antimatter, we’re learning some things about storage through a project called PUMA.


The acronym stands for ‘antiProton Unstable Matter Annihilation.’ The goal: To trap a record one billion antiprotons at CERN’s Extra Low ENergy Antiproton (ELENA) facility, a deceleration ring that works with CERN’s Antiproton Decelerator to slow antiprotons, reducing their energy by a factor of 50, from 5.3 MeV to just 0.1 MeV. ELENA should allow the number of antiprotons trapped to be increased by a factor of 10 to 100, a major increase in efficiency.
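

To put those figures in kinematic terms: both energies are small against the antiproton's 938 MeV rest energy, so a non-relativistic conversion gives a fair sketch of the speeds involved (a simplification on my part; CERN quotes energies, not speeds):

import math

M_PBAR_MEV = 938.272   # antiproton rest energy, MeV
C = 2.99792458e8       # m/s

def speed(ke_mev):
    # Non-relativistic: v = c * sqrt(2 * KE / (m c^2)), valid for KE << rest energy
    return C * math.sqrt(2.0 * ke_mev / M_PBAR_MEV)

for ke in (5.3, 0.1):
    v = speed(ke)
    print(f"{ke:>4} MeV antiproton: v ~ {v:.2e} m/s ({v / C:.3f} c)")
# The factor-of-53 drop in kinetic energy is roughly a factor-of-7 drop in speed.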



Image: The ELENA ring prior to the start of first beam in 2016. Credit: CERN.


The PUMA project aims to keep the antiprotons in storage for several weeks, allowing them to be loaded into a van and moved to a nearby ion-beam facility called ISOLDE (Isotope mass Separator On-Line), where they will be collided with radioactive ions as a way of examining exotic nuclear phenomena. The nature of the investigations is interesting — CERN has two experiments underway to study the effects of gravity on antimatter, for example — but it’s the issue of storage that draws my attention. How will CERN manage the feat?


This update from CERN lays out the essentials:


To trap the antiprotons for long enough for them to be transported and used at ISOLDE, PUMA plans to use a 70-cm-long “double-zone” trap inside a one-tonne superconducting solenoid magnet and keep it under an extremely high vacuum (10⁻¹⁷ mbar) and at cryogenic temperature (4 K). The so-called storage zone of the trap will confine the antiprotons, while the second zone will host collisions between the antiprotons and radioactive nuclei that are produced at ISOLDE but decay too rapidly to be transported and studied elsewhere.


Thus ELENA produces the antiprotons, while ISOLDE supplies the short-lived nuclei that CERN scientists intend to study, looking for new quantum phenomena that may emerge in the interactions between antiprotons and the nuclei. I’m taken with how Alexandre Obertelli (Darmstadt Technical University), who leads this work, describes it. “This project,” says the physicist, “might lead to the democratisation of the use of antimatter.” A striking concept, drawing on the fact that antimatter will be transported between two facilities.


Antiprotons traveling aboard a van to a separate site are welcome news. In today’s world, low-energy antiprotons are only being produced at CERN, but we’re improving our storage in ways that may make antimatter experimentation in other venues more practical. Bear in mind, too, that an experiment called BASE (Baryon Antibaryon Symmetry Experiment), also at CERN, has already proven that antiprotons can be kept in a storage reservoir for over a year.



Image: A potential future use for trapped antimatter. Here, a cloud of anti-hydrogen drifts towards a uranium-infused sail. Credit: Hbar Technologies, LLC/Elizabeth Lagana.


We're a long way from propulsion here, but I always point to the work of Gerald Jackson and Steve Howe (Hbar Technologies), who attack the problem from the other end. With antimatter scarce, the question becomes how to use it as a spark plug rather than a fuel, an idea the duo have explored in work for NASA's Institute for Advanced Concepts. Here, milligrams of antimatter are released from a spacecraft onto a uranium-enriched five-meter sail. For all its challenges, antimatter's promise is such that innovative concepts like these will continue to evolve. Have a look at Antimatter and the Sail for one of a number of my discussions of this concept.


