The detection of X-rays coming from Pluto is challenging scientists to understand more about the space surrounding the best-known object in the outer solar system.
While NASA’s New Horizons spacecraft was speeding toward and beyond Pluto, NASA’s Chandra X-ray Observatory—in orbit back at Earth—was aimed several times at the dwarf planet and its moons, gathering data that the two missions could compare. Each time Chandra pointed at Pluto—four times in all, from February 2014 through August 2015—it detected X-rays coming from the small planet.
That was somewhat surprising, given that Pluto—cold, rocky and without a magnetic field—has no natural mechanism for emitting X-rays.
“We’ve just detected, for the first time, X-rays coming from an object in our Kuiper Belt, and learned that Pluto is interacting with the solar wind in an unexpected and energetic fashion,” says Carey Lisse, an astrophysicist at Johns Hopkins University Applied Physics Laboratory, who led the Chandra observation team with APL’s New Horizons co-investigator Ralph McNutt. “We can expect other large Kuiper Belt objects to be doing the same.”
Pluto is the largest object in the Kuiper Belt, a vast population of small, distant bodies orbiting the sun. The belt extends from the orbit of Neptune, 30 times the distance of Earth from the sun, to about 50 times the Earth-sun distance.
Lisse, who first detected X-rays from a comet two decades ago, knew that X-rays from Pluto—though not likely—were possible. The interaction between gases surrounding planetary bodies and the solar wind—the constant stream of charged particles speeding out from the sun—can create X-rays, high-energy electromagnetic radiation with very short wavelengths.
New Horizons scientists wanted to learn about the interaction between Pluto’s atmosphere and the solar wind. The spacecraft carries an instrument designed to measure that activity up close, the aptly named Solar Wind Around Pluto (SWAP) instrument. Scientists are using SWAP data to craft a picture of Pluto with a very mild, close-in bow shock, where the solar wind first “meets” Pluto (similar to the shock wave that forms ahead of a supersonic aircraft), and a small wake behind the planet. The immediate mystery is that Chandra’s readings of the X-ray brightness are much higher than would be expected from the solar wind interacting with Pluto’s atmosphere.
“Before our observations, scientists thought it was highly unlikely that we’d detect X-rays from Pluto, causing a strong debate as to whether Chandra should observe it at all,” says coauthor Scott Wolk of the Harvard-Smithsonian Center for Astrophysics. “Prior to Pluto, the most distant solar system body with detected X-ray emission was Saturn’s rings and disk.”
Pluto releases enough gas from its surprisingly stable atmosphere to account for the observed X-rays, but in simple models of the solar wind’s intensity at Pluto’s distance, there isn’t enough solar wind flowing directly at Pluto to produce them.
Researchers suggest several possibilities for the enhanced X-ray emission from Pluto. One is that there is a much wider and longer tail of gas trailing Pluto than New Horizons found with SWAP. Another is that interplanetary magnetic fields are focusing more particles than expected from the solar wind into the region around Pluto. A third is that the low density of the solar wind in the outer solar system could allow formation of a doughnut, or torus, of neutral gas centered on Pluto’s orbit.
That the Chandra measurements don’t quite match up with New Horizons’ up-close observations is the benefit—and beauty—of an opportunity like the New Horizons flyby.
“When you have a chance at a once in a lifetime flyby like New Horizons at Pluto, you want to point every piece of glass—every telescope on and around Earth—at the target,” McNutt says. “The measurements come together and give you a much more complete picture you couldn’t get at any other time, from anywhere else.”
The study is published in the journal Icarus. APL designed, built, and operates New Horizons for NASA’s Science Mission Directorate. The Smithsonian Astrophysical Observatory controls Chandra’s science and flight operations.
Astrophysicists have proposed a clever new way to shed light on the mystery of dark matter, the undiscovered stuff believed to make up most of the universe.
The irony is they want to try to pin down the nature of one unexplained phenomenon by using another, looking for dark matter with enigmatic cosmic emanations known as “fast radio bursts.”
Scientists argue that these brief but extremely bright flashes of radio-frequency radiation can help them determine if dark matter is really a particular kind of ancient black hole.
Fast radio bursts, or FRBs, provide a direct way of detecting these black holes, which have a specific mass, says Julian Muñoz, a graduate student at Johns Hopkins University and lead author of a new paper published in the journal Physical Review Letters.
Muñoz wrote the paper along with Ely D. Kovetz, a postdoctoral fellow; Marc Kamionkowski, a professor of physics and astronomy; and Liang Dai, who recently finished studying at Johns Hopkins and is now at the Institute for Advanced Study.
The paper builds on a hypothesis offered months ago that a gravitational wave detected after a collision of black holes had actually unmasked dark matter, a substance not yet identified but believed to make up 85 percent of the mass of the universe.
The speculative study took as a point of departure the fact that the colliding objects detected by the Caltech/MIT LIGO experiment were roughly the predicted mass of “primordial” black holes. Unlike black holes from imploded stars, primordial black holes are believed to have formed in the collapse of huge expanses of gas during the birth of the universe.
The existence of primordial black holes has not been proved, but they have been suggested as a possible solution to the riddle of dark matter. With little evidence of them to examine, however, the hypothesis had not gained much traction.
The LIGO findings, however, raised the question again, especially as the black holes LIGO detected conform to the mass predicted for dark matter.
For the new study, scientists calculated how often primordial black holes would form binary pairs and collide. The team came up with a collision rate that fits LIGO data.
Key to the argument is that the black holes LIGO detected fall between 29 and 36 times the mass of the sun. The new paper considers the question of how to test the hypothesis that dark matter consists of black holes of roughly 30 solar masses.
That’s where the fast radio bursts come in. First observed only a few years ago, these powerful flashes last only fractions of a second. Their origins are unknown, but are believed to lie in galaxies outside the Milky Way.
If that’s true, Kamionkowski says, the radio waves would travel great distances before they’re observed on Earth. If a burst passed dark matter on the way, Einstein’s theory of general relativity says, it would be deflected. If it passed close enough, it could be split into two rays shooting off in the same direction—creating two images of one source.
The new study shows that if the dark matter is a black hole 30 times the mass of the sun, the two images will arrive a few milliseconds apart, one as an echo of the other.
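As an order-of-magnitude check of that delay (a rough sketch, not the paper’s calculation): the characteristic lensing timescale for a point mass is about 4GM/c³, with an additional geometry-dependent factor of order unity that is omitted here.

```python
# Characteristic gravitational-lensing time delay for a point-mass
# lens, ~4GM/c^3. The full delay includes a dimensionless factor of
# order unity that depends on the lensing geometry (omitted here).
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8            # speed of light, m/s
M_SUN = 1.989e30       # solar mass, kg

delay_s = 4 * G * (30 * M_SUN) / C**3
print(f"Characteristic delay: {delay_s * 1e3:.2f} ms")   # ~0.59 ms
```

The result lands in the sub-millisecond to millisecond range for a 30-solar-mass lens, consistent with the “few milliseconds” quoted above once the geometric factor is included.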
“The echoing of FRBs is a very direct probe of dark matter,” Muñoz says. “While gravitational waves might ‘indicate’ that dark matter is made of black holes, there are other ways to produce very-massive black holes with regular astrophysics, so it would be hard to convince oneself that we are detecting dark matter. However, gravitational lensing of fast radio bursts has a very unique signature, with no other astrophysical phenomenon that could reproduce it.”
If primordial black holes are dark matter, “it is expected that several of the thousands of FRBs to be detected in the next few years will have such echoes,” Kamionkowski says.
So far, only about 20 fast radio bursts have been recorded since 2001. But a new Canadian telescope expected to begin operation this year seems promising for spotting radio bursts.
“Once the thing is working up to their planned specifications, they should collect enough FRBs to begin the tests we propose,” Kamionkowski says. Results could be available in three to five years. The team’s proposed methodology is published in the journal Physical Review Letters.
With thousands of exoplanets discovered to date, it’s no wonder that we regularly come across “Earth-like worlds” around distant stars. But let’s face it: while it’s exciting that there are planets out there that may be able to harbour life, they are so far away that we will not be able to visit them anytime soon.
So wouldn’t it be amazing if we discovered a planet just like the Earth – in our own neighbourhood? Well, a new study led by some of my colleagues at Queen Mary University of London has finally done just that. Proxima Centauri – a “red dwarf” star that’s some 14% the size of the sun and around half the temperature – is the closest star to our solar system at 4.24 light years away. Until now, we weren’t sure if it had any planets in orbit, but the new study, published in Nature, reports the discovery of a potentially habitable world that we may actually be able to send tiny robots to in the next few decades.
The team behind the discovery is called Pale Red Dot – an observational campaign of the High Accuracy Radial velocity Planet Searcher (HARPS), an instrument on the European Southern Observatory’s 3.6-metre La Silla telescope in Chile’s Atacama Desert. It measured the spectrum of light from Proxima – the pattern of spectral lines that acts as a fingerprint revealing what the star is made of – and looked for periodic shifts in the frequency of those lines. Small shifts in this starlight can be used to work out tiny movements of the star in response to an orbiting planet’s gravitational pull.
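As a rough sketch of the idea (not the Pale Red Dot pipeline itself), the non-relativistic Doppler relation converts a measured wavelength shift into a line-of-sight stellar speed; a planet like Proxima b tugs its star back and forth by only a metre or so per second.

```python
# Doppler relation: v = c * (delta_lambda / lambda), valid for v << c.
C_M_S = 2.998e8   # speed of light, m/s

def radial_velocity(delta_lam_m: float, lam_m: float) -> float:
    """Line-of-sight speed (m/s) implied by a small wavelength shift."""
    return C_M_S * delta_lam_m / lam_m

# An illustrative shift of 2.3 femtometres on a 500 nm line:
print(radial_velocity(2.3e-15, 500e-9))   # ~1.4 m/s
```

The tiny size of that shift is why HARPS-class spectrographs, and months of repeated measurements, are needed to pull the signal out of the noise.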
Pale Red Dot’s measurements were made each night for around three months at the beginning of 2016. And the results revealed the tell-tale signature of a planet, now labelled “Proxima b” as per naming conventions.
Chances of finding life on Proxima b
From the data gathered, the team has determined quite a lot about the planet’s properties. It orbits Proxima every 11.2 days, which places it at about a tenth of Mercury’s distance from the sun. While that would be an extremely unpleasant place to be in our solar system, at Proxima that’s just within the estimated “habitable zone” – an area where it is plausible that liquid water could exist on the surface of a planet. Proxima b is also at least 30% heavier than our world and, if it is in fact rocky, its surface gravity might be only 10% more than we’re used to.
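Kepler’s third law ties that 11.2-day period to the orbital distance. A minimal check, assuming a stellar mass of about 0.12 solar masses for Proxima (a figure from the wider literature, not given in the article):

```python
# Kepler's third law in solar units: a^3 [AU] = M [M_sun] * P^2 [yr].
PROXIMA_MASS_MSUN = 0.12           # assumed stellar mass
period_yr = 11.2 / 365.25          # orbital period in years

a_au = (PROXIMA_MASS_MSUN * period_yr**2) ** (1.0 / 3.0)
mercury_a_au = 0.387               # Mercury's semi-major axis, AU

print(f"Semi-major axis: {a_au:.3f} AU")                        # ~0.048 AU
print(f"Fraction of Mercury's distance: {a_au / mercury_a_au:.2f}")
```

The answer, roughly 0.05 AU, is about an eighth of Mercury’s distance from the sun, consistent with the “tenth” quoted above.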
So what are the possibilities for life on Proxima b? That’s hard to say just yet. We don’t know if the planet has an atmosphere at all, let alone what it might be made of. Over the next few years we might be able to figure that out using Hubble or the upcoming James Webb Space Telescope.
These observations might also reveal whether any of the ingredients for life are present. But Proxima b may not be quite as hospitable as Earth. Red dwarf stars are incredibly violent, so Proxima Centauri could bombard the planet with radiation. The proximity of Proxima b to the star could also mean that the planet is “tidally locked”, with one side always facing the star in perpetual day and the other in unending darkness. This would result in extremes of temperature: hot desert and barren rock versus frozen wasteland.
The true test would be to go there. Using conventional space technology (either manned or unmanned) and some clever slingshot manoeuvres, it would take at least 15,000 years to reach Proxima Centauri. But the ambitious Starshot Project aims to send tiny robots to this star system, propelled by powerful Earth-based lasers. The project estimates that the journey would take only about 20 years at a speed of approximately 60,000 km per second (about 135 million miles per hour). Those robots could relay back data about the system, and potentially even close-up pictures of Proxima b.
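Those travel-time estimates are straightforward to reproduce; a back-of-envelope sketch:

```python
# Travel time to Proxima Centauri at the Starshot cruise speed.
LY_KM = 9.4607e12            # kilometres in one light year

distance_ly = 4.24           # distance to Proxima Centauri
speed_km_s = 60_000          # ~20% of light speed, per the Starshot estimate

seconds_per_year = 3600 * 24 * 365.25
travel_time_yr = distance_ly * LY_KM / speed_km_s / seconds_per_year
print(f"Travel time: {travel_time_yr:.0f} years")    # ~21 years
```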
It certainly seems possible that we could find something out of this world within our lifetime.
On July 24, 2016, NASA’s Interface Region Imaging Spectrograph, or IRIS, captured a mid-level solar flare: a sudden flash of bright light on the solar limb – the horizon of the sun – as seen at the beginning of this video. Solar flares are powerful explosions of radiation. During flares, a large amount of magnetic energy is released, heating the sun’s atmosphere and releasing energized particles out into space. Observing flares such as this helps the IRIS mission study how solar material and energy move throughout the sun’s lower atmosphere, so we can better understand what drives the constant changes we can see on our sun.
As the video continues, solar material cascades down to the solar surface in great loops, a flare-driven event called post-flare loops or coronal rain. This material is plasma, a gas in which positively and negatively charged particles have separated, forming a superhot mix that follows paths guided by complex magnetic forces in the sun’s atmosphere. As the plasma falls down, it rapidly cools – from millions down to a few tens of thousands of kelvins. The corona is much hotter than the sun’s surface; the details of how this happens are a mystery that scientists continue to puzzle out. Bright pixels that appear at the end of the video aren’t caused by the solar flare, but occur when high-energy particles bombard IRIS’s charge-coupled device camera – an instrument used to detect photons.
As scientists worldwide celebrated the confirmation of Albert Einstein’s prediction that gravitational waves exist, a team from Johns Hopkins University was diving into calculations based on data from LIGO, the Laser Interferometer Gravitational-Wave Observatory.
Their results suggest that it’s conceivable that dark matter–the nature of which has long been a mystery–might consist of black holes of a special type.
“We consider the possibility that the black hole binary detected by LIGO may be a signature of dark matter,” write the scientists in their summary, referring to the black hole pair as a “binary.” What follows are five pages of annotated mathematical equations showing how the researchers took the mass of the two objects LIGO detected as a point of departure.
Their bottom line: these objects could be part of the mysterious substance known to make up about 85 percent of the mass of the universe.
The astrophysicists are cautious, however.
“We are not proposing this is the dark matter,” says one of the authors, Marc Kamionkowski, a professor of physics and astronomy. “We’re not going to bet the house. It’s a plausibility argument.”
Primordial black holes
A matter of scientific speculation since the 1930s, dark matter has recently been studied with greater precision; more evidence has emerged since the 1970s, though always indirectly. While dark matter itself cannot yet be seen, its gravitational effects can be.
For example, the influence of dark matter is believed to explain inconsistencies in the rotation of nearby visible matter in galaxies.
The Johns Hopkins team, led by postdoctoral fellow Simeon Bird, was struck by the mass of the black holes detected by LIGO, an observatory that consists of two expansive L-shaped detection systems anchored to the ground. One is in Louisiana and the other in Washington State.
Black hole masses are measured in terms of multiples of our sun. The colliding objects that generated the gravitational wave detected by LIGO–a joint project of the California Institute of Technology and the Massachusetts Institute of Technology–were 36 and 29 solar masses. Those are too large to fit predictions of the size of most stellar black holes, the ultra-dense structures that form when stars collapse. But they are also too small to fit predictions for the size of supermassive black holes at the center of galaxies.
The two LIGO-detected objects do, however, fit within the expected range of mass of a postulated third type called “primordial” black holes.
Primordial black holes are believed to have formed not from dying stars but from the collapse of large expanses of gas during the birth of the universe. While their existence has not been established with certainty, primordial black holes have in the past been suggested as a possible solution to the dark matter mystery.
Birth of the universe
Because there’s so little evidence of them, though, the “dark matter is primordial black holes” hypothesis has not gained a large following among scientists.
LIGO’s findings, however, raise the prospect anew, especially as the objects detected in that experiment conform to the mass predicted for dark matter.
Scientists in the past have suggested that conditions at the birth of the universe would have produced lots of primordial black holes distributed roughly evenly in the universe, clustering in halos around galaxies. All this would make them good candidates to be dark matter.
The team calculated how often these primordial black holes would form binary pairs and, eventually, collide. Taking into account the size and elongated shape believed to characterize primordial black hole binary orbits, the team came up with a collision rate that conforms to the LIGO findings.
More observations from LIGO and other evidence would be needed to support the hypothesis, including further detections like the one announced in February. That could suggest greater abundance of objects of that signature mass.
“If you have a lot of 30-mass events, that begs an explanation,” says coauthor Ely D. Kovetz, a postdoctoral fellow in physics and astronomy. “That the discovery of gravitational waves could be connected to dark matter is creating lots of excitement among astrophysicists,” he adds.
Here’s a great, brief video we found that explains how supermassive black holes are formed, or at least one theory of how that happens. As the video explains, these black holes are paradoxically both the blackest objects in the universe and the brightest, due to the way they are formed.
If you’ve ever wondered what astronomers and cosmologists mean when they are talking about supermassive black holes, wonder no more! This video will explain it all:
Our gratitude to the PHD Comics YouTube channel for creating this awesome video!
Scientists have observed gravitational waves—ripples in the fabric of spacetime—for the second time, surpassing the expectations of LIGO researchers and clearly demonstrating the increased capabilities of Advanced LIGO.
Gravitational waves carry information about their origins and about the nature of gravity that cannot otherwise be obtained, and physicists have concluded that these gravitational waves were produced during the final moments of the merger of two black holes—14 and 8 times the mass of the sun—to produce a single, more massive spinning black hole that is 21 times the mass of the sun.
“It is very significant that these black holes were much less massive than those observed in the first detection,” says Gabriela Gonzalez, LIGO Scientific Collaboration (LSC) spokesperson and professor of physics and astronomy at Louisiana State University. “Because of their lighter masses compared to the first detection, they spent more time—about one second—in the sensitive band of the detectors. It is a promising start to mapping the populations of black holes in our universe.”
During the merger, which occurred approximately 1.4 billion years ago, a quantity of energy roughly equivalent to the mass of the sun was converted into gravitational waves. The detected signal comes from the last 27 orbits of the black holes before their merger. Based on the arrival time of the signals—with the Livingston (Louisiana) detector measuring the waves 1.1 milliseconds before the Hanford (Washington) detector—the position of the source in the sky can be roughly determined.
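The energy figure follows directly from E = mc²; a quick check with the quoted masses:

```python
# Mass-energy of the ~1 solar mass radiated as gravitational waves:
# (14 + 8) solar masses in, 21 solar masses out.
M_SUN_KG = 1.989e30      # solar mass, kg
C_M_S = 2.998e8          # speed of light, m/s

delta_m_kg = (14 + 8 - 21) * M_SUN_KG    # one solar mass radiated away
energy_j = delta_m_kg * C_M_S**2
print(f"Energy radiated: {energy_j:.2e} J")   # ~1.8e47 J
```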
“In the near future, Virgo, the European interferometer, will join a growing network of gravitational wave detectors, which work together with ground-based telescopes that follow-up on the signals,” notes Fulvio Ricci, the Virgo Collaboration spokesperson, a physicist at Istituto Nazionale di Fisica Nucleare (INFN) and professor at Sapienza University of Rome. “The three interferometers together will permit a far better localization in the sky of the signals.”
Two events in 4 months
The first detection of gravitational waves, announced on February 11, 2016, confirmed a major prediction of Albert Einstein’s 1915 general theory of relativity, and marked the beginning of the new field of gravitational-wave astronomy.
The second discovery “has truly put the ‘O’ for Observatory in LIGO,” says Caltech’s Albert Lazzarini, deputy director of the LIGO Laboratory. “With detections of two strong events in the four months of our first observing run, we can begin to make predictions about how often we might be hearing gravitational waves in the future.
“LIGO is bringing us a new way to observe some of the darkest yet most energetic events in our universe.”
“We are starting to get a glimpse of the kind of new astrophysical information that can only come from gravitational wave detectors,” says MIT’s David Shoemaker, who led the Advanced LIGO detector construction program.
Both discoveries were made possible by the enhanced capabilities of Advanced LIGO, a major upgrade that increases the sensitivity of the instruments compared to the first generation LIGO detectors, enabling a large increase in the volume of the universe probed.
“With the advent of Advanced LIGO, we anticipated researchers would eventually succeed at detecting unexpected phenomena, but these two detections thus far have surpassed our expectations,” says NSF Director France A. Córdova. “NSF’s 40-year investment in this foundational research is already yielding new information about the nature of the dark universe.”
Advanced LIGO’s next data-taking run will begin this fall. By then, further improvements in detector sensitivity are expected to allow LIGO to probe 1.5 to 2 times more of the universe’s volume. The Virgo detector is expected to join in the latter half of the upcoming observing run.
LIGO research is carried out by the LIGO Scientific Collaboration (LSC), a group of more than 1,000 scientists from universities around the United States and in 14 other countries. More than 90 universities and research institutes in the LSC develop detector technology and analyze data; approximately 250 students are strong contributing members of the collaboration. The LSC detector network includes the LIGO interferometers and the GEO600 detector.
The LIGO Observatories are funded by the National Science Foundation (NSF), and were conceived, built, and are operated by Caltech and MIT. A paper about the discovery is forthcoming in Physical Review Letters.
A search for the galaxy’s youngest planets has turned up one unlike any other—a newborn “hot Jupiter” whose outer layers are being torn away by the star it orbits every 11 hours.
“A handful of known planets are in similarly small orbits, but because this star is only 2 million years old this is one of the most extreme examples,” says Christopher Johns-Krull, an astronomer at Rice University.
“We don’t yet have absolute proof this is a planet because we don’t yet have a firm measure of the planet’s mass, but our observations go a long way toward verifying this really is a planet,” says Johns-Krull. He is the lead author of a new study in the Astrophysical Journal that makes a case for a tightly orbiting gas giant around the star PTFO8-8695 in the constellation Orion.
“We compared our evidence against every other scenario we could imagine, and the weight of the evidence suggests this is one of the youngest planets yet observed.”
The suspected planet orbits a star about 1,100 light years from Earth and is at most twice the mass of Jupiter.
“We don’t know the ultimate fate of this planet,” Johns-Krull says. “It likely formed farther away from the star and has migrated in to a point where it’s being destroyed. We know there are close-orbiting planets around middle-aged stars that are presumably in stable orbits. What we don’t know is how quickly this young planet is going to lose its mass and whether it will lose too much to survive.”
Astronomers have discovered more than 3,300 exoplanets, but almost all of them orbit middle-aged stars like the sun. On May 26, Johns-Krull and colleagues announced the discovery of “CI Tau b,” the first exoplanet found to orbit a star so young that it still retains a disk of circumstellar gas.
Finding such young planets is challenging because there are relatively few candidate stars that are young enough and bright enough to view in sufficient detail with existing telescopes. The search is further complicated by the fact that young stars are often active, with visual outbursts and dimmings, strong magnetic fields and enormous starspots that can make it appear that planets exist where they do not.
Is the planet real?
PTFO8-8695 b was identified as a candidate planet in 2012 by the Palomar Transient Factory’s Orion survey. The planet’s orbit sometimes causes it to pass between its star and our line of sight from Earth, so astronomers can use a technique known as the transit method to determine both the presence and approximate radius of the planet based on how much the star dims when the planet “transits,” or passes in front of the star.
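The core relation behind the transit method is worth making explicit: the fractional dimming (the transit depth) equals the square of the planet-to-star radius ratio, so the planet’s size follows directly from photometry. A minimal illustration with generic numbers, not the PTFO8-8695 measurement:

```python
# Transit depth = (R_planet / R_star)^2, so the radius ratio is the
# square root of the observed fractional dimming.
def radius_ratio_from_depth(depth: float) -> float:
    """Planet radius as a fraction of the stellar radius."""
    return depth ** 0.5

# A 1% dip implies a planet one-tenth the star's radius;
# a 0.16% dip implies a planet 4% the star's radius.
print(radius_ratio_from_depth(0.01))     # 0.1
print(radius_ratio_from_depth(0.0016))   # 0.04
```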
“In 2012, there was no solid evidence for planets around 2 million-year-old stars,” says Lisa Prato, an astronomer at the Lowell Observatory. “Light curves and variations of this star presented an intriguing technique to confirm or refute such a planet.
“The other thing that was very intriguing about it was that the orbital period was only 11 hours. That meant we wouldn’t have to come back night after night after night, year after year after year. We could potentially see something happen in one night. So that’s what we did. We just sat on the star for a whole night.”
A spectroscopic analysis of the light coming from the star revealed excess emission in the H-alpha spectral line, a type of light emitted from highly energized hydrogen atoms. The team found that the H-alpha light is emitted in two components, one that matches the very small motion of the star and another that seems to orbit it.
“We saw one component of the hydrogen emission start on one side of the star’s emission and then move over to the other side,” Prato says. “When a planet transits a star, you can determine the orbital period of the planet and how fast it is moving toward you or away from you as it orbits. So, we said, ‘If the planet is real, what is the velocity of the planet relative to the star?’ And it turned out that the velocity of the planet was exactly where this extra bit of H-alpha emission was moving back and forth.”
Transit observations revealed that the planet is only about 3 to 4 percent the size of the star, but the H-alpha emission from the planet appears to be almost as bright as the emission coming from the star, Johns-Krull says.
“There’s no way something confined to the planet’s surface could produce that effect. The gas has to be filling a much larger region where the gravity of the planet is no longer strong enough to hold on to it. The star’s gravity takes over, and eventually the gas will fall onto the star.”
Other researchers from Rice, the California Institute of Technology, the University of Texas at Austin, NASA, and Spain’s National Institute of Aerospace Technology are coauthors of the work, which was funded by NASA and the National Science Foundation and is published in the Astrophysical Journal.
We all intuitively understand the basics of time. Every day we count its passage and use it to schedule our lives.
We also use time to navigate our way to the destinations that matter to us. In school we learned that speed and time will tell us how far we went in traveling from point A to point B; with a map we can pick the most efficient route – simple.
But what if point A is the Earth, and point B is Mars – is it still that simple? Conceptually, yes. But to actually do it we need better tools – much better tools.
At NASA’s Jet Propulsion Laboratory, I’m working to develop one of these tools: the Deep Space Atomic Clock, or DSAC for short. DSAC is a small atomic clock that could be used as part of a spacecraft navigation system. It will improve accuracy and enable new modes of navigation, such as unattended or autonomous operation.
In its final form, the Deep Space Atomic Clock will be suitable for operations in the solar system well beyond Earth orbit. Our goal is to develop an advanced prototype of DSAC and operate it in space for one year, demonstrating its use for future deep space exploration.
Speed and time tell us distance
To navigate in deep space, we measure the transit time of a radio signal traveling back and forth between a spacecraft and one of our transmitting antennae on Earth (usually one of NASA’s Deep Space Network complexes located in Goldstone, California; Madrid, Spain; or Canberra, Australia).
We know the signal is traveling at the speed of light, a constant at approximately 300,000 km/sec (186,000 miles/sec). Then, from how long our “two-way” measurement takes to go there and back, we can compute distances and relative speeds for the spacecraft.
For instance, an orbiting satellite at Mars is an average of 250 million kilometers from Earth. The time the radio signal takes to travel there and back (called its two-way light time) is about 28 minutes. We can measure the travel time of the signal and then relate it to the total distance traversed between the Earth tracking antenna and the orbiter to better than a meter, and the orbiter’s relative speed with respect to the antenna to within 0.1 mm/sec.
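The arithmetic behind those numbers is simple; here is a minimal sketch with illustrative values, not mission software:

```python
# Two-way light time: a radio signal travels to the spacecraft and
# back at the speed of light, so round-trip time maps directly to
# distance.
C_KM_PER_S = 299_792.458          # speed of light, km/s

def two_way_light_time(distance_km: float) -> float:
    """Round-trip travel time (seconds) for a radio signal."""
    return 2.0 * distance_km / C_KM_PER_S

def distance_from_light_time(round_trip_s: float) -> float:
    """Invert the measurement: one-way distance (km) from round-trip time."""
    return round_trip_s * C_KM_PER_S / 2.0

# The article's average Earth-Mars orbiter distance:
rt = two_way_light_time(250e6)
print(f"Two-way light time: {rt / 60:.1f} minutes")   # ~27.8 minutes
```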
We collect the distance and relative speed data over time, and when we have a sufficient amount (for a Mars orbiter this is typically two days) we can determine the satellite’s trajectory.
Measuring time, way beyond Swiss precision
Fundamental to these precise measurements are atomic clocks. By measuring very stable and precise frequencies of light emitted by certain atoms (examples include hydrogen, cesium, rubidium and, for DSAC, mercury), an atomic clock can regulate the time kept by a more traditional mechanical (quartz crystal) clock. It’s like a tuning fork for timekeeping. The result is a clock system that can be ultra stable over decades.
The precision of the Deep Space Atomic Clock relies on an inherent property of mercury ions – they transition between neighboring energy levels at a frequency of exactly 40.5073479968 GHz. DSAC uses this property to measure the error in a quartz clock’s “tick rate,” and, with this measurement, “steers” it towards a stable rate. DSAC’s resulting stability is on par with ground-based atomic clocks, gaining or losing less than a microsecond per decade.
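That “microsecond per decade” figure can be restated as the fractional frequency stability usually quoted for atomic clocks; a quick back-of-envelope conversion:

```python
# Convert "less than a microsecond of drift per decade" into a
# dimensionless fractional frequency stability.
SECONDS_PER_DECADE = 10 * 365.25 * 24 * 3600

drift_s = 1e-6                                   # one microsecond
fractional_stability = drift_s / SECONDS_PER_DECADE
print(f"Fractional stability: {fractional_stability:.1e}")   # ~3.2e-15
```

A clock that gains or loses only a few parts in 10^15 is what makes the sub-meter ranging described below possible.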
Continuing with the Mars orbiter example, the error that the Deep Space Network’s ground-based atomic clocks contribute to the orbiter’s two-way light time measurement is on the order of picoseconds, amounting to only fractions of a meter of the overall distance error. Likewise, the clocks’ contribution to error in the orbiter’s speed measurement is a minuscule fraction of the overall error (1 micrometer/sec out of the 0.1 mm/sec total).
The distance and speed measurements are collected by the ground stations and sent to teams of navigators who process the data using sophisticated computer models of spacecraft motion. They compute a best-fit trajectory that, for a Mars orbiter, is typically accurate to within 10 meters (about the length of a school bus).
Sending an atomic clock to deep space
The ground clocks used for these measurements are the size of a refrigerator and operate in carefully controlled environments – definitely not suitable for spaceflight. In comparison, DSAC, even in its current prototype form as seen above, is about the size of a four-slice toaster. By design, it’s able to operate well in the dynamic environment aboard a deep-space exploring craft.
One key to reducing DSAC’s overall size was miniaturizing the mercury ion trap. Shown in the figure above, it’s about 15 cm (6 inches) in length. The trap confines the plasma of mercury ions using electric fields. Then, by applying magnetic fields and external shielding, we provide a stable environment where the ions are minimally affected by temperature or magnetic variations. This stable environment enables measuring the ions’ transition between energy states very accurately.
The DSAC technology doesn’t really consume anything other than power. All these features together mean we can develop a clock that’s suitable for very long duration space missions.
Because DSAC is as stable as its ground counterparts, spacecraft carrying DSAC would not need to turn signals around to get two-way tracking. Instead, the spacecraft could send the tracking signal to the Earth station or it could receive the signal sent by the Earth station and make the tracking measurement on board. In other words, traditional two-way tracking can be replaced with one-way, measured either on the ground or on board the spacecraft.
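The one-way scheme described above reduces to simple arithmetic: if the transmit time is known and the receiving clock is stable enough to be trusted, the difference between receive and transmit timestamps is the one-way light time, which gives distance directly. A minimal sketch; the function and values here are illustrative, not a real flight-software interface:

```python
# Sketch of a one-way range measurement: with synchronized, stable clocks,
# distance = speed of light * (receive time - transmit time).

C = 299_792_458  # m/s

def one_way_range(transmit_time_s: float, receive_time_s: float) -> float:
    """Distance implied by a one-way light time; assumes both clocks agree."""
    return C * (receive_time_s - transmit_time_s)

# Example: a signal sent at t = 0 and received 834 seconds later
print(f"{one_way_range(0.0, 834.0) / 1e9:.0f} million km")  # 250 million km
```

The whole scheme hinges on the receiving clock’s stability, which is exactly what DSAC provides.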
So what does this mean for deep space navigation? Broadly speaking, one-way tracking is more flexible, scalable (since it could support more missions without building new antennas) and enables new ways to navigate.
DSAC advances us beyond what’s possible today
The Deep Space Atomic Clock has the potential to solve a bunch of our current space navigation challenges.
Places like Mars are “crowded” with many spacecraft: Right now, there are five orbiters competing for radio tracking. Two-way tracking requires spacecraft to “time-share” the resource. But with one-way tracking, the Deep Space Network could support many spacecraft simultaneously without expanding the network. All that’s needed are capable spacecraft radios coupled with DSAC.
One-way uplink transmissions from the Deep Space Network are very high-powered. They can be received by smaller spacecraft antennas with greater fields of view than the typical high-gain, focused antennas used today for two-way tracking. This change allows the mission to conduct science and exploration activities without interruption while still collecting high-precision data for navigation and science. As an example, use of one-way data with DSAC to determine the gravity field of Europa, an icy moon of Jupiter, can be achieved in a third of the time it would take using traditional two-way methods with the flyby mission currently under development by NASA.
Collecting high-precision one-way data on board a spacecraft means the data are available for real-time navigation. Unlike two-way tracking, there is no delay with ground-based data collection and processing. This type of navigation could be crucial for robotic exploration; it would improve accuracy and reliability during critical events – for example, when a spacecraft inserts into orbit around a planet. It’s also important for human exploration, when astronauts will need accurate real-time trajectory information to safely navigate to distant solar system destinations.
Countdown to DSAC launch
The DSAC mission is a hosted payload on Surrey Satellite Technology’s Orbital Test Bed (OTB) spacecraft. The DSAC Demonstration Unit, together with an ultra-stable quartz oscillator and a GPS receiver with antenna, will enter low-altitude Earth orbit after launch aboard a SpaceX Falcon Heavy rocket in early 2017.
While it’s on orbit, DSAC’s space-based performance will be measured in a yearlong demonstration, during which Global Positioning System tracking data will be used to determine precise estimates of OTB’s orbit and DSAC’s stability. We’ll also be running a carefully designed experiment to confirm DSAC-based orbit estimates are as accurate or better than those determined from traditional two-way data. This is how we’ll validate DSAC’s utility for deep space one-way radio navigation.
In the late 1700s, navigating the high seas was forever changed by John Harrison’s development of the H4 “sea watch.” H4’s stability enabled seafarers to accurately and reliably determine longitude, which until then had eluded mariners for thousands of years. Today, exploring deep space requires traveling distances that are orders of magnitude greater than the lengths of oceans, and demands tools with ever more precision for safe navigation. DSAC is at the ready to respond to this challenge.
Todd Ely, Principal Investigator on Deep Space Atomic Clock Technology Demonstration Mission, Jet Propulsion Laboratory, NASA
Satellites used to be the exclusive playthings of rich governments and wealthy corporations. But increasingly, as space becomes more democratized, these sophisticated technologies are coming within reach of ordinary people. Just like drones before them, miniature satellites are beginning to fundamentally transform our conceptions of who gets to do what up above our heads.
As a recent report from the National Academy of Sciences highlights, these satellites hold tremendous potential for making satellite-based science more accessible than ever before. However, as the cost of getting your own satellite in orbit plummets, the risks of irresponsible use grow.
The question here is no longer “Can we?” but “Should we?” What are the potential downsides of having a slice of space densely populated by equipment built by people not traditionally labeled as “professionals”? And what would the responsible and beneficial development and use of this technology actually look like?
Some of the answers may come from a nonprofit organization that has been building and launching amateur satellites for nearly 50 years.
The technology we’re talking about
Having your own personal satellite launched into orbit might sound like an idea straight out of science fiction. But over the past few decades a unique class of satellites has been created that fits the bill: CubeSats.
The “Cube” here simply refers to the satellite’s shape. The most common CubeSat (the so-called “1U” satellite) is a 10 cm (roughly 4 inches) cube, so small that a single CubeSat could easily be mistaken for a paperweight on your desk. These mini, modular satellites can fit in a launch vehicle’s formerly “wasted space.” Multiples can be deployed in combination for more complex missions than could be achieved by one CubeSat alone.
Within their compact bodies these minute satellites are able to house sensors and communications receivers/transmitters that enable operators to study the Earth from space, as well as space around the Earth.
They’re primarily designed for Low Earth Orbit (LEO) – an easily accessible region of space from around 200 to 800 miles above the Earth, where human-tended missions like the Hubble Space Telescope and the International Space Station (ISS) hang out. But they can attain more distant orbits; NASA plans for most of its future Earth-escaping payloads (to the moon and Mars especially) to carry CubeSats.
Because they’re so small and light, it costs much less to get a CubeSat into Earth orbit than a traditional communication or GPS satellite. For instance, a research group here at Arizona State University recently claimed their developmental “femtosats” (especially small CubeSats) could cost as little as US$3,000 to put in orbit. This decrease in cost is allowing researchers, hobbyists and even elementary school groups to put simple instruments into LEO, by piggybacking onto rocket launches, or even having them deployed from the ISS.
NASA, the National Reconnaissance Office and even Boeing have all launched and operated CubeSats. There are more than 130 currently operational in orbit. The NASA Educational Launch of Nano Satellite (ELaNa) program, which offers free launches for educational groups and science missions, is now open to U.S. nonprofit corporations as well.
Clearly, satellites are not just for rocket scientists anymore.
Thinking inside the box
The National Academy of Sciences report emphasizes CubeSats’ importance in scientific discovery and the training of future space scientists and engineers. Yet it also acknowledges that widespread deployment of LEO CubeSats isn’t risk-free.
The greatest concern the authors raise is space debris – pieces of “junk” that orbit the Earth, with the potential to cause serious damage if they collide with operational units, including the ISS.
More broadly, the report authors focus on factors that might impede greater use of CubeSat technologies. These include regulations around earth-space radio communications, possible impacts of International Traffic in Arms Regulations (which govern import and export of defense-related articles and services in the U.S.), and potential issues around extra-terrestrial contamination.
But what about the rest of us? How can we be sure that hobbyists and others aren’t launching their own “spy” satellites, or (intentionally or not) placing polluting technologies into LEO, or even deploying low-cost CubeSat networks that could be hijacked and used nefariously?
As CubeSat researchers are quick to point out, these are far-fetched scenarios. But they suggest that now’s the time to ponder unexpected and unintended possible consequences of more people than ever having access to their own small slice of space. In an era when you can simply buy a CubeSat kit off the shelf, how can we trust the satellites over our heads were developed with good intentions by people who knew what they were doing?
Some “expert amateurs” in the satellite game could provide some inspiration for how to proceed responsibly.
Guidance from some experienced amateurs
In 1969, the Radio Amateur Satellite Corporation (AMSAT) was created in order to foster ham radio enthusiasts’ participation in space research and communication. It continued the efforts, begun in 1961, by Project OSCAR – a U.S.-based group that built and launched the very first nongovernmental satellite just four years after Sputnik.
As an organization of volunteers, AMSAT was putting “amateur” satellites in orbit decades before the current CubeSat craze. And over time, its members have learned a thing or two about responsibility.
Here, open-source development has been a central principle. AMSAT has a philosophy of open-sourcing everything: technical data on all aspects of its satellites is made fully available to everyone in the organization and, when possible, to the public. According to a member of the team responsible for FOX 1-A, AMSAT’s first CubeSat:
This means that it would be incredibly difficult to sneak something by us … there’s no way to smuggle explosives or an energy emitter into an amateur satellite when everyone has access to the designs and implementation.
However, they’re more cautious about sharing info with nonmembers, as the organization guards against others developing the ability to hijack and take control of their satellites.
This form of “self-governance” is possible within long-standing amateur organizations that, over time, are able to build a sense of responsibility to community members, as well as society more generally.
How does responsible development evolve?
But what happens when new players emerge, who don’t have deep roots within the existing culture?
Hobbyist and student “new kids on the block” are gaining access to technologies without being part of a longstanding amateur establishment. They are still constrained by funders, launch providers and a tapestry of regulations – all of which rein in what CubeSat developers can and cannot do. But there is a danger they’re ill-equipped to think through potential unintended consequences.
What these unintended consequences might be is admittedly far from clear. Certainly, CubeSat developers would argue it’s hard to imagine these tiny satellites causing substantial physical harm. Yet we know innovators can be remarkably creative with taking technologies in unexpected directions. Think of something as seemingly benign as the cellphone – we have microfinance and text-based social networking at one end of the spectrum, improvised explosive devices at the other.
This is where a culture of social responsibility around CubeSats becomes important – not simply for ensuring that physical risks are minimized (and good practices are adhered to), but also to engage with a much larger community in anticipating and managing less obvious consequences of the technology.
This is not an easy task. Yet the evidence from AMSAT and other areas of technology development suggests that responsible amateur communities can and do emerge around novel technologies.
The challenge here, of course, is ensuring that what an amateur community considers to be responsible, actually is. Here’s where there needs to be a much wider public conversation that extends beyond government agencies and scientific communities to include students, hobbyists, and anyone who may potentially stand to be affected by the use of CubeSat technology.