Great Barrier Reef bleaching would be almost impossible without climate change

By Andrew King, University of Melbourne; David Karoly, University of Melbourne; Mitchell Black, University of Melbourne; Ove Hoegh-Guldberg, The University of Queensland, and Sarah Perkins-Kirkpatrick, UNSW Australia.

The worst bleaching event on record has affected corals across the Great Barrier Reef in the last few months. As of the end of March, a whopping 93% of the reef has experienced bleaching. This event has led scientists and high-profile figures such as Sir David Attenborough to call for urgent action to protect the reef from annihilation.

Coral Bleaching at Lizard Island on the Great Barrier Reef. © XL Catlin Seaview Survey

There is indisputable evidence that climate change is harming the reef. Yet, so far, no one has assessed how much climate change might be contributing to bleaching events such as the one we have just witnessed.

Unusually warm sea surface temperatures are strongly associated with bleaching. Because climate models can simulate these warm sea surface temperatures, we can investigate how climate change is altering extreme warm conditions across the region.

Daily sea surface temperature anomalies in March 2016 show unusual warmth around much of Australia. Author provided using OSTIA data from UK Met Office Hadley Centre.

We examined the Coral Sea region (shown above) to look at how climate change is altering sea surface temperatures in an area that is experiencing recurring coral bleaching. This area has recorded a big increase in temperatures over the past century, with March 2016 being the warmest on record.

March sea surface temperatures were the highest on record this year in the Coral Sea, beating the previous 2015 record. Source: Bureau of Meteorology.

Examining the human influence

To find out how climate change is changing the likelihood of coral bleaching, we can look at how warming has affected the likelihood of extremely hot March sea temperature records. To do so, we use climate model simulations with and without human influences included.

If we see more very hot March months in simulations with a human influence, then we can say that climate change is having an effect, and we can attribute that change to the human impact on the climate.

This method is similar to analyses we have done for land regions, such as our investigations of recent Australian weather extremes.

We found that climate change has dramatically increased the likelihood of very hot March months like that of 2016 in the Coral Sea. We estimate that the human influence on the climate has made hot March months like this one at least 175 times more likely.
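The comparison behind that number can be sketched in a few lines of Python. The ensembles, distributions and threshold below are invented for illustration only, not the study's actual model output; the point is the method: count how often the extreme occurs in each world, then take the ratio.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative ensembles of March SST anomalies (in deg C) for the Coral Sea:
# "natural" runs exclude human influence; "all forcings" runs include it.
natural = rng.normal(loc=0.0, scale=0.4, size=10_000)
all_forcings = rng.normal(loc=0.9, scale=0.4, size=10_000)

threshold = 1.0  # an extreme March, like 2016

def exceedance_prob(anoms, threshold):
    """Fraction of simulated months at or above the threshold."""
    return np.mean(anoms >= threshold)

p_nat = exceedance_prob(natural, threshold)
p_all = exceedance_prob(all_forcings, threshold)

# The probability (risk) ratio: how many times more likely the extreme
# becomes once human influence is included.
risk_ratio = p_all / p_nat
```

With real model ensembles the same ratio is what supports a statement like "at least 175 times more likely".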

The decaying El Niño event may also have affected the likelihood of bleaching events. However, we found no substantial influence for the Coral Sea region as a whole. Sea surface temperatures in the Coral Sea can be warmer than normal for different reasons, including changes in ocean currents (often related to La Niña events) and increased sunshine duration (generally associated with El Niño conditions).

Overall, this means that the influence of El Niño on the Coral Sea as a whole is weak. There have been severe bleaching events in past El Niño, neutral and La Niña years.

We estimate that climate change has increased temperatures in the hottest March months by just over 1℃. As the effects of climate change worsen we would expect this warming effect to increase, as has been pointed out elsewhere.

March 2016 was clearly extreme in the observed weather record, but using climate models we estimate that by 2034 temperature anomalies like March 2016 will be normal. Thereafter events like March 2016 will be cooler than average.

Overall, we’re observing rapid warming in the Coral Sea region that can only be understood if we include human influences. The human effect on the region through climate change is clear and it is strengthening. Surface temperatures like those in March 2016 would be extremely unlikely to occur in a world without humans.

As the seas warm because of our effect on the climate, bleaching events in the Great Barrier Reef and other areas within the Coral Sea are likely to become more frequent and more devastating.

Action on climate change may reduce the likelihood of future bleaching events, although not for a few decades as we have already built in warming through our recent greenhouse gas emissions.

A note on peer review

We have analysed this coral bleaching event in near-real time, which means the results we present here have not been through peer review.

Recently, we have started undertaking these event attribution analyses immediately after the extreme event has occurred or even before it has finished. As we are using a method that has been previously peer-reviewed, we can have confidence in our results.

It is important, however, that these studies go through a peer-review process and these results will be submitted soon. In the meantime we have published a short methods document which provides more detail.

Our results are also consistent with previous studies.

Andrew King, Climate Extremes Research Fellow, University of Melbourne; David Karoly, Professor of Atmospheric Science, University of Melbourne; Mitchell Black, PhD Candidate, University of Melbourne; Ove Hoegh-Guldberg, Director, Global Change Institute, The University of Queensland, and Sarah Perkins-Kirkpatrick, Research Fellow, UNSW Australia

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

To fight Zika, let’s genetically modify mosquitoes – the old-fashioned way

By Jeffrey Powell, Yale University.

The near panic caused by the rapid spread of the Zika virus has brought new urgency to the question of how best to control mosquitoes that transmit human diseases. Aedes aegypti mosquitoes bite people across the globe, spreading three viral diseases: dengue, chikungunya and Zika. There are no proven effective vaccines against these viruses, and no specific medications to treat patients who contract them.

Mosquito control is the only way, at present, to limit them. But that’s no easy task. Classical methods of control such as insecticides are falling out of favor – they can have adverse environmental effects as well as increase insecticide resistance in remaining mosquito populations. New mosquito control methods are needed – now.

The time is ripe, therefore, to explore a long-held dream of vector biologists, including me: to use genetics to stop or limit the spread of mosquito-borne diseases. While gene editing technologies have advanced dramatically in the last few decades, it is my belief that we’ve overlooked older, tried and true methods that could work just as well on these insects. We can accomplish the goal of producing mosquitoes incapable of transmitting human pathogens using the same kinds of selective breeding techniques people have been using for centuries on other animals and plants.

Technicians from Oxitec inspect genetically modified Aedes aegypti mosquitoes in Campinas, Brazil. Paulo Whitaker/Reuters

Techniques on the table

One classic strategy for reducing insect populations has been to flood populations with sterile males – usually produced using irradiation. When females in the target population mate with these males, they produce no viable offspring – hopefully crashing population numbers.

The modern twist on this method has been to generate transgenic males that carry a dominant lethal gene that essentially makes them sterile; offspring sired by these males die late in the larval stage, eliminating future generations. This method has been promulgated by the biotech company Oxitec and is currently used in Brazil.

Rather than just killing mosquitoes, a more effective and lasting strategy would be to genetically change them so they can no longer transmit a disease-causing microbe.

The powerful new CRISPR gene editing technique could be used to make transgenes (genetic material from another species) take over a wild population. This method works well in mosquitoes and is potentially a way to “drive” transgenes into populations. CRISPR could help quickly spread a gene that confers resistance to transmission of a virus – what scientists call refractoriness.

But CRISPR has been controversial, especially as applied to human beings, because the transgenes it inserts into an individual can be passed on to its offspring. No doubt using CRISPR to create and release genetically modified mosquitoes into nature would stir up controversy. The U.S. Director of National Intelligence, James Clapper, has gone so far as to dub CRISPR a potential weapon of mass destruction.

But are transgenic technologies necessary to genetically modify mosquito populations?

Examples of successful artificial selection of various traits through the years. In the center is a cartoon of the ‘block’ scientists would like to select for in mosquitoes so they can’t pass on the virus.
Jeff Powell, Author provided

Selective breeding the old-fashioned way

Genetic modification of populations has been going on for centuries with great success. This has occurred for almost all commercially useful plants and animals that people use for food or other products, including cotton and wool. Selective breeding can produce immense changes in populations based on naturally occurring variation within the species.

Artificial selection using this natural variation has proven effective over and over again, especially in the agricultural world. By choosing parents with desirable traits (chickens with increased egg production, sheep with softer wool) for several consecutive generations, a “true breeding” strain can be produced that will always have the desired traits. These may look very different from the ancestor – think of all the breeds of dogs derived from an ancestor wolf.

To date, only limited work of this sort has been done on mosquitoes. But it does show that it’s possible to select for mosquitoes with reduced ability to transmit human pathogens. So rather than introducing transgenes from other species, why not use the genetic variation naturally present in mosquito populations?

Deriving strains of mosquitoes through artificial selection has several advantages over transgenic approaches.

  • All the controversy and potential risks surrounding transgenic organisms (GMOs) are avoided. We’re only talking about increasing the prevalence in the population of the naturally occurring mosquito genes we like.
  • Selected mosquitoes derived directly from the target population would likely be more competitive when released back to their corner of the wild. Because the new refractory strain that can’t transmit the virus carries only genes from the target population, it would be specifically adapted to the local environment. Laboratory manipulations to produce transgenic mosquitoes are known to lower their fitness.
  • By starting with the local mosquito population, scientists could select specifically for refractoriness to the virus strain infecting people at the moment in that locality. For example, there are four different “varieties” of the dengue virus called serotypes. To control the disease, the selected mosquitoes would need to be refractory to the serotype active in that place at that time.
  • It may be possible to select for strains of mosquitoes that are unable to transmit multiple viruses. Because the same Aedes aegypti mosquito species transmits dengue, chikungunya and Zika, people living in places that have this mosquito are simultaneously at risk for all three diseases. While it has not yet been demonstrated, there is no reason to think that careful, well-designed selective breeding couldn’t develop mosquitoes unable to spread all medically relevant viruses.

Fortunately, Ae. aegypti is the easiest mosquito to rear in captivity and has a generation time of about 2.5 weeks. So unlike classical plant and animal breeders dealing with organisms with generations in years, 10 generations of selection of this mosquito would take only months.
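A toy model makes the point about that short generation time. The starting allele frequency and fitness advantage below are made-up numbers for illustration, not measurements from any real mosquito population; the model is simple haploid selection, far cruder than an actual breeding program.

```python
GENERATION_WEEKS = 2.5  # approximate Aedes aegypti generation time

def select(freq, advantage, generations):
    """Allele frequency after repeated rounds of selection, assuming a
    simple haploid model where carriers leave (1 + advantage) times as
    many offspring per generation."""
    for _ in range(generations):
        favored = freq * (1 + advantage)
        freq = favored / (favored + (1 - freq))
    return freq

start = 0.05  # a naturally occurring refractory variant, initially rare
final = select(start, advantage=1.0, generations=10)
weeks = 10 * GENERATION_WEEKS  # 25 weeks: months of work, not years
```

Even under this crude model, strong selection takes a rare variant most of the way to fixation in ten generations, which for this mosquito is about half a year.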

Researchers are working out mass rearing techniques for Aedes mosquitoes – their generation time is only 2.5 weeks. IAEA Imagebank, CC BY-NC-ND

This is not to imply there may not be obstacles in using this approach. Perhaps the most important is that the genes that make it hard for these insects to transmit disease may also make individual insects weaker or less healthy than the target natural population. Eventually the lab-bred mosquitoes and their offspring could be out-competed and fade from the wild population. We might need to continuously release refractory mosquitoes – that is, the ones that aren’t good at transmitting the disease in question – to overcome selection against the desirable refractory genes.

And mosquito-borne pathogens themselves evolve. Viruses may mutate to evade any genetically modified mosquito’s block. Any plan to genetically modify mosquito populations needs to have contingency plans in place for when viruses or other pathogens evolve. New strains of mosquitoes can be quickly selected to combat the new version of the virus – no costly transgenic techniques necessary.

Today, plant and animal breeders are increasingly using new gene manipulation techniques to further improve economically important species. But this is only after traditional artificial selection has been taken about as far as it can to improve breeds. Many mosquito biologists are proposing to go directly to the newest fancy transgenic methodologies that have never been shown to actually work in natural populations of mosquitoes. They are skipping over a proven, cheaper and less controversial approach that should at least be given a shot.

Jeffrey Powell, Professor, Yale University

This article was originally published on The Conversation. Read the original article.

Featured Photo Credit: Jaime Saldarriaga/Reuters

Now, Check Out:

How Astronomers Determined Whether This Object is an Exoplanet or a Brown Dwarf

Our galaxy may have billions of free-floating planets that exist in the darkness of space without any companion planets or even a host sun.

But scientists say it's possible that some of those lonely worlds aren't actually planets but rather lightweight star-like objects called brown dwarfs.

Take, for example, the newfound object called WISEA 1147. It's estimated to be roughly five to 10 times the mass of Jupiter.

WISEA 1147 is one of the few free-floating worlds where astronomers can begin to point to its likely origins as a brown dwarf and not a planet. Because the object was found to be a member of the TW Hydrae family of very young stars, astronomers know that it is also very young—only 10 million years old.

A sky map shows the location of the TW Hydrae family, or association, of stars, which lies about 175 light-years from Earth and is centered in the Hydra constellation. (Credit: NASA/JPL-Caltech)

And because planets require at least 10 million years to form, and probably longer to get themselves kicked out of a star system, WISEA 1147 is likely a brown dwarf. Brown dwarfs form like stars but lack the mass to fuse atoms at their cores and shine with starlight.

“With continued monitoring, it may be possible to trace the history of WISEA 1147 to confirm whether or not it formed in isolation,” says Adam Schneider of the University of Toledo in Ohio, lead author of a new study in the Astrophysical Journal.


Of the billions of possible free-floating worlds thought to populate our galaxy, some may be very low-mass brown dwarfs, while others may in fact be bona fide planets, kicked out of nascent solar systems. At this point, the fraction of each population remains unknown.

Tracing the origins of free-floating worlds, and determining whether they are planets or brown dwarfs, is a difficult task, precisely because they are so isolated.

“We are at the beginning of what will become a hot field—trying to determine the nature of the free-floating population and how many are planets versus brown dwarfs,” says study coauthor Davy Kirkpatrick of NASA’s Infrared Processing and Analysis Center, or IPAC, at the California Institute of Technology.

Astronomers found WISEA 1147 by sifting through images taken of the entire sky by NASA’s Wide-field Infrared Survey Explorer (WISE) in 2010, and the Two Micron All Sky Survey, or 2MASS, about a decade earlier. They were looking for nearby, young brown dwarfs.

One way to tell if something lies nearby is to check whether it has moved significantly relative to other stars over time. The closer an object, the more it will appear to move against a backdrop of more distant stars. When data from the two sky surveys, taken about 10 years apart, are compared, nearby objects jump out.
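The geometry behind this can be sketched numerically. The 20 km/s transverse speed below is a made-up but plausible figure for illustration; the takeaway is that the apparent motion scales inversely with distance, so a nearby object drifts across the sky far faster than a distant background star.

```python
import math

def proper_motion_arcsec_per_yr(v_km_s, distance_ly):
    """Apparent angular drift of an object moving at v_km_s transverse
    to the line of sight, seen from distance_ly light-years away."""
    KM_PER_LY = 9.4607e12          # kilometers in one light-year
    SEC_PER_YR = 3.156e7           # seconds in one year
    RAD_TO_ARCSEC = 180 / math.pi * 3600
    angle_rad = v_km_s * SEC_PER_YR / (distance_ly * KM_PER_LY)
    return angle_rad * RAD_TO_ARCSEC

near = proper_motion_arcsec_per_yr(20, 100)     # a nearby object
far = proper_motion_arcsec_per_yr(20, 10_000)   # a distant background star
```

Over the roughly 10-year baseline between 2MASS and WISE, the nearby object in this sketch would shift by more than an arcsecond while the background star barely moves, which is exactly the signature the survey comparison picks out.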


Finding low-mass objects and brown dwarfs is also well suited to WISE and 2MASS, both of which detect infrared light. Brown dwarfs aren’t bright enough to be seen with visible-light telescopes, but their heat signatures light up when viewed in infrared images.

The brown dwarf WISEA 1147 was brilliantly “red” in the 2MASS images (where the color red had been assigned to longer infrared wavelengths), which means that it’s dusty and young.

“The features on this one screamed out, ‘I’m a young brown dwarf,’” says Schneider.

After more analysis, the astronomers realized that this object belongs to the TW Hydrae association, which is about 150 light-years from Earth and only about 10 million years old. That makes WISEA 1147, with a mass between about five and 10 times that of Jupiter, one of the youngest and lowest-mass brown dwarfs ever found.

Interestingly, a second, very similar low-mass member of the TW Hydrae association was announced just days later (2MASS 1119-11) by a separate group led by Kendra Kellogg of Western University in Ontario, Canada.

Another reason that astronomers want to study these isolated worlds is that they resemble planets but are easier to study. Planets around other stars, called exoplanets, are barely perceptible next to their brilliant stars. By studying objects like WISEA 1147, which has no host star, astronomers can learn more about their compositions and weather patterns.

“We can understand exoplanets better by studying young and glowing low-mass brown dwarfs,” says Schneider. “Right now, we are in the exoplanet regime.”

Other authors of the study include James Windsor and Michael Cushing of the University of Toledo, and Ned Wright of UCLA, who was also the principal investigator of the WISE mission. Their study is published in the Astrophysical Journal.

Source: Republished from Futurity as a derivative work under the Attribution 4.0 International license.

Featured Image Credit:  NASA/JPL-Caltech

Now, Check Out:

This Robot ‘Mermaid’ can Grab Shipwreck Treasures [Video]

A robot called OceanOne with artificial intelligence and haptic feedback systems gives human pilots an unprecedented ability to explore the depths of the oceans.

Oussama Khatib held his breath as he swam through the wreck of La Lune, over 300 feet below the Mediterranean. The flagship of King Louis XIV sank here in 1664, 20 miles off the southern coast of France, and no human had touched the ruins—or the countless treasures and artifacts the ship once carried—in the centuries since.

With guidance from a team of skilled deep-sea archaeologists who had studied the site, Khatib, a professor of computer science at Stanford, spotted a grapefruit-size vase. He hovered precisely over the vase, reached out, felt its contours and weight, and stuck a finger inside to get a good grip. He swam over to a recovery basket, gently laid down the vase, and shut the lid. Then he stood up and high-fived the dozen archaeologists and engineers who had been crowded around him.

This entire time Khatib had been sitting comfortably in a boat, using a set of joysticks to control OceanOne, a humanoid diving robot outfitted with human vision, haptic force feedback and an artificial brain—in essence, a virtual diver.

OceanOne, the "robot mermaid" on a dive. Credit: Frederic Osada, Teddy Seguin/DRASSM

When the vase returned to the boat, Khatib was the first person to touch it in hundreds of years. It was in remarkably good condition, though it showed every day of its time underwater: The surface was covered in ocean detritus, and it smelled like raw oysters. The team members were overjoyed, and when they popped bottles of champagne, they made sure to give their heroic robot a celebratory bath.

The expedition to La Lune was OceanOne’s maiden voyage. Based on its astonishing success, Khatib hopes that the robot will one day take on highly skilled underwater tasks too dangerous for human divers, as well as open up a whole new realm of ocean exploration.

“OceanOne will be your avatar,” Khatib says. “The intent here is to have a human diving virtually, to put the human out of harm’s way. Having a machine that has human characteristics that can project the human diver’s embodiment at depth is going to be amazing.”


The concept for OceanOne was born from the need to study coral reefs deep in the Red Sea, far below the comfortable range of human divers. No existing robotic submarine can dive with the skill and care of a human diver, so OceanOne was conceived and built from the ground up, a successful marriage of robotics, artificial intelligence, and haptic feedback systems.

OceanOne looks something like a robo-mermaid. Roughly five feet long from end to end, its torso features a head with stereoscopic vision that shows the pilot exactly what the robot sees, and two fully articulated arms. The “tail” section houses batteries, computers, and eight multi-directional thrusters.

The body looks nothing like conventional boxy robotic submersibles, but it’s the hands that really set OceanOne apart. Each fully articulated wrist is fitted with force sensors that relay haptic feedback to the pilot’s controls, so the human can feel whether the robot is grasping something firm and heavy, or light and delicate. (Eventually, each finger will be covered with tactile sensors.)

The bot’s brain also reads the force data and makes sure that its hands keep a firm grip on objects without damaging them by squeezing too tightly. In addition to exploring shipwrecks, this makes OceanOne adept at delicate work such as coral reef research and precisely placing underwater sensors.
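A minimal sketch of that kind of force regulation, assuming a simple proportional controller and a toy linear contact model (neither is OceanOne's actual control law, and all the gains and limits below are invented for illustration):

```python
def update_grip(cmd, measured_force, target=2.0, max_force=5.0, gain=0.1):
    """One step of a proportional grip controller: tighten toward the
    target contact force, but never command beyond the safety limit."""
    cmd += gain * (target - measured_force)
    return min(max(cmd, 0.0), max_force)

# Simulate closing on a stiff object, where measured contact force
# grows with the grip command (toy linear contact model).
cmd = 0.0
for _ in range(50):
    measured = 1.5 * cmd
    cmd = update_grip(cmd, measured)
```

The loop settles at a grip command that holds the target contact force, and the clamp guarantees the hand can never squeeze past the safety limit, which is the behavior the passage describes.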

“You can feel exactly what the robot is doing,” Khatib says. “It’s almost like you are there; with the sense of touch you create a new dimension of perception.”

The pilot can take control at any moment, but most frequently won’t need to lift a finger. Sensors throughout the robot gauge current and turbulence, automatically activating the thrusters to keep the robot in place. And even as the body moves, quick-firing motors adjust the arms to keep its hands steady as it works. Navigation relies on perception of the environment, from both sensors and cameras, and these data run through smart algorithms that help OceanOne avoid collisions. If it senses that its thrusters won’t slow it down quickly enough, it can quickly brace for impact with its arms, an advantage of a humanoid body build.


The humanoid form also means that when OceanOne dives alongside actual humans, its pilot can communicate through hand gestures during complex tasks or scientific experiments. Ultimately, though, Khatib designed OceanOne with an eye toward getting human divers out of harm’s way. Every aspect of the robot’s design is meant to allow it to take on tasks that are either dangerous—deep-water mining, oil-rig maintenance, or underwater disaster situations like the Fukushima Daiichi power plant—or simply beyond the physical limits of human divers.

“We connect the human to the robot in very intuitive and meaningful way. The human can provide intuition and expertise and cognitive abilities to the robot,” Khatib says. “The two bring together an amazing synergy. The human and robot can do things in areas too dangerous for a human, while the human is still there.”

Khatib was forced to showcase this attribute while recovering the vase. As OceanOne swam through the wreck, it wedged itself between two cannons. Firing the thrusters in reverse wouldn’t extricate it, so Khatib took control of the arms, motioned for the bot to perform a sort of pushup, and OceanOne was free.

Next month, OceanOne will return to the Stanford campus, where Khatib and his students will continue iterating on the platform. The prototype robot is a fleet of one, but Khatib hopes to build more units, which would work in concert during a dive.

The expedition to La Lune was made possible in large part thanks to the efforts of Michel L’Hour, the director of underwater archaeology research in France’s Ministry of Culture. Previous remote studies of the shipwreck conducted by L’Hour’s team made it possible for OceanOne to navigate the site. Vincent Creuze of the Université de Montpellier in France commanded the support underwater vehicle that provided third-person visuals of OceanOne and held its support tether at a safe distance.

In addition to Stanford, Meka Robotics and the King Abdullah University of Science and Technology (KAUST) in Saudi Arabia supported the robot’s development.

Source: Republished from Futurity as a derivative work under the Attribution 4.0 International license.

Featured Photo Credit: Frederic Osada, Teddy Seguin/DRASSM

Now, Check Out:

How ancient warm periods can help predict future climate change

By Gordon Inglis, University of Bristol and Eleni Anagnostou, University of Southampton.

Several more decades of increased carbon dioxide emissions could lead to melting ice sheets, mass extinctions and extreme weather becoming the norm. We can’t yet be certain of the exact impacts, but we can look to the past to predict the future.

We could start with the last time Earth experienced CO2 levels comparable to those expected in the near future, a period 56m to 34m years ago known as the Eocene.

The Eocene began as a period of extreme warmth around 10m years after the last dinosaurs died. Alligators lived in the Canadian Arctic while palm trees grew along the East Antarctic coastline. Over time, the planet gradually cooled, until the Eocene was brought to a close with the formation of a large ice sheet on Antarctica.

During the Eocene, carbon dioxide (CO2) concentrations in the atmosphere were much higher than today, with estimates usually ranging between 700 and 1,400 parts per million (ppm). As these values are similar to those anticipated by the end of this century (420 to 935ppm), scientists are increasingly using the Eocene to help predict future climate change.

We’re particularly interested in the link between carbon dioxide levels and global temperature, often referred to as “equilibrium climate sensitivity” – the temperature change that results from a doubling of atmospheric CO2, once fast climate feedbacks (such as water vapour, clouds and sea ice) have had time to act.

To investigate climate sensitivity during the Eocene we generated new estimates of CO2 throughout the period. Our study, written with colleagues from the Universities of Bristol, Cardiff and Southampton, is published in Nature.

Reconstruction of the 40m year old planktonic foraminifer Acarinina mcgowrani. Richard Bizley and Paul Pearson, Cardiff University, CC BY

As we can’t directly measure the Eocene’s carbon dioxide levels, we have to use “proxies” preserved within sedimentary rocks. Our study utilises planktonic foraminifera, tiny marine organisms which record the chemical composition of seawater in their shells. From these fossils we can figure out the acidity level of the ocean they lived in, which is in turn affected by the concentration of atmospheric CO2.

We found that CO2 levels approximately halved during the Eocene, from around 1,400ppm to roughly 770ppm, which explains most of the sea surface cooling that occurred during the period. This supports previously unsubstantiated theories that carbon dioxide was responsible for the extreme warmth of the early Eocene and that its decline was responsible for the subsequent cooling.

We then estimated global mean temperatures during the Eocene (again from proxies such as fossilised leaves or marine microfossils) and accounted for changes in vegetation, the position of the continents, and the lack of ice sheets. This yields a climate sensitivity value of 2.1°C to 4.6°C per doubling of CO2. This is similar to that predicted for our own warm future (1.5 to 4.5°C per doubling of CO2).
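The arithmetic linking those numbers is just the standard logarithmic CO2-forcing relationship: warming scales with the number of doublings of CO2. A quick check in Python (the function below is the textbook formula, not code from the study):

```python
import math

def warming_from_co2(sensitivity_per_doubling, co2_start_ppm, co2_end_ppm):
    """Temperature change (deg C) implied by a CO2 change, assuming the
    standard logarithmic forcing relationship: warming is proportional
    to the number of doublings of CO2."""
    doublings = math.log2(co2_end_ppm / co2_start_ppm)
    return sensitivity_per_doubling * doublings

# Eocene CO2 fell from ~1,400 to ~770 ppm: about -0.86 doublings,
# so a negative result here means cooling.
cooling_low = warming_from_co2(2.1, 1400, 770)
cooling_high = warming_from_co2(4.6, 1400, 770)
```

With the study's sensitivity range of 2.1°C to 4.6°C per doubling, the roughly halved CO2 implies around 2°C to 4°C of cooling, consistent with the sea surface cooling described above.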

Our work reinforces previous findings which looked at sensitivity in more recent time intervals. It also gives us confidence that our Eocene-like future is well mapped out by current climate models.

Fossil foraminifera from Tanzania – their intricate shells capture details of the ocean 33-50m years ago. Paul Pearson, Cardiff University, CC BY

Rich Pancost, a paleoclimate expert and co-author on both studies, explains: “Most importantly, the collective research into Earth history reveals that the climate can and has changed. And consequently, there is little doubt from our history that transforming fossil carbon underground into carbon dioxide in the air – as we are doing today – will significantly affect the climate we experience for the foreseeable future.”

Our work also has implications for other elements of the climate system. Specifically, what is the impact of higher CO2 and a warmer climate upon the water cycle? A recent study investigating environmental change during the early Eocene – the warmest interval of the past 65m years – found an increase in global precipitation and evaporation rates and an increase in heat transport from the equator to the poles. The latter is consistent with leaf fossil evidence from the Arctic which suggests that high precipitation rates were common.

However, changes in the water cycle are likely to vary between regions. For example, low to mid latitudes likely became drier overall, but with more intense, seasonal rainfall events. Although very few studies have investigated the water cycle of the Eocene, understanding how this operates during past warm climates could provide insights into the mechanisms which will govern future changes.

Gordon Inglis, Postdoctoral Research Associate in Organic Geochemistry, University of Bristol and Eleni Anagnostou, Postdoctoral Research Fellow, Ocean and Earth Science, University of Southampton

This article was originally published on The Conversation. Read the original article.

Featured Image Credit:  Eocene fauna of North America (mural), Jay Matternes / Smithsonian Museum

Now, Check Out:

This Ancient Society may have Invented Authority [Video]

Authoritarianism is an issue of special gravitas this year, given claims of heavy-handedness in US presidential politics and widening conflicts against dictatorships in Syria and elsewhere. But why do we as a people let a single person or small group make decisions for everyone else?

A 3,000-year-old archaeological site in the Andes of Peru may hold the answer, says John Rick, associate professor of anthropology at Stanford University.

“More than 5,000 and certainly 10,000 years ago, nowhere in the world was anyone living under a concerted authority. Today we expect that. It is the essence of our organization. ‘Take me to your leader. Who’s in charge here?’ So where did that come from?”

Currently a fellow at the Stanford Humanities Center, Rick is drawing together a large body of evidence from more than two decades of fieldwork at the ancient site of Chavín de Huántar, where that culture developed from roughly 900 BCE to 200 BCE.

He will present his research on Chavín and how authority-minded systems arose in human society in an upcoming book, Innovation, Religion and the Development of the Andean Formative Period, which will explore the role of religion in the shaping of hierarchical societies in the New World, especially the Andes.


Chavín was a religious center run by an elaborate priesthood. Located north of Lima, in the Andes Mountains, it sat at the mouth of two large rivers that once held religious significance for the region. During its existence, the Chavín priesthood subjected visitors to an incredible range of routines, some of which involved manipulating light, water, and sound.

The priesthood deliberately worked with underground spaces, architectural stonework, a system of water canals, psychoactive drugs, and animal iconography to augment its demonstrations of authority.

“I was fascinated with the evidence we have for this idea of manipulation of people who went through ritual experiences in these structures,” he says.

The priesthood sought to increase its level of authority, Rick says. “They needed to create a new world, one in which the settings, objects, actions, and senses all argue for the presence of intrinsic authority—both from the religious leaders and from a realm of greater powers they portray themselves as related to.”

Prior archaeological research on Chavín has suggested that the site attracted people because it was a cult of devotion. However, Rick and colleagues believe something else was going on.

They found very little evidence that common people were involved in worshipping at Chavín. Instead they have surmised that visitors were elite pilgrims, local leaders from far-flung parts of the Central Andes. These people were looking for justification to elevate their own status and their positions of control in society.

After their experiences at Chavín, these visitors could more adroitly disseminate messages of authority to their own people, Rick says. “They’re basically in a process of developing a hierarchy, a real social structure that has strong political power at the top.”


Architecture was critical to producing this effect. The researchers estimate that the site contains two kilometers of labyrinthine, gallery-like underground spaces, which were clearly designed to confine and manipulate those who entered them.

This was a removal from one world and the creation of another. As a result, the rituals were dramatic and effective in changing ideas about the nature of human authoritative relationships.

Stone was another key element. Leaders at Chavín often recorded their actions by engraving their deeds in stone. While other ancient sites used wood, papier-mâché or textiles, those at Chavín revealed their strategies in the ground and rock itself.

The priesthood also manipulated visitors with psychoactive drugs. Evidence is found in the portrayal of psychoactive plants in stone engravings, with clear illustrations of paraphernalia and the drugs’ effects on humans.

Rick believes Chavín was a place where human psychology was explored and experiments were being conducted to test how people would react to certain stimuli.

Yet another tool of authoritarian manipulation was water, channelled through a sophisticated hydraulic system of underground canals at the site. Despite the danger of flooding that water posed, the Chavín priesthood made a point of controlling it visibly.

“They were playing with this stuff,” Rick says. “They were using water pressure 3,000 years ago to elevate water, to bring it up where it shouldn’t be. They’re using it as an agent to wash away offerings,” he says.

The water control is a powerful demonstration of human agency over nature, Rick says. He conjures up a picture of what it must have been like for pilgrims to visit Chavín and its dark, underground spaces, undergo strange experiences, and observe the seeming abilities of priests to wield supernatural powers.

Ancient places like Chavín reflect a major change in the way human beings would treat each other, Rick says. Such places gave rise to “complex, highly authoritarian, communications-driven, sometimes charismatically led societies” in human civilization.

Source: Republished from as a derivative work under the Attribution 4.0 International license. Original article posted to Futurity by .

Featured Image Credit: inyucho, via flickr, CC BY 2.0

Now, Check Out:

Chernobyl: new tomb will make site safe for 100 years [Video]

By Claire Corkhill, University of Sheffield.

Thirty years after the Chernobyl nuclear accident, there’s still a significant threat of radiation from the crumbling remains of Reactor 4. But an innovative, €1.5 billion super-structure is being built to prevent further releases, giving an elegant engineering solution to one of the ugliest disasters known to man.

Since the disaster that directly killed at least 31 people and released large quantities of radiation, the reactor has been encased in a tomb of steel-reinforced concrete. Usually buildings of this kind can be protected from corrosion and environmental damage through regular maintenance. But because of the hundreds of tonnes of highly radioactive material inside the structure, maintenance hasn’t been possible.

Water dripping from the sarcophagus roof has become radioactive and leaks into the soil on the reactor floor, and birds have been sighted in the roof space. Every day, the risk of the sarcophagus collapsing increases, along with the risk of another widespread release of radioactivity into the environment.

Thanks to the sarcophagus, up to 80% of the original radioactive material left after the meltdown remains in the reactor. If it were to collapse, some of the melted core, a lava-like material called corium, could be ejected into the surrounding area in a dust cloud, as a mixture of highly radioactive vapour and tiny particles blown in the wind. The key substances in this mixture are iodine-131, which has been linked to thyroid cancer, and cesium-137, which can be absorbed into the body, with effects ranging from radiation sickness to death depending on the quantity inhaled or ingested.
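The very different long-term behaviour of these two isotopes comes down to their half-lives, via the standard decay law N/N₀ = 2^(−t/T½). A minimal sketch, using well-established half-life figures (about 8 days for iodine-131 and about 30 years for caesium-137; these values are assumptions drawn from standard references, not from the article):

```python
# Fraction of a radioactive isotope remaining after time t, using the
# half-life form of the exponential-decay law: N/N0 = 2^(-t / T_half).
def remaining_fraction(t_days: float, half_life_days: float) -> float:
    return 2.0 ** (-t_days / half_life_days)

# Standard half-life values (assumed, not stated in the article):
IODINE_131_HALF_LIFE = 8.02          # days
CESIUM_137_HALF_LIFE = 30.17 * 365   # ~30 years, expressed in days

# Thirty years after the 1986 accident:
t = 30 * 365
print(f"iodine-131 remaining: {remaining_fraction(t, IODINE_131_HALF_LIFE):.1e}")
print(f"cesium-137 remaining: {remaining_fraction(t, CESIUM_137_HALF_LIFE):.2f}")
```

This is why iodine-131 dominates the danger in the weeks after an accident, while caesium-137, with roughly half of it still present three decades on, drives the long-term containment problem.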

Metal tomb. Arne Müseler/Wikimedia, CC BY-SA

With repair of the existing sarcophagus deemed impossible because of the radiation risks, a new structure designed to last 100 years is now being built. This “new safe confinement” will not only safely contain the radioactivity from Reactor 4, but also enable the sarcophagus and the reactor building within to be safely taken apart. This is essential if potential future releases of radioactivity, 100 years or more into the future, are to be prevented.

Construction of the steel arch-shaped structure began in 2010 and is currently scheduled for completion in 2017. At 110 metres tall with a span of 260 metres, the confinement structure will be large enough to house St Paul’s Cathedral or two Statues of Liberty on top of one another. But the major construction challenges are not down to size alone.

The close-fitting arch structure is designed to completely entomb Reactor 4. It will be hermetically sealed to prevent the release of radioactive particles should the structures beneath collapse. Triple-layered, radiation-resistant panels made from polycarbonate-coated stainless steel will clad the arch to provide shielding that will be crucial for allowing people to safely return to the area in ongoing resettlement programmes.

Innovative engineering solutions

Operating a building site at the world’s most radioactively hazardous site has inevitably led to a number of engineering innovations. Before work could start, a construction site was prepared 300 metres west of the reactor building, so workers could build the structure without being exposed to radiation. Hundreds of tonnes of radioactive soil had to be removed from the area, and great slabs of concrete laid to provide extra radiation protection.

Inconveniently for a 110 metre-high construction, working above 30 metres is impossible – the higher you go, the closer you get to the top of the exposed reactor core, where radiation dose rates are high enough to pose a significant threat to life. The solution? Build from the top down. After each section of the structure was built, starting with the top of the arch, it was hoisted into the air, 30 metres at a time, and then horizontal supports were added. This was done using jacks that were once used to raise the Russian nuclear submarine, the Kursk, from the bottom of the Barents Sea. The process was repeated until the giant structure reached 110 metres into the air. The two halves of the arch were also constructed separately and have recently been joined together.

The next challenge is to make sure the confinement structure lasts 100 years. In the old sarcophagus, “roof rain” condensation formed when the inside surface of the roof was cooler than the atmosphere outside, corroding any metal structures it came into contact with. To prevent this in the new structure, a complex ventilation system will heat the inner part of the confinement structure roof to avoid any temperature or humidity differences.

Finally, a state-of-the-art solution is required to move the confinement structure, which weighs more than 30,000 tonnes, from its construction site to the final resting place above Reactor 4. The giant building will slide 300 metres along rail tracks, furnished with specially developed Teflon bearings, which will minimise friction and allow accurate positioning.
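A rough back-of-envelope estimate shows why low-friction bearings matter at this scale. Taking the sliding force as F = μmg, with μ ≈ 0.04 a typical textbook value for PTFE (Teflon) on steel (an assumption for illustration, not a figure from the project):

```python
# Rough estimate of the horizontal force needed to slide the arch,
# using F = mu * m * g. The friction coefficients are assumed textbook
# values, not numbers from the Chernobyl project.
MASS_KG = 30_000 * 1000   # "more than 30,000 tonnes"
G = 9.81                  # gravitational acceleration, m/s^2
MU_PTFE = 0.04            # assumed PTFE-on-steel sliding friction
MU_STEEL = 0.5            # assumed dry steel-on-steel, for comparison

force_newtons = MU_PTFE * MASS_KG * G
print(f"~{force_newtons / 1e6:.1f} MN to keep the structure sliding on Teflon")
print(f"~{MU_STEEL * MASS_KG * G / 1e6:.0f} MN without the low-friction bearings")
```

On these assumptions the bearings cut the required force by more than an order of magnitude, which is what makes accurate positioning of a 30,000-tonne building feasible at all.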

Future safety

Once the new structure finally confines the radiation, deconstruction of the previous sarcophagus and Reactor 4 within can begin bit by bit. This will be done using a remotely operated heavy-duty crane and robotic tools suspended from the new confinement roof. However, the high levels of radioactivity may damage these remote systems, much like the robots that entered the stricken Fukushima core and “died trying” to capture the damage on camera.

At the very least, building a new confinement structure buys the Ukrainian government more time to develop new radiation-resistant clean-up solutions and undertake the clean-up as safely as possible, all while the radioactive material is decaying. This is an enforced lesson in patience. Only constant innovation in engineering, robotics and materials will allow nuclear disaster sites like Chernobyl and Fukushima to be made safe, once and for all.

Claire Corkhill, Research Fellow in nuclear waste disposal, University of Sheffield

This article was originally published on The Conversation. Read the original article.

Featured Photo Credit: Tim Porter/Wikimedia, CC BY-SA

Now, Check Out:

Massive Star Clusters Zapping Earth with Most Cosmic Rays

Most of the cosmic rays arriving at Earth from our galaxy come from nearby clusters of massive stars.

The finding is based on new observations from the Cosmic Ray Isotope Spectrometer (CRIS), an instrument aboard NASA’s Advanced Composition Explorer (ACE) spacecraft.

The distance between the galactic cosmic rays’ point of origin and Earth is limited by the survival of a very rare type of cosmic ray that acts like a tiny clock. The cosmic ray is a radioactive isotope of iron, 60Fe, which has a half-life of 2.6 million years. In that time, half of these iron nuclei decay into other elements.

In the 17 years CRIS has been in space, it has detected about 300,000 galactic cosmic-ray nuclei of ordinary iron, but just 15 of the radioactive 60Fe.
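The "clock" works because the surviving fraction of 60Fe falls off as N/N₀ = 2^(−t/T½) over the travel time t. A short sketch of the arithmetic (the 1% survival threshold below is an illustrative choice, not a figure from the study):

```python
import math

# 60Fe as a cosmic-ray clock: the fraction surviving a travel time t
# follows N/N0 = 2^(-t / T_half), with T_half = 2.6 million years.
T_HALF_MYR = 2.6

def surviving_fraction(t_myr: float) -> float:
    return 2.0 ** (-t_myr / T_HALF_MYR)

# Travel time after which only 1% of the original 60Fe would remain
# (an illustrative threshold, not a number from the paper):
t_1pct = T_HALF_MYR * math.log2(100)
print(f"1% of the 60Fe survives after ~{t_1pct:.0f} million years")
```

Because any appreciable surviving 60Fe implies a travel time of no more than a few half-lives, detecting even 15 such nuclei points to sources within a few million light-years' worth of travel time, i.e. relatively nearby and recent supernovae.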

“Our detection of radioactive cosmic-ray iron nuclei is a smoking gun indicating that there has been a supernova in the last few million years in our neighborhood of the galaxy,” says Robert Binns, research professor of physics at Washington University in St. Louis and lead author of the paper published in Science.

“The new data also show the source of galactic cosmic rays is nearby clusters of massive stars, where supernova explosions occur every few million years,” says Martin Israel, professor of physics at Washington University and a coauthor of the paper.

The radioactive iron is believed to be produced in core-collapse supernovae—violent explosions that mark the death of massive stars—which occur primarily in clusters of massive stars called OB associations.

There are more than 20 such associations close enough to Earth to be the source of the cosmic rays, including subgroups of the nearby Scorpius-Centaurus association such as Upper Scorpius (83 stars), Upper Centaurus Lupus (134 stars), and Lower Centaurus Crux (97 stars). Because of their size and proximity, these are the likely sources of the radioactive iron nuclei CRIS detected, the scientists say.


The 60Fe results add to a growing body of evidence that galactic cosmic rays are created and accelerated in OB associations.

Earlier CRIS measurements of nickel and cobalt isotopes show there must be a delay of at least 100,000 years between creation and acceleration of galactic cosmic-ray nuclei, Binns says.


This time lag also means that the nuclei synthesized in a supernova are not accelerated by that supernova, but by the shock wave from a second nearby supernova, Israel says, one that occurs quickly enough that a substantial fraction of the 60Fe from the first supernova has not yet decayed.

Together, these time constraints mean the second supernova must occur between 100,000 and a few million years after the first supernova. Clusters of massive stars are one of the few places in the universe where supernovae occur often enough, and close enough together, to bring this off.

“So our observation of 60Fe lends support to the emerging model of cosmic-ray origin in OB associations,” Israel adds.


Although the supernovae in a nearby OB association that created the 60Fe CRIS observed happened long before people were around to observe suddenly brightening stars (novae), they also may have left traces in Earth’s oceans and on the moon.

In 1999, astrophysicists proposed that a supernova explosion in Scorpius might explain the presence of excessive radioactive iron in 2.2 million-year-old ocean crust. Two research papers recently published in Nature bolster this case.

One research group examined 60Fe deposition worldwide and argued that there might have been a series of supernova explosions, not just one. The other used computer simulations of the evolution of the Scorpius-Centaurus association in an attempt to nail down the sources of the 60Fe.

Lunar samples also show elevated levels of 60Fe consistent with supernova debris arriving at the moon about 2 million years ago. And here, too, there is recent corroboration. A paper just published in Physical Review Letters describes an analysis of nine core samples brought back by the Apollo crews.


Cosmic rays were discovered before World War I but named in the 1920s by the famous physicist Robert Millikan, who called them “rays” because he thought they were a form of high-energy electromagnetic radiation.

But in the early 1930s, researchers measured cosmic-ray intensity at 69 locations around the Earth. Variations in the intensity with magnetic latitude showed that cosmic rays were deflected by the Earth’s magnetic field, and must therefore be charged particles (the nuclei of atoms stripped of their electrons) rather than electromagnetic radiation.

Of these nuclei, 90 percent are hydrogen nuclei (protons), 9 percent are helium nuclei, and only 1 percent are the nuclei of heavier elements. But that 1 percent provides the best clues to how the particles are created.

Although energetic particles coming from our sun are sometimes called cosmic rays, astrophysicists prefer to call these comparatively low-energy particles SEPs, or solar energetic particles.

They reserve the term “cosmic ray” for particles coming from outside our solar system, either from our galaxy or beyond. The source of some extremely rare particles is still unknown.

The team’s research is published in Science.

Source: Republished from as a derivative work under the Attribution 4.0 International license. Original article posted to Futurity by .

Featured Image Credit: Judy Schmidt, flickr, CC BY 2.0

Now, Check Out:

New Study: Volcanoes Ruled Out as Dinosaur Extinction Cause

Volcanic eruptions did not lead to the extinction of the dinosaurs, according to a new study. The research also suggests Earth’s oceans can absorb large amounts of carbon dioxide—as long as it’s released gradually over an extremely long time.

Scientists have long argued over the cause of the Cretaceous-Palaeogene extinction event, during which three-quarters of all plant and animal species, including the dinosaurs, went extinct roughly 65 million years ago. Most researchers favor the idea that a catastrophic, sudden mechanism such as an asteroid hit triggered the mass die-off, while others say a gradual rise in CO2 emissions from volcanoes in what is now India may have been the cause.

Now, they say they may have a more definitive answer.

“One way that has been suggested that volcanism could have caused extinction is by ocean acidification, where the ocean absorbs CO2 and becomes more acidic as a result, just as it is doing today with fossil fuel-derived CO2,” says Michael Henehan, a postdoctoral associate at Yale University.

“What we wanted to do was gather all the evidence that’s been collected from ocean sediments from this time and add a few new records of our own, and consider what evidence there is for ocean acidification at this time.”

For the study, researchers analyzed sediments from the deep sea, looking for signs of dissolution that would indicate more acidic oceans. The researchers found that the onset of volcanism did cause a brief ocean acidification event. Critically, though, the pH drop caused by CO2 release was effectively neutralized well before the mass extinction event.

“Combining this with temperature observations that others have made about this time, we think there is a conclusive case that although Deccan volcanism caused a short-lived global warming event and some ocean acidification, the effects were cancelled out by natural carbon cycling processes long before the mass extinction that killed the dinosaurs,” Henehan says.

This is not to say that CO2 released by volcanoes did not prompt climate effects, the researchers note: rather, the gases were released over such a long timescale that their effect could not have caused a sudden species die-off.

The study also has implications for understanding modern climate change. The researchers say it adds to an increasing body of work suggesting that restricting CO2 release to much slower rates and lower levels over thousands of years can allow the oceans to adapt and avoid the worst possible consequences of ocean acidification.

“However, if you cause big disturbances over rapid timescales, closer to the timescales of current human, post-industrial CO2 release, you can produce not only big changes in oceanic ecosystems, but also profound and long-lasting changes in the way the ocean stores and regulates CO2,” Henehan says.

The work also suggests that disruption of marine ecosystems can have profound effects on Earth’s climate.

“The direct effects of an asteroid impact, like massive tsunamis or widespread fires, would have lasted only for a relatively short time,” says postdoctoral associate and coauthor Donald Penman.

“However, the loss of ecologically important groups of organisms following impact caused changes to the global carbon cycle that took millions of years to recover. This could be seen as a warning for our future: We need to be careful not to drive key functional organisms to extinction, or we could be feeling the effects for a very long time.”

Other researchers from Yale, the University of St. Andrews, and the University of Bristol are coauthors of the work, which is published in Philosophical Transactions of the Royal Society B.

Source: Republished from as a derivative work under the Attribution 4.0 International license. Original article posted to Futurity by .

Featured Image Credit:  Dave Kryzaniak/Flickr

Now, Check Out:

We don’t talk much about nanotechnology risks anymore, but that doesn’t mean they’re gone

By Andrew Maynard, Arizona State University.

Back in 2008, carbon nanotubes – exceptionally fine tubes made up of carbon atoms – were making headlines. A new study from the U.K. had just shown that, under some conditions, these long, slender fiber-like tubes could cause harm in mice in the same way that some asbestos fibers do.

As a collaborator in that study, I was at the time heavily involved in exploring the risks and benefits of novel nanoscale materials. Back then, there was intense interest in understanding how materials like this could be dangerous, and how they might be made safer.

Fast forward to a few weeks ago, when carbon nanotubes were in the news again, but for a very different reason. This time, there was outrage not over potential risks, but because the artist Anish Kapoor had been given exclusive rights to a carbon nanotube-based pigment – claimed to be one of the blackest pigments ever made.

The worries that even nanotech proponents had in the early 2000s about possible health and environmental risks – and their impact on investor and consumer confidence – seem to have evaporated.

So what’s changed?

Artist Anish Kapoor is known for the rich pigments he uses in his work. Andrew Winning/Reuters

Carbon nanotube concerns, or lack thereof

The pigment at the center of the Kapoor story is a material called Vantablack S-VIS, developed by the British company Surrey NanoSystems. It’s a carbon nanotube-based spray paint so black that surfaces coated with it reflect next to no light.

The original Vantablack was a specialty carbon nanotube coating designed for use in space, to reduce the amount of stray light entering space-based optical instruments. It was this far remove from people that made Vantablack seem pretty safe: whatever its toxicity, the chances of it getting into someone’s body were vanishingly small. The material wasn’t necessarily harmless; the risk of exposure was simply minuscule.

In contrast, Vantablack S-VIS is designed to be used where people might touch it, inhale it, or even (unintentionally) ingest it.

To be clear, Vantablack S-VIS is not comparable to asbestos – the carbon nanotubes it relies on are too short, and too tightly bound together to behave like needle-like asbestos fibers. Yet its combination of novelty, low density and high surface area, together with the possibility of human exposure, still raise serious risk questions.

For instance, as an expert in nanomaterial safety, I would want to know how readily the spray – or bits of material dislodged from surfaces – can be inhaled or otherwise get into the body; what these particles look like; what is known about how their size, shape, surface area, porosity and chemistry affect their ability to damage cells; whether they can act as “Trojan horses” and carry more toxic materials into the body; and what is known about what happens when they get out into the environment.

These are all questions that are highly relevant to understanding whether a new material might be harmful if used inappropriately. And yet they’re notably absent from media coverage of Vantablack S-VIS. The original, remote use seemed safe, yet it prompted people to wonder about impacts; the new use appears riskier, yet it has started no conversations about safety. What happened to public interest in possible nanotech risks?

Federal funding around nanotech safety

By 2008, the U.S. federal government was plowing nearly US$60 million a year into researching the health and environmental impacts of nanotechnology. This year, U.S. federal agencies are proposing to invest $105.4 million in research to understand and address potential health and environmental risks of nanotechnology. This is a massive 80 percent increase compared to eight years ago, and reflects ongoing concerns that there’s still a lot we don’t know about the potential risks of purposely designed and engineered nanoscale materials.

It could be argued that investment in nanotechnology safety research has achieved one of its original intentions, by boosting public confidence in the safety of the technology. Yet ongoing research suggests that, even if they are less visible than they once were, public concerns are still very much alive.

I suspect the reason for the lack of public interest is simple: nanotechnology safety isn’t hitting the public radar because journalists and other commentators just don’t realize they should be shining a spotlight on it.

Responsibility around risk

With the U.S.’s current level of investment, it seems reasonable to assume there are many scientists across the country who know a thing or two about nanotechnology safety – and who, if confronted with an application designed to spray carbon nanotubes onto surfaces that might subsequently be touched, rubbed or scraped, might hesitate to give it an unqualified thumbs up.

Let’s hear what the researchers know and are concerned about.
Surrey NanoSystems, CC BY-ND

Yet in the case of Vantablack S-VIS, there’s been a conspicuous absence of such nanotechnology safety experts in media coverage.

This lack of engagement isn’t too surprising – publicly commenting on emerging topics is something we rarely train, or even encourage, our scientists to do.

And yet, where technologies are being commercialized at the same time their safety is being researched, there’s a need for clear lines of communication between scientists, users, journalists and other influencers. Otherwise, how else are people to know what questions they should be asking, and where the answers might lie?

In 2008, initiatives existed such as those at the Center for Biological and Environmental Nanotechnology (CBEN) at Rice University and the Project on Emerging Nanotechnologies (PEN) at the Woodrow Wilson International Center for Scholars (where I served as science advisor) that took this role seriously. These and similar programs worked closely with journalists and others to ensure an informed public dialogue around the safe, responsible and beneficial uses of nanotechnology.

In 2016, there are no comparable programs, to my knowledge – both CBEN and PEN came to the end of their funding some years ago.

This, I would argue, needs to change. Developers and consumers alike have a greater need than ever to know what they should be asking to ensure responsible nanotech products, and to avoid unanticipated harm to health and the environment.

Some of the onus here lies with scientists themselves to make appropriate connections with developers, consumers and others. But to do this, they need the support of the institutions they work in, as well as the organizations who fund them. This is not a new idea – there is of course a long and ongoing debate about how to ensure academic research can benefit ordinary people.

Yet the fact remains that new technologies all too easily slip under the radar of critical public evaluation, simply because few people know what questions they should be asking about risks and benefits.

Talking publicly about what’s known and what isn’t about potential risks – and the questions people might want to ask – goes beyond maintaining investor and consumer confidence, which, to be honest, depends more on a perception of safety than on actually dealing with risk. Rather, it gets to the very heart of what it means to engage in socially responsible research and innovation.

Andrew Maynard, Director, Risk Innovation Lab, Arizona State University

This article was originally published on The Conversation. Read the original article.

Featured Photo Credit: Surrey NanoSystems, CC BY-ND

Now, Check Out: