Can we harness bacteria to help clean up future oil spills?

By Nina Dombrowski, University of Texas at Austin and Brett J. Baker, University of Texas at Austin.

In 2010 the Deepwater Horizon oil spill released an estimated 4.2 million barrels of oil into the Gulf of Mexico – the largest offshore spill in U.S. history. The spill caused widespread damage to marine species, fisheries and ecosystems stretching from tidal marshes to the deep ocean floor.

Emergency responders used multiple strategies to remove oil from the Gulf: They skimmed it from the water’s surface, burned it and used chemical dispersants to break it into small droplets. However, experts struggled to account for what had happened to much of the oil. This was an important question, because it was unclear how much of the released oil would break down naturally within a short time. If spilled oil persisted and sank to the ocean floor, scientists expected that it would cause more extensive harm to the environment.

Before the Deepwater Horizon spill, scientists had observed that marine bacteria were very efficient at removing oil from seawater. Therefore, many experts argued that marine microbes would consume large quantities of oil from the BP spill and help the Gulf recover.

In a recent study, we used DNA analysis to confirm that certain kinds of marine bacteria efficiently broke down some of the major chemical components of oil from the spill. We also identified the major genetic pathways these bacteria used for this process, as well as other genes they likely need to thrive in the Gulf.

Altogether, our results suggest that some bacteria can not only tolerate but also break up oil, thereby helping in the cleanup process. By understanding how to support these naturally occurring microbes, we may also be able to better manage the aftermath of oil spills.

Finding the oil-eaters

Observations in the Gulf appeared to confirm that microbes broke down a large fraction of the oil released from BP’s damaged well. Before the spill, waters in the Gulf of Mexico contained a highly diverse range of bacteria from several different phyla, or large biological families. Immediately after the spill, these bacterial species became less diverse and one phylum increased substantially in numbers. This indicated that many bacteria were sensitive to high doses of oil, but a few types were able to persist.

We wanted to analyze these observations more closely by posing the following questions: Could we show that these bacteria removed oil from the spill site and thereby helped the environment recover? Could we decipher the genetic code of these bacteria? And finally, could we use this genetic information to understand their metabolisms and lifestyles?

Individual puzzle pieces of DNA making up a bacterial genome. Each color represents an individual genome and each dot depicts one piece of DNA.
To address these questions, we used new technologies that enabled us to sequence the genetic code of the active bacterial community present in the Gulf of Mexico’s water column, without having to grow the organisms in the laboratory. This process was challenging because there are millions of bacteria in every drop of seawater. As an analogy, imagine looking through a large box that contains thousands of disassembled jigsaw puzzles, and trying to extract the pieces belonging to each individual puzzle and reassemble it.
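As a rough, conceptual sketch of that “puzzle sorting” step (not the pipeline used in the study), assembled DNA fragments can be grouped into draft genomes by comparing simple sequence signatures such as tetranucleotide frequencies. The contig sequences, names and cluster count below are hypothetical.

```python
# Conceptual sketch of "genome binning": grouping assembled DNA fragments (contigs)
# by tetranucleotide (4-mer) composition, one of the signals real binning tools use.
# The contig sequences below are made-up placeholders, not data from the study.
from itertools import product

import numpy as np
from sklearn.cluster import KMeans

BASES = "ACGT"
ALL_4MERS = ["".join(p) for p in product(BASES, repeat=4)]  # 256 possible 4-mers


def tetranucleotide_profile(seq: str) -> np.ndarray:
    """Return the normalized 4-mer frequency vector of a DNA sequence."""
    counts = dict.fromkeys(ALL_4MERS, 0)
    for i in range(len(seq) - 3):
        kmer = seq[i:i + 4]
        if kmer in counts:  # skip windows containing ambiguous bases such as N
            counts[kmer] += 1
    vec = np.array([counts[k] for k in ALL_4MERS], dtype=float)
    total = vec.sum()
    return vec / total if total else vec


# Hypothetical contigs; in practice these come from assembling seawater metagenomes.
contigs = {
    "contig_1": "ATGCGTTACG" * 100,
    "contig_2": "ATGCGTTACC" * 100,
    "contig_3": "GGCCTATTGA" * 100,
    "contig_4": "GGCCTATTGC" * 100,
}

profiles = np.vstack([tetranucleotide_profile(s) for s in contigs.values()])
bins = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(profiles)

for name, bin_id in zip(contigs, bins):
    print(f"{name} -> genome bin {bin_id}")
```

Fragments with similar composition end up in the same bin, which is the computational equivalent of sorting puzzle pieces by the picture printed on them.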

We wanted to identify bacteria that could degrade two types of compounds that are the major constituents of crude oil: alkanes and aromatic hydrocarbons. Alkanes are relatively easy to degrade – even sunlight can break them down – and have low toxicity. In contrast, aromatic hydrocarbons are much harder to remove from the environment. They are generally much more harmful to living organisms, and some types cause cancer.

Microscopy image of oil-eating bacteria. Tony Gutierrez, Heriot-Watt University

We successfully identified bacteria that degraded each of these compounds, and were surprised to find that many different bacteria fed on aromatic hydrocarbons, even though these are much harder to break down. Some of these bacteria, such as Colwellia, had already been identified as factors in the degradation of oil from the Deepwater Horizon spill, but we also found many new ones.

This included Neptuniibacter, which had not previously been known as an important oil-degrader during the spill, and Alcanivorax, which had not been thought to be capable of degrading aromatic hydrocarbons. Taken together, our results indicated that many different bacteria may act together as a community to degrade complex oil mixtures.

Neptuniibacter also appears to be able to break down sulfur. This is noteworthy because responders used 1.84 million gallons of dispersants on and under the water’s surface during the Deepwater Horizon cleanup effort. Dispersants are complex chemical mixtures but mostly consist of molecules that contain carbon and sulfur.

Their long-term impacts on the environment are still largely unknown. But some studies suggest that Corexit, the main dispersant used after the Deepwater Horizon spill, can be harmful to humans and marine life. If this proves true, it would be helpful to know whether some marine microbes can break down dispersant as well as oil.

Cleaning an oiled gannet, Theodore, Alabama, June 17, 2010.
Deepwater Horizon Response/Flickr, CC BY-ND

Looking more closely into these microbes’ genomes, we were able to detail the pathways that each appeared to use in order to degrade its preferred hydrocarbon in crude oil. However, no single bacterial genome appeared to possess all the genes required to completely break down the more stable aromatic hydrocarbons alone. This implies that it may require a diverse community of microbes to break down these compounds step by step.

Back into the ocean

Offshore drilling is a risky activity, and we should expect that oil spills will happen again. However, it is reassuring to see that marine ecosystems have the ability to degrade oil pollutants. While human intervention will still be required to clean up most spills, naturally occurring bacteria have the ability to remove large amounts of oil components from seawater, and can be important players in the oil cleanup process.

To maximize their role, we need to better understand how we can support them in what they do best. For example, adding dispersant changed the makeup of microbial communities in the Gulf of Mexico during the spill: the chemicals were toxic to some bacteria but beneficial for others. With a better understanding of how human intervention affects these bacteria, we may be able to support optimal bacteria populations in seawater and reap more benefit from their natural oil-degrading abilities.

Nina Dombrowski, Postdoctoral Fellow, University of Texas at Austin and Brett J. Baker, Assistant Professor of Marine Science, University of Texas at Austin

This article was originally published on The Conversation. Read the original article.


We’re (not) running out of water — a better way to measure water scarcity

By Kate Brauman, University of Minnesota.

Water crises seem to be everywhere. In Flint, the water might kill us. In Syria, the worst drought in hundreds of years is exacerbating civil war. But plenty of dried-out places aren’t in conflict. For all the hoopla, even California hasn’t run out of water.

There’s a lot of water on the planet. Earth’s total renewable freshwater adds up to about 10 million cubic kilometers. That number is small, less than one percent, compared to all the water in oceans and ice caps, but it’s also large, something like four trillion Olympic-sized swimming pools. Then again, water isn’t available everywhere: across space, there are deserts and swamps; over time, seasons of rain and years of drought.

Also, a water crisis isn’t about how much water there is – a desert isn’t water-stressed if no one is using the water; it’s just an arid place. A water shortage happens when we want more water than we have in a specific place at a specific time.

So determining whether a given part of the world is water-stressed is complicated. But it’s also important: we need to manage risk and plan strategically. Is there a good way to measure water availability and, thereby, identify places that could be vulnerable to water shortages?

Because it measures whether we have enough, the ratio of water use to water availability is a good way to quantify water shortage. Working with a group of collaborators, some of whom run a state-of-the-art global water resources model and some of whom work on the ground in water-scarce places, I quantified just how much of our water we’re using on a global basis. It was less straightforward than it sounds.

Water consumption, water availability

We use water for drinking and cleaning and making clothes and cars. Mostly, however, we use water to grow food. Seventy percent of the water we pull from rivers, streams and aquifers, and nearly 90 percent of the water we “use up,” is for irrigation.

How much water we use hinges on what we mean by “use.” Tallying the water we withdraw from rivers, lakes and aquifers makes sense for homes and farms, because that’s how much water runs through our taps or sprinkles onto farm fields.

But an awful lot of that water flows down the drain. So it can be, and probably is, used again. In the U.S., wastewater from most homes flows to treatment plants. After it’s cleaned, it’s released to rivers or lakes that are likely someone else’s water source. My tap water in Minneapolis comes from the Mississippi River, and all the water I flush goes through a wastewater treatment plant and back into the Mississippi River, the drinking water source for cities all the way to New Orleans.

Water-saving products, such as low-flow faucets and appliances, reduce the amount of water that is used on site, most of which is sent back into watersheds. The amount of water a home consumes, through evaporation for instance, remains the same. Kate Brauman, Author provided

With most water “saving” technologies, less water is taken out of a river, but that also means that less water is put back into the river. It makes a difference to your water bill – you had to pump less water! However, your neighbor in the town downstream doesn’t care if that water ran through your tap before it got to her. She cares only about how much total water there is in the stream. If you took out less but also put back less so the total didn’t change, it doesn’t make a difference to her.

So in our analysis, we decided to count all the water that doesn’t flow downstream, called water consumption. Consumed water isn’t gone, but it’s not around for us to use again on this turn of the water cycle.

For example, when a farmer irrigates a field, some of the water evaporates or moves through plants into the atmosphere and is no longer available to be used by a farm downhill. We tallied that water, not the runoff (which might go to that town downstream, or to migrating birds!).
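To make the withdrawal-versus-consumption distinction concrete, here is a minimal sketch with made-up numbers for a hypothetical irrigated field; only the consumed share is lost to the downstream user.

```python
# Minimal illustration of withdrawal vs. consumption for a hypothetical irrigated field.
# Only the consumed share (evaporated or transpired) is unavailable downstream.
withdrawal_m3 = 1000.0                              # pumped from the river onto the field
return_flow_m3 = 600.0                              # runoff and seepage back to the river
consumption_m3 = withdrawal_m3 - return_flow_m3     # 400 m^3 lost to the atmosphere

streamflow_upstream_m3 = 5000.0
streamflow_downstream_m3 = streamflow_upstream_m3 - consumption_m3

print(f"Consumed: {consumption_m3:.0f} m^3")
print(f"Left for the downstream town: {streamflow_downstream_m3:.0f} m^3")  # 4600 m^3
```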

Our model calculated water consumption by people and agriculture all over the world. It turns out that if a lot of water is being consumed in a watershed, meaning that it’s used and can’t be immediately reused, it’s being used for irrigation. But irrigated agriculture is super-concentrated – 75 percent of water consumption by irrigation occurs in just 6 percent of all the watersheds in the world. So in many watersheds, not much water is consumed at all – often it’s fed back into the watershed after it’s used.

On the other side of the ledger, we had to keep track of how much water is available. Water availability fluctuates, with flood peaks and dry seasons, so we counted up available water each month, not just in average years but during wet and dry years as well. And we counted groundwater as well as surface water from rivers, lakes and wetlands.

In many places, rainfall and snowfall replenish groundwater each year. But in other places, like the High Plains aquifer in the central United States, groundwater reserves were formed long ago and effectively aren’t recharged. This fossil groundwater is a finite resource, so using it is fundamentally unsustainable; for our measure of water shortage, we considered only renewable groundwater and surface water.

Water shortage or water stress?

We analyzed how much of the available renewable water people are using up in each of more than 15,000 watersheds worldwide, month by month, in wet years and in dry years. With those data in hand, my colleagues and I started trying to interpret them. We wanted to identify parts of the world facing water stress all the time, during dry seasons, or only in drought years.

But it turns out that identifying and defining water stress is tough, too. Just because a place is using up a lot of its water – maybe a city pulls most of the water out of a river every summer – that doesn’t necessarily mean it is water-stressed. Culture, governance and infrastructure determine whether a limit on water availability is problematic. And this context influences whether consuming 55 percent of available water is demonstrably worse than using 50 percent, or whether two short months of water shortage is twice as bad as one. Demarcating water scarcity transforms water shortage into a value-laden evaluation of water stress.

An example of a more detailed and localized measure of freshwater scarcity risk that uses data from dry seasons and dry years. Blue areas have the lowest levels of risk because they use less than five percent of their annually renewable water. The darkest areas use more than 100 percent of their renewable freshwater because they tap groundwater that isn’t replenished. Kate Brauman, Author provided

To evaluate whether a watershed is stressed, we considered the common use-to-availability thresholds of 20 percent and 40 percent to define moderate and severe water scarcity. Those levels are most often attributed to Malin Falkenmark, who did groundbreaking work assessing water for people. In doing our research, however, we did some digging and traced them back to Waclaw Balcerski. His 1964 study (published in a Hungarian water resources journal) of postwar Europe showed the cost of building water infrastructure increased in countries withdrawing more than 20 percent of their available water. Interesting, but hardly a universal definition of water stress.

A nuanced picture

In the end, we sidestepped definitions of stress and opted to be descriptive. In our study, we decided to report the fraction of renewable water used up by people annually, seasonally, and in dry years.

What does this metric reveal? You’re probably in trouble if you’re using up 100 percent of your water, or even 75 percent, since there’s no room for error in dry years and there’s no water in your river for fish or boats or swimmers. But only local context can illuminate that.

We found that globally, just two percent of watersheds use more than 75 percent of their total renewable water each year. Most of these places depend on fossil groundwater and irrigate heavily; they will run out of water.

More of the places we recognize as water-limited are seasonally depleted (nine percent of watersheds), facing regular periods of water shortage. Twenty-one percent of the world’s watersheds are depleted in dry years; these are the places where it’s easy to believe there’s plenty of water to do what we like, yet people struggle semi-regularly with periods of shortage.

We also found that 68 percent of watersheds have very low depletion; when those watersheds experience water stress, it is due to access, equality and governance.

To our surprise, we found that no watersheds were moderately depleted, defined as watersheds that in an average year are using up half their water. But it turns out that all of those watersheds are heavily depleted sometimes – they have months when nearly all the water is consumed and months when little is used.
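As a rough sketch of how such descriptive categories can be computed (not the study’s actual code), the classification below takes use-to-availability ratios for an average year, a dry year and each month, and applies the 75 percent depletion level quoted above; the ordering of the checks and the example numbers are assumptions for illustration.

```python
# Toy classification of a single watershed into the descriptive categories above,
# based on the fraction of renewable water consumed. Thresholds, check order and
# numbers are illustrative assumptions, not the study's definitions.

def classify_watershed(annual_avg, annual_dry, monthly_avg):
    """
    annual_avg:  consumption / availability in an average year
    annual_dry:  consumption / availability in a dry year
    monthly_avg: twelve monthly consumption / availability ratios in an average year
    """
    if annual_avg > 0.75:
        return "annually depleted"
    if annual_dry > 0.75:
        return "dry-year depleted"
    if max(monthly_avg) > 0.75:
        return "seasonally depleted"
    return "low depletion"

# Hypothetical watershed: light use most of the year, heavy irrigation in summer.
monthly = [0.05, 0.05, 0.10, 0.20, 0.40, 0.80, 0.90, 0.85, 0.30, 0.10, 0.05, 0.05]
print(classify_watershed(annual_avg=0.35, annual_dry=0.60, monthly_avg=monthly))
# -> seasonally depleted
```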

Managing water to meet current and future demand is critical. Biophysical indicators, such as the ones we looked at, can’t tell us where a water shortage is stressful to society or ecosystems, but a good biophysical indicator can help us make useful comparisons, target interventions, evaluate risk and look globally to find management models that might work at home.

Kate Brauman, Lead Scientist, Institute on the Environment, University of Minnesota

This article was originally published on The Conversation. Read the original article.


Putting CO2 away for good by turning it into stone

By Martin Stute, Columbia University.

We seriously need to do something about CO2 emissions. Besides shifting to renewable energy sources and increasing energy efficiency, we need to start putting some of the CO2 away before it reaches the atmosphere. Perhaps the impacts of human-induced climate change will be so severe that we might even have to capture CO2 from the air and convert it into useful products such as plastic materials or put it someplace safe.

A group of scientists from several European countries and the United States including myself met in the middle, in Iceland, to figure out how CO2 could be put away safely – in the ground. In a recently published study, we demonstrated that two years after injecting CO2 underground at our pilot test site in Iceland, almost all of it has been converted into minerals.

The injection well that pumped waste CO2 and hydrogen sulfide gas from a geothermal well underground. Martin Stute, Author provided

Mineralization

Iceland is a very green country; almost all of its electricity comes from renewable sources including geothermal energy. Hot water from rocks beneath the surface is converted into steam which drives a turbine to generate electricity. However, geothermal power plants there do emit CO2 (much less than a comparable coal-fired power plant) because the hot steam from deep wells that runs the turbines also contains CO2 and sometimes hydrogen sulfide (H2S). Those gases usually just get released into the air.

Is there another place we could put these gases?

Conventional carbon sequestration deposits CO2 into deep saline aquifers or into depleted oil and natural gas reservoirs. CO2 is pumped under very high pressure into these formations and, since they have already held gases and fluids in place over millions of years, the probability of CO2 leaking out is minuscule, as many studies have shown.

In a place like Iceland with its daily earthquakes cracking the volcanic rocks (basalts), this approach would not work. The CO2 could bubble up through cracks and leak back into the atmosphere.

However, basalt also has a great advantage: it reacts with CO2 and converts it into carbonate minerals. These carbonates form naturally and can be found as white spots in the basalt. The reactions also have been demonstrated in laboratory experiments.

Dissolving CO2 in water

For the first test, we used pure CO2 and pumped it through a pipe into an existing well that tapped an aquifer containing fresh water at about 1,700 feet of depth. Six months later we injected a mixture of CO2 and hydrogen sulfide piped in from the turbines of the power plant. Through a separate pipe we also pumped water into the well.

In the well, we released the CO2 through a sparger – a device for introducing gases into liquids similar to a bubble stone in an aquarium – into water. The CO2 dissolved completely within a couple of minutes in the water because of the high pressure at depth. That mixture then entered the aquifer.
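A back-of-the-envelope illustration (not a figure from the study) shows why the pressure at that depth matters: the hydrostatic column alone puts the injection zone at roughly 50 times atmospheric pressure, and gas solubility rises roughly in proportion to pressure.

```python
# Rough estimate of the pressure at the injection depth (about 1,700 feet of water),
# which is what lets the injected CO2 dissolve so quickly. Illustrative numbers only.
RHO_WATER = 1000.0   # kg/m^3, fresh water
G = 9.81             # m/s^2
ATM = 101_325.0      # Pa, atmospheric pressure at the surface

depth_m = 1700 * 0.3048                        # about 518 m
pressure_pa = ATM + RHO_WATER * G * depth_m    # hydrostatic column plus atmosphere

print(f"Depth: {depth_m:.0f} m")
print(f"Pressure: {pressure_pa / 1e5:.0f} bar")  # about 50 times atmospheric pressure
# Henry's law: gas solubility scales roughly with pressure, so far more CO2
# dissolves per liter of water at this depth than it would at the surface.
```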

We also added tiny quantities of tracers (gases and dissolved substances) that allow us to differentiate the injected water and CO2 from what’s already in the aquifer. The CO2 dissolved in water was then carried away by the slowly flowing groundwater.

Downstream, we had installed monitoring wells that allowed us to collect samples to figure out what happened to the CO2. Initially, we saw some of the CO2 and tracers coming through. After a few months, though, the tracers kept arriving but very little of the injected CO2 showed up.

Where was it going? Our pump in the monitoring well stopped working periodically, and when we brought it to the surface, we noticed that it was covered by white crystals. We analyzed the crystals and found they contained some of the tracers we had added and, best of all, they turned out to be mostly carbonate minerals! We had turned CO2 into rocks.

The CO2 dissolved in water had reacted with the basalt in the aquifer and more than 95 percent of the CO2 precipitated out as solid carbonate minerals – and it all happened much faster than anticipated, in less than two years.

The fracture in this basalt rock shows the white calcium carbonate crystals that form from the injection of CO2 with water at the test site.
Annette K. Mortensen, CC BY

This is the safest way to put CO2 away. By dissolving it in water, we already prevent CO2 gas from bubbling up toward the surface through cracks in the rocks. Finally, we convert it into stone that cannot move or dissolve under natural conditions.

One downside of this approach is that water needs to be injected alongside the CO2. However, because of the very rapid removal of the CO2 from the water in mineral form, this water could be pumped back out of the ground downstream and reused at the injection site.

Will it work elsewhere?

Ours was a small-scale pilot study, and the question is whether these reactions would continue into the future, or whether pores and cracks in the subsurface basalt would eventually clog up and no longer be able to convert CO2 to carbonate.

In the years since our experiment, the Iceland geothermal power plant has increased the amount of gas injected several times, using a different nearby injection site. No clogging has been encountered yet, and the plan is to soon inject almost all waste gases into the basalt. This process will also prevent the toxic and corrosive gas hydrogen sulfide from going into the atmosphere; it currently can still be detected at low levels near the power plant because of its characteristic rotten-egg smell.

The very reactive rocks found in Iceland are quite common on Earth; about 10 percent of the continents and almost all of the ocean floors are made of basalt. This technology, in other words, is not limited to emissions from geothermal power plants but could also be used for other CO2 sources, such as fossil fuel power plants.

The commercial viability of the process still needs to be established in different locations. Carbon mineralization adds costs to a power plant’s operation, so this, like any form of carbon sequestration, needs an economic incentive to make it feasible.

People like to live near coasts, and many power plants have been built near their customers. Perhaps this technology could be used to put away CO2 emissions in coastal areas in nearby offshore basalt formations. Of course, there would be no shortage of water to co-inject with the CO2.

If we are forced to lower atmospheric CO2 levels in the future because we underestimate the damaging effects of climate change, we could perhaps use wind or solar-powered devices on an ocean platform to capture CO2 from the air and then inject the CO2 into basalt formations underneath.

Carbon mineralization, as demonstrated in Iceland, could be part of the solution of our carbon problem.

Martin Stute, Professor of Environmental Science, Columbia University

This article was originally published on The Conversation. Read the original article.


The things people ask about the scientific consensus on climate change

By John Cook, The University of Queensland.

It’s been almost a month since the paper I co-authored on the synthesis of research into the scientific consensus on climate change was published. Surveying the many studies into scientific agreement, we found that more than 90% of climate scientists agree that humans are causing global warming.

It’s a topic that has generated much interest and discussion, culminating in US Democratic Senator Sheldon Whitehouse highlighting our study on the US Senate floor this week.

My co-authors and I even participated in an Ask Me Anything (AMA) session on the online forum Reddit, answering questions about the scientific consensus.

While my own research indicates that explaining the scientific consensus isn’t that effective with those who reject climate science, it does have a positive effect for people who are open to scientific evidence.

Among this “undecided majority”, there was clearly much interest, with the session generating 154,000 page views and our AMA briefly featuring on the Reddit homepage (where it was potentially viewed by 14 million people).

Here is an edited selection of some of the questions posed by Reddit readers and our answers.

Q: Why is this idea of consensus so important in climate science? Science isn’t democracy or consensus, the standard of truth is experiment.

If this were actually true, wouldn’t every experiment have to reestablish every single piece of knowledge from first principles before moving on to something new? That’s obviously not how science actually functions.

Consensus functions as a scaffolding allowing us to continue to build knowledge by addressing things that are actually unknown.

Q: Does that 97% all agree to what degree humans are causing global warming?

Different studies use different definitions. Some use the phrase “humans are causing global warming” which carries the implication that humans are a dominant contributor to global warming. Others are more explicit, specifying that humans are causing most global warming.

Within some of our own research, several definitions are used for the simple reason that different papers endorse the consensus in different ways. Some are specific about quantifying the percentage of human contribution, others just say “humans are causing climate change” without specific quantification.

We found that no matter which definition you used, you always found an overwhelming scientific consensus.

Q: It’s very difficult to become/remain a well-respected climate scientist if you don’t believe in human-caused climate change. Your papers don’t get published, you don’t get funding, and you eventually move on to another career. The result being that experts either become part of the 97% consensus, or they cease to be experts.

Ask for evidence for this claim and enjoy the silence (since they won’t have any).

As a scientist, the pressure actually is mostly reversed: you get rewarded if you prove an established idea wrong.

I’ve heard from contrarian scientists that they don’t have any trouble getting published and getting funded, but of course that also is only anecdotal evidence.

You can’t really disprove this thesis, since it has shades of conspiratorial thinking to it, but the bottom line is there’s no evidence for it and the regular scientific pressure is to be adversarial and critical towards other people’s ideas, not to just repeat what the others are saying.

Q: What’s the general reasoning of the other 3%?

Interesting question. It is important and diagnostic that there is no coherent theme among the reasoning of the other 3%. Some say “there is no warming”, others blame the sun, cosmic rays or the oceans.

Those opinions are typically mutually contradictory or incoherent: Stephan Lewandowsky has written elsewhere about a few of the contradictions.

Q: Do we have any insight on what non-climate scientists have to say about climate change being caused by CO2?

In a paper published last year, Stuart Carlton and colleagues surveyed biophysical scientists across many disciplines at major research universities in the US.

They found that about 92% of the scientists believed in anthropogenic climate change and about 89% of respondents disagreed with the statement: “Climate change is independent of CO2 levels”. In other words, about 89% of respondents felt that climate change is affected by CO2.

Q: It could be argued that climate scientists may be predisposed to seeing climate change as more serious, because they want more funding. What’s your perspective on that?

Any climate scientist who could convincingly argue that climate change is not a threat would:

  1. be famous
  2. get a Nobel prize
  3. get a squintillion dollars in funding
  4. get a dinner date with the Queen
  5. earn the lifelong gratitude of billions of people.

So if there is any incentive, it’s for a scientist to show that climate change is not a threat.

Q: I was discussing politics with my boss the other day, and when I got to the topic of global warming he got angry, said it’s all bullshit, and that the climate of the planet has been changing for millennia. Where should I go to best understand all of the facts?

Skeptical Science has a list of common myths and what the science says.

But often facts are not enough, especially when people are angry and emotional. The Skeptical Science team has made a free online course that addresses both the facts and the psychology of climate denial.

You can also access the individual Denial101 videos.

Also, remember that you may not convince him, but if you approach him rationally and respectfully you may influence other people who hear your discussion.

John Cook, Climate Communication Research Fellow, Global Change Institute, The University of Queensland

This article was originally published on The Conversation. Read the original article.

Featured Photo Credit: Shutterstock/Kuznetsov Dmitry


Sea-level rise has claimed five whole islands in the Pacific: first scientific evidence

By Simon Albert, The University of Queensland; Alistair Grinham, The University of Queensland; Badin Gibbes, The University of Queensland; Javier Leon, University of the Sunshine Coast, and John Church, CSIRO.

Sea-level rise, erosion and coastal flooding are some of the greatest challenges facing humanity from climate change.

Recently at least five reef islands in the remote Solomon Islands have been lost completely to sea-level rise and coastal erosion, and a further six islands have been severely eroded.

These islands lost to the sea range in size from one to five hectares. They supported dense tropical vegetation that was at least 300 years old. Nuatambu Island, home to 25 families, has lost more than half of its habitable area, with 11 houses washed into the sea since 2011.

This is the first scientific evidence, published in Environmental Research Letters, that confirms the numerous anecdotal accounts from across the Pacific of the dramatic impacts of climate change on coastlines and people.

All that remains of one of the completely eroded islands. Simon Albert, Author provided

A warning for the world

Previous studies examining the risk of coastal inundation in the Pacific region have found that islands can actually keep pace with sea-level rise and sometimes even expand.

However, these studies have been conducted in areas of the Pacific with rates of sea level rise of 3-5 mm per year – broadly in line with the global average of 3 mm per year.

For the past 20 years, the Solomon Islands have been a hotspot for sea-level rise. Here the sea has risen at almost three times the global average, around 7-10 mm per year since 1993. This higher local rate is partly the result of natural climate variability.

These higher rates are in line with what we can expect across much of the Pacific in the second half of this century as a result of human-induced sea-level rise. Many areas will experience long-term rates of sea-level rise similar to that already experienced in Solomon Islands in all but the very lowest-emission scenarios.

Natural variations and geological movements will be superimposed on these higher rates of global average sea level rise, resulting in periods when local rates of rise will be substantially larger than that recently observed in Solomon Islands. We can therefore see the current conditions in Solomon Islands as an insight into the future impacts of accelerated sea-level rise.

We studied the coastlines of 33 reef islands using aerial and satellite imagery from 1947-2015. This information was integrated with local traditional knowledge, radiocarbon dating of trees, sea-level records, and wave models.

Waves add to damage

Wave energy appears to play an important role in the dramatic coastal erosion observed in Solomon Islands. Islands exposed to higher wave energy in addition to sea-level rise experienced greatly accelerated loss compared with more sheltered islands.

Twelve islands we studied in a low wave energy area of Solomon Islands experienced little noticeable change in shorelines despite being exposed to similar sea-level rise. However, of the 21 islands exposed to higher wave energy, five completely disappeared and a further six islands eroded substantially.

The human story

These rapid changes to shorelines observed in Solomon Islands have led to the relocation of several coastal communities that have inhabited these areas for generations. These are not planned relocations led by governments or supported by international climate funds, but ad hoc relocations carried out by the communities themselves using their own limited resources.

Many homes are close to sea level on the Solomons. Simon Albert, Author provided

The customary land tenure (native title) system in Solomon Islands has provided a safety net for these displaced communities. In fact, in some cases entire communities have left coastal villages that were established in the early 1900s by missionaries, and retraced their ancestral movements to resettle old inland village sites used by their forefathers.

In other cases, relocations have been more ad hoc, with individual families resettling small inland hamlets over which they have customary ownership.

In these cases, communities of 100-200 people have fragmented into handfuls of tiny family hamlets. Sirilo Sutaroti, the 94-year-old chief of the Paurata tribe, recently abandoned his village. “The sea has started to come inland, it forced us to move up to the hilltop and rebuild our village there away from the sea,” he told us.

In addition to these village relocations, Taro, the capital of Choiseul Province, is set to become the first provincial capital in the world to relocate residents and services in response to the impact of sea-level rise.

The global effort

Interactions between sea-level rise, waves, and the large range of responses observed in Solomon Islands – from total island loss to relative stability – show the importance of integrating local assessments with traditional knowledge when planning for sea-level rise and climate change.

Linking this rich knowledge and inherent resilience in the people with technical assessments and climate funding is critical to guiding adaptation efforts.

Melchior Mataki, who chairs the Solomon Islands’ National Disaster Council, said: “This ultimately calls for support from development partners and international financial mechanisms such as the Green Climate Fund. This support should include nationally driven scientific studies to inform adaptation planning to address the impacts of climate change in Solomon Islands.”

Last month, the Solomon Islands government joined 11 other small Pacific Island nations in signing the Paris climate agreement in New York. There is a sense of optimism among these nations that this signifies a turning point in global efforts.

However, it remains to be seen how the hundreds of billions of dollars promised through global funding models such as the Green Climate Fund can support those most in need in remote communities, like those in Solomon Islands.

Simon Albert, Senior Research Fellow, School of Civil Engineering, The University of Queensland; Alistair Grinham, Senior research fellow, The University of Queensland; Badin Gibbes, Senior Lecturer, School of Civil Engineering, The University of Queensland; Javier Leon, Lecturer, University of the Sunshine Coast, and John Church, CSIRO Fellow, CSIRO

This article was originally published on The Conversation. Read the original article.

Featured Photo Credit: Javier Leon, Author provided


Should you be worried about PFOA in drinking water? Here’s what we know

By Veronica Vieira, University of California, Irvine.

Over the past few months, several communities in upstate New York and New England have detected PFOA – perfluorooctanoic acid, or C8, a chemical linked to a range of health issues from cancer to thyroid disease – in their drinking water.

PFOA is a fluorinated compound that is absorbed into our bodies through inhalation or ingestion. The chemical can then accumulate in our blood serum, kidneys and liver.

The stain-resistant and water-repellant properties of PFOA make it effective in products that act as coatings. Common consumer products with PFOA include Scotchgard, Gore-Tex and Teflon.

The qualities that make PFOA effective in consumer products also lead to its persistence in the environment. The chemical has been detected in the serum of the general U.S. population and throughout the home and workplace.

A growing number of communities near industries or military bases that used PFOA have confirmed that their drinking water has been contaminated. What does the science say about the potential health effects of PFOA exposure?

The case against DuPont

PFOA has been produced in the U.S. since 1947. However, very little was known about the human health risks of the chemical until 2001. The Environmental Protection Agency (EPA) learned that DuPont had been hiding information about the environmental presence of PFOA near the Washington Works plant in Parkersburg, West Virginia. That began what would turn out to be a decade of PFOA-related research in this community.

PFOA was used by DuPont to manufacture Teflon beginning in 1951. During the process, PFOA was released into the air and discharged into the Ohio River. The chemical then entered the groundwater that supplies the local drinking water through hydrologic interaction with the contaminated Ohio River and rainfall recharge of groundwater through the contaminated soil.

Contamination in the Ohio River near a DuPont plant led to a class action lawsuit and years of research into the health effects of PFOA. Ed Devereaux/flickr, CC BY-NC

Six public water districts in West Virginia and Ohio and hundreds of private wells were contaminated. Monitoring data show PFOA continued to increase even after a drastic reduction in emissions beginning in 2001. This is due to accumulation in the soil and slow transit time into the groundwater.

As part of a settlement from a large class action lawsuit against DuPont, a C8 science panel was established to determine potential health effects resulting from PFOA exposure. A one-year cross-sectional survey (2005-2006), known as the C8 Health Project, was conducted among approximately 70,000 residents with contaminated drinking water. The health survey included serum samples, medical information and residential histories.

Measured mean PFOA public drinking water levels at the time of the survey ranged from 0.03 micrograms per liter (µg/L) in Mason, West Virginia, to 3.49 µg/L in Little Hocking, Ohio. Private drinking water was measured at levels as high as 22.1 µg/L.

Residents in this community, where PFOA was being used in manufacturing, had much higher serum levels than the U.S. population overall. The median measured serum PFOA level was 28.2 µg/L, with a range of 0.2 to 22,412 µg/L. For comparison, the corresponding serum PFOA level in the general U.S. population was 3.92 µg/L in 2005-2006.

The link between PFOA and health

The C8 science panel worked with a team of researchers to analyze all the health data collected from the community participants. Their goal was to determine if there is a probable link between PFOA exposure and any human disease.

Dozens of exposure and health studies were conducted. Researchers developed an environmental model to measure the geographic extent and magnitude of PFOA contamination from the DuPont facility over several decades. This model allowed researchers to investigate health impacts of past exposures.

Information on PFOA production, groundwater flow and well-pumping rates was used to determine PFOA levels in the drinking water systems. In addition, maps of water distribution pipes identified who had been exposed.

After several years of epidemiologic analyses, the scientists issued their reports, identifying probable links to six health outcomes: pregnancy-induced hypertension; high cholesterol; ulcerative colitis, an inflammatory bowel disease; thyroid disease; testicular cancer; and kidney cancer. Medical monitoring is now underway and residents have started filing personal injury lawsuits against DuPont.

Widespread PFOA contamination

But communities in West Virginia and Ohio are not the only ones that have been affected by PFOA contamination. Last summer, research and advocacy group the Environmental Working Group reported 94 water systems in 27 states had detectable levels of PFOA in their drinking water.

In Hoosick Falls, New York, earlier this year, high PFOA levels led to community concern about drinking water safety. A federal class action lawsuit was later filed against Saint-Gobain Performance Plastics and Honeywell International for related PFOA contamination at their manufacturing site.

The nonprofit Healthy Hoosick Water Inc. in January this year brought an EPA panel together at the Hoosick Falls School District to answer questions from community members concerned about PFOA contamination in the municipal water supply. ©HFSCD used with permission, Author provided

Nearby towns of Petersburgh, New York, and North Bennington, Vermont, have also detected PFOA in their drinking water. Meanwhile, drinking water contamination in Decatur, Alabama, and Cottage Grove, Minnesota, has been blamed on 3M, the primary manufacturer of PFOA.

But PFOA contamination is not just affecting communities near chemical plants. PFOA is also a component in firefighting materials used in military exercises.

Communities in Bucks and Montgomery counties in Pennsylvania and Kent and New Castle counties in Delaware have detected PFOA in their drinking water, likely resulting from military activities. The Department of Defense plans to investigate 664 military sites to assess PFOA contamination from firefighting foam.

Ongoing research

The response from local officials about the safety of drinking water has been mixed. That is most likely because the EPA does not regulate PFOA levels under the Safe Drinking Water Act.

EPA’s recommended health advisory level from 2009 for drinking water is 0.4 µg/L (400 parts per trillion). There has been concern that this level is not protective enough.

In the aftermath of Hoosick Falls, the EPA in January recommended that the community not drink water with PFOA in excess of 0.1 µg/L.

In the years since the original C8 studies, more health concerns related to PFOA have emerged. Recent epidemiologic studies have shown that low-level exposures are also associated with decreased antibody levels among adults living near the DuPont Washington Works facility. This has been observed among children living in a fishing community as well.

The EPA, which issued a health effects document on PFOA in 2014, is currently reviewing the existing body of PFOA data in response to the recent drinking water contamination in Hoosick Falls. Additional health studies are needed, especially in communities with known drinking water contamination. It is important to confirm results from prior studies and consider other health outcomes.

While there is some uncertainty regarding what levels are considered safe, it is certain that more communities living near military bases and former PFOA industries will be affected. Although PFOA was phased out in 2015, the contamination will persist for years to come.

Veronica Vieira, Associate Professor of Public Health, University of California, Irvine

This article was originally published on The Conversation. Read the original article.

Featured Photo Credit: dougtone/flickr, CC BY-SA


Great Barrier Reef bleaching would be almost impossible without climate change

By Andrew King, University of Melbourne; David Karoly, University of Melbourne; Mitchell Black, University of Melbourne; Ove Hoegh-Guldberg, The University of Queensland, and Sarah Perkins-Kirkpatrick, UNSW Australia.

The worst bleaching event on record has affected corals across the Great Barrier Reef in the last few months. As of the end of March, a whopping 93% of the reef has experienced bleaching. This event has led scientists and high-profile figures such as Sir David Attenborough to call for urgent action to protect the reef from annihilation.

Coral Bleaching at Lizard Island on the Great Barrier Reef. © XL Catlin Seaview Survey

There is indisputable evidence that climate change is harming the reef. Yet, so far, no one has assessed how much climate change might be contributing to bleaching events such as the one we have just witnessed.

Unusually warm sea surface temperatures are strongly associated with bleaching. Because climate models can simulate these warm sea surface temperatures, we can investigate how climate change is altering extreme warm conditions across the region.

Daily sea surface temperature anomalies in March 2016 show unusual warmth around much of Australia. Author provided using OSSTIA data from UK Met Office Hadley Centre.

We examined the Coral Sea region (shown above) to look at how climate change is altering sea surface temperatures in an area that is experiencing recurring coral bleaching. This area has recorded a big increase in temperatures over the past century, with March 2016 being the warmest on record.

March sea surface temperatures were the highest on record this year in the Coral Sea, beating the previous 2015 record. Source: Bureau of Meteorology.

Examining the human influence

To find out how climate change is changing the likelihood of coral bleaching, we can look at how warming has affected the likelihood of extremely hot March sea temperature records. To do so, we use climate model simulations with and without human influences included.

If we see more very hot March months in simulations with a human influence, then we can say that climate change is having an effect, and we can attribute that change to the human impact on the climate.

This method is similar to analyses we have done for land regions, such as our investigations of recent Australian weather extremes.

We found that climate change has dramatically increased the likelihood of very hot March months like that of 2016 in the Coral Sea. We estimate that the human influence on the climate has made hot March months like this at least 175 times more likely.
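Schematically, an increase in likelihood like this is a ratio of exceedance probabilities between the two sets of simulations. The sketch below uses synthetic, randomly generated “ensembles” and an arbitrary threshold purely to illustrate the calculation; it is not the authors’ model output or method.

```python
# Schematic event-attribution calculation: how much more often is an extreme March
# exceeded in simulations with human influence than in "natural-only" simulations?
# The ensembles below are synthetic random numbers, not real model output.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical March sea surface temperature anomalies (deg C) vs. a natural baseline.
natural_only = rng.normal(loc=0.0, scale=0.4, size=10_000)   # no human influence
all_forcings = rng.normal(loc=0.8, scale=0.4, size=10_000)   # with human influence

threshold = 1.2   # an arbitrary "extreme March" level for illustration

p_natural = (natural_only > threshold).mean()
p_human = (all_forcings > threshold).mean()

# Guard against dividing by zero when no natural-only simulation exceeds the threshold.
risk_ratio = p_human / max(p_natural, 1.0 / natural_only.size)

print(f"P(extreme | natural only):    {p_natural:.4f}")
print(f"P(extreme | human influence): {p_human:.4f}")
print(f"Estimated increase in likelihood: about {risk_ratio:.0f} times")
```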

The decaying El Niño event may also have affected the likelihood of bleaching events. However, we found no substantial influence for the Coral Sea region as a whole. Sea surface temperatures in the Coral Sea can be warmer than normal for different reasons, including changes in ocean currents (often related to La Niña events) and increased sunshine duration (generally associated with El Niño conditions).

Overall, this means that the influence of El Niño on the Coral Sea as a whole is weak. There have been severe bleaching events in past El Niño, neutral and La Niña years.

We estimate that climate change has increased temperatures in the hottest March months by just over 1℃. As the effects of climate change worsen we would expect this warming effect to increase, as has been pointed out elsewhere.

March 2016 was clearly extreme in the observed weather record, but using climate models we estimate that by 2034 temperature anomalies like March 2016 will be normal. Thereafter events like March 2016 will be cooler than average.

Overall, we’re observing rapid warming in the Coral Sea region that can only be understood if we include human influences. The human effect on the region through climate change is clear and it is strengthening. Surface temperatures like those in March 2016 would be extremely unlikely to occur in a world without humans.

As the seas warm because of our effect on the climate, bleaching events in the Great Barrier Reef and other areas within the Coral Sea are likely to become more frequent and more devastating.

Action on climate change may reduce the likelihood of future bleaching events, although not for a few decades as we have already built in warming through our recent greenhouse gas emissions.


A note on peer review

We have analysed this coral bleaching event in near-real time, which means the results we present here have not been through peer review.

Recently, we have started undertaking these event attribution analyses immediately after the extreme event has occurred or even before it has finished. As we are using a method that has been previously peer-reviewed, we can have confidence in our results.

It is important, however, that these studies go through a peer-review process and these results will be submitted soon. In the meantime we have published a short methods document which provides more detail.

Our results are also consistent with previous studies (see also here and here).

Andrew King, Climate Extremes Research Fellow, University of Melbourne; David Karoly, Professor of Atmospheric Science, University of Melbourne; Mitchell Black, PhD Candidate, University of Melbourne; Ove Hoegh-Guldberg, Director, Global Change Institute, The University of Queensland, and Sarah Perkins-Kirkpatrick, Research Fellow, UNSW Australia

This article was originally published on The Conversation. Read the original article.


How ancient warm periods can help predict future climate change

By Gordon Inglis, University of Bristol and Eleni Anagnostou, University of Southampton.

Several more decades of increased carbon dioxide emissions could lead to melting ice sheets, mass extinctions and extreme weather becoming the norm. We can’t yet be certain of the exact impacts, but we can look to the past to predict the future.

We could start with the last time Earth experienced CO2 levels comparable to those expected in the near future, a period 56m to 34m years ago known as the Eocene.

The Eocene began as a period of extreme warmth around 10m years after the final dinosaurs died. Alligators lived in the Canadian Arctic while palm trees grew along the East Antarctic coastline. Over time, the planet gradually cooled, until the Eocene was brought to a close with the formation of a large ice sheet on Antarctica.

During the Eocene, carbon dioxide (CO2) concentrations in the atmosphere were much higher than today, with estimates usually ranging between 700 and 1,400 parts per million (ppm). As these values are similar to those anticipated by the end of this century (420 to 935ppm), scientists are increasingly using the Eocene to help predict future climate change.

We’re particularly interested in the link between carbon dioxide levels and global temperature, often referred to as “equilibrium climate sensitivity” – the temperature change that results from a doubling of atmospheric CO2, once fast climate feedbacks (such as water vapour, clouds and sea ice) have had time to act.

To investigate climate sensitivity during the Eocene we generated new estimates of CO2 throughout the period. Our study, written with colleagues from the Universities of Bristol, Cardiff and Southampton, is published in Nature.

Reconstruction of the 40m year old planktonic foraminifer Acarinina mcgowrani. Richard Bizley (www.bizleyart.com) and Paul Pearson, Cardiff University,  CC BY

As we can’t directly measure the Eocene’s carbon dioxide levels, we have to use “proxies” preserved within sedimentary rocks. Our study utilises planktonic foraminifera, tiny marine organisms which record the chemical composition of seawater in their shells. From these fossils we can figure out the acidity level of the ocean they lived in, which is in turn affected by the concentration of atmospheric CO2.

We found that CO2 levels approximately halved during the Eocene, from around 1,400ppm to roughly 770ppm, which explains most of the sea surface cooling that occurred during the period. This supports previously unsubstantiated theories that carbon dioxide was responsible for the extreme warmth of the early Eocene and that its decline was responsible for the subsequent cooling.

We then estimated global mean temperatures during the Eocene (again from proxies such as fossilised leaves or marine microfossils) and accounted for changes in vegetation, the position of the continents, and the lack of ice sheets. This yields a climate sensitivity value of 2.1°C to 4.6°C per doubling of CO2. This is similar to that predicted for our own warm future (1.5 to 4.5°C per doubling of CO2).
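To make the “per doubling” arithmetic concrete, the warming or cooling attributable to a CO2 change scales with the number of doublings, log2(C_new/C_old). The sketch below simply applies the sensitivity range and CO2 estimates quoted above; it is an illustration, not the paper’s calculation.

```python
# Back-of-the-envelope use of equilibrium climate sensitivity: the temperature change
# scales with the number of CO2 doublings, delta_T = S * log2(C_new / C_old).
# Sensitivity values and CO2 levels are the ranges quoted in the article.
import math

def co2_driven_change(sensitivity_per_doubling, co2_old_ppm, co2_new_ppm):
    """Temperature change (deg C) implied by a CO2 change for a given sensitivity."""
    doublings = math.log2(co2_new_ppm / co2_old_ppm)
    return sensitivity_per_doubling * doublings

# CO2 decline across the Eocene, from roughly 1,400 ppm to roughly 770 ppm:
for s in (2.1, 4.6):
    print(f"S = {s} C per doubling -> {co2_driven_change(s, 1400, 770):+.1f} C")
# Roughly -1.8 C to -4.0 C of CO2-driven cooling, consistent with CO2 explaining
# most of the sea surface cooling described above.
```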

Our work reinforces previous findings which looked at sensitivity in more recent time intervals. It also gives us confidence that our Eocene-like future is well mapped out by current climate models.

Fossil foraminifera from Tanzania – their intricate shells capture details of the ocean 33-50m years ago. Paul Pearson, Cardiff University, CC BY

Rich Pancost, a paleoclimate expert and co-author on both studies, explains: “Most importantly, the collective research into Earth history reveals that the climate can and has changed. And consequently, there is little doubt from our history that transforming fossil carbon underground into carbon dioxide in the air – as we are doing today – will significantly affect the climate we experience for the foreseeable future.”

Our work also has implications for other elements of the climate system. Specifically, what is the impact of higher CO2 and a warmer climate upon the water cycle? A recent study investigating environmental change during the early Eocene – the warmest interval of the past 65m years – found an increase in global precipitation and evaporation rates and an increase in heat transport from the equator to the poles. The latter is consistent with leaf fossil evidence from the Arctic which suggests that high precipitation rates were common.

However, changes in the water cycle are likely to vary between regions. For example, low to mid latitudes likely became drier overall, but with more intense, seasonal rainfall events. Although very few studies have investigated the water cycle of the Eocene, understanding how this operates during past warm climates could provide insights into the mechanisms which will govern future changes.

Gordon Inglis, Postdoctoral Research Associate in Organic Geochemistry, University of Bristol and Eleni Anagnostou, Postdoctoral Research Fellow, Ocean and Earth Science, University of Southampton

This article was originally published on The Conversation. Read the original article.

Featured Image Credit:  Eocene fauna of North America (mural), Jay Matternes / Smithsonian Museum


Chernobyl: new tomb will make site safe for 100 years [Video]

By Claire Corkhill, University of Sheffield.

Thirty years after the Chernobyl nuclear accident, there’s still a significant threat of radiation from the crumbling remains of Reactor 4. But an innovative, €1.5 billion super-structure is being built to prevent further releases, giving an elegant engineering solution to one of the ugliest disasters known to man.

Since the disaster that directly killed at least 31 people and released large quantities of radiation, the reactor has been encased in a tomb of steel-reinforced concrete. Usually buildings of this kind can be protected from corrosion and environmental damage through regular maintenance. But because of the hundreds of tonnes of highly radioactive material inside the structure, maintenance hasn’t been possible.

Water dripping from the sarcophagus roof has become radioactive and leaks into the soil on the reactor floor, and birds have been sighted in the roof space. Every day, the risk of the sarcophagus collapsing increases, along with the risk of another widespread release of radioactivity to the environment.

Thanks to the sarcophagus, up to 80% of the original radioactive material left after the meltdown remains in the reactor. If it were to collapse, some of the melted core, a lava-like material called corium, could be ejected into the surrounding area in a dust cloud, as a mixture of highly radioactive vapour and tiny particles blown in the wind. The key substances in this mixture are iodine-131, which has been linked to thyroid cancer, and cesium-137, which can be absorbed into the body, with effects ranging from radiation sickness to death depending on the quantity inhaled or ingested.

Metal tomb. Arne Müseler/Wikimedia, CC BY-SA

With repair of the existing sarcophagus deemed impossible because of the radiation risks, a new structure designed to last 100 years is now being built. This “new safe confinement” will not only safely contain the radioactivity from Reactor 4, but also enable the sarcophagus and the reactor building within to be safely taken apart. This is essential if releases of radioactivity 100 years or more into the future are to be prevented.

Construction of the steel arch-shaped structure began in 2010 and is currently scheduled for completion in 2017. At 110 metres tall with a span of 260 metres, the confinement structure will be large enough to house St Paul’s Cathedral or two Statues of Liberty on top of one another. But the major construction challenges are not down to size alone.

The close-fitting arch structure is designed to completely entomb Reactor 4. It will be hermetically sealed to prevent the release of radioactive particles should the structures beneath collapse. Triple-layered, radiation-resistant panels made from polycarbonate-coated stainless steel will clad the arch to provide shielding that will be crucial for allowing people to safely return to the area in ongoing resettlement programmes.

Innovative engineering solutions

Operating a building site at the world’s most radioactively hazardous location has inevitably led to a number of engineering innovations. Before work could start, a construction site was prepared 300 metres west of the reactor building, so workers could build the structure without being exposed to radiation. Hundreds of tonnes of radioactive soil had to be removed from the area, and great slabs of concrete laid to provide extra radiation protection.

Inconveniently for a 110 metre-high construction, working above 30 metres is impossible – the higher you go, the closer you get to the top of the exposed reactor core, where radiation dose rates are high enough to pose a significant threat to life. The solution? Build from the top down. After each section of the structure was built, starting with the top of the arch, it was hoisted into the air, 30 metres at a time, and then horizontal supports were added. This was done using jacks that were once used to raise the Russian nuclear submarine, the Kursk, from the bottom of the Barents Sea. The process was repeated until the giant structure reached 110 metres into the air. The two halves of the arch were also constructed separately and have recently been joined together.

The next challenge is to make sure the confinement structure lasts 100 years. In the old sarcophagus, “roof rain” condensation formed when the inside surface of the roof was cooler than the atmosphere outside, corroding any metal structures it came into contact with. To prevent this in the new structure, a complex ventilation system will heat the inner part of the confinement structure roof to avoid any temperature or humidity differences.
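The physics behind “roof rain” is straightforward: condensation forms wherever the inner roof surface falls below the dew point of the air beneath it, which is why the ventilation system keeps the roof warm. The sketch below illustrates this using the Magnus approximation for dew point; the temperature and humidity values are illustrative assumptions, not measurements from the site.

```python
import math

# Hedged sketch of the "roof rain" problem: condensation forms when the roof's
# inner surface is colder than the dew point of the air beneath it.
# Uses the Magnus approximation; the coefficients are standard, but the example
# temperature and humidity values are illustrative assumptions, not site data.

A, B = 17.62, 243.12  # Magnus coefficients (degrees Celsius formulation)

def dew_point_c(temp_c: float, rel_humidity_pct: float) -> float:
    """Approximate dew point (deg C) from air temperature and relative humidity."""
    gamma = math.log(rel_humidity_pct / 100.0) + A * temp_c / (B + temp_c)
    return B * gamma / (A - gamma)

if __name__ == "__main__":
    air_temp, humidity = 20.0, 70.0   # assumed conditions inside the arch
    roof_surface_temp = 12.0          # assumed temperature of the inner roof skin
    dp = dew_point_c(air_temp, humidity)
    print(f"Dew point: {dp:.1f} C")
    print("Condensation risk!" if roof_surface_temp < dp else "Roof stays dry.")
```

Under these assumed conditions the dew point comes out around 14°C, so an inner skin at 12°C would sweat; keeping it a few degrees warmer keeps it dry.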

Finally, a state-of-the-art solution is required to move the confinement structure, which weighs more than 30,000 tonnes, from its construction site to the final resting place above Reactor 4. The giant building will slide 300 metres along rail tracks, furnished with specially developed Teflon bearings, which will minimise friction and allow accurate positioning.
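A rough sense of why low-friction bearings matter: the force needed to slide the structure scales with its weight times the coefficient of friction. The back-of-the-envelope sketch below assumes an illustrative PTFE-on-steel coefficient of about 0.05, compared with roughly 0.5 for dry steel on steel; the real sliding system’s figures are not given here.

```python
# Back-of-the-envelope sketch: horizontal force needed to slide the
# ~30,000-tonne confinement structure, using F = mu * m * g.
# The friction coefficients below are assumed illustrative values.

G = 9.81  # m/s^2

def sliding_force_meganewtons(mass_tonnes: float, mu: float) -> float:
    """Horizontal force (MN) needed to overcome sliding friction."""
    return mu * mass_tonnes * 1000 * G / 1e6

if __name__ == "__main__":
    mass = 30_000           # tonnes, per the article ("more than 30,000 tonnes")
    for mu in (0.05, 0.5):  # assumed PTFE-on-steel vs. generic dry steel-on-steel
        print(f"mu = {mu:4.2f}: about {sliding_force_meganewtons(mass, mu):.0f} MN of push needed")
```

On those assumptions the pushing force drops from the order of 150 MN to about 15 MN, which is what makes a controlled 300-metre slide feasible.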

Future safety

Once the new structure finally confines the radiation, deconstruction of the old sarcophagus and Reactor 4 within can begin bit by bit. This will be done using a remotely operated heavy-duty crane and robotic tools suspended from the new confinement roof. However, the high levels of radioactivity may damage these remote systems, much as they did the robots that entered the stricken Fukushima reactors and “died trying” to capture the damage on camera.

At the very least, building a new confinement structure buys the Ukrainian government more time to develop new radiation-resistant clean-up solutions and undertake the clean-up as safely as possible, all while the radioactive material is decaying. This is an enforced lesson in patience. Only constant innovation in engineering, robotics and materials will allow nuclear disaster sites like Chernobyl and Fukushima to be made safe, once and for all.

Claire Corkhill, Research Fellow in nuclear waste disposal, University of Sheffield

This article was originally published on The Conversation. Read the original article.

Featured Photo Credit: Tim Porter/Wikimedia, CC BY-SA

Now, Check Out:

Before fusion: a human history of fire

By Stephen Pyne, Arizona State University.

We humans are fire creatures. Tending fire is a species trait, a capacity we alone possess – and one we are not likely to tolerate willingly in any other species. But then we live on Earth, the only true fire planet, the only one we know of that burns living landscapes. Fire is where, uniquely, our special capabilities and Earth’s bioenergy flows converge. That has made us the keystone species for fire on Earth. Our environmental power is literally a fire power.

We developed small guts and large heads because we could cook food. We went to the top of the food chain because we could cook landscapes. Then we went from burning living landscapes to burning fossilized, lithic ones and became a geologic force that has begun to cook the planet. Our firepower underwrites that tangle of anthropogenic meddlings summed up as “global change.” The Anthropocene might equally be called the Pyrocene.

An Australian Aboriginal family on the move, with the boy on the right carrying fire, one tool humans have used for millennia to control the environment. Watercolor (1790), attributed to Philip Gidley King. State Library of New South Wales, Banks Papers – Series 36a.03, digital ID: a2225003, Author provided

The Pyrocene threatens to overwhelm Earth with fire as the Pleistocene did with ice. It has forced us to reexamine the nature of our firepower, which has taken two forms. One involves open burning on the landscape. We tweak natural fire regimes to better suit our purposes. We set fires for hunting, foraging, protection against wildfire, even warfare. We burn slashed woods and drained peatlands for farming. We kindle pastures to improve fodder and browse. We burn fallow, of any and all kinds. Over the past century we have sought, with equal intensity, to remove fire from protected forests and parks. The pyrogeography of the planet is sculpted by the fires we apply and withhold, and the landscapes we have fashioned, which in turn shape the fires they exhibit.

Our other firepower comes from closed combustion. We put fire into special chambers – hearths, forges, furnaces, engines, candle wicks, dynamos – to generate light, heat and power. These mechanical keepers of the flame have enormously leveraged our firepower. Matthew Boulton, James Watt’s business partner in promoting the steam engine, put it with brutal pithiness: “I sell here, sir, what all the world desires to have – Power.”

As fire industrialized, as biotas, terrain, air and lightning were disaggregated and refined into fuel, oxygen and spark to produce maximum effects, fire began to vanish from daily life and landscapes. The two narratives of fire – open and closed – once overlapped. We domesticated landscapes by passing the equivalent of the hearth fire over them. Now we use closed combustion to substitute for or suppress outright those free-burning flames.

Shifting our understanding of fire

Today, as measured by emissions, even allowing for the massive incineration of tropical peat in Indonesia, we burn far more by closed combustion than by open. Particularly in urban and industrial societies, an ever-greater share of combustion happens in confined fires rather than as open flames on landscapes. In modern cities free-burning fire is progressively banned, even for ceremonial purposes. The Burning Man festival had to relocate from San Francisco’s Baker Beach to Black Rock, a salt playa in Nevada. Candles are banished from university dormitories.

Most of humanity’s fire history has pivoted around a quest for combustibles, for new and more abundant sources of stuff to burn. As we exhausted one cache of combustibles, we moved to another, eventually drafting fossil biomass from the geologic past. Slash-and-burn agriculture is an apt metaphor for humanity’s fevered quest for fire generally.

Now we face a question of sinks – of the capacity of ecological systems, including Earth itself, to absorb all the effluent. So, too, our understanding of fire’s place in planetary history is inverting. We used to understand fire as a subset of natural history, particularly of climate. Now natural history, including climate, is becoming a subset of fire history.

Leaving behind Promethean fire

‘Prometheus Carrying Fire,’ oil on canvas (1637), by Jan Cossiers.

The open and closed narratives of fire, once linked, have diverged. The story of closed combustion is Promethean, stolen from the gods and brought under human control. It speaks to fire abstracted from its setting, perhaps by violence, and certainly held in defiance of an existing order. Promethean fire provides the motive power behind most of our technology.

The narrative of open burning is a more primeval story that speaks to fire as a companion on our journey, as part of how we exercise stewardship of our natural habitat. We are the agent that brokers fire for the biosphere, the species that more than any other shapes the patterning of fire on the land.

Overall, thanks to Promethean fire, we now have too much of the wrong kind of fire, and it has led to a quest for alternative forms of energy that do not rely on combustion. The move toward carbon-neutral energy promises to unbundle the source of our power from our grip on the torch. Recent developments in nuclear fusion, which has long promised a full replacement for burning, have inspired calls for a “Wright brothers moment” to show the world what is possible. Together fusion and solar power promise to replace the human need for controlled flames, to decouple Promethean from primeval fire.

Such is the power of fire in our imagination, however, that we continue to speak loosely of such alternatives as “fire,” as earlier times lumped together all natural phenomena that radiated heat and light. Well into the 18th century, the Enlightenment saw central fires in the Earth that boiled over as volcanoes, celestial fires in the guise of stars and comets, solar fire blazing from the sun, electrical fires crackling as lightning. Fire was, and remains, a potent source of metaphor.

But fusion and solar power are not combustion. They represent a decarbonization of energy to the point that it is no longer fire. We can all breathe easier (literally) when Promethean fire shrinks, and perhaps vanishes.

Returning fire to nature

That still leaves primeval fire, an emergent property of the living world that has flourished since the first plants colonized continents. It will not go away. Rather, its removal, even its attempted removal, can be profoundly disruptive. We need a lot more primeval fire of the right sort. Paradoxically, the more we find surrogates for closed combustion, the more we can embrace open burning.

We have to sort out good fire from bad. That’s exactly what our species monopoly makes possible and what our firepower demands of us. We can begin by reversing the Promethean story, by taking fire out of our machines and putting it back into its indigenous setting. Faux fires like solar power, nuclear fission and fusion can nudge that project along by taking its place and fulfilling our modern energy needs. A triumph of fusion energy won’t mean the end of fire. It will simply liberate it from its enforced captivity and relocate it into landscapes where it can do the ecological work that it alone can do.

Stephen Pyne, Regents Professor in the School of Life Sciences, Arizona State University

This article was originally published on The Conversation. Read the original article.

Featured Image Credit:  NASA/SDO (AIA)

Now, Check Out: