Arctic sea ice appeared to have reached its annual lowest extent on Sept. 10, NASA and the NASA-supported National Snow and Ice Data Center (NSIDC) at the University of Colorado at Boulder report.
An analysis of satellite data showed that at 1.60 million square miles (4.14 million square kilometers), the 2016 Arctic sea ice minimum extent is effectively tied with 2007 for the second lowest yearly minimum in the satellite record. Since satellites began monitoring sea ice in 1978, researchers have observed a steep decline in the average extent of Arctic sea ice for every month of the year.
The sea ice cover of the Arctic Ocean and surrounding seas helps regulate the planet’s temperature, influences the circulation of the atmosphere and ocean, and impacts Arctic communities and ecosystems. Arctic sea ice shrinks every year during the spring and summer until it reaches its minimum yearly extent. Sea ice regrows during the frigid fall and winter months, when the sun is below the horizon in the Arctic.
This summer, the melt of Arctic sea ice surprised scientists by changing pace several times. The melt season began with a record low yearly maximum extent in March and a rapid ice loss through May. But in June and July, low atmospheric pressures and cloudy skies slowed down the melt. Then, after two large storms went across the Arctic basin in August, sea ice melt picked up speed through early September.
“It’s pretty remarkable that this year’s sea ice minimum extent ended up the second lowest, after how the melt progressed in June and July,” said Walt Meier, a sea ice scientist with NASA’s Goddard Space Flight Center in Greenbelt, Md. “June and July are usually key months for melt because that’s when you have 24 hours a day of sunlight – and this year we lost melt momentum during those two months.”
But in August, two very strong cyclones crossed the Arctic Ocean along the Siberian coast. These storms didn’t have as much of an immediate impact on the sea ice as the great cyclone of 2012, but in late August and early September there was “a pretty fast ice loss in the Chukchi and Beaufort seas that might be a delayed effect from the storms,” Meier said.
Meier also said that decades ago, the melt season would slow down by the middle of August, when the sun starts setting in the Arctic.
“In the past, we had this remaining sea ice pack that was mostly thick, old ice. But now everything is more jumbled up, which makes it less resistant to melt, so even late in the season you can get weather conditions that give it a final kick,” Meier said.
Arctic sea ice cover has not fared well during other months of the year either. A recently published study that ranked 37 years of monthly sea ice extents in the Arctic and Antarctic found that there has not been a record high in Arctic sea ice extents in any month since 1986. During that same time period, there have been 75 new record lows.
In this animation, the Earth rotates slowly as the Arctic sea ice advances over time from March 24, 2016, to Sept. 10, 2016, when the sea ice reached its annual minimum extent. The 2016 Arctic minimum sea ice extent is the second lowest minimum extent on the satellite record.
Credits: NASA Goddard’s Scientific Visualization Studio/C. Star
“When you think of the temperature records, it’s common to hear the statement that even when temperatures are increasing, you do expect a record cold here or there every once in a while,” said Claire Parkinson, lead author of the study and a senior climate scientist at Goddard. “To think that in this record of Arctic sea ice that goes back to the late 1970s, since 1986 there hasn’t been a single record high in any month of the year, and yet, over that same period, there have been 75 record lows. It’s just an incredible contrast.”
“It is definitely not just September that’s losing sea ice. The record makes it clear that the ice is not rebounding to where it used to be, even in the midst of the winter,” Parkinson said.
Parkinson’s analysis, which spans 1979 to 2015, found that in the Antarctic, where the trends are toward more rather than less sea ice, there have been only six monthly record lows after 1986, and 45 record highs.
“The Antarctic numbers are pretty amazing, except when you compare them with the Arctic’s, which are much more amazing,” Parkinson said.
Source: News release on NASA.gov republished under public domain rights and in accordance with the NASA Media Guidelines.
It’s literally epoch-defining news. A group of experts tasked with considering the question of whether we have officially entered the Anthropocene – the geological age characterised by humans’ influence on the planet – has delivered its answer: yes.
The British-led Working Group on the Anthropocene (WGA) told a geology conference in Cape Town that, in its considered opinion, the Anthropocene epoch began in 1950 – the start of the era of nuclear bomb tests, disposable plastics and the human population boom.
The Anthropocene has fast become an academic buzzword and has achieved a degree of public visibility in recent years. But the more the term is used, the more confusion reigns, at least for those not versed in the niceties of the underpinning science.
Roughly translated, the Anthropocene means the “age of humans”. Geologists examine layers of rock called “strata”, which tell a story of changes to the functioning of Earth’s surface and near-surface processes, be these oceanic, biological, terrestrial, riverine, atmospheric, tectonic or chemical.
Earth’s history, spiralling towards the present. USGS/Wikimedia Commons
This history is formalised in the International Chronostratigraphic Chart, which features a hierarchy of terms like “system” and “stage”; generally, the suffix “-cene” refers to a geologically brief stretch of time and sits near the bottom of the hierarchy. We have spent the past 11,500 years or so living in the so-called Holocene epoch, the interglacial period during which Homo sapiens has flourished.
If the Holocene has now truly given way to the Anthropocene, it’s because a single species – us – has significantly altered the character of the entire hydrosphere, cryosphere, biosphere, lithosphere and atmosphere.
The end of an era?
Making this call is not straightforward, because the Anthropocene proposition is being investigated in different areas of science, using different methods and criteria for assessing the evidence. Despite its geological ring, the term Anthropocene was coined not by a geologist, but by the Nobel Prize-winning atmospheric chemist Paul Crutzen in 2000.
Crutzen and his colleagues in Earth system science, comparing recent planetary changes to those occurring during the Holocene, concluded that we humans have made an indelible mark on our one and only home. We have altered the Earth system qualitatively, in ways that call into question our very survival over the coming few centuries.
Crutzen’s group talks of the post-1950 period as the “Great Acceleration”, when a range of factors – from human population numbers, to disposable plastics, to nitrogen fertiliser – began to increase exponentially. But their benchmark for identifying this as a significant change has nothing to do with geological stratigraphy. Instead, they ask whether the present period is qualitatively different to the situation during the Holocene.
Rocking out
Meanwhile, a small group of geologists has been investigating the stratigraphic evidence for the Anthropocene. A few years ago a subcommission of the International Commission on Stratigraphy (ICS) set up the Anthropocene working group, which has now suggested that human activity has left an indelible mark on the stratigraphic record.
The major problem with this approach is that no such signal is yet captured in rock. Humans have not been around long enough for any planet-wide impacts to be evident in Earth’s geology itself. This means that any evidence for a Holocene-Anthropocene boundary would necessarily be found in less permanent media like ice sheets, soil layers or ocean sediments.
The ICS has always considered evidence for boundaries that pertain to the past, usually the deep past. The WGA is thus working against convention by looking for present-day stratigraphic markers that might demonstrate humans’ planetary impact. Only in thousands of years’ time might future geologists (if there are any) confirm that these markers are geologically significant.
In the meantime, the group must be content to identify specific calendar years when significant human impacts have been evident. For example, one is 1945, when the Trinity atomic device was detonated in New Mexico. This and subsequent bomb tests have left global markers of radioactivity that ought still to be evident in 10,000 years.
Alternatively, geographers Simon Lewis and Mark Maslin have suggested that 1610 might be a better candidate for a crucial human-induced step change. That was the year when atmospheric carbon dioxide dipped markedly – arguably a human fingerprint linked to the collapse of indigenous American agriculture after European colonisation, which allowed forests to regrow and draw down carbon dioxide – although this idea is contested.
Decision time
The fact that the WGA has picked a more recent date, 1950, suggests that it agrees with the idea of defining the Great Acceleration of the latter half of the 20th century as the moment we stepped into the Anthropocene.
It’s not a decision that is taken lightly. The ICS is extremely scrupulous about amending the International Chronostratigraphic Chart. The WGA’s suggestion will face a rigorous evaluation before it can be scientifically accepted by the commission. It may be many years before it is formally ratified.
Elsewhere, the term is fast becoming a widely used description of how people now relate to our planet, rather like the Iron Age or the Renaissance. These words describe real changes in history and enjoy widespread use in academia and beyond, without the need for rigorously defined “boundary markers” to delimit them from prior periods.
Does any of this really matter? Should we care that the jury is still out in geology, while other scientists feel confident that humans are altering the entire Earth system?
Writing on The Conversation, geologist James Scourse suggests not. He feels that the geological debate is “manufactured” and that humans’ impact on Earth is sufficiently well recognised that we have no need of a new term to describe it.
Clearly, many scientists beg to differ. A key reason, arguably, is the failure of virtually every society on the planet to acknowledge the sheer magnitude of the human impact on Earth. Only last year did we finally negotiate a truly global treaty to confront climate change.
In this light, the Anthropocene allows scientists to assemble a set of large-scale human impacts under one graphic conceptual banner. Its scientific status therefore matters a great deal if people worldwide are at long last to wake up to the environmental effects of their collective actions.
Gaining traction
But the scientific credibility of the Anthropocene proposition is likely to be called into question the more that scientists use the term informally or otherwise. Here the recent history of climate science in the public domain is instructive.
Even more than the concept of global warming, the Anthropocene is provocative because it implies that our current way of life, especially in wealthy parts of the world, is utterly unsustainable. Large companies who make profits from environmental despoliation – oil multinationals, chemical companies, car makers and countless others – have much to lose if the concept becomes linked with political agendas devoted to things like degrowth and decarbonisation. When one considers the organised attacks on climate science in the United States and elsewhere, it seems likely that Anthropocene science will be challenged on ostensibly scientific grounds by non-scientists who dislike its implications.
Sadly, such attacks are likely to succeed. In geology, the WGA’s unconventional proclamation potentially leaves any ICS definition open to challenge. If accepted, it also means that all indicators of the Holocene would now have to be referred to as things of the past, despite evidence that the transition to a human-shaped world is not quite complete in some places.
Some climate contrarians still refuse to accept that researchers can truly distinguish a human signature in the climate. Similarly, scientists who address themselves to the Anthropocene will doubtless face questions about how much these changes to the planet are really beyond the range of natural variability.
If “Anthropocene sceptics” gain the same momentum as climate deniers have enjoyed, they will sow seeds of confusion into what ought to be a mature public debate about how humans can transform their relationship with the Earth. But we can resist this confusion by recognising that we don’t need the ICS’s imprimatur to appreciate that we are indeed waving goodbye to Earth as we have known it throughout human civilisation.
We can also recognise that Earth system science is not as precise as nuclear physics or geometry. This lack of precision does not mean that the Anthropocene is pure scientific speculation. It means that science knows enough to sound the alarm, without knowing all the details about the unfolding emergency.
The Anthropocene deserves to become part of our lexicon – a way we understand who we are, what we’re doing and what our responsibilities are as a species – so long as we remember that not all humans are equal contributors to our planetary maladies, with many being victims.
In the months leading up to the Rio Olympics, there was growing awareness that Brazil had not met the water quality goals outlined in their bid, and that athletes might be swimming, sailing, rowing or canoeing in waters contaminated with untreated human sewage. News articles discussed the poor water quality in competition waters, health risks to the athletes and the reasons why the US$ 4 billion pledged to greatly reduce the flow of untreated sewage into Guanabara Bay had not materialized.
A common theme of these articles was one of shock: that sewage was being disposed of untreated into the environment, that water quality was so poor and that elite athletes might risk their health to compete in the Olympics.
These articles are accurate: there is a health risk to Olympic athletes, and having athletes compete in water contaminated with human sewage is reprehensible. In the next few days to weeks, we’ll learn what consequences there might be of this broken pledge.
However, what is missing from much, but not all, of the coverage is that the situation in Rio is not only not abnormal, it is common. Currently, about one-third of the global population (2.4 billion people) does not have access to sanitation facilities, such as a latrine or sewerage system, including 946 million people who have no facilities and practice open defecation. Another 2.1 billion urban residents worldwide use improved sanitation facilities that do not safely dispose of human waste, including 1.5 billion who use sewerage systems without treatment.
Rio’s water problems also highlight the limitations of reliance on centralized wastewater treatment systems. To meet the needs of the billions of people who suffer the health consequences of untreated human sewage every day, we need new technological innovations and approaches to sanitation provision.
Varying treatment around the world
Americans might be surprised to learn how recently current-day sewage treatment was introduced.
Today’s sewage treatment in the United States has its roots in engineering innovations from the late 19th and early 20th century. During this time, U.S. cities installed water systems that provided piped, treated and safe water supplies to households. This provision is credited with large reductions in infant and child deaths and elimination of epidemic diseases such as cholera and typhoid.
With the installation of these water supplies came the need for household wastewater disposal. Sewerage systems, where household wastewater is centrally collected and disposed, were first installed in the early 1900s. By 1940, half of the population with sewers also had some water treatment before disposal. In more rural areas, septic tanks were installed.
The Deer Island wastewater treatment plant was the centerpiece of a project to clean up the Boston Harbor. It wasn’t fully operational until 2000. Doc Searls/flickr, CC BY
Over time, treatment and environmental disposal improved. Boston Harbor, for instance, was once known as one of the dirtiest harbors in the U.S. The centerpiece of its $3.8 billion cleanup was the Deer Island wastewater treatment plant, which became fully operational in 2000. The plant treats wastewater for over 2.5 million people and discharges the treated water 9.5 miles out into the ocean rather than into the harbor, vastly improving Boston Harbor’s water quality. By 2000, about 90 percent of wastewater in North America was treated before disposal.
The United States’ situation, unfortunately, is not the norm.
In the year 2000, the percentage of urban wastewater collected through sewerage systems treated before disposal was only 66 percent for Europe, 35 percent for Asia, 14 percent for Latin America and the Caribbean, and less than one percent for Africa.
In Rio, only 12 percent of sewerage system wastewater was treated when the city was awarded the Olympics; that number is estimated to be 65 percent today. While this is an impressive improvement, it is short of the pledged 80 percent.
The health effects of exposure to human sewage are myriad, including diarrhea, the cause of 760,000 deaths in children per year worldwide, and stunting, which impacts 162 million children under five throughout the world. Due to these health consequences, in 2007, sanitation provision was voted the greatest medical advance since 1840 by readers of the prestigious British Medical Journal.
Technical and social innovation
But reversing these health threats will require that countries take a different path from the one the U.S. took during the 20th century.
The primary challenge of improving the current worldwide sanitation situation is that the three existing sanitation solutions – sewerage systems, septic tanks, and latrines – have limitations.
Sewerage systems are expensive to install, and are fixed systems that lack the ability to rapidly expand with population growth; septic tanks require land with appropriate soils; and latrines require space, fill quickly and do not treat waste.
Because installing centralized sewage treatment plants is expensive and plants do not expand quickly to match population growth, alternative methods such as regular waste collection businesses are needed. gtzecosan/flickr, CC BY
Thus, there is need for new sanitation technologies that isolate human waste from the environment and provide options for the fastest-growing segment of the worldwide population: those living in densely populated mega-cities and urban slums. There is active and ongoing research and programming in developing alternative approaches and technologies, some examples of which include:
Community mobilization strategies using education to encourage communities to completely eliminate open defecation by triggering the communities’ desire for collective change. These programs can encourage local development of sanitation solutions, and certify communities as open-defecation free.
Systems-based approaches where sanitation facilities are built and franchised by local operators that charge a per-use fee or are installed in community institutions. Waste is collected and converted at a centralized facility to organic fertilizer, insect-based animal feed and renewable energy.
Social enterprise services where container-based toilets are installed in homes at no cost, and a monthly charge is assessed for waste collection. Waste is then transformed into briquettes and sold as a clean-burning alternative to charcoal.
While there are promising advances, many are currently small-scale, and more work is needed to reach the 2.4 billion people without access to any improved sanitation facilities and the additional 2.1 billion urban residents using improved sanitation facilities that do not safely dispose of human waste.
As the Rio Olympics proceed, and we hope for the health and safety of the elite athletes competing in contaminated waters, let us also consider – and work to improve sanitary conditions for – the billions of people worldwide who daily suffer the health consequences of living in an environment contaminated with human waste.
All eyes are turned toward Rio de Janeiro to watch top athletes from all over the world compete. Yet the headlines continue to highlight the problems with the water quality and the risks to the athletes who swim, row and sail, and even to tourists simply visiting the beaches.
Large concentrations of disease-causing viruses have been found in the aquatic venues, particularly in the Rodrigo de Freitas Lagoon, where Olympic rowing will take place, and the Gloria Marina, the starting point for the sailing races. These viruses – adenoviruses, rotaviruses and noroviruses – are coming from human fecal wastes, untreated and/or inadequately treated sewage, and cause a variety of health problems, ranging from milder symptoms such as headache, respiratory infection or diarrhea to severe illness impacting the heart, liver and central nervous system.
But Brazil’s wastewater woes are hardly unique. The water quality of lakes, rivers and coastal shorelines around the world is degrading at an alarming rate. In fact, pollution of the 10 largest rivers on earth is so significant that it affects five billion people.
One of the root problems in Rio and other places is how water quality is tested. Monitoring for a broader set of viruses and other microbes in water would be a big step in improving public health.
Beyond E. coli testing
Human fecal waste remains one of the most important sources of pathogens. Today, water quality is most often measured by testing for E. coli bacteria, and this is the standard used around the world. But we have better ways to identify the microbes that cause problems when pollution, such as sewage, is released in our rivers, lakes and shorelines.
In my own research, my colleagues and I have tested for the presence of an alternative indicator – a type of virus known as a coliphage, which infects E. coli bacteria – as an inexpensive way of evaluating sewage treatment. We also use a whole variety of other tests which allow us to monitor for specific pathogens, including viruses.
Our analysis and others suggest we should be striving, at a minimum, for a 99.9 percent reduction of viruses across the variety of sewage treatment designs. If we rely on testing only for E. coli bacteria, we won’t know whether treatment is actually removing viruses.
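For context, treatment engineers commonly express such targets as a log reduction value (this is standard terminology, not something specific to our study):

$$\mathrm{LRV} = \log_{10}\!\left(\frac{C_{\text{in}}}{C_{\text{out}}}\right),$$

where $C_{\text{in}}$ and $C_{\text{out}}$ are the virus concentrations entering and leaving treatment. A 99.9 percent reduction leaves one-thousandth of the incoming viruses, so it corresponds to an LRV of 3.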
Testing for a broader set of microbes makes it easier to diagnose what the source of pollutants are. For example, these microbial source tracking tools allow one to trace the pollution back to humans, cattle or pigs. We have used these tests throughout the U.S. and Europe, and they are now being used in resource-poor areas including Africa and South America. While these methods are not routine and are slightly more expensive, the results provide valuable information that allows one to better remediate water quality problems.
Studies on how frequently pathogens occur can then be connected back to the sources, with recommendations on treatment in order to reverse pollution trends. Incentives can be used to enhance best management practices such as preventing runoff from farms, composting to reduce pathogens in manure and improved disinfection of wastewater to kill off viruses.
Moving targets
Globally, the challenge of implementing new tests and treatments is immense. In the last 60 years we have seen a great acceleration of population growth, and this, in combination with lack of sewage treatment and failing infrastructure, has caused a continual degradation of water quality, as demonstrated by increasing toxic algal blooms and fecal contamination that cause microbial hazards. Indeed, one of the United Nations’ Sustainable Development Goals is “access to improved sanitation facilities.”
Around the world, the regulations governing water quality for recreation are in urgent need of revisions in part because of the growing array of pathogens in wastewater.
Millions of dollars were spent to clean up the trash and treat sewage in the waterways around Rio before the Olympics, but water quality remains a worry. Ricardo Moraes/Reuters
Sewage contains well over 100 different viruses (adenoviruses, astroviruses, coxsackieviruses, enteroviruses, noroviruses and rotaviruses) among other pathogens like the enteric protozoa (Cryptosporidium). Newly emerging viruses such as Cycloviruses, which are causing neurological problems in children in Asia, are also showing up in sewage. Thus, the detection of these large concentrations of adenoviruses such as was found in Brazil is likely the tip of the iceberg.
It must be said clearly that the E. coli test simply does not work for viruses, and we must evaluate whether sewage treatment is properly removing viruses. While the World Health Organization, the U.S. EPA, the EU and the scientific community have known about the deficiencies of the E. coli indicator system for decades, little has been done to address this. Monitoring costs, lack of development of standard methods and no focus on a water diagnostic strategy are among the reasons for this lack of advancement.
Yet to my knowledge, many government agencies and even large nonprofits such as the Gates Foundation are not aware of these limitations. The E. coli approach alone cannot help resolve the questions that need to be answered to improve sanitation, sewage treatment and water reuse while protecting important aquatic ecosystem services.
Different paths of contact
New molecular tests can detect both live and dead viruses. Adenoviruses, for example, have been found in raw sewage around the world. If adequate treatment and disinfection are used, this contamination can be reduced to nondetectable levels.
The numbers of adenoviruses found in Rio were reported to range from 26 million to 1.8 billion per liter, which is essentially the level found in untreated sewage. It is not known how many of these viruses were alive, but 90 percent of the samples did contain some level of live viruses.
Adenoviruses have been found in U.S. waters as well, posing a threat to public health. Our studies in Chicago found that 65 percent of samples taken from the Chicago Area Waterways System (CAWS), which receives treated wastewater, tested positive for adenoviruses, with average concentrations of 2,600 viruses per liter in the canals and about 110 viruses per liter on the beaches. These data indicate some die-off as viruses move toward the beach, but some remain alive and would be able to cause disease. About 4 percent of the people using these waters for boating and fishing became sick. The presence of these viruses and the subsequent illnesses indicate the need for greater testing and treatment.
Around the world, those who swim in or boat on polluted surface waters, or use them for hygienic purposes such as bathing, cleaning clothes and washing dishes, or even for religious rituals, are all at risk of diarrhea, respiratory disease, and skin, eye, ear and nose infections. This is the sad state of affairs and the reality for many people throughout the world. This does not even account for the risks associated with irrigation of food crops or use of the water for animals and drinking water.
While the spotlight is shining on the athletes over the next few weeks, let us also shine a spotlight on what we can do to improve and restore water quality around the world – through our collective efforts and the use of new tools and risk frameworks – moving the political will one step closer toward sewage treatment and protection of the biohealth of the blue planet.
With a heat wave pushing the heat index well above 100 degrees Fahrenheit (38 Celsius) through much of the U.S., most of us are happy to stay indoors and crank the air conditioning. And if you think it’s hot here, try 124°F in India. Globally, 2016 is poised to be another record-breaking year for average temperatures. This means more air conditioning. Much more.
In a paper published in the Proceedings of the National Academy of Science (PNAS), Paul Gertler and I examine the enormous global potential for air conditioning. As incomes rise around the world and global temperatures go up, people are buying air conditioners at alarming rates. In China, for example, sales of air conditioners have nearly doubled over the last five years. Each year now more than 60 million air conditioners are sold in China, more than eight times as many as are sold annually in the United States.
A ‘heat dome’ arrives in the U.S. NOAA Forecast Daily Maximum Heat Index
This is mostly great news. People are getting richer, and air conditioning brings great relief on hot and humid days. However, air conditioning also uses vast amounts of electricity. A typical room air conditioner, for example, uses 10-20 times as much electricity as a ceiling fan.
Meeting this increased demand for electricity will require billions of dollars of infrastructure investments and result in billions of tons of increased carbon dioxide emissions. A new study by Lawrence Berkeley Lab also points out that more ACs means more refrigerants that are potent greenhouse gases.
Evidence from Mexico
To get an idea of the global impact of higher air conditioner use, we looked at Mexico, a country with highly varied climate ranging from hot and humid tropical to arid deserts to high-altitude plateaus. Average year-round temperatures range from the high 50s Fahrenheit in the high-altitude plateaus to the low 80s in the Yucatan Peninsula.
Graphic shows the range of average temperatures in Fahrenheit in different parts of Mexico. Davis and Gertler, PNAS, 2015. Copyright 2015 National Academy of Sciences, USA.
Patterns of air conditioning vary widely across Mexico. There is little air conditioning in cool areas of the country; even at high-income levels, penetration never exceeds 10 percent. In hot areas, however, the pattern is very different. Penetration begins low but then increases steadily with income to reach near 80 percent.
Davis and Gertler, PNAS, 2015. Copyright 2015 National Academy of Sciences, USA.
As Mexicans grow richer, many more will buy air conditioners. And as average temperatures increase, the reach of air conditioning will be extended, even to the relatively cool areas where saturation is currently low. Our model predicts that near 100 percent of households will have air conditioning in all the warm areas within just a few decades.
Global air conditioning potential
We expect this pattern to hold not only in Mexico but around the world. When you look around, there are a lot of hot places where people are getting richer. In our study, we ranked countries in terms of air conditioning potential. We defined potential as the product of population and cooling degree days (CDDs), a unit used to determine the demand for energy to cool buildings.
Davis and Gertler, PNAS, 2015. Copyright 2015 National Academy of Sciences, USA.
Number one on the list is India. India is massive, with four times the population of the United States. It is also extremely hot. Annual CDDs are 3,120, compared to only 882 in the United States. That is, India’s total air conditioning potential is more than 12 times that of the United States.
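To make that arithmetic concrete, here is a minimal sketch of the potential metric. The CDD figures come from the article; the population numbers are rounded approximations supplied for illustration, not the exact inputs used in the paper.

```python
# Minimal sketch of the "air conditioning potential" metric:
# potential = population x annual cooling degree days (CDDs).
# CDD values are from the article; populations are rounded illustrative figures.

annual_cdds = {"India": 3120, "United States": 882}
population_millions = {"India": 1300, "United States": 320}

def cooling_degree_days(daily_mean_temps_f, base_f=65.0):
    """Standard CDD definition: sum of daily-mean degrees above a base temperature (65 F in the U.S.)."""
    return sum(max(0.0, t - base_f) for t in daily_mean_temps_f)

def ac_potential(country):
    """Population (millions) times annual CDDs = person-cooling-degree-days, in millions."""
    return population_millions[country] * annual_cdds[country]

# Example of the CDD definition itself, for one hypothetical week of daily means (deg F):
print(cooling_degree_days([88, 91, 85, 79, 95, 90, 86]), "CDDs accumulated in that week")

ratio = ac_potential("India") / ac_potential("United States")
print(f"India's air conditioning potential is about {ratio:.0f} times that of the U.S.")
# With these rounded inputs the ratio is roughly 14, consistent with "more than 12 times".
```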
Mexico ranks #12 but has fewer than half the CDDs experienced by India, Indonesia, Philippines and Thailand. These countries currently have lower GDP per capita, but our research predicts rapid air conditioning adoption in these countries over the next couple of decades.
Carbon cliff?
What does all this mean for carbon dioxide emissions? It depends on the pace of technological change, both for cooling equipment and for electricity generation.
Today’s air conditioners use only about half as much electricity as 1990 models, and continued advances in energy efficiency could reduce the energy consumption impacts substantially. Likewise, continued development of solar, wind and other low-carbon sources of electricity generation could mitigate the increases in carbon dioxide emissions.
As an economist, my view is that the best way to get there is a carbon tax. Higher-priced electricity would slow the adoption and use of air conditioning, while spurring innovation in energy efficiency. A carbon tax would also give a boost to renewable generating technologies, increasing their deployment. Low- and middle-income countries are anticipating large increases in energy demand over the next several decades, and carbon legislation along the lines of a carbon tax is the most efficient approach to meeting that demand with low-carbon technologies.
Pricing carbon would also lead to broader behavioral changes. Our homes and businesses tend to be very energy-intensive. In part, this reflects the fact that carbon emissions are free. Energy would be more expensive with a price on carbon, so more attention would go to building design. Natural shade, orientation, building materials, insulation and other considerations can have a big impact on energy consumption. We need efficient markets if we are going to stay cool without heating up the planet.
It was back in 250 BC when Archimedes reportedly stepped into his bathtub and had the world’s first Eureka moment – realising that putting himself in the water made its level rise.
More than two millennia later, the comments sections of news stories still routinely reveal confusion about how this same thing happens when polar ice melts and sea levels change.
This is in marked contrast to the confidence that scientists have in their collective understanding of what is happening to the ice sheets. Indeed, the 2014 Assessment Report of the Intergovernmental Panel on Climate Change reported “very high confidence” that the Greenland Ice Sheet was melting and raising sea levels, with “high confidence” of the same for the Antarctic Ice Sheet.
Despite this, commenters below the line on news stories frequently wonder how it can be true that Antarctica is melting and contributing to sea-level rise, when satellite observations show Antarctic ice expanding.
Unravelling the confusion depends on appreciating the difference between the two different types of ice, which we can broadly term “land ice” and “sea ice” – although as we shall see, there’s a little bit more to it than that. The two different types of ice have very different roles in Earth’s climate, and behave in crucially different ways.
Sea levels rise when grounded ice – ice resting on land – melts, often after breaking off into the ocean as icebergs. Floating sea ice that melts does not raise sea levels, but it has a very important role in other areas of our climate system.
Land ice
Ice sheets form by the gradual accumulation of snow on land over long periods of time. This “grounded” ice flows in glaciers to the ocean under the influence of gravity, and when it arrives it eventually melts. If the amount of ice flowing into the oceans is balanced by snowfall on land, the net change in global sea level due to this ice sheet is zero.
However, if the ice begins to flow more rapidly or snowfall declines, the ice sheet can be out of balance, resulting in a net rise in sea level.
But this influence on sea level is only really relevant for ice that is grounded on land. When the ice sheet starts to float on the ocean it is called an “ice shelf”. The contribution of ice shelves to sea-level rise is negligible because they are already in the sea (similar to an ice cube in a glass of water, although the ocean is salty unlike a glass of water). But they can nevertheless play an important role in sea-level rise, by governing the rate at which the grounded ice can discharge into the oceans, and therefore how fast it melts.
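The ice-cube analogy can be made precise with Archimedes’ principle. As a first-order sketch (treating meltwater and seawater as having the same density): a floating ice shelf of mass $m$ displaces a volume of seawater

$$V_{\text{displaced}} = \frac{m}{\rho_{\text{water}}},$$

and when it melts it yields liquid water of that same volume, $m/\rho_{\text{water}}$, which simply fills the space the ice was already displacing, leaving the water level unchanged. Because meltwater is fresh and slightly less dense than salty seawater, the true contribution is not exactly zero, which is why it is described as negligible rather than nil.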
Sea ice
When viewed from space, all polar ice looks pretty much the same. But there is a second category of ice that has effectively nothing to do with the ice sheets themselves.
“Sea ice” is formed when ocean water is frozen due to cooling by the air. Because it is floating in the ocean, sea ice does not (directly) affect sea level.
Sea ice is generally no more than a few metres thick, although it can grow to more than 10 metres thick if allowed to grow over many winters. Ice shelves, on the other hand, are hundreds of metres thick, as seen when an iceberg is created and rolls over.
In the ocean around Antarctica, almost all the sea ice melts in the southern hemisphere spring. This means that every year an area of ocean twice the size of Australia freezes over and then melts – arguably the largest seasonal change on our planet.
So, while ice sheets change over decades and centuries, the time scale of sea ice variability is measured in months.
Antarctic sea ice grows and shrinks dramatically over the course of the year. These changes do not directly affect sea level. Land ice changes are slower but do affect sea levels, at least until the land ice becomes afloat.
The seasonal cycle of Arctic sea ice is much smaller. This is because the Arctic retains much more of its sea ice in the summer, and its winter extent is limited by land that surrounds the Arctic Ocean.
What is happening to land ice?
The two great ice sheets are in Greenland and Antarctica. Thanks to satellite measurements, we now know that since the early 1990s both have been contributing to sea-level rise.
It is thought that most of the Antarctic changes are caused by seawater melting the ice shelves faster, causing the land ice to flow faster and hence leading to sea-level rise as the ice sheet is tipped out of balance.
In Greenland, both surface and ocean melting play important roles in driving the accelerated contribution to sea levels.
What about sea ice?
Over the last four decades of satellite measurements, there has been a rapid shrinking and thinning of summer Arctic sea ice. This is due to human activity warming the atmosphere and ocean.
In the Antarctic there has been a modest increase in total sea ice cover, but with a complex pattern of localised increases and decreases that are related to changes in winds and ocean currents. What’s more, satellite measurement of changes in sea ice thickness is much more difficult in the Antarctic than in the Arctic mainly because Antarctic sea ice has a lot of poorly measured snow resting on it.
The Southern Ocean is arguably a much more complex system than the Arctic Ocean, and determining humans’ influence on these trends and projecting future change is challenging.
Observations of the changes happening in the Arctic and Antarctic reveal complex stories that vary from place to place and over time.
These changes require ongoing monitoring and greater understanding of the causes of the observed changes. And public confusion can be avoided through careful use of the different terms describing ice in the global climate system. It pays to know your ice sheets from your sea ice.
In recent years wildfire seasons in the western United States have become so intense that many of us who make our home in dry, fire-prone areas are grappling with how to live with fire.
When I moved to a small town in eastern Washington in 2004, I thought I was prepared for the reality of wildfires. As a fire ecologist, I had studied climate change and knew the predictions of hotter, drier and longer fire seasons.
But the severity and massive size of recent wildfires in our area have highlighted the importance of making our communities more resilient to fire.
In addition to better preparing for the inevitability of fire, my research and related studies have shown that prescribed burns and proactive thinning can make our neighboring forests less susceptible to large fire events.
A history of frequent fire
The valley where I live in eastern Washington is so special that I hesitate to share its name. In spite of record-breaking wildfire seasons in recent years, many people are still moving here to build cabins in the woods.
The Methow Valley is stunningly beautiful, with shrub steppe and ponderosa pine lowlands grading into mixed conifer forests at higher elevations, topped by high mountain peaks. Our valley was named by Native Americans for the balsamroot sunflower blossoms that wash the springtime hillsides in brilliant gold.
Warmer and drier springs are contributing to more extreme fire events, such as the Tripod Complex fire of 2006, which was the largest in 50 years. US Forest Service
The native plants here depend on fire for growing space and regeneration. The arrowleaf balsamroot, for example, is deeply rooted and easily resprouts following fire. Ponderosa pine trees have thick, deeply grooved bark, and can shed their lower branches. If surface fires burn them, thick bark insulates their living tissue, and the lack of lower branches can prevent fires from spreading to crowns.
Historically, most semi-arid landscapes of western North America evolved with frequent fire. Ever-changing patterns of forest and rangeland vegetation were created by past burns. Grasslands, shrublands, open-grown and closed-canopy forests were all part of the patchwork.
Prior wildfire patterns constrained future fire spread through a mosaic of forest and nonforest vegetation that, in general, did not let fire burn contagiously across vast areas. While fires burned frequently, they were small to medium in size. Large fires, those of more than 10,000 acres, were infrequent by comparison and occurred during prolonged droughts, often under hot and windy conditions.
Today, in the absence of frequent fire, the same semi-arid landscapes have much more continuous forest cover. And fires, when they do burn, tend to be larger and more severe. My community lived through two such fire events in the past two summers.
How the forests have changed
Despite recent wildfires, semi-arid forests in my valley and across the inland West are still under a chronic fire deficit, resulting from a variety of historical factors. Fire suppression, displacement of native people, railroad and road building, and livestock grazing all contributed to the lack of fire.
It is difficult to convey how excluding fires from forests can so radically change them. Imagine if we replaced days of rain and snow with sunshine: the absence of precipitation would quickly shift all existing vegetation to sparse desert. Similarly, the near absence of fire over the past century has dramatically altered semi-arid landscapes, gradually replacing varied burn mosaics, characterized by forests of varying ages, shrublands and grasslands, with dense, multi-layered forests.
Markedly different wildfire behavior accompanies these changes. Wildfires are now able to contagiously burn vast areas of flammable vegetation, and severe fires, including crown fires that consume forest canopies, are increasingly common.
It was after an early and dry spring in 2006 that the largest wildfire in 50 years, the Tripod Complex fire, raged north of our small town of Winthrop, Washington.
I remember watching it start – awestruck by the smoke plume, which resembled the aftermath of a bomb explosion. As the plume collapsed and smoke settled into our valley, the reality of living through a major wildfire sunk in. I wasn’t prepared for this kind of fire. None of us was.
Eight years later, the 2014 Carlton Complex fire burned down our valley, and in two days became the largest wildfire in state history. Lightning strikes had started many small fires, and when high winds arrived on July 17, fire starts exploded into fire storms, coalescing to burn over 160,000 acres and traveling nearly 40 miles in just nine hours.
If you asked anyone in our valley who lived through the Carlton Complex fires, you would need to prepare for a long story. Evacuations of everyone downwind of the fires. Night skies filled with ember showers. A total of 310 homes destroyed. Loss of pets and livestock. Properties so blackened and charred that owners chose to move. Wide-ranging opinions about firefighter responses, from profound gratitude to what might have been done. Massive flood and mudslide events that followed. Heroic acts of tight-knit neighborhoods and communities as we pulled together and helped each other recover and rebuild.
Recovery had just begun when the 2015 wildfire season struck. Drought continued across the region and set the stage for a second, fire-filled summer. In mid-July, lightning storms ignited the Okanogan Complex, the latest record-holding wildfire in state history. One hundred and twenty homes were destroyed, many in neighboring communities to the north and south. In our valley, three firefighters lost their lives, and a fourth was badly burned. After all that we have been through, the loss and injury of these young people is the most devastating.
Evidence for thinning and prescribed burns
As we face another dry summer, our community is coming to terms with the continuing reality of wildfires. By my estimate, since 1990 over one-third of our watershed has burned. We are beginning to discuss what it means to be fire-adapted: making our homes less penetrable to burning embers, reducing fuels and thinning vegetation around our properties, and choosing better places to live and build. We can also create safe access for firefighters, plan emergency evacuation routes, and manage dry forests to be more resilient.
After decades of fire exclusion, dense and dry forests with heavy accumulations of fuel and understory vegetation often need to be treated with a combination of thinning and prescribed burning. Restoring landscape patterns will take time and careful management to mitigate how future wildfires burn across landscapes.
Parts of the Tripod fire in 2006 burned in a mosaic pattern of trees of different ages, which can prevent large-scale, contiguous burns. It’s evidence that prescribed burning and thinning can make forests more resilient. U.S. Forest Service
From our research, we know that fuel reduction in dry forests can mitigate the effects of wildfires. After the 2006 Tripod fires, we studied how past forest thinning and prescribed burning treatments influenced subsequent wildfire severity. We found that tree mortality was high in untreated or recently thinned forests, but lower in forests that had been recently thinned and prescribed burned. Our results, along with other studies in the western United States, provide compelling evidence that thinning, in combination with prescribed burning, can make forests more resilient.
On average, one-quarter of mature trees died in forests that had been thinned and prescribed burned, compared to 60-65 percent of trees in forests that were untreated or only thinned. On a driving tour of the Tripod burn today, the areas that were prescribed burned stand out as green islands amid a gray sea of standing dead trees.
In ongoing research, we hope to learn how restoration treatments can be strategically placed to create more fire-resistant landscapes.
Self-regulating?
Wildfires also have a critical role in restoration. The 2014 Carlton and 2015 Okanogan Complex fires burned the borders of the Tripod fire and of other recent wildfires, but sparse fuels on the margins of these prior burned areas did not support fire spread.
As more fires burn across dry forests, they are creating vast puzzle-piece mosaics, and in time may become more self-regulating – limiting the size and spread of subsequent fires.
Unmanaged stands on the left compared to an adjacent plot that’s been thinned to reduce vulnerability to severe fire. Susan J Prichard, Author provided
However, the imprints of recent fires are large, and it will take many small to medium wildfires to restore the diverse mosaic these landscapes need and once supported. Managing naturally ignited wildfires that burn in the late season or under favorable weather conditions, in combination with prescribed burning, will be essential to restore self-regulating landscapes.
Recent summers have taught us that we can’t permanently exclude fire from our valley or other fire-prone areas. This is difficult to accept for a community so recently devastated by fire and sick of the smoke that comes with it. However, summers are getting hotter and drier, and more wildfires are on the way. We have to adapt the way we live with fire and learn ways to promote resilience – within our homes, communities and neighboring forests.
Native peoples, less than 150 years ago, proactively burned the landscapes we currently inhabit – for personal safety, food production and enhanced forage for deer and elk. In some places, people still maintain and use traditional fire knowledge. As we too learn to be more fire-adapted, we need to embrace fire not only as an ongoing problem but an essential part of the solution.
Climate change is a major public health threat, already making existing problems like asthma, exposure to extreme heat, food poisoning, and infectious disease more severe, and posing new risks from climate change-related disasters, including death or injury.
Those were the alarming conclusions of a new scientific assessment report released by the Obama administration this week, drawing on input from eight federal agencies and more than 100 relevant experts.
“As far as history is concerned, this is a new kind of threat that we are facing,” said U.S. Surgeon General Vivek Murthy at a White House event. Pregnant women, children, low-income people and communities of color are among the most at risk.
Despite ever more urgent warnings from scientists, Americans still tend to view climate change as a scientific or environmental issue, but not as a problem that currently affects them personally, or one that connects to issues they already perceive as important.
Yet research suggests that as federal agencies, experts, and societal leaders increasingly focus on the public health risks of climate change, this reframing may be able to overcome longstanding public indifference on the issue. The new communication strategy, however, faces several hurdles and uncertainties.
Putting a public health focus to the test
In a series of studies that I conducted with several colleagues in 2010 and 2011, we examined how Americans respond to information about climate change when the issue is reframed as a public health problem.
In line with the findings of the recent Obama administration report, the messages we tested with Americans stressed scientific findings that link climate change to an increase in the incidence of infectious diseases, asthma, allergies, heat stroke and other health problems – risks that particularly impact children, the elderly and the poor.
We evaluated not only story lines that highlighted these risks, but also presentations that focused on the benefits to public health if actions were taken to curb greenhouse gas emissions.
In an initial study, we conducted in-depth interviews with 70 respondents from 29 states, recruiting subjects from six previously defined audience segments. These segments ranged on a continuum from those individuals deeply alarmed by climate change to those who were deeply dismissive of the problem.
Across all six audience segments, when asked to read a short essay that framed climate change in terms of public health, individuals said that the information was both useful and compelling, particularly at the end of the essay when locally focused policy actions were presented with specific benefits to public health.
Effects of climate change, including higher temperatures, have direct effects on public health, but historically it’s largely been framed as an environmental issue. anoushdehkordi/flickr, CC BY
In a follow-up study, we conducted a nationally representative online survey. Respondents from each of the six audience segments were randomly assigned to three different experimental conditions in which they read brief essays about climate change discussed as either an environmental problem, a public health problem or a national security problem. This allowed us to evaluate their emotional reactions to strategically framed messages about the issue.
In comparison to messages that defined climate change in terms of either the environment or national security, talking about climate change as a public health problem generated greater feelings of hope among subjects. Research suggests that fostering a sense of hope, specifically a belief that actions to combat climate change will be successful, is likely to promote greater public involvement and participation on the issue.
Among subjects who tended to doubt or dismiss climate change as a problem, the public health focus also helped defuse anger in reaction to information about the issue, creating the opportunity for opinion change.
A recent study by researchers at Cornell University built on our findings to examine how to effectively reframe the connections between climate change and ocean health.
In this study involving 500 subjects recruited from among passengers on a Seattle-area ferry boat, participants were randomly assigned to two frame conditions in which they read presentations that defined the impact of climate change on oceans.
For a first group of subjects, the consequences of climate change were framed in terms of their risks to marine species such as oysters. For the second group, climate change was framed in terms of risks to humans who may eat contaminated oysters.
The framing of ocean impacts in terms of risks to human health appeared to depoliticize perceptions. In this case, the human health framing condition had no discernible impact on the views of Democrats and independents, but it did influence the outlook of Republicans. Right-leaning people, when information emphasized the human health risks, were significantly more likely to support various proposed regulations of the fossil fuel industry.
In two other recent studies, the Cornell team of researchers have found that communications about climate change are more persuasive among political conservatives when framed in terms of localized, near-term impacts and if they feature compassion appeals for the victims of climate change disasters, such as drought.
Challenges to reframing climate change
To date, a common weakness in studies testing different framing approaches to climate change is that they do not evaluate the effects of the tested messages in the context of competing arguments.
In real life, most people hear about climate change by way of national news outlets, local TV news, conversations, social media and political advertisements. In these contexts, people are likely to also encounter arguments by those opposed to policy action who misleadingly emphasize scientific uncertainty or who exaggerate the economic costs of action.
Thus our studies and others may overestimate framing effects on attitude change, since they do not correspond to how most members of the public encounter information about climate change in the real world.
The two studies that have examined the effects of novel frames in the presence of competing messages have found mixed results. A third recent study finds no influence on attitudes when reframing action on climate change in terms of benefits to health or the economy, even in the absence of competing frames. In light of their findings, the authors recommend that communication efforts remain focused on emphasizing the environmental risks of inaction.
Communicating about climate change as a public health problem also faces barriers from how messages are shared and spread online, suggests another recent study.
In past research on Facebook sharing, messages that are perceived to be conventional are more likely to be passed on than those that are considered unconventional. Scholars theorize that this property of Facebook sharing relates closely to how cultures typically tend to reinforce status quo understandings of social problems and to marginalize unconventional perspectives.
In an experiment designed like a game of three-way telephone in which subjects were asked to select and pass on Facebook messages about climate change, the authors found that a conventional framing of climate change in terms of environmental risks was more likely to be shared, compared to less conventional messages emphasizing the public health and economic benefits to action.
In all, these results suggest that efforts to employ novel framing strategies on climate change that involve an emphasis on public health will require sustained, well-resourced, and highly coordinated activities in which such messages are repeated and emphasized by a diversity of trusted messengers and opinion leaders.
That’s why the new federal scientific assessment, which was promoted via the White House media and engagement offices, is so important. As these efforts continue, they will also need to be localized and tailored to specific regions, cities, or states and periodically evaluated to gauge success and refine strategy.
Beneath fields of corn and soybeans across the U.S. Midwest lies an unseen network of underground pipes. These systems, which are known as tile drainage networks, channel excess water out of soil and carry it to lakes, streams and rivers. There are over 38 million acres of tile drainage in the Corn Belt states.
These networks play a vital role in farm production. They allow farmers to drive tractors into fields that would otherwise be too wet and make it possible to plant early in spring. And they boost crop growth and yield by preventing fields from becoming waterlogged.
But drainage systems are also major contributors to water pollution. The water they remove from fields contains nitrogen, which comes both from organic matter in rich Midwestern soil and from fertilizer. This nitrogen over-fertilizes downstream water bodies, causing blooms of algae. When the algae die, bacteria decompose them, using oxygen in the water as fuel.
The result is hypoxic zones, also known as dead zones, where nothing can live. Some of these zones, such as the one that forms in the Gulf of Mexico every year, fed by Midwestern farm drainage water, cover thousands of square miles.
The Gulf of Mexico dead zone forms every summer, fed by drainage from Midwestern farms. NASA/NOAA via Wikipedia
Across the Midwest and in many other areas, we need to reduce nitrogen pollution on a very large scale to improve water quality. My research focuses on woodchip bioreactors – simple trenches that can be constructed on farms to clean the water that flows out of tile drains. This is a proven practice that is ready for broad-scale implementation. Nevertheless, there is still great potential, through additional research and engagement, to improve how well woodchip bioreactors work and to convince more farmers to use them.
Removing nitrogen from farm runoff
Researchers studying ways to improve agricultural water quality have shown that we can use a natural process called denitrification to treat subsurface drainage water on farms. It relies on bacteria found in soil around the world to convert nitrate – the form of nitrogen in farm drainage water – to nitrogen gas, which is environmentally benign and makes up more than three-fourths of the air we breathe.
These bacteria use carbon as a food source. In oxygen-free conditions, such as wetlands or soggy soils, they are fueled by carbon in the surrounding soil, and inhale nitrate while exhaling nitrogen gas. Bioreactors are engineered environments that take advantage of their work on a large scale.
Denitrifying bioreactors on farms are surprisingly simple. To make them we dig trenches between farm fields and the outlets where water flows from tile drains into ditches or streams. We fill them with wood chips, which are colonized by native bacteria from the surrounding soil, and then route water from farm drainage systems through the trenches. The bacteria “eat” the carbon in the wood chips, “inhale” the nitrate in the water, and “exhale” nitrogen gas. In the process, they reduce nitrogen pollution in water flowing off of the farm by anywhere from 15 percent to over 90 percent.
A denitrifying woodchip bioreactor removing nitrate from a tile-drained corn field. Christianson and Helmers/Iowa State Extension
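To make that removal range concrete, here is a minimal back-of-envelope sketch in Python of the nitrate load a single tile outlet might carry and how much a bioreactor could keep out of downstream waters. The flow rate and nitrate concentration used here are illustrative assumptions, not measurements from the studies discussed in this article; only the 15 percent to over 90 percent removal range comes from the text above.

    # Back-of-envelope sketch of the nitrate load a bioreactor might remove.
    # The flow and concentration below are illustrative assumptions, not data
    # from the studies described in this article.

    def daily_nitrate_load_kg(flow_liters_per_day, nitrate_mg_per_liter):
        """Nitrate-N load in kg/day = flow (L/day) * concentration (mg/L) / 1e6."""
        return flow_liters_per_day * nitrate_mg_per_liter / 1e6

    # Hypothetical tile outlet: 5 L/s of drainage water at 15 mg/L nitrate-N.
    flow_l_per_day = 5 * 60 * 60 * 24                             # 432,000 L/day
    incoming_load = daily_nitrate_load_kg(flow_l_per_day, 15.0)   # about 6.5 kg N/day

    # The article reports removal anywhere from 15 percent to over 90 percent,
    # depending on temperature, flow rate and bioreactor size.
    for removal_fraction in (0.15, 0.50, 0.90):
        removed = incoming_load * removal_fraction
        print(f"{removal_fraction:.0%} removal keeps about {removed:.1f} kg nitrate-N per day out of streams")

Even at the low end of that range, a single trench treating one outlet can keep a meaningful amount of nitrogen out of downstream waters under these assumed conditions.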
Although denitrifying bioreactors are relatively new, they have moved beyond proof of concept. A new special collection of papers in the Journal of Environmental Quality, which I co-edited with Dr. Louis Schipper of the University of Waikato in New Zealand, demonstrates that these systems can now be considered an effective tool to reduce pollution in nitrate-laden waters. Researchers are using these systems in an expanding range of locations, applications, and environmental conditions.
Making bioreactors work for farmers
Woodchip bioreactors can be installed without requiring farmers to take land out of production, and require very little annual maintenance. These are important selling points for farmers. The Clean Water Act does not regulate nitrogen pollution from diffuse agricultural sources such as farm runoff, but states across the Midwest are working with federal regulators to set targets for reducing nitrogen pollution. They also are developing water quality strategies that call for installing tens of thousands of denitrifying bioreactors to help reach those targets.
So far, wood chips have proven to be the most practical bioreactor fill. Researchers have also analyzed the idea of using farm residues such as corn cobs instead, and in laboratory studies these agricultural residues consistently provide much higher nitrate removal rates than wood chips. However, they need to be replaced more frequently than wood chips, which have an estimated design life of 10 years in a bioreactor.
Laboratory studies have also helped us understand how other factors influence nitrate removal in bioreactors, including water temperature and the length of time that water remains inside the bioreactor – which, in turn, depends on the flow rate and the size of the bioreactor. Another challenge is that bioreactors work best in late summer, when drainage flow rates are low and the water flowing from fields is warm, but most nitrogen flows from fields in drainage water in spring, when conditions are cool and wet. Researchers are working to design bioreactors that can overcome this disconnect.
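The residence-time tradeoff described above can be illustrated with a small sketch. It assumes a simple rectangular trench and a drainable porosity for the wood chips; the dimensions, porosity and flow rates are hypothetical values chosen for illustration, not design recommendations.

    # Minimal sketch of hydraulic residence time (HRT): the length of time
    # water spends inside the bioreactor, which depends on flow rate and size.
    # Trench dimensions, porosity and flows are hypothetical, for illustration only.

    def residence_time_hours(length_m, width_m, depth_m, porosity, flow_m3_per_hr):
        """Hours of contact between drainage water and the wood chips."""
        pore_volume_m3 = length_m * width_m * depth_m * porosity
        return pore_volume_m3 / flow_m3_per_hr

    # Hypothetical 30 m x 3 m x 1 m trench with a drainable porosity of 0.7.
    low_flow = residence_time_hours(30, 3, 1, 0.7, 2.0)    # warm late-summer trickle
    high_flow = residence_time_hours(30, 3, 1, 0.7, 15.0)  # cool spring pulse

    print(f"Low flow:  about {low_flow:.0f} hours in the bioreactor")   # roughly 32 hours
    print(f"High flow: about {high_flow:.0f} hours in the bioreactor")  # roughly 4 hours

Under these assumed numbers, the same trench that gives water more than a day of contact with the wood chips in late summer gives it only a few hours during a spring pulse, which is one reason removal drops just when most of the nitrogen is leaving the field.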
Installing a denitrifying woodchip bioreactor. L. Christianson/Iowa Soybean Association Environmental Programs and Services
We have also carried out tests to see whether bioreactors can treat aquaculture wastewater, which typically contains much higher levels of nitrate and other water pollutants than tile drainage water. Our study showed that bioreactors could be a viable low-cost water treatment option for fish farms.
And researchers from New Zealand recently showed that denitrifying bioreactors may be an effective option for treating some small sources of municipal wastewater. Their work provided the first indication that woodchip bioreactors may be able to remove microbial contaminants like E. coli and viruses, which can be hazardous to human health, from water. The exact process by which the E. coli and viruses were removed is not yet known.
One difficult challenge in designing denitrifying bioreactors is testing novel designs at the field scale. We need to build and test large bioreactors so that we can provide useful information to farmers, landowners, crop advisors, drainage contractors, conservation staff, and state and federal agencies. They want to know practical facts, such as how long the wood chips last (approximately 7-15 years), how much it costs to install a field-scale bioreactor ($8,000-$12,000), and whether bioreactors back up water in tile drainage systems (no). To refine what we know, we plan to continue installing full-size bioreactors either on research farms or by collaborating with private farmers who want to be at the cutting edge of water-quality solutions.
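Those figures also allow a rough annualized-cost sketch. The installation cost and design life ranges come from the paragraph above; the nitrogen removal per year is a hypothetical assumption added here purely for illustration, since actual removal varies widely with site, weather and design.

    # Rough annualized-cost sketch using the ranges quoted above.
    # The yearly nitrogen removal figure is a hypothetical assumption;
    # actual performance varies widely from site to site.

    install_cost_usd = (8_000 + 12_000) / 2      # midpoint of the $8,000-$12,000 range
    design_life_years = (7 + 15) / 2             # midpoint of the 7-15 year range
    assumed_removal_kg_n_per_year = 500          # hypothetical, for illustration only

    annual_cost = install_cost_usd / design_life_years
    cost_per_kg_n = annual_cost / assumed_removal_kg_n_per_year

    print(f"About ${annual_cost:.0f} per year over the bioreactor's life")
    print(f"About ${cost_per_kg_n:.2f} per kg of nitrate-N removed, under these assumptions")

The point of a sketch like this is not the exact dollar figure, which depends heavily on the assumed removal, but that spreading a one-time installation cost over a decade or more of service is how farmers and agencies typically weigh the practice.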
We all play a role in agriculture because we all eat, and at the same time, we all need clean water. Simple technologies like woodchip bioreactors can help meet both goals by helping farmers maintain good drainage and providing cleaner water downstream.
Is the discovery of a vast reservoir of groundwater under California’s Central Valley the answer to the state’s water shortage? Maybe, but getting to it could be expensive, say experts.
“It’s not often that you find a ‘water windfall,’ but we just did,” says study coauthor Robert Jackson, a professor at Stanford University. “There’s far more fresh water and usable water than we expected.”
While this is potentially good news for California, the findings also raise concerns about cost and quality.
Previous estimates of groundwater in California are based on data that are decades old and that extend only to a maximum depth of 1,000 feet, often less. Until now, little was known about the amount and quality of water in deeper aquifers.
“Water a thousand feet down used to be too expensive to use,” says Jackson. “Today it’s used widely. We need to protect all of our good quality water.”
Times are different now. California is in the midst of its fifth year of severe drought, and in 2014 Gov. Jerry Brown declared a drought emergency in the state. As surface water supplies dwindle, the state is increasingly turning to groundwater.
In the new study, Jackson and postdoctoral associate Mary Kang used data from 938 oil and gas pools and more than 35,000 oil and gas wells to characterize both shallow and deep groundwater sources in eight California counties.
The researchers conclude that when deeper sources of groundwater are factored in, the amount of usable groundwater in the Central Valley increases to 2,700 cubic kilometers—or almost triple the state’s current estimates. They published their results in the Proceedings of the National Academy of Sciences.
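For readers more used to water-management units, a quick conversion sketch helps put 2,700 cubic kilometers in perspective. The "almost triple" comparison is taken from the study as described above; the implied earlier estimate in the sketch is a rough back-calculation, not a published figure.

    # Quick unit-conversion sketch for the 2,700 cubic kilometer figure.
    # 1 acre-foot is about 1,233.48 cubic meters; 1 km^3 is 1e9 cubic meters.
    CUBIC_KM_TO_ACRE_FEET = 1e9 / 1233.48

    usable_groundwater_km3 = 2700
    acre_feet = usable_groundwater_km3 * CUBIC_KM_TO_ACRE_FEET
    print(f"2,700 km^3 is roughly {acre_feet / 1e9:.1f} billion acre-feet")  # about 2.2 billion

    # "Almost triple" the current estimate implies an earlier figure on the
    # order of 900 km^3 -- a back-of-envelope inference, not a published value.
    print(f"Implied earlier estimate: about {usable_groundwater_km3 / 3:.0f} km^3")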
Why it’s not all good news
First, much of this newly identified water lies 1,000 to 3,000 feet underground, so pumping it will be more expensive.
Without proper studies, tapping these deeper aquifers might also exacerbate the ground subsidence—the gradual sinking of the land—that is already happening throughout the Central Valley. Groundwater pumping from shallow aquifers has already caused some regions to drop by tens of feet.
Furthermore, some of the deep aquifer water is higher in salt concentration than shallower water, so desalination or other treatment will be required before it can be used for agriculture or drinking.
Another concern the Stanford scientists uncovered is that oil and gas drilling activities are taking place directly in as many as 30 percent of the sites where the deep groundwater resources are located. For example, in Kern County, where the core of California's oil and gas industry is centered near the city of Bakersfield, one in every six instances of oil and gas activity occurred directly in a freshwater aquifer.
For usable water, which the US Environmental Protection Agency deems drinkable if treated, the number was one in three.
Jackson and Kang stress that just because a company has hydraulically fractured or used some other chemical treatment near an aquifer doesn’t mean that the water is ruined.
“What we are saying is that no one is monitoring deep aquifers. No one’s following them through time to see how and if the water quality is changing,” Kang says. “We might need to use this water in a decade, so it’s definitely worth protecting.”