The 1986 explosion at the Chernobyl nuclear power plant in the then Soviet Union was the worst nuclear power plant accident in history, measured by both its cost and the casualties it caused. Yet something surprising has happened in the decades since more than 300,000 people were evacuated from the surrounding area and resettled in other cities.
A very interesting article on the New Scientist website reveals what that is:
The site of the world’s worst nuclear accident is now a wildlife haven. The abundance of large animals around Chernobyl, such as deer, elk and wild boar, matches that of nature reserves in the region – and wolves are seven times as common.
Some 116,000 people fled the radioactive fallout from the reactor after it exploded in 1986, and another 220,000 were resettled after that, vacating a zone covering some 4200 square kilometres split equally between Belarus and Ukraine.
“Whatever negative effects there are from radiation, they are not as large as the negative effects of having people there,” says Jim Smith of the University of Portsmouth in the UK. “We’re not saying there weren’t radiological effects at all, but we can’t see effects on populations as a whole.”
The message is clear, he says. “The everyday things we do, such as occupying an area, forestry, hunting and agriculture, are what damages the environment.”
“The striking Chernobyl findings reveal that nature can flourish if people will just leave it alone,” says Bill Laurance of James Cook University in Cairns, Australia. “This underscores the vital importance of having people-free parts of the planet.”
Lee Hannah of Conservation International says Chernobyl is a living testament to the resilience of nature. “Wild places can come back if we give them a chance, but we don’t want to rely on nuclear disasters to make this happen,” he says.
It turns out that the most severe impact of the radiation from Chernobyl’s fallout was, in fact, pretty short-lived. The article continues:
Smith says that the worst impacts of radiation on animals occurred within the first year or so after the accident, mainly because of short-lived but highly toxic isotopes such as iodine-131. For example, cattle died after eating grass contaminated with the iodine, and early studies showed that mice suffered many more miscarriages.
“By 1987 the dose rate fell low enough to avoid these larger, more acute effects,” says Smith.
Since the disaster, the estimated radiation doses that animals receive in the worst-hit areas have stabilised at around 1 milligray per day, about a tenth the dose someone would receive during an abdominal CT scan.
Chernobyl created an accidental laboratory for studying how well nature can recover from human-caused disasters, perhaps providing some hope for recovery of other wild places in the world that have been damaged by human activity.
The Tasmanian government this month released a draft of the revised management plan for the Tasmanian Wilderness World Heritage Area, which proposes rezoning certain areas from “wilderness zones” to “remote recreation zones”.
The changes would enable greater private tourism investment in the World Heritage Area and allow for logging of speciality timbers.
At the centre of the debate is how we define wilderness – and what people can use it for.
For wildlife or people?
“Wilderness quality” is a measure of the extent to which a landscape (or seascape) is remote from, and undisturbed by, modern technological society. High wilderness quality means a landscape is relatively remote from settlement and infrastructure and largely ecologically intact. Wilderness areas are those that meet particular thresholds for these criteria.
The world’s largest wilderness areas include Amazonia, the Congo forests, the Northern Australian tropical savannas, the Llanos wetlands of Venezuela, the Patagonian Steppe, Australian deserts and the Arctic Tundra.
The Amazon rainforest is one of the largest areas of wilderness in the world. CIFOR/AAP, CC BY-NC-ND
Globally, there are 24 large intact landscapes of at least 10,000 square kilometres (1,000,000 hectares). Wilderness as a scientific concept was developed for land areas, but is also increasingly being applied to the sea.
Legal definitions of wilderness usually include these remote and intact criteria – but their goals range from human-centred benefits to protecting the intrinsic value of wilderness. Intrinsic value means that things have value regardless of their worth or utility to human beings; it is recognised in the Convention on Biological Diversity, to which Australia is a signatory.
In the NSW Wilderness Act 1987, for instance, one of the three objects of the Act is cast in terms of benefits to the human community: “to promote the education of the public in the appreciation, protection and management of wilderness”. The Act also states that wilderness shall be managed so as “to permit opportunities for solitude and appropriate self-reliant recreation.” Examples of formally declared wilderness areas in New South Wales are the Lost World Wilderness Area and Wollemi National Park.
Intrinsic value is evident in the South Australia Wilderness Protection Act 1992, which sets out to, among other things, preserve wildlife and ecosystems, protect the land and its ecosystems from the effects of modern technology – and restore land to its condition prior to European settlement.
Our understanding of wilderness and its usefulness has changed over the last century as science has revealed its significance for biodiversity conservation and ecosystem services. We have also accepted the ecological and legal realities of Indigenous land stewardship.
The world’s rapidly shrinking areas of high wilderness quality, including formally declared wilderness areas, are largely the customary land of Indigenous peoples, whether or not this is legally recognised.
Significant bio-cultural values, such as Indigenous peoples’ knowledge of biodiversity (recognised in Australia’s federal Environment Protection and Biodiversity Conservation Act), are dependent on these traditional relationships between people and country.
In many cases around the world, wilderness areas only remain intact because they are under Indigenous stewardship. In Australia, these facts were regrettably ignored in the past and were the source of much loss and harm to Traditional Owners when protected areas were declared without their consent.
Lessons have been learnt, some progress is being made, and the essential role of local and Indigenous communities in the conservation of wilderness areas is now being recognised and reflected in Australian national and state conservation and heritage policy and law.
For example, in 2003 the Northern Territory government agreed to joint management with the Traditional Owners of the Territory’s national parks.
In recent years there has been significant movement toward land acquisition in developing countries to establish forestry plantations for offsetting carbon pollution elsewhere in the world. This is often referred to as land grabbing.
These carbon trading initiatives work on the basis that growing forestry plantations absorb carbon dioxide, the main greenhouse gas, from the atmosphere. This is meant to help undo the environmental damage associated with modern western lifestyles.
Carbon markets are championed as offering solutions to climate change while delivering positive development outcomes to local communities. Heavy polluters, among them the airline and energy sectors, buy carbon credits and thereby pay local communities, companies and governments to protect forests and establish plantations.
But are carbon markets – and the feel-good stories that have sprung up around them – all just a bit too good to be true?
There is mounting evidence that forestry plantations and other carbon market initiatives severely compromise livelihoods and ecologies at a local level. The corporate land grabs they rely on also tend to affect the world’s most vulnerable people – those living in rural areas.
But such adverse impacts are often written out of the carbon market ledger. Sometimes they are simply justified as ‘externalities’ that must be accepted as part of ensuring we avoid climate apocalypse.
Green Resources is one of a number of large-scale plantation forestry and carbon offset corporations operating in Africa. The Norwegian-registered company produces saw-log timber and charcoal in Mozambique, Tanzania and Uganda, and receives carbon revenue from its plantation forestry operations. Its activities are having a profound impact on the livelihoods of a growing number of people.
In Uganda, the focus of our research, Green Resources holds two licenses over 11,864 hectares of government-owned, ‘degraded’ Central Forest Reserve. Historically, villagers could access this land to grow food, graze animals and engage in cultural practices.
Under the licensed land agreement between Uganda’s government and Green Resources, more than 8,000 people face profound disruptions to their livelihoods. Many are experiencing forced evictions as a direct result of the company’s takeover of the land.
Carbon violence on local villagers
Villagers across Green Resources’ two acquisitions in Uganda – at the Bukaleba and Kachung Central Forest Reserves – report being denied access to land vital for growing food and grazing livestock, to forest resources they once collected, and to sites of cultural significance central to their livelihoods.
There are also many stories about land and waterways that have been polluted by agrichemicals the company uses in its forestry plantations. This has caused crop losses and livestock deaths.
Many of those evicted, as well as those seeking to use land licensed to Green Resources, have also experienced physical violence at the hands of police and private security forces tied to the arrival of the company. Some villagers have been imprisoned or criminalised for trespass.
These diverse forms of violence are directly tied to the company’s participation in the carbon economy. Thus Green Resources’ plantation forestry and carbon market activities are inflicting ‘carbon violence’ on local villagers.
Green Resources appears to be continuing to tighten the perimeter of its plantation operations as part of ensuring compliance with regulations and certifications required for entry into carbon markets. This further entrenches these diverse forms of violence. In short, subsistence farmers and poor communities are carrying heavy costs associated with the expansion of forestry plantations and global carbon markets.
There are many ongoing signs that the planet is heating up, even “on fire.”
In the western region of North America, the prolonged drought has led to high temperatures and many wildfires, from Canada and the Northwest earlier this summer to California more recently. The Pacific is very active with hurricanes, typhoons and tropical cyclones, and with several damaging hits in Japan, China and Taiwan, in particular. So far, by contrast, the Atlantic tropical storm season is quiet.
Globally, surface temperatures have been setting record high values (see figure below). US temperatures this year are well above normal as a whole, running 1.7 degrees Fahrenheit above the 20th-century average (through July; the 10th highest on record). However, precipitation has been well above average in much of the country outside the West, making temperatures lower than they otherwise would have been (owing to more cloud cover and evaporative cooling).
So what’s going on? Increased warming is expected because human activities are leading to increases in heat-trapping greenhouse gases, mainly carbon dioxide from burning of fossil fuels. And indeed, the global mean surface temperature (GMST) has been rising fairly steadily: every decade after the 1960s was warmer than the one before, and the decade of the 2000s was the warmest on record by far; see figure.
At the same time, it is readily apparent that there is variability in GMST from year to year and decade to decade. This is expected and known to arise largely from internal natural variability. While the rate of surface temperature increase has been mostly upward from about 1920 and the recent rate is not out of step overall, there are two hiatus intervals with much lower rates of temperature increase. The first was from about 1943 to 1975, and the second was from 1999 to 2013.
In a paper entitled Has There Been a Global Warming Hiatus?, I find that natural variability through interactions among the oceans, atmosphere, land and ice can easily mask the upward trend of global temperatures. For climate scientists to improve climate models, better understanding of these variations and their effect on global temperatures is essential.
Hiatus revisited
The warmest year of the 20th century was 1998. Since then, however, there was little apparent increase in GMST from 1998 through 2013 – a period that has become known as the “hiatus.” While GMST values in 2005 and 2010 slightly exceeded the 1998 value, the upward trend slowed markedly until 2014, which is now the warmest year on record. Moreover, there are excellent prospects that 2015 will break that record – the past 12 months through June 2015 are indeed the warmest 12 months on record (see figure). It looks like the hiatus is over!
Figure: (top) Seasonal global mean surface temperatures from NOAA since 1920, relative to the mean of the 20th century; the seasons are defined as December-February, etc., and a 20-term Gaussian filter (heavy black curve) shows the decadal variations. (middle) The seasonal mean Pacific Decadal Oscillation (PDO) anomalies, in units of standard deviation; the positive (pink) and negative (light blue) PDO regimes are indicated throughout the figure. (bottom) Decadal average anomalies of GMST (green, starting 1921-1930), along with piecewise slopes of GMST for the phases of the PDO (yellow). Kevin Trenberth/Data from NOAA, Author provided
El Niño and Pacific Decadal Oscillation (PDO)
A closer look at the events during these hiatus periods sheds light on the role of natural variability on the long-term trend of global warming.
The year 1998 was the warmest on record in the 20th century because of warming associated with the biggest El Niño on record – the 1997-98 event. Ocean heat that had built up in the tropical western Pacific in the years before that event spread across the Pacific and into the atmosphere, invigorating storms and warming the surface, especially through latent heat release, while the ocean cooled from evaporative cooling.
Now, in 2015, another strong El Niño is under way; it began in 2014 and has developed further, and in no small part is responsible for the recent warmth and the pattern of weather around the world: the enhanced tropical storm activity in the Pacific at the expense of the Atlantic, the wetter conditions across the central United States, and cool snowy conditions in New Zealand.
There is also strong decades-scale variability in the Pacific, known in part as the Pacific Decadal Oscillation (PDO) or Interdecadal Pacific Oscillation (IPO) – the former is Northern Hemisphere focused, but the two are closely related. The positive phase of the PDO pattern, which affects ocean temperatures, is similar to that of El Niño.
The PDO is a major player in these hiatus periods, as has been well-established by observations and models. There are major changes in Pacific tradewinds, sea level pressure, sea level, rainfall and storm locations throughout the Pacific and Pacific rim countries, but also extending into the southern oceans and across the Arctic into the Atlantic.
There is good but incomplete evidence that these changes in winds alter ocean currents, ocean convection and overturning, which leads to changes in the amount of heat being sequestered at greater depths in the ocean during the negative phase of the PDO. The effects are greatest in winter in each hemisphere. The result is that during the positive phase of the PDO, the GMST increases, while during the negative phase it stagnates.
Results suggest that the Earth’s total energy imbalance – that is, the growing amount of the sun’s incoming energy trapped by greenhouse gases – is largely unchanged with the PDO. But during the positive phase, more heat is deposited in the upper 300 meters of the ocean, where it can influence the GMST. In the negative phase, more heat is dumped below 300 meters, contributing to the overall warming of the oceans, but likely irreversibly mixed and lost to the surface.
Modulating human-induced changes
The internal climate variability can also be modulated by external influences, including the various human influences.
Increased warming from increases in heat-trapping greenhouse gases can be offset by visible pollution (in the form of particles called atmospheric aerosols), which are mostly also a product of fossil fuel combustion. Indeed, from 1945 to 1970 there were increases in pollution in the atmosphere arising from post-World War II industrialization in Europe and North America, especially over the Atlantic, and some volcanic activity that increased aerosols in the stratosphere. However, regulations in developed countries, such as the US Clean Air Act of 1970, brought that era to an end.
Climate model simulations and projections of GMST suggest that the signal of human-induced climate change emerged from the noise of natural climate variability in about the 1970s. Expected rates of change were very much in step with the rate observed from 1975 to 1999, but not the slower rate from 1999 on. (This is another reason to say there has been a hiatus from 2000 to 2013.)
Human-induced climate change is relentless and largely predictable, even though at any time and especially locally it can be masked by natural variability, whether on interannual (El Niño) or decadal time scales. But the predominant driver of the slowdowns in GMST is the PDO. There is speculation now as to whether or not the decadal variability has reversed – going to a positive phase (see figure). With this change and the latest El Niño event, the GMST is taking another step up to a higher level.
The role of natural variability paints a different picture than one of steadily rising global mean temperatures. Indeed, the combination of decadal variability plus a heating trend from increasing greenhouse gases makes the GMST record more like a rising staircase than a monotonic climb.
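To see why decadal variability plus a steady trend produces a staircase rather than a ramp, consider a minimal sketch in Python. The numbers below are synthetic and purely illustrative – a constant warming trend, a PDO-like oscillation, and interannual noise – not values fitted to the observational record:

```python
import numpy as np

# Synthetic illustration of the "rising staircase": a steady greenhouse
# warming trend plus a PDO-like decadal oscillation and interannual noise.
# All parameter values here are invented for illustration only.
years = np.arange(1920, 2016)
trend = 0.01 * (years - 1920)                                # ~0.1 C per decade
pdo_like = 0.12 * np.sin(2 * np.pi * (years - 1920) / 60.0)  # ~60-year cycle
noise = 0.08 * np.random.default_rng(0).standard_normal(years.size)
gmst = trend + pdo_like + noise

# Decadal averages step upward unevenly: roughly flat during negative
# phases of the oscillation, rising quickly during positive phases.
for start in range(1920, 2010, 10):
    decade = gmst[(years >= start) & (years < start + 10)]
    print(f"{start}s: {decade.mean():+.2f} C")
```

Even in this toy setup, no decade reverses the long-term warming; the oscillation only controls whether a given decade looks like a step or a landing on the staircase.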
Yellowstone National Park has actually been pretty cutting-edge with recycling projects, and now it has completed yet another one. In partnership with Michelin, the park has replaced all of the asphalt paths around the Old Faithful geyser – one of its most popular attractions, of course – with paths made of recycled rubber. An informative article on Popular Science provides some great details:
Michelin, the tire company that helped build the project, says that the benefit of the new surface is that unlike asphalt, it won’t leach oil into the ground, and it will help prevent erosion.
The 900 tires that were cut up and used to create the new, 6,400-square-foot path once adorned some of the 452 park vehicles that patrol Yellowstone.
So, not only is it recycling, but it is, in fact, local recycling – how about that! This may be a trend for Yellowstone: last year, in partnership with Toyota, the park set up a system to use recycled car batteries to power buildings in remote areas.
Here is a great “what-if”: if we (the human race) were to burn all available fossil fuels, could we melt the largest and most stable ice sheet on the planet – Antarctica? Could our collective industrial impacts on the planet possibly have that far a reach?
The spoiler is: “yes,” although in our recent computer-modeling study we find that it would require all of our fossil fuel resources to do it, and to see the very last of the ice melt, we might have to wait as long as 10,000 years.
Before we get any further, let’s consider this as a thought experiment in ice sheet dynamics and the global carbon cycle response to CO2 emissions to test our understanding of the long-term effects that extreme perturbations could have on the Earth system.
What I have in mind is a socioeconomic carbon use scenario that I personally hope never comes to fruition, but equally one that is not intended to be an implausible scare story or a “sky-is-falling” simulation of doom and gloom and future global environmental catastrophe. (And also, to be completely honest, it was not my thought experiment in the first place, but instead comes from the head of Ken Caldeira at the Carnegie Institution for Science, Stanford, who was very ably assisted in bringing it to fruition by a brace of ice-sheet modelers at the Potsdam Institute for Climate Impact Research in Germany – Ricarda Winkelmann and Anders Levermann.)
However, given unrestrained burning of fossil fuels, our study does show that the largest mass of ice in the world, including both the East and West Antarctica ice sheets, ultimately is vulnerable to irreversible melting – and dramatic sea-level rise.
Lessons from the past?
We already know that the Antarctic ice sheet has not always been there, and there is abundant geological evidence that around 50-100 million years ago, sea surface temperatures around Antarctica were pleasantly warm and vegetation on the Antarctic Peninsula was lush and warm-temperate. (And yes, prior to 65 million years ago, there were dinosaurs living there too.) Our best reconstruction of atmospheric CO2 at the time is somewhere in the region of 556-1,112 parts per million (ppm), well above the almost 400 ppm we have reached today.
How Antarctic ice would be affected by different emissions scenarios. GtC stands for gigatons of carbon. Ken Caldeira and Ricarda Winkelmann, Author provided
But this does not provide a particularly helpful guide to future ice sheet susceptibility. These past warm climates represent intervals of millions of years of elevated atmospheric CO2, whereas in the future, CO2 levels will start to drop back down once fossil fuel emissions cease. And this brings us to the crux of the problem, at least from my perspective: just how quickly will CO2 decay back down toward 278 ppm, the preindustrial atmospheric concentration?
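One way to get a feel for that decay is through the impulse-response fits used in carbon-cycle work, which represent the airborne fraction of a CO2 pulse as a sum of exponentials with very different time constants – including a component that, on human time scales, effectively never decays. The sketch below uses the multi-model mean parameters from Joos et al. (2013), the fit behind the IPCC AR5 metric calculations; it is offered purely as an illustration, not as the model used in our study:

```python
import math

def airborne_fraction(t_years: float) -> float:
    """Fraction of a CO2 pulse remaining in the atmosphere after t years.

    Impulse-response fit from Joos et al. (2013), as used in IPCC AR5
    metric calculations; an illustration, not a full carbon-cycle model.
    """
    a = [0.2173, 0.2240, 0.2824, 0.2763]   # pool weights (a[0] is quasi-permanent)
    tau = [394.4, 36.54, 4.304]            # decay time scales, in years
    return a[0] + sum(ai * math.exp(-t_years / ti) for ai, ti in zip(a[1:], tau))

for t in (10, 100, 1000, 10000):
    print(f"after {t:>5} years: {airborne_fraction(t):.2f} of the pulse remains")
```

The long tail is the point: in this parameterization, roughly 40% of a pulse is still airborne after a century and about a fifth after 10,000 years – which is why an ice sheet with a 10,000-year response time has plenty of time to feel the warming.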
A new species of gazelle was identified this year “hiding in plain sight” in Israel and Palestine, and now it turns out that it’s an endangered species. Although gazelles have lived in the region for millennia, they were always just assumed to be the same as other gazelle species inhabiting the Arabian Peninsula. An excellent article about this on the Scientific American website provides some details on the new species and how it was found that it was already endangered:
Gazelles, among the smallest of the antelopes, are a fairly wide-ranging group of species that can be found in Africa and much of Asia. Most of the dozen or so species and subspecies are fairly close to each other in appearance, so it’s easy to see why this new one evaded notice for so long. Close examination confirmed that the mountain gazelles in Israel had numerous genetic deviations from other gazelles, as well as some minor physiological differences.
News of the new species inspired researchers from the Israel Nature and Parks Authority to count the gazelles in the region, something that hadn’t been necessary until they were differentiated from the gazelles on the Arabian Peninsula. That survey revealed a species in trouble. The gazelles have declined from an estimated population of 10,000 in the 1990s to about 2,000 today. As a result, the species will now likely be classified as endangered under Israeli law and on the Red List of Threatened Species compiled by the International Union for Conservation of Nature (IUCN).
As the summer ends, heat is dominating the meteorological landscape, with the warmest month ever recorded and the drought continuing unabated in California. At the same time, it is clear that an El Niño is building that is expected to culminate in the fall and last until the winter, with the possibility of it becoming a “mega” El Niño.
The hope in California is that the large amounts of precipitation usually associated with extreme El Niño events would lessen the impacts of the state’s multi-year drought by partly refilling reservoirs and groundwater, even as scientists caution that this might not happen to the degree needed to alter the present situation.
What drives the El Niño weather pattern and what do scientists know about El Niño under man-made greenhouse warming?
A tropical Pacific phenomenon with global influence
To be clear, El Niño is a tropical Pacific phenomenon, even though it represents the strongest year-to-year meteorological fluctuation on the planet and disrupts the circulation of the global atmosphere. When sea surface temperature changes – or anomalies – in the eastern equatorial Pacific exceed a certain threshold, it becomes an El Niño.
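That threshold is worth making concrete. One widely used operational definition, NOAA’s Oceanic Niño Index (ONI), declares an El Niño when the three-month running mean of SST anomalies in the Niño 3.4 region of the central-eastern equatorial Pacific stays at or above +0.5°C for five consecutive overlapping seasons. Here is a minimal sketch of that rule, with invented anomaly values:

```python
# Sketch of an ONI-style El Nino check: the 3-month running mean of
# Nino 3.4 SST anomalies must stay >= +0.5 C for 5 consecutive
# overlapping seasons. The anomaly values below are invented.

def running_mean3(x):
    return [sum(x[i:i + 3]) / 3.0 for i in range(len(x) - 2)]

def is_el_nino(monthly_anomalies, threshold=0.5, run_length=5):
    oni = running_mean3(monthly_anomalies)
    consecutive = 0
    for value in oni:
        consecutive = consecutive + 1 if value >= threshold else 0
        if consecutive >= run_length:
            return True
    return False

anomalies = [0.2, 0.4, 0.5, 0.7, 0.9, 1.1, 1.3, 1.2, 1.0, 0.8, 0.6, 0.4]
print(is_el_nino(anomalies))  # True: the mid-year warm run qualifies
```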
What are the mechanisms behind El Niño? In normal conditions in the tropical Pacific, the trade winds blow from east to west, driving ocean currents westwards underneath. These currents transport warm water that is heated by low-latitude solar radiation and eventually piles up in the western Pacific. As a result, heat accumulates in the upper ocean.
Under normal conditions, winds help carry warm water from east to west. Michael McPhaden/NOAA
The warm water evaporates from the ocean surface, and the light, warm and humid air rises, leading to deep convection in the form of towering cumulonimbus clouds and heavy precipitation. As this air ascends, it reaches upper levels of the troposphere and returns eastwards to eventually sink over the cooler water of the eastern Pacific. This east-west (zonal) circulation is called the Walker Circulation.
What happens to the atmosphere and the ocean during El Niño?
This circulation gets disrupted every few years by El Niño or enhanced by La Niña, the opposite effect. This periodic, naturally occurring phenomenon is called the El Niño–Southern Oscillation (ENSO).
During the typical El Niño, the warm phase of that oscillation, the trade winds weaken, and episodic westerly wind bursts in the western equatorial Pacific generate internal waves in the ocean. These waves trigger the transport of the warm water from the west to the east of the basin.
During an El Niño, the trade winds weaken and change ocean circulation patterns. Michael McPhaden/NOAA
This induces a reduction of the upwelling (upward motion) of cold water in the east, at the equator and along the coast. It also creates warm sea surface temperature anomalies along the equator from the international dateline in the Pacific to the coast of South America.
As the central part of the Pacific warms up during El Niño, the atmospheric convection that normally occurs over the western warm pool migrates to the central Pacific. That transfer of heat from the ocean to the atmosphere gives rise to extraordinary rainfall in the normally dry eastern equatorial Pacific. Warm air then flows from the west, feeding this convection and further weakening the east-west-flowing trade winds. This leads to further warming as this feedback loop amplifies the phenomenon and ensures that deep atmospheric convection and rainfall patterns are maintained in the central equatorial Pacific. El Niño eventually ends when changes in the ocean cause negative feedbacks that reverse the dynamics that create the El Niño effects.
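This growth-and-shutdown cycle is often caricatured in ENSO theory as a “recharge oscillator” (after Jin, 1997): the Bjerknes feedback amplifies an eastern-Pacific warm anomaly while the event steadily discharges the equatorial upper ocean’s heat content, which eventually reverses the event. The toy integration below, with invented parameter values, shows a recharged ocean fueling a warm anomaly that grows, peaks and then swings to a cold, La Niña-like phase:

```python
# Toy "recharge oscillator" caricature of the ENSO cycle (after Jin, 1997).
# T is the eastern-Pacific SST anomaly (C); h is equatorial upper-ocean
# heat content. All parameter values are invented for illustration only.
dt = 1.0 / 12.0     # one-month time step, in years
damping = 1.3       # thermal damping of SST anomalies (1/yr)
bjerknes = 1.0      # positive wind-SST (Bjerknes) feedback (1/yr)
discharge = 1.5     # warm events drain ocean heat content (1/yr)
recharge = 1.2      # stored heat content feeds later SST growth (1/yr)

T, h = 0.2, 0.8     # a "recharged" ocean (high h) fuels the developing event
for month in range(12 * 8):  # integrate 8 years with forward Euler
    dT = (bjerknes - damping) * T + recharge * h
    dh = -discharge * T
    T, h = T + dt * dT, h + dt * dh
    if month % 12 == 11:
        print(f"year {month // 12 + 1}: T = {T:+.2f} C")
```

Because the net damping outweighs the positive feedback on average, the anomaly in this sketch oscillates with a period of roughly four to five years and decays – the warm event ends once enough heat content has been discharged.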
How can El Niño affect weather in the United States and rainfall in California?
In association with El Niño, the heat redistribution in the ocean creates a major reorganization of atmospheric convection, severely disrupting global weather patterns from Australia to India and from South Africa to Brazil.
What explains the specific effect on the US and California, however, is a particular type of connection – called extratropical teleconnections – between the heating generated by El Niño and North America. This heating excites wave trains, or groups of similar-sized atmospheric waves, that propagate northward, connecting the central equatorial Pacific to North America. This shifts the subtropical jet stream northward and induces a series of storms over California and the southern US, in general. The increased precipitation that ensues seems to only occur during a strong El Niño.
While El Niños have a rather “typical” signature in the tropics, their impacts over North America vary because other influences act in temperate climates. Nevertheless, most El Niño winters are mild over western Canada and parts of the northern central United States, and wet with anomalous precipitation over the southern United States from Texas to Florida.
How might El Niño evolve under man-made greenhouse warming?
Scientists are now studying the diversity in El Niño behavior – strong and weak events, changes in duration, and differences in where the maximum sea surface temperature (SST) anomalies occur. Are these changes to El Niño related to global warming? It is too early to say.
For one thing, there is significant natural variability in the Pacific on decadal and longer time scales, which could be masking changes driven by global warming.
Climate models do suggest that the mean conditions in the Pacific will evolve toward a warmer state. That means sea surface temperatures are likely to rise and the trade winds to weaken, which could lead to a more permanent El Niño-like state and/or more intense El Niño events.
Some climate model projections, together with reconstructions of past El Niños, provide empirical support for more extreme El Niño events under greenhouse warming. They also point toward an eastward shift of the center point where heat from the ocean transfers to the air. This would mean an eastward shift of extratropical rainfall teleconnections, the phenomenon responsible for weather changes in North America, including more rain in the West.
But models diverge in their predictions of whether and how the teleconnections’ intensity will change. So there is no simple answer to how precipitation will change in California in association with changes of El Niño related to greenhouse warming.
A complex phenomenon with many tricks for scientists
Will the sensitivity of the atmosphere to the primary mechanism at the heart of El Niño – that is, feedback between the higher sea temperatures and slowing trade winds, leading to atmospheric convection over the central Pacific – continue in the future?
It was not maintained during 2014, when otherwise favorable conditions for a big El Niño were present. In that case, persistent deep convection did not occur in the central Pacific, and the usual strong interaction between the atmosphere and the ocean there failed to play its normal role in anchoring the convection and heat transfer.
These results show us that we still have much to learn. This is true despite the dramatic scientific progress that has been accomplished over the last few decades regarding El Niño and ENSO cycles, including new theories, sophisticated seasonal forecasting models and extensive observation systems.
Our ability to predict El Niño and the potential connections between increasing greenhouse gases and El Niño is still limited by the complexity of the ENSO dynamics, as exemplified by the failed prediction of a 2014 El Niño. In the meantime, we can look forward to a winter when El Niño, perhaps even a mega El Niño, will dominate the weather discussion.
In a creepy, strange twist to the usual global warming disaster scenarios, scientists have now found ancient viruses that are being released from the ice that is swiftly melting due to the well-documented global rise in temperatures.
Popular Science recently published an excellent article on this topic, and it says:
Melting polar regions are already causing unprecedented sea level rise, but water is not the only threat buried in the swiftly melting ice. Since 2003, large viruses (longer than 0.5 microns) have been found ensconced in permafrost, a layer of Arctic soil that is usually permanently frozen. In a new study published in PNAS today, scientists announced that they are studying a 30,000-year-old virus found in the same frigid environment. The ancient virus, Mollivirus sibericum, is able to infect a modern amoeba, which raises concerns among scientists.
Could ancient viruses awaken and devastate the human population? The scope of that risk is not really known, but scientists are becoming concerned. Check out the full article over on Popular Science for the rest of the story.
Florida’s freshwater springs with extensive underwater caves have long been a coveted destination for divers, but over the past 25 years, many of the springs have been invaded by slimy algae. Discover’s website has a really interesting article that reports on the changes that have occurred:
The water in the caves of Peacock Springs is still crystal-clear, but for how much longer? Photo: Mark Long
Twenty-five years ago, the striking blue waters of Florida’s Peacock Springs were as clear as glass, “like a fantasy,” recalls environmental scientist and cave diver Pete Butt. Snorkeling at the surface, he could see through the water to the limestone bottom and its craggy portals to one of the longest underwater cave systems in the nation.
Divers still converge on Peacock and the other springs that sparkle azure in the forests of northern and central Florida. Yet outbreaks of algae have started to cloud the crystal waters — along with the future of Florida’s collection of more than 1,000 freshwater springs, one of the world’s largest concentrations.
Algae clump on the surface in smelly mats, smother native aquatic vegetation with slime or grow along the bottom in hairy, green strands. “Amorphous goo,” Butt calls it. “Atrocious.”
While conventional scientific wisdom has tied such algae growth to increased nitrate pollution from farming runoff and other sources, as well as to the pumping of groundwater for drinking supplies, it turns out that the picture may be a lot more complex than that.