Residue on newly discovered ancient pot shards suggests delicacies like Gruyere and Emmental got their start more than 3,000 years ago when dairy herders moved into the Swiss Alps and started making cheese.
Researchers found that the residue on shards from the 1st millennium BCE—the Iron Age—had the same chemical signatures associated with heating milk from animals such as cows, sheep, and goats, as part of the cheesemaking process.
The ceramic fragments were found in the ruins of stone buildings similar to those used by modern alpine dairy managers for cheese production during the summer months.
Although there is earlier evidence for cheese production in lowland settings, until now virtually nothing was known about the origins of cheesemaking at altitude due to the poor preservation of archaeological sites.
Researchers say that the development of alpine dairying occurred around the same time as an increasing population and the growth of arable farming in the lowlands. The resulting pressure on valley pastures forced herders to higher elevations.
“The principal interest of this piece of work is that it provides direct evidence for early dairying at high altitude in the Alps,” says Kevin Walsh, senior lecturer in archaeology at the University of York and coauthor of a new study published in PLOS ONE.
“Until now, we have been reliant on indirect evidence for pastoralism and dairying in the Alps, changes in vegetation and archaeological structures that suggest pastoral practices.”
Even today, producing cheese in a high mountainous environment requires extraordinary effort, says Francesco Carrer, a research associate at Newcastle University.
“Prehistoric herders would have had to have detailed knowledge of the location of alpine pastures, be able to cope with unpredictable weather, and have the technological knowledge to transform milk into a nutritious and storable product.
“We can now put alpine cheese production into the bigger picture of what was happening at lower levels. But more work is needed to fully understand the prehistoric alpine cheesemaking process, such as whether the cheese was made from a single milk or a blend, and how long it was matured.”
We humans are fire creatures. Tending fire is a species trait, a capacity we alone possess – and one we are not likely to tolerate willingly in any other species. But then we live on Earth, the only true fire planet, the only one we know of that burns living landscapes. Fire is where, uniquely, our special capabilities and Earth’s bioenergy flows converge. That has made us the keystone species for fire on Earth. Our environmental power is literally a fire power.
The Pyrocene threatens to overwhelm Earth with fire as the Pleistocene did with ice. It has forced us to reexamine the nature of our firepower, which has taken two forms. One involves open burning on the landscape. We tweak natural fire regimes to better suit our purposes. We set fires for hunting, foraging, protection against wildfire, even warfare. We burn slashed woods and drained peatlands for farming. We kindle pastures to improve fodder and browse. We burn fallow, of any and all kinds. Over the past century we have sought, with equal intensity, to remove fire from protected forests and parks. The pyrogeography of the planet is sculpted by the fires we apply and withhold, and the landscapes we have fashioned, which in turn shape the fires they exhibit.
Our other firepower comes from closed combustion. We put fire into special chambers – hearths, forges, furnaces, engines, candle wicks, dynamos – to generate light, heat and power. These mechanical keepers of the flame have enormously leveraged our firepower. Matthew Boulton, James Watt’s business partner in promoting the steam engine, put it with brutal pithiness: “I sell here, sir, what all the world desires to have – Power.”
As fire industrialized, as biotas, terrain, air and lightning were disaggregated and refined into fuel, oxygen and spark to produce maximum effects, fire began to vanish from daily life and landscapes. The two narratives of fire – open and closed – once overlapped. We domesticated landscapes by passing the equivalent of the hearth fire over them. Now we use closed combustion to substitute for or suppress outright those free-burning flames.
Shifting our understanding of fire
Today, as measured by emissions, even allowing for the massive incineration of tropical peat in Indonesia, we burn far more by closed combustion than by open. Particularly in urban and industrial societies, an ever greater share of combustion comes from confined fires rather than from open flames on landscapes. In modern cities free-burning fire is progressively banned, even for ceremonial purposes. The Burning Man festival had to relocate from San Francisco’s Baker Beach to Black Rock, a salt playa in Nevada. Candles are banished from university dormitories.
Most of humanity’s fire history has pivoted around a quest for combustibles, for new and more abundant sources of stuff to burn. As we exhausted one cache of combustibles, we moved to another, eventually drafting fossil biomass from the geologic past. Slash-and-burn agriculture is an apt metaphor for humanity’s fevered quest for fire generally.
Now we face a question of sinks – of the capacity of ecological systems, including Earth itself, to absorb all the effluent. So, too, our understanding of fire’s place in planetary history is inverting. We used to understand fire as a subset of natural history, particularly of climate. Now natural history, including climate, is becoming a subset of fire history.
Leaving behind Promethean fire
The open and closed narratives of fire, once linked, have diverged. The story of closed combustion is Promethean, stolen from the gods and brought under human control. It speaks to fire abstracted from its setting, perhaps by violence, and certainly held in defiance of an existing order. Promethean fire provides the motive power behind most of our technology.
The narrative of open burning is a more primeval story that speaks to fire as a companion on our journey, as part of how we exercise stewardship of our natural habitat. We are the agent that brokers fire for the biosphere, who more than any other organism shapes the patterning of fire on the land.
Overall, thanks to Promethean fire, we now have too much of the wrong kind of fire, and it has led to a quest for alternative forms of energy that do not rely on combustion. The move toward carbon-neutral energy promises to unbundle the source of our power from our grip on the torch. Recent developments in nuclear fusion, which has long promised a full replacement for burning, have inspired calls for a “Wright brothers moment” to show the world what is possible. Together fusion and solar power promise to replace the human need for controlled flames, to decouple Promethean from primeval fire.
Such is the power of fire in our imagination, however, that we continue to speak loosely of such alternatives as “fire,” as earlier times lumped together all natural phenomena that radiated heat and light. Well into the 18th century, the Enlightenment saw central fires in the Earth that boiled over as volcanoes, celestial fires in the guise of stars and comets, solar fire blazing from the sun, electrical fires crackling as lightning. Fire was, and remains, a potent source of metaphor.
But fusion and solar power are not combustion. They represent a decarbonization of energy to the point that it is no longer fire. We can all breathe easier (literally) when Promethean fire shrinks, and perhaps vanishes.
Returning fire to nature
That still leaves primeval fire, an emergent property of the living world that has flourished since the first plants colonized continents. It will not go away. Rather, its removal, even its attempted removal, can be profoundly disruptive. We need a lot more primeval fire of the right sort. Paradoxically, the more we find surrogates for closed combustion, the more we can embrace open burning.
We have to sort out good fire from bad. That’s exactly what our species monopoly makes possible and what our firepower demands of us. We can begin by reversing the Promethean story, by taking fire out of our machines and putting it back into its indigenous setting. Faux fires like solar power, nuclear fission and fusion can nudge that project along by taking its place and fulfilling our modern energy needs. A triumph of fusion energy won’t mean the end of fire. It will simply liberate it from its enforced captivity and relocate it into landscapes where it can do the ecological work that it alone can do.
The largest nuclear disaster in history occurred 30 years ago at the Chernobyl Nuclear Power Plant in what was then the Soviet Union. The meltdown, explosions and nuclear fire that burned for 10 days injected enormous quantities of radioactivity into the atmosphere and contaminated vast areas of Europe and Eurasia. The International Atomic Energy Agency estimates that Chernobyl released 400 times more radioactivity into the atmosphere than the bomb dropped on Hiroshima in 1945.
Radioactive cesium from Chernobyl can still be detected in some food products today. And in parts of central, eastern and northern Europe many animals, plants and mushrooms still contain so much radioactivity that they are unsafe for human consumption.
Our studies provide fundamental new insights into the consequences of chronic, multigenerational exposure to low-dose ionizing radiation. Most importantly, we have found that individual organisms are injured by radiation in a variety of ways. The cumulative effects of these injuries result in lower population sizes and reduced biodiversity in high-radiation areas.
Broad impacts at Chernobyl
Radiation exposure has caused genetic damage and increased mutation rates in many organisms in the Chernobyl region. So far, we have found little convincing evidence that many organisms there are evolving to become more resistant to radiation.
Organisms’ evolutionary history may play a large role in determining how vulnerable they are to radiation. In our studies, species that have historically shown high mutation rates, such as the barn swallow (Hirundo rustica), the icterine warbler (Hippolais icterina) and the Eurasian blackcap (Sylvia atricapilla), are among the most likely to show population declines in Chernobyl. Our hypothesis is that species differ in their ability to repair DNA, and this affects both DNA substitution rates and susceptibility to radiation from Chernobyl.
Much like human survivors of the Hiroshima and Nagasaki atomic bombs, birds and mammals at Chernobyl have cataracts in their eyes and smaller brains. These are direct consequences of exposure to ionizing radiation in air, water and food. Like some cancer patients undergoing radiation therapy, many of the birds have malformed sperm. In the most radioactive areas, up to 40 percent of male birds are completely sterile, with no sperm or just a few dead sperm in their reproductive tracts during the breeding season.
Tumors, presumably cancerous, are obvious on some birds in high-radiation areas. So are developmental abnormalities in some plants and insects.
Not every species shows the same pattern of decline. Many species, including wolves, show no effects of radiation on their population density. A few species of birds appear to be more abundant in more radioactive areas. In both cases, higher numbers may reflect the fact that there are fewer competitors or predators for these species in highly radioactive areas.
Moreover, vast areas of the Chernobyl Exclusion Zone are not presently heavily contaminated, and appear to provide a refuge for many species. One report published in 2015 described game animals such as wild boar and elk as thriving in the Chernobyl ecosystem. But nearly all studies documenting the consequences of radiation at Chernobyl and Fukushima have found that individual organisms exposed to radiation suffer serious harm.
There may be exceptions. For example, substances called antioxidants can defend against the damage to DNA, proteins and lipids caused by ionizing radiation. The levels of antioxidants that individuals have available in their bodies may play an important role in reducing the damage caused by radiation. There is evidence that some birds may have adapted to radiation by changing the way they use antioxidants in their bodies.
Parallels at Fukushima
Recently we have tested the validity of our Chernobyl studies by repeating them in Fukushima, Japan. The 2011 power loss and core meltdown at three nuclear reactors there released about one-tenth as much radioactive material as the Chernobyl disaster.
Overall, we have found similar patterns of declines in abundance and diversity of birds, although some species are more sensitive to radiation than others. We have also found declines in some insects, such as butterflies, which may reflect the accumulation of harmful mutations over multiple generations.
Our most recent studies at Fukushima have benefited from more sophisticated analyses of radiation doses received by animals. In our most recent paper, we teamed up with radioecologists to reconstruct the doses received by about 7,000 birds. The parallels we have found between Chernobyl and Fukushima provide strong evidence that radiation is the underlying cause of the effects we have observed in both locations.
Some members of the radiation regulatory community have been slow to acknowledge how nuclear accidents have harmed wildlife. For example, the U.N.-sponsored Chernobyl Forum promoted the notion that the accident has had a positive impact on living organisms in the exclusion zone because of the absence of human activities. A more recent report of the United Nations Scientific Committee on the Effects of Atomic Radiation predicts minimal consequences for the biota (animal and plant life) of the Fukushima region.
Unfortunately these official assessments were largely based on predictions from theoretical models, not on direct empirical observations of the plants and animals living in these regions. Based on our research, and that of others, it is now known that animals living under the full range of stresses in nature are far more sensitive to the effects of radiation than previously believed. Although field studies sometimes lack the controlled settings needed for precise scientific experimentation, they make up for this with a more realistic description of natural processes.
Our emphasis on documenting radiation effects under “natural” conditions using wild organisms has provided many discoveries that will help us to prepare for the next nuclear accident or act of nuclear terrorism. This information is absolutely needed if we are to protect the environment not just for man, but also for the living organisms and ecosystem services that sustain all life on this planet.
There are currently more than 400 nuclear reactors in operation around the world, with 65 new ones under construction and another 165 on order or planned. All operating nuclear power plants are generating large quantities of nuclear waste that will need to be stored for thousands of years to come. Given this, and the probability of future accidents or nuclear terrorism, it is important that scientists learn as much as possible about the effects of these contaminants in the environment, both for remediation of the effects of future incidents and for evidence-based risk assessment and energy policy development.
From exploring octopus consciousness to an amazing breakthrough in nanotube technology; from the first fossilized heart ever found to a dark galaxy discovered by the ALMA telescope; and from the reclassification of a type of thyroid cancer so it isn’t cancer anymore to climatologists clashing over the effect of climate change on the weather and people’s attitudes about it….
Yes, it’s been yet another fascinating and amazing week in the world of science!
And here are this week’s most popular stories on Science Rocks My World as voted by your clicks:
Octopuses are super-smart… but are they conscious?
Inky the wild octopus has escaped from the New Zealand National Aquarium. Apparently, he made it out of a small opening in his tank, and suction cup prints indicate he found his way to a drain pipe that emptied into the ocean.
Nice job, Inky. Your courage gives us the chance to reflect on just how smart cephalopods really are. In fact, they are really smart…
This Noninvasive Thyroid ‘Cancer’ isn’t Cancer Anymore
The reclassification of a noninvasive type of thyroid cancer that has a low risk of recurrence is expected to reduce the fears and the unnecessary interventions that come with a cancer diagnosis, experts say.
The incidence of thyroid cancer has been rising partly due to early detection of tumors that are indolent or non-progressing, despite the presence of certain cellular abnormalities that are traditionally considered cancerous, says senior investigator Yuri Nikiforov, professor of pathology at the University of Pittsburgh.
“This phenomenon is known as overdiagnosis,” Nikiforov says. “To my knowledge, this is the first time in the modern era a type of cancer is being reclassified as a non-cancer…”
According to a new report published in “Nature” on April 20, 2016 by Patrick Egan and Megan Mullin, weather conditions have “improved” for the vast majority of Americans over the past 40 years. This, they argue, explains why there has been little public demand so far for a policy response to climate change.
Egan and Mullin do note that this trend is projected to reverse over the course of the coming century, and that Americans will become more concerned about climate change as they perceive more negative impact from weather. However, they estimate that such a shift may not occur in time to spur policy responses that could avert catastrophic impacts…
[Video] This Octopus has an Odd Way of Grabbing a Meal
Unlike most octopuses, which tackle their prey with all eight arms, a rediscovered tropical octopus subtly taps its prey on the shoulder and startles it into its arms.
“I’ve never seen anything like it,” says Roy Caldwell, professor of integrative biology at the University of California, Berkeley. “Octopuses typically pounce on their prey or poke around in holes until they find something…
Because of their venomous sting, scorpions are usually avoided at all costs. But a new discovery suggests the toxins found in some venom might actually have a unique benefit.
Published in the Proceedings of the National Academy of Sciences, the findings show that when a toxin produced by Scorpio maurus—a scorpion species found in North Africa and the Middle East—permeates the cell membrane, it loses its potency and may actually become healthful.
“This is the first time a toxin has been shown to chemically reprogram once inside a cell, becoming something that may be beneficial,” says Isaac Pessah, professor of molecular biosciences at the University of California, Davis, School of Veterinary Medicine.
“Being able to understand how this family of toxins lose their toxicity and become pharmacologically beneficial by changing activity towards the calcium channel target inside the cell is what’s novel and may have translational significance.”
Controlled release of calcium is a key step in many cellular processes, researchers say.
“In any cell you can think of, calcium plays a role in shaping responses, activating or inhibiting enzymes, changing the shape of the cell, or triggering cell division,” Pessah says.
Calcium also is a key signal in both fertilization and programmed cell death. And, altered calcium regulation is a common step in many animal and human diseases. Pharmaceuticals that regulate cellular calcium homeostasis include drugs that suppress the immune system in organ transplant patients and treatments for high blood pressure and heart disease.
Several years ago, Pessah began working with researchers from the Institute for Neurosciences in Grenoble, France, and the Pasteur Institute in Tunisia to isolate a specific toxin peptide called maurocalcin, which targets a calcium channel called the ryanodine receptor inside the cell. Maurocalcin is quite unusual in that it readily permeates into cells, while most other peptide toxins target more accessible receptors on the cell’s surface.
“We therefore thought maurocalcin should be very toxic, since we previously showed that very low concentrations can completely stabilize an open (toxic) state of the ryanodine receptor and thereby upset a cell’s calcium balance,” Pessah says.
Maurocalcin, however, was seemingly benign once inside cells. Intrigued, the researchers set out to find the reason. They discovered that once inside the cell, maurocalcin was modified by an enzymatic reaction called phosphorylation, a common cellular “switch” that normally turns reactions inside cells on or off by adding a phosphate group to a precise position on proteins.
This is the first example of a scorpion peptide being subjected to such modification once inside a mammalian cell. Phosphorylation of maurocalcin was found to completely reprogram its activity from that of a potential toxin to a potentially useful pharmacological tool.
“This is the real twist of nature,” Pessah says. “The toxic peptide is not supposed to get inside cells, but it does, and then is phosphorylated, which not only neutralizes its toxicity but also reprograms its activity to be beneficial.”
Researchers further tested the plausibility and molecular details responsible for pharmacological reprogramming by synthesizing artificial “phosphomimics,” and studying their three-dimensional structures and how they modified ryanodine receptor channels.
Identifying the best synthetic substitutes for maurocalcin could pave the way for a novel strategy to control ryanodine receptor channels that leak calcium. Leaky ryanodine receptor channels are known to contribute to a number of human and animal diseases of genetic and/or environmental origins.
As far as big life decisions go, choosing when to lose your virginity or the best time to start a family are probably right up there for most people. It may seem that such decisions are mostly driven by social factors, such as whether you’ve met the right partner, social pressure or even your financial situation. But scientists are increasingly realising that such sexual milestones are also influenced by our genes.
In a new study of more than 125,000 people, published in Nature Genetics, we identified gene variants that affect when we start puberty, lose our virginity and have our first child. This is hugely important as the timing of these events affect educational achievements as well as physical and mental health.
Children can start puberty at any time between eight and 14 years old. Yet it is only in recent years that we have begun to understand the biological reasons for this. Through studies of both animals and humans, we now know that there’s a complex molecular machinery in the brain that silences puberty hormones until the right time. At this point, chemical messengers secreted from the brain begin a cascade of events, leading to the production of sex hormones and reproductive maturity.
Human genetics studies have identified many genes that are linked to individual differences in the onset of puberty. There are broadly two approaches used to map such genes – studies of patients affected by rare disorders that affect puberty and large-scale population studies. The former is helpful because it can investigate gene variants that cause extremely early or delayed/absent puberty.
In previous research, we used population studies to survey a large number of individuals with questionnaires, then ran genome-wide association studies to scan those same participants for common genetic differences. We could then assess whether the participants’ reported age at puberty was related to particular gene variants. In this way, across a number of studies, we have identified more than 100 such variants, each modifying puberty timing by just a few weeks. Together, however, they contribute substantially to the overall variation in timing.
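The statistical core of such a genome-wide association scan is simple to sketch: for each common variant, regress the reported trait on the number of copies of the variant allele each participant carries, and keep variants whose effect is reliably non-zero. Here is a minimal, self-contained sketch with invented data (the allele frequencies, sample size and per-allele effect of a few weeks are illustrative assumptions, not figures from the study):

```python
import random

def ols_slope(x, y):
    """Ordinary least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

random.seed(0)
n = 10_000
# Genotype dosage: 0, 1 or 2 copies of the variant allele per person
# (frequencies roughly Hardy-Weinberg for a 30% allele).
dosage = random.choices([0, 1, 2], weights=[49, 42, 9], k=n)
# Reported age at puberty: baseline ~12.5 years, each allele copy shifting
# timing by ~0.04 years (a few weeks), plus individual variation.
age = [12.5 + 0.04 * g + random.gauss(0, 0.5) for g in dosage]

effect = ols_slope(dosage, age)
print(f"estimated effect per allele: {effect:.3f} years")
```

A real GWAS repeats this test for millions of variants and corrects for multiple testing, population structure and other covariates; the point here is only that each variant’s “few weeks” effect is, at bottom, a regression slope.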
We now understand that both nature and nurture play a roughly equal role in regulating the timing of puberty. For example, studies have consistently shown that obesity and excessive nutrition in children can cause an early onset of puberty.
However, we know far less about the biological and genetic factors behind the ages at which we first have sexual intercourse or have a first child. This is because previous research has focused more on environmental and family factors than genetics. But the launch of UK Biobank, a study with over half a million participants, has greatly helped to fill this gap in our knowledge.
In our new study, we used this data to survey some 125,000 people in the same way as in the puberty studies. We found 38 gene variants associated with the age of first sexual intercourse. The genes that we identified fall broadly into two groups. One category is genes with known roles in other aspects of reproductive biology and pubertal development, such as the oestrogen receptors, a group of proteins found on cells in the reproductive tract and also in behaviour control centres of the brain.
The other group includes genes that play roles in brain development and personality. For example, the gene CADM2 controls brain activity and also has strong effects on whether we regard ourselves as risk-takers. We discovered that this gene was also associated with losing your virginity early and having a higher number of children throughout life. Similarly, the gene MSRA, linked to how irritable we are, was also associated with age at first sexual intercourse: more irritable people typically have a later first encounter. However, more research is needed to show exactly how these genes help regulate the timing of these reproductive milestones.
We were also able to quantify that around 25% of the variation in these milestones was due to genetic differences rather than other factors.
Mendelian randomisation, a statistical genetics approach that helps clarify causal relationships between human characteristics, can tell us whether such epidemiological associations are likely to be causal rather than coincidental. Using it, we managed to show that early puberty actually contributes to a higher likelihood of risk-taking behaviours, such as sexual intercourse at an earlier age. It was also linked to having children earlier, and to having more children throughout life.
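The logic of Mendelian randomisation can be illustrated with a toy simulation (all numbers invented). Because gene variants are dealt out essentially at random at conception, they are unaffected by the hidden confounders that bias an ordinary regression of outcome on exposure; dividing the variant’s effect on the outcome by its effect on the exposure (the Wald ratio) recovers the causal effect:

```python
import random

def ols_slope(x, y):
    """Ordinary least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sum((a - mx) ** 2 for a in x)

random.seed(1)
n = 20_000
causal_effect = 0.3                    # true effect of exposure on outcome
g = random.choices([0, 1, 2], weights=[49, 42, 9], k=n)  # genetic instrument
exposure, outcome = [], []
for gi in g:
    u = random.gauss(0, 1)                       # unmeasured confounder
    x = 0.5 * gi + u + random.gauss(0, 0.5)      # e.g. puberty timing
    y = causal_effect * x + u + random.gauss(0, 0.5)
    exposure.append(x)
    outcome.append(y)

naive = ols_slope(exposure, outcome)             # biased by the confounder
wald = ols_slope(g, outcome) / ols_slope(g, exposure)  # Mendelian randomisation
print(f"naive: {naive:.2f}, MR estimate: {wald:.2f} (true {causal_effect})")
```

The naive regression badly overestimates the effect because the confounder pushes exposure and outcome up together; the genetic instrument strips that bias out.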
These findings, along with previous studies linking early puberty and loss of virginity to social and health risks, back the idea that future public health interventions should aim to help children avoid early puberty, for example through diet and physical activity and by avoiding excess weight gain. Our findings predict that this would improve both adolescent health and educational outcomes, as well as health at older ages.
A software tool makes it easy for anybody to quickly design a custom robot—including its movements—and print out its parts with a 3D printer. You assemble the parts like a puzzle. Add electronic motors to the joints, install a control unit and battery, and then unleash your creature.
The first step is to create a basic skeleton for the desired robot, specifying how many extremities the figure will have and how many segments there will be in the backbone. This skeleton can be modified at will by extending or shortening its segments or breaking them up with new joints.
The primary challenge of the research project was to design the robot’s movements so that they would also work outside the digital realm.
“That’s the hard part of this work, the part where technical innovation is needed,” says Bernhard Thomaszewski of Disney Research Zurich. From a user’s perspective, he says, the tools offered by their program are comparable with those used in the animation of purely digital figures.
However, unlike in digital animations, the robots must obey the laws of physics. In particular, physical robots cannot balance in every pose that is digitally possible, and there is a limit to the accelerations that can be produced by the motors.
“Without support from a computer, it is extremely difficult for users to take these restrictions into account when planning the movements, and this quickly becomes frustrating for the layman,” says Thomaszewski. “This is precisely the task that our software automates through simulation and numerical optimization.
“The user can therefore focus entirely on the creative aspects of the design.”
HOW THE ROBOTS MOVE
In order to design the motion of a robot, the user specifies simple motion goals such as “walk forward” or “turn left.”
Vittorio Megaro, a doctoral student at ETH Zurich, designed the program to automatically convert these high-level commands into low-level control signals for the motors, allowing the robot to walk stably.
Whenever the user changes the robot’s skeleton or its motion goals, the computer automatically adapts the time-dependent motor values. This process is very fast, offering immediate feedback on the resulting motion, as predicted by simulation.
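The pipeline described above, high-level goal in, physically feasible motor values out, can be caricatured in a few lines. This is a deliberately crude sketch, not Disney Research’s actual simulation and numerical optimization: it uses a one-dimensional toy forward model and a grid search, and every name and constant below is invented for illustration.

```python
import math

N_STEPS = 100        # discrete time steps in one planned motion
STRIDE_GAIN = 0.02   # toy constant: metres gained per radian of leg swing
ACCEL_LIMIT = 0.5    # crude stand-in for what the motors can produce

def motor_values(amplitude, frequency):
    """Time-dependent motor angles for a simple sinusoidal gait."""
    return [amplitude * math.sin(frequency * t) for t in range(N_STEPS)]

def simulate(angles):
    """Toy forward model: displacement grows with each change in leg angle."""
    return sum(STRIDE_GAIN * abs(b - a) for a, b in zip(angles, angles[1:]))

def max_accel(angles):
    """Largest second difference, a proxy for motor acceleration."""
    return max(abs(c - 2 * b + a)
               for a, b, c in zip(angles, angles[1:], angles[2:]))

def plan(goal_distance):
    """Search gait parameters that best meet the goal within motor limits."""
    best, best_err = None, float("inf")
    for amp in (i * 0.05 for i in range(1, 21)):
        for freq in (i * 0.1 for i in range(1, 16)):
            angles = motor_values(amp, freq)
            if max_accel(angles) > ACCEL_LIMIT:
                continue  # the motors could not execute this gait
            err = abs(simulate(angles) - goal_distance)
            if err < best_err:
                best, best_err = (amp, freq), err
    return best, best_err

params, err = plan(0.5)   # high-level goal: "walk forward 0.5 m"
print(params, round(err, 3))
```

The real system replaces the toy model with physics simulation and the grid search with numerical optimization over every motor value at every time step, but the division of labour is the same: the user states the goal, and the solver finds motor values that respect physical limits.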
Once the user is satisfied with the robot, the program automatically generates three-dimensional building plans for all segments of the body and for the connecting parts, which house the electric motors.
Standard sizes of various commercially available motors are stored in the program, so users only need to select the motor they are using and the program generates connecting parts that fit it.
The parts are fabricated on a 3D printer and, finally, the robot is assembled by hand.
CHEAP COMPONENTS, EXPENSIVE PRINTING
The electric motors, cables, battery and control unit for the robot are available commercially, and Megaro was able to buy these components cheaply online. On the other hand, a greater financial burden is associated with manufacturing the robot limbs on a high-quality 3D printer.
Megaro manufactured the first two prototypes using an in-house printer. This was cheap, he says, but the quality of the body parts was not particularly good: the shin bones broke on the first prototype, a four-legged robotic dog.
He commissioned an outside company to produce his insect-like masterpiece. This one, he says, is made of sturdy, high-grade plastic. “That quality comes at a price,” says Megaro.
Megaro and his colleagues intentionally kept the design of their robotic creatures simple. They can only adopt gaits that the user has first created using the software.
Megaro’s five-legged robotic insect can move forwards and sideways using various techniques. It cannot, however, identify obstacles—the robots don’t have sensors and aren’t designed to travel independently. They also cannot be controlled remotely, something that could potentially be achieved using a smartphone app.
“It also wasn’t the project’s aim to create an autonomous robot,” Megaro points out.
The software is still in development and not available to the public yet. Researchers at Carnegie Mellon University collaborated on the project.
Palaeontologists and the famous Tin Man in The Wizard of Oz were once in search of the same thing: a heart. But in our case, it was the search for a fossilised heart. And now we’ve found one.
A new discovery, announced today in the journal eLife, reveals a perfectly preserved 3D fossilised heart in a 113-119 million-year-old fish from Brazil called Rhacolepis.
This is the first definite fossilised heart found in any prehistoric animal.
For centuries, the fossil remains of back-boned animals – or vertebrates – were studied primarily from their bones or fossilised footprints. The possibility of finding well-preserved soft tissues in really ancient fossils was widely thought to be impossible.
Soft organic material rapidly decays after death, so organs start breaking down from bacterial interactions almost immediately after an animal has died. Once the body has decayed, what remains can eventually become buried and what’s left of the skeleton might one day become a fossil.
Exceptional preservation of fossils
But certain rare fossil deposits, called Konservat-Lagerstätten (roughly, “places of storage”), are formed by rapid burial under special chemical conditions. These deposits can preserve a range of soft tissues from the organism.
The famous Burgess Shale fossils from British Columbia in Canada show soft-bodied worms and other invertebrate creatures. These were buried by rapid mudslides around 525 million years ago.
The well-preserved fishes from the 113-119 million-year-old Santana Formation of Brazil were among the first vertebrate fossils to show evidence of preserved soft tissues. These include parts of stomachs and bands of muscles.
The discovery of complete soft tissues preserved as whole internal organs in a fossil was a bit of a Holy Grail for palaeontologists. Such finds could contribute to understanding deeper evolutionary patterns as internal soft organs have their own set of specialised features.
Finding a complete fossilised heart in a fish almost 120 million years old was a major breakthrough for José Xavier-Neto of the Brazilian Biosciences National Laboratory, Lara Maldanis of the University of Campinas, Vincent Fernandez of the European Synchrotron Radiation Facility and colleagues from across Brazil and Sweden.
Back in 2000, a group of US scientists claimed to have found a heart preserved in a dinosaur nicknamed Willo, a Thescelosaurus. But recent work has debunked this claim, showing the cavity of the dinosaur body was infilled by sediment and then impregnated with iron-rich minerals to make the cavity inside look a bit heart-like when imaged by CT scanning.
The only other claims for fossilised vertebrate hearts are stains supposedly made by haemoglobin-rich blood found in the region of the fossil where the heart should be. These, along with stains possibly representing the liver, have recently been documented in 390 million-year-old fishes from Scotland.
Digital heart surgery on a fossil
The new discovery was made by imaging a fossil still entombed within its limestone concretion, using synchrotron X-ray tomography at sections as fine as 6µm. The heart was then rendered out slice by slice using software to digitally restore the features of the organ.
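The slice-by-slice reconstruction can be pictured as stacking the 2D tomographic sections into a voxel volume and then picking out voxels by X-ray attenuation. A toy sketch of that idea in Python (the function names and the simple thresholding are illustrative assumptions, not the study's actual software pipeline):

```python
def stack_slices(slices):
    """Stack 2D tomographic sections (lists of rows) into a 3D volume:
    volume[z][y][x] is the X-ray attenuation at that voxel."""
    return list(slices)

def segment(volume, threshold):
    """Return coordinates of voxels whose attenuation exceeds a threshold,
    a crude way to isolate dense (e.g. mineralised) tissue."""
    return [
        (z, y, x)
        for z, plane in enumerate(volume)
        for y, row in enumerate(plane)
        for x, value in enumerate(row)
        if value > threshold
    ]

# Toy volume: three 2x2 sections with one dense voxel in the middle slice
volume = stack_slices([
    [[0.1, 0.2], [0.1, 0.1]],
    [[0.1, 0.9], [0.2, 0.1]],   # dense voxel at (z=1, y=0, x=1)
    [[0.1, 0.1], [0.3, 0.1]],
])
print(segment(volume, threshold=0.5))   # [(1, 0, 1)]
```

Real tomography software uses far more sophisticated reconstruction and segmentation, but the stack-then-select idea is the same.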
This method has now been widely applied in palaeontology for the past decade or so to reveal many intricate soft tissue structures in fossils, including the actual preserved brain of a 300 million-year-old fish from North America and actual muscle bundles attached to 380 million-year-old placoderm fishes from Australia.
The Rhacolepis heart was digitally restored by tomography and from images studied in cross-sections through the rock. It shows clear detail of the conus arteriosus, or bulb at the top of the heart, which has a pattern of five rows of valves inside it.
A detailed comparison with a dissected tarpon heart in the paper shows similar structures in the same relative position as the fossil heart.
The discovery of the fossilised heart is significant in that it shows the valve condition in an early member of the ray-finned fish group. These are the largest group of vertebrates alive today with nearly 30,000 species, and naturally they display a wide range of valve patterns in their hearts.
Some, such as the African reedfish, a very basal member of the ray-finned fishes, have nine rows of valves. But the most diverse modern group of ray-fins, the teleosts, have just a single outflow valve in the heart. In teleosts another structure, the bulbus arteriosus, prevails over the conus arteriosus to dominate outflow of blood from the heart.
Enter our fossil, Rhacolepis, a fish belonging to an entirely extinct family, the Pachyrhizodontidae, named after the extinct fish Pachyrhizodus. This is a group placed close to the base of the teleosts.
The pattern shown by the fossil seems to represent a good intermediate condition between the most primitive pattern and the most advanced type. In biology, simple patterns often hold more complex hidden meanings.
Within some ray-finned fish groups there is also thought to be a secondary simplification of the valve arrangements. In sturgeons and bowfins, for example, there is an independent pattern of simplification within the conus arteriosus.
There is also evidence for independent increase in the numbers of valves in some basal ray-fins, like the reedfish Polypterus, so interpreting evolutionary patterns from just one data point in time must be open to several explanations.
Nonetheless, for the first time we actually do have a data point to study the anatomy in detail of a fossilised heart in an extinct group of fishes.
The find demonstrates the immense potential for more discoveries of this nature, enabling more discussion of the comparative anatomy of soft organs in extinct organisms and how they have evolved through time.
With increased discoveries like this one, and more detailed knowledge of the soft tissue anatomy of extinct animals, we will one day really get to the heart of understanding the evolution of the first back-boned animals.
According to a new report by Patrick Egan and Megan Mullin, published in Nature on April 20, 2016, weather conditions have “improved” for the vast majority of Americans over the past 40 years. This, they argue, explains why there has been little public demand so far for a policy response to climate change.
Egan and Mullin do note that this trend is projected to reverse over the course of the coming century, and that Americans will become more concerned about climate change as they perceive more negative impact from weather. However, they estimate that such a shift may not occur in time to spur policy responses that could avert catastrophic impacts.
However, when we consider what Americans “prefer” with respect to weather, it is important to consider all variations in the weather – across hours, days and especially the extremes – rather than simply looking at annual averages.
After all, no one experiences long-term average weather, but we do increasingly experience weather extremes and their impacts on our health, safety and well-being.
Many climate studies focus on extreme events such as floods, hurricanes, heat waves and droughts because these are the weather phenomena with the biggest impacts and costs: they destroy crops, wreck infrastructure and threaten lives and property.
Analyzing the impact of climate change by focusing on average weather patterns greatly underplays climate change impacts and may make Americans dangerously complacent about how climate change is already affecting our lives.
Impacts of climate extremes
Egan and Mullin claim that “80 percent of Americans live in counties that are experiencing more pleasant weather than they did four decades ago.” They attribute this change to rising winter temperatures paired with summers that have not become “markedly more uncomfortable.” The result, they conclude, is that weather has shifted toward a temperate year-round climate that Americans have been demonstrated to prefer.
For their investigation of temperature trends, the authors looked only at the average of temperatures reported in the months of January and July. For precipitation trends, the authors looked only at annual precipitation totals and the number of days on which precipitation occurs annually.
But people don’t live in annual or monthly averages!
The effects of climate change are mainly manifested through changes in extremes, because the biggest impacts – loss of life and damage to property – occur especially in those conditions that break records and go beyond previous experience.
But the Egan and Mullin paper does not account adequately for extremes. Moreover, it should also be noted that people care about weather year-round, not just in January or July.
For temperature, it is fluctuations up and down around averages that draw attention and impact lives. Increasing heat waves, intensifying droughts and expanding wildfires are taking a ruinous toll, especially in summer months.
The wildfire season is many weeks longer than it used to be. Wildfires are local, but they affect us all through smoke and air quality, insurance and fire-fighting costs. Increasing pollen, allergies and asthma also accompany warmer conditions. In 2012 the U.S. suffered widespread drought and its hottest year on record.
In the past four decades, there has been an increasing frequency of high-humidity heat waves, which are characterized by the persistence of high nighttime temperatures. When the air stays extremely warm at night, there is less overnight relief, a fact that affects the young, elderly and ill particularly. The percentage of land area in the United States with unusually hot summer nights has increased from an average under 10 percent in the 1970s to over 40 percent in recent years.
Yes, it is likely true that some Americans prefer warmer winter conditions. Skiers and others who love winter sports, however, are not in that group, and more significantly, in many places, including California, warmer and drier winters have helped to drive long-term drought. Last winter, for the first time in 120 years of record-keeping, the winter average minimum temperature in the Sierra Nevada mountains was above freezing. Across the state, the prior 12 months were the warmest on record.
As a result, the Sierra Nevada snow pack that normally provides nearly 30 percent of California’s water stood at its lowest level in at least 500 years, despite modest increases in precipitation from the record lows of preceding years. The few winter storms of that year were warmer than average and tended to produce rain, not snow. What snow fell melted away almost immediately.
Warmer winters also allow insects and diseases to survive with dramatic consequences. The successful overwintering of pine beetles, for example, in the warming winters of the Rocky Mountains contributed to the death of 46 million acres of trees from 2000 through 2012.
Paradoxically, perhaps, in winter, warming can also create increased snowfall. Warmer winters reduce sea and lake ice, increasing so-called lake-effect snows in places like Buffalo.
Extremes are also the most dangerous aspect of rising sea levels.
For sea level, it is not the gradual rise that matters, because we barely notice slow changes in global mean sea level. Rather, it is a storm surge on top of a high tide on top of the rising sea level that causes devastation, as happened in the New York area and along the New Jersey shore during Superstorm Sandy.
The same is true of rainfall.
It is not the number of days with gentle showers that is of concern, but the increasing trend of torrential downpours – as witnessed just this week in Houston, where record-breaking April rains drove devastating floods.
The fact is that over the past century the U.S. has, on average, witnessed a 20 percent increase in the amount of precipitation falling in the heaviest downpours, with a 71 percent increase in the Northeast region and a 37 percent increase in the Midwest. This surge of extreme precipitation has dramatically increased the risk of flooding, especially in the regions with the largest increases in heavy precipitation.
In a warming world, storms become stronger and rainfall more intense owing to more moisture residing in the warmer atmosphere. Torrential rains flooded much of South Carolina, for example, last October, and Missouri experienced unprecedented rains in November and December 2015, resulting in flooding along the Mississippi River.
In May 2015 it was Texas and Oklahoma that experienced record rains and flooding, perhaps influenced by the major El Niño event combined with global warming. In September 2013 it was Boulder and the Front Range of the Rockies that suffered from major flooding arising from heavy prolonged rains.
These examples show that climate change makes itself felt throughout the year.
Anticipating new extremes
It is important to note that our cities, our agricultural system and our infrastructure were all built around the weather conditions of the past.
In other words, changes in extreme weather, in any direction, can have a profound impact. Disaster often strikes when a threshold is crossed, and extreme events are precisely when this happens. Adding climate change to natural variability in extreme weather can become the straw that breaks the camel’s back.
As detailed above, extreme weather has an outsized impact on everyday life. Ignoring the impact of extreme weather in determining the trend in “pleasant” weather conditions is, I would argue, nonsensical. Indeed, the trends in heat waves, drought and extreme precipitation would all seem to indicate that the weather overall has become more unpleasant and difficult to deal with.
The world took a collective sigh of relief in the last days of 2015, when countries came together to adopt the historic Paris agreement on climate change.
The international treaty was a much-needed victory for multilateralism, and surprised many with its more-ambitious-than-expected agreement to pursue efforts to limit global warming to 1.5°C.
The next step in bringing the agreement into effect happens in New York on Friday 22 April, with leaders and dignitaries from more than 150 countries attending a high-level ceremony at the United Nations to officially sign it.
The New York event will be an important barometer of political momentum leading into the implementation phase – one that requires domestic climate policies to be drawn up, as well as further international negotiations.
It comes a week after scientists took a significant step to assist with the process. On April 13 in Nairobi, the Intergovernmental Panel on Climate Change agreed to prepare a special report on the impacts of global warming of 1.5°C above pre-industrial levels. This will provide scientific guidance on the level of ambition and action needed to implement the Paris agreement.
Why the ceremony?
The signing ceremony in New York sets in motion the formal legal process required for the Paris agreement to “enter into force” and become legally binding under international law.
Although the agreement was adopted on December 12 2015 in Paris, it has not yet entered into force. This will happen automatically 30 days after the agreement has been ratified both by at least 55 countries and by countries representing at least 55% of global greenhouse gas emissions. Both conditions of this double threshold must be met before the agreement becomes legally binding.
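The double threshold is simply a conjunction of two conditions, which can be sketched as a small check (a hypothetical illustration; the country names and emissions shares below are made up):

```python
def enters_into_force(ratifiers):
    """Check the Paris agreement's double threshold: at least 55
    ratifying countries AND at least 55% of global emissions covered.
    `ratifiers` maps country name -> share of global emissions (%)."""
    count_ok = len(ratifiers) >= 55
    emissions_ok = sum(ratifiers.values()) >= 55.0
    return count_ok and emissions_ok

# 60 tiny countries covering 4% of emissions: country count met, emissions not
small_states = {f"state_{i}": 4 / 60 for i in range(60)}
print(enters_into_force(small_states))          # False

# Add two hypothetical big emitters at 26% each: now both conditions hold
small_states.update({"big_A": 26.0, "big_B": 26.0})
print(enters_into_force(small_states))          # True
```

Neither condition alone triggers entry into force; both must hold simultaneously.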
So, contrary to some concerns after Paris, the world does not have to wait until 2020 for the agreement to enter into force. It could happen as early as this year.
Signing vs ratification
When a country signs the agreement, it is obliged to refrain from acts that would defeat its object and purpose. The next step, ratification, signifies intent to be legally bound by the terms of the treaty.
The decision on timing for ratification by each country will largely be determined by domestic political circumstances and legislative requirements for international agreements.
Those countries that have already completed their domestic processes for international agreements can choose to sign and ratify on the same day in New York.
Who is going to sign and ratify in New York?
It is perhaps no surprise that the countries that are particularly vulnerable to the impacts of climate change, and that championed the need for high ambition in Paris, will be first out of the gate to ratify in New York.
Thirteen Small Island Developing States (SIDS) from the Caribbean, Indian Ocean and Pacific have signalled their intent to sign and ratify in New York: Barbados, Belize, Fiji, Grenada, Maldives, Marshall Islands, Nauru, Palau, Samoa, Saint Lucia, St Vincent and the Grenadines, the Seychelles and Tuvalu.
While these countries make up about a quarter of the 55 countries needed, they only account for 0.02% of the emissions that count towards the required 55% global emissions total.
Bringing the big emitters on board
China and the United States have recently jointly announced their intentions to sign in New York and to take the necessary domestic steps to formally join the agreement by ratifying it later this year. Given that they make up nearly 40% of the agreed set of global emissions for entry into force, that will go a significant way to meeting the 55% threshold.
We can expect more announcements of intended ratification schedules on 22 April. Canada (1.95%) has signalled its intent to ratify this year and there are early signs for many others. Unfortunately the European Union, long a leader on climate change, seems unlikely to be amongst the first movers due to internal political difficulties, including the intransigence of the Polish government.
The double threshold means that even if all of the SIDS and Least Developed Countries (LDCs) ratified, accounting for more than 75 countries but only around 4% of global emissions, the agreement would not enter into force until countries with a further 51% of global emissions also ratified.
Consequently, many more of the large emitters will need to ratify to ensure that the Paris agreement enters into force. This was a key design feature – it means a small number of major emitters cannot force a binding agreement on the rest of the world, and a large number of smaller countries cannot force a binding agreement on the major emitters.
The 55% threshold was set in order to ensure that it would be hard for a blocking coalition to form – a group of countries whose failure to ratify could ensure that an emissions threshold could not be met in practice. A number much above 60% of global emissions could indeed have led to such a situation.
The countries that appear likely to ratify this year, including China, the USA, Canada, many SIDS and LDCs, members of the Climate Vulnerable Forum along with several Latin American and African countries – around 90 in all – still fall about 5-6% short of the 55% emissions threshold.
It will take one more large emitter, such as the Russian Federation (7.53%), or two, such as India (4.10%) and Japan (3.79%), to get the agreement over the line. The intentions of these countries are not yet known.
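Using the shares quoted in the article, a quick back-of-the-envelope check bears this out (the 49.5% figure for the likely-ratifier bloc is an assumption, taken as the midpoint of the “5-6% short” estimate):

```python
# Shares of global emissions (%) quoted in the article
likely_bloc = 49.5          # assumption: ~90 likely ratifiers, "5-6% short" of 55%
russia, india, japan = 7.53, 4.10, 3.79

threshold = 55.0
print(likely_bloc + russia >= threshold)          # True: one large emitter suffices
print(likely_bloc + india + japan >= threshold)   # True: or two mid-sized ones
print(likely_bloc + japan >= threshold)           # False: one of them is not enough
```

Russia alone, or India and Japan together, would clear the emissions threshold; either India or Japan on its own would not.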
Why is early action important?
The Paris agreement may be ambitious, but it will only be as good as its implementation. That will depend on the political momentum gained in Paris being maintained. Early entry into force for the treaty would be a powerful signal in this direction.
We know from the Climate Action Tracker analyses that the present commitments are far from adequate. If all countries fully implement the national emission reduction targets brought to the climate negotiations last year, we are still on track for temperature increases of around 2.7°C. Worse, we also know that current policies adopted by countries are insufficient to meet these targets and are heading to around 3.6°C of global warming.
With average global annual temperature increase tipping over 1°C above pre-industrial levels for the first time last year, it is clear that action to reduce emissions has never been more urgent.
Early entry into force will unlock the legally binding rights and obligations for parties to the agreement. These go beyond just obligations aimed at delivering emissions reductions through countries’ Nationally Determined Contributions to the critical issues of, for example, adaptation, climate finance, loss and damage, and transparency in reporting on and reviewing action and support.
The events in New York this week symbolise the collective realisation that rapid, transformative action is required to decarbonise the global economy by 2050.
Climate science tells us that action must increase significantly within the next decade if we are to rein in the devastating impacts of climate change, which the most vulnerable countries are already acutely experiencing.