How forensic science can unlock the mysteries of human evolution

By Patrick Randolph-Quinney, University of Central Lancashire; Anthony Sinclair, University of Liverpool; Emma Nelson, University of Liverpool, and Jason Hall, University of Liverpool.

People are fascinated by the use of forensic science to solve crimes. Any science can be forensic when used in the criminal and civil justice system – biology, genetics and chemistry have been applied in this way. Now something rather special is happening: the scientific skill sets developed while investigating crime scenes, homicides and mass fatalities are being put to use outside the courtroom. Forensic anthropology is one field where this is happening.

Loosely defined, forensic anthropology is the analysis of human remains for the purpose of establishing identity in both living and dead individuals. In the case of the dead this often focuses on analyses of the skeleton. But any and all parts of the physical body can be analysed. The forensic anthropologist is an expert at assessing biological sex, age at death, living height and ancestral affinity from the skeleton.

Our newest research has extended forensic science’s reach from the present into prehistory. In the study, published in the Journal of Archaeological Science, we applied common forensic anthropology techniques to investigate the biological sex of artists who lived long before the invention of the written word.

We specifically focused on those who produced a type of art known as a hand stencil. We applied forensic biometrics to produce statistically robust results which, we hope, will offset some of the problems archaeological researchers have encountered in dealing with this ancient art form.

Sexing rock art

Ancient hand stencils were made by blowing, spitting or stippling pigment onto a hand while it was held against a rock surface. This left a negative impression on the rock in the shape of the hand.

Experimental production of a hand stencil. Jason Hall, University of Liverpool

These stencils are frequently found alongside pictorial cave art created during a period known as the Upper Palaeolithic, which began roughly 40,000 years ago.

Archaeologists have long been interested in such art. The presence of a human hand creates a direct, physical connection with an artist who lived millennia ago. Archaeologists have often focused on who made the art – not the individual’s identity, but whether the artist was male or female.

Until now, researchers have focused on hand size and finger length to assess the artist’s sex. The size and shape of the hand are influenced by biological sex: sex hormones affect the relative lengths of the fingers during development, captured in what is known as the 2D:4D ratio (the length of the index finger relative to the ring finger).

But ratio-based studies applied to rock art have proven difficult to replicate and have often produced conflicting results. The problem with focusing on hand size and finger length is that two differently shaped hands can have identical linear dimensions and ratios.

To overcome this we adopted an approach based on forensic biometric principles. This promises to be both more statistically robust and more open to replication between researchers in different parts of the world.

The study used a branch of statistics called Geometric Morphometric Methods. The underpinnings of this discipline date back to the early 20th century. More recently, computing and digital technology have allowed scientists to capture objects in 2D and 3D before extracting shape and size differences within a common spatial framework.

In our study we used experimentally produced stencils from 132 volunteers. The stencils were digitised and 19 anatomical landmarks were applied to each image. These correspond to features on the fingers and palms which are the same between individuals, as depicted in figure 2. This produced a matrix of x-y coordinates of each hand, which represented the shape of each hand as the equivalent of a map reference system.

Figure 2. Geometric morphometric landmarks applied to an experimentally produced hand stencil. This shows the 19 geometric landmarks applied to a hand. Emma Nelson, University of Liverpool

We used a technique called Procrustes superimposition to translate, rotate and scale each hand outline into a common spatial framework. This made the differences between individuals and between the sexes objectively apparent.
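For readers curious about the mechanics, the core of an ordinary Procrustes superimposition can be sketched in a few lines of Python with numpy. This is an illustrative sketch of the general technique, not the study’s actual code; the random landmark data and function names are assumptions.

```python
import numpy as np

def procrustes_align(ref, target):
    """Superimpose `target` onto `ref` by removing translation,
    scale and rotation (ordinary Procrustes superimposition)."""
    # Remove translation: centre each configuration on its centroid
    ref_c = ref - ref.mean(axis=0)
    tgt_c = target - target.mean(axis=0)
    # Remove scale: divide by centroid size (the Frobenius norm)
    ref_c = ref_c / np.linalg.norm(ref_c)
    tgt_c = tgt_c / np.linalg.norm(tgt_c)
    # Remove rotation: solve the orthogonal Procrustes problem via SVD
    u, _, vt = np.linalg.svd(tgt_c.T @ ref_c)
    return ref_c, tgt_c @ (u @ vt)

# Two "hands" of 19 (x, y) landmarks: the second is a shifted,
# scaled and rotated copy of the first, so it differs in everything
# except shape
rng = np.random.default_rng(42)
hand_a = rng.random((19, 2))
theta = 0.6
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
hand_b = 2.5 * hand_a @ rot.T + np.array([3.0, -1.0])

aligned_a, aligned_b = procrustes_align(hand_a, hand_b)
# After superimposition the two configurations coincide, so any
# residual difference between real hands reflects shape alone
print(np.allclose(aligned_a, aligned_b, atol=1e-6))  # True
```

Once all the stencil outlines share this common frame, whatever landmark differences remain can be analysed as pure shape.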

Procrustes also allowed us to treat shape and size as discrete entities, analysing them either independently or together. Then we applied discriminant statistics to investigate which component of hand form could best be used to assess whether an outline was from a male or a female. After discrimination we were able to predict the sex of the hand in 83% of cases using a size proxy, but with over 90% accuracy when size and shape of the hand were combined.
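The usual size proxy in geometric morphometrics is centroid size: the square root of the summed squared distances of every landmark from the configuration’s centroid. A minimal sketch, where the 19-landmark array is an invented example rather than the study’s data:

```python
import numpy as np

def centroid_size(landmarks):
    """Square root of the summed squared distances of each
    landmark from the configuration's centroid."""
    centered = landmarks - landmarks.mean(axis=0)
    return float(np.sqrt((centered ** 2).sum()))

hand = np.arange(38, dtype=float).reshape(19, 2)  # 19 (x, y) landmarks
# Centroid size scales linearly with the hand: doubling the
# coordinates doubles the size measure
print(centroid_size(2 * hand) / centroid_size(hand))  # 2.0
```

Because this measure is independent of where the hand sits on the rock face and how it is oriented, it can be compared directly across stencils.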

An analysis called Partial Least Squares allowed us to treat the hand as discrete anatomical units; that is, to analyse the palm and fingers independently. Rather surprisingly, the shape of the palm was a much better indicator of the sex of the hand than the fingers. This runs counter to received wisdom.

This would allow us to predict sex in hand stencils which have missing digits – a common issue in Palaeolithic rock art – where whole or part fingers are often missing or obscured.


This study adds to the body of research that has already used forensic science to understand prehistory. Beyond rock art, forensic anthropology is helping to develop the emergent field of palaeo-forensics: the application of forensic analyses into the deep past.

For instance, we have been able to understand fatal falls in Australopithecus sediba from Malapa and primitive mortuary practices in the species Homo naledi from Rising Star Cave, both in South Africa.

All of this shows the synergy that arises when the palaeo, archaeological and forensic sciences are brought together to advance humans’ understanding of the past.

Patrick Randolph-Quinney, Senior Lecturer in Biological and Forensic Anthropology, University of Central Lancashire; Anthony Sinclair, Professor of Archaeological Theory and Method, University of Liverpool; Emma Nelson, Lecturer in Clinical Communication, University of Liverpool, and Jason Hall, Chief Archaeology Technician, University of Liverpool

This article was originally published on The Conversation. Read the original article.

Now, Check Out:


Here’s Evidence that a Massive Collision Formed the Moon

Scientists have new evidence that our moon formed when a planet-sized object struck the infant Earth some 4.5 billion years ago.

Lab simulations show that a giant impact of the right size would not only send a huge mass of debris hurtling into space to form what would become the moon. It would also leave behind a stratified layer of iron and other elements far below Earth’s surface, just like the layer that seismic imaging shows is actually there.

Johns Hopkins University geoscientist Peter Olson says a giant impact is the most widely accepted scientific hypothesis for how the moon came to be, but it has been considered unproven because there has been no “smoking gun” evidence.

“We’re saying this stratified layer might be the smoking gun,” says Olson, a research professor in earth and planetary sciences. “Its properties are consistent with it being a vestige of that impact.”

“Our experiments bring additional evidence in favor of the giant impact hypothesis,” says Maylis Landeau, lead author of the paper and a postdoctoral fellow at Johns Hopkins when the simulations were done. “They demonstrate that the giant impact scenario also explains the stratification inferred by seismology at the top of the present-day Earth’s core. This result ties the present-day structure of Earth’s core to its formation.”

1,800 miles below Earth’s crust

The argument compares evidence on the stratified layer—believed to be some 200 miles (322 kilometers) thick and 1,800 miles (2,897 kilometers) below the Earth’s surface—with lab simulations of the turbulence of the impact. The turbulence in particular is believed to account for the stratification—meaning there are materials in layers rather than a homogeneous composition—at the top of the planet’s core.

The stratified layer is believed to contain iron and lighter elements, including oxygen, sulfur, and silicon. The existence of the layer is understood from seismic imaging; it is far too deep to be sampled directly.

Up to now, most simulations of the hypothetical big impact have been done in computer models and have not accounted for impact turbulence, Olson says. Turbulence is difficult to simulate mathematically, he adds.

The researchers simulated the impact using liquids meant to approximate the turbulent mixing of materials that would have occurred when a planetary object struck when Earth was just about fully formed—a “proto-Earth,” as scientists call it.

Olson says the experiments depended on the principle of “dynamic similarity.” In this case, that means scientists can make reliable comparisons of fluid flows without doing an experiment as big and powerful as the original Earth impact, which—of course—is impossible. The study in Olson’s lab was meant to simulate the key ratios of forces acting on each other to produce the turbulence of the impact that could leave behind a layered mixture of material.

The researchers conducted more than 60 trials in which about 3.5 ounces of saline or ethanol solutions representing the planetary projectile that hit the Earth were dropped into a rectangular tank holding about 6 gallons of fluid representing the early Earth. In the tank was a combination of fluids in layers that do not mix: oil floating on the top to represent the Earth’s mantle and water below representing the Earth’s core.

Analysis showed that a mix of materials was left behind in varying amounts and that the distribution of the mixture depended on the size and density of the projectile hitting the Earth. The authors argue for a moon-forming projectile smaller or equal to the size of Mars, a bit more than half the size of Earth.

A summary of the study has been published by the journal Nature Geoscience.

Source: Republished from Futurity as a derivative work under the Attribution 4.0 International license.

Featured Image Credit: Brian, via Wikimedia Commons, CC BY-2.0

Now, Check Out:

Scientist at work: Tracking melt water under the Greenland ice sheet

By Joel T. Harper, The University of Montana.

During the past decade, I’ve spent nearly a year of my life living on the Greenland ice sheet to study how melt water impacts the movement of the ice.

What happens to the water that finds its way from the melting ice surface to the bottom of the ice sheet is a crucial question for glaciologists like me. Knowing this will help us ascertain how quickly Greenland’s ice sheet could contribute to global sea-level rise. But because this type of research requires studying the underside of a vast and thick ice sheet, my colleagues and I have developed some unusual research techniques.

Our approach is to mimic the alpine style of mountaineering to do our polar research. That involves a small group of self-sufficient climbers who keep their loads light and depend on speed and efficiency to achieve their goals. It’s the opposite of expedition-style mountaineering, which relies on a large support crew and lots of heavy equipment to slowly advance a select few people to the summit.

We bring a small team of scientists who are committed to our fast and light field research style, with each person taking on multiple roles. We use mostly homemade equipment that is designed to produce novel results while being lightweight and efficient – the antithesis of “overdesigned.” The chances of scientific failure from this less conventional approach can be unnerving, but the benefits can be worth the risks. Indeed, we’ve already gained significant insights into the Greenland ice sheet’s underside.

Mysterious place

Our science team from the University of Montana and University of Wyoming sleeps in backpacking tents, the endless summer sunshine making shadows that rotate in circles around us. Ice-sheet camping is challenging. Your tent and sleeping pad insulate the ice as it melts, and soon your tent rises up into the relentless winds on an icy drooping pillar. Occasionally people’s tents slide off their pillars in the middle of the night.

But it’s not the melting on the surface that concerns us so much as what’s happening at the base of the Greenland ice sheet. Arctic warming has increased summer melting of this huge reservoir of ice, causing sea levels to rise. Before the melt water runs to the oceans, much of it finds its way to the bottom of the ice sheet.

The additional water can lubricate the base of the ice sheet in places where the ice can be 1,000 or more meters thick. This causes the ice to slide more quickly across the bedrock on which it sits. The result is that more ice is transported from the high center of the ice sheet, where snow accumulates, to the low elevation margins of the ice sheet, where it either calves into the sea or melts in the warmth of low elevations.

A system of pumps and heaters generates a high-pressure jet of hot water that is used to melt a hole to the bottom of the Greenland ice sheet.

One school of thought is that a feedback may be kicking in; the more water added, the faster the ice will move, and so ultimately the faster the ice will melt.

An alternative hypothesis is that adding more water to the bed will create large water flow pathways at the contact between the ice and bedrock. These channels are efficient at flushing the water quickly, which could limit the effects of increased melt water at the bed. In other words, by adding more water there is actually less lubrication – not more – because a drainage system develops that quickly moves the water away.

We know flowing water generates heat and melts open the channels in the ice. However, the enormous pressure at the base of the ice acts to squeeze the channels shut. Competing forces battle in a complicated dance.

We can represent these processes with equations, and simulate the opening and closing of the channels on a computer. But the meaningfulness of our results depends on whether we have properly accounted for all of the physical processes actually taking place. To test this, we need to look under the ice sheet.
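As a toy illustration of those competing processes, the balance between melt opening and pressure-driven closure can be stepped forward in time with a few lines of Python. The constants here are purely illustrative placeholders, not a calibrated glaciological model:

```python
def step(area, water_heat, ice_pressure, dt=0.1):
    """One explicit Euler step of a toy channel model: flowing water
    melts the channel open while ice pressure squeezes it shut."""
    opening = 0.05 * water_heat           # melt opening from flowing water
    closing = 0.01 * ice_pressure * area  # creep closure grows with area
    return area + dt * (opening - closing)

area = 1.0  # arbitrary starting cross-section
for _ in range(1000):
    area = step(area, water_heat=2.0, ice_pressure=5.0)

# The channel settles where melting balances squeezing
print(round(area, 2))  # close to the steady state of 2.0
```

Even in this caricature, raising the ice pressure shrinks the equilibrium channel while adding more heat from flowing water enlarges it, which is the tug-of-war the borehole measurements are designed to test.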

The bottom of the ice sheet is a mysterious place we glaciologists spend a lot of time hypothesizing about. It’s not a place you can actually go and have a look around. So our team has drilled boreholes to the bed of the Greenland ice sheet to insert sensors and to conduct experiments designed to reveal the water flow and ice sliding conditions. They are essentially pinpricks that allow us to test and refine our models.

Homemade heat drill

Our approach to penetrating many hundreds of meters of cold ice (e.g., -18 degrees Celsius) is to run a light and nimble drilling campaign. We use alpine climbing tactics so that we can move quickly around the ice sheet to drill as many holes as we can in different places, to see if conditions vary from place to place. Our drill can be moved long distances in just a few helicopter loads, and we carry it ourselves for shorter hauls.

We don’t have devoted cooks or mechanics or engineers; we have a small group of faculty and carefully selected students who need to do it all. We rely on people who can fiddle with the electronics of homemade instruments while being unafraid of hard manual labor like moving fuel barrels and hooking up heavy pumps and hoses in the biting cold Greenland wind. Back in the lab, these same people must have outstanding skills to apply math and physics to data analysis and modeling.

The drill is moved long distances by helicopter, and shorter distances by hand-carrying over the ice. Our goal is to keep the drilling equipment as small and light as possible to permit easy transport.
Joel Harper, Author provided

Our homemade drill uses hot water to melt a hole through the ice. We capture surface melt water flowing in streams, heat it to near boiling and then pump it at very high pressure through a hose to a nozzle that sprays a carefully designed jet of water.

Our drilling days are long, extending from morning to well into the night. When the hole is finished, that’s when our work really begins because we only have about two hours before the hole completely freezes shut again. We need to get the drill out of the hole and all experiments completed before that happens. Like astronauts who rehearse their spacewalks, we plan every step and try not to panic when something unexpected happens.

We conduct experiments by artificially adding slugs of water to the bed to measure how the drainage system can accommodate extra water. We send down a camera to take pictures of the bed, a suction tube to sample the sediment and homemade sensors to measure the temperature, pressure and movement of the water. We build the sensors ourselves because you just can’t buy sensors designed for the bottom of an 800-meter-deep hole through an ice sheet.

Joel Harper (Univ. of Montana) and Neil Humphrey (Univ. of Wyoming) operate the hot water drill.
Joel Harper

I’ll admit our fast, light approach to drilling comes with risks. We don’t have redundant systems and we don’t carry lots of backup parts. Our lightweight drill makes a narrow hole, and the top of the hole begins freezing closed while drilling is still advancing at the bottom. We’ve had scary episodes where we’ve almost lost the drill.

A generator fails or a gear box blows, and now the hole is freezing shut around the 700 meters of hose and drill stem. If we can’t come up with a fix within minutes, the drill is lost and the project is over. We could take much less risk by scaling up logistics and reducing our goals. But that would mean doubling the crew and the pile of equipment, and adding another zero to our budget, only to drill one or two holes a year.

Our light-and-nimble approach has allowed us to drill holes quickly and to move large distances. We have drilled 36 boreholes spread along 45 kilometers (28 miles) of the ice sheet’s western side. The holes are up to 850 meters (about half a mile) deep and have produced multi-year records of conditions under the ice.

Different physics than thought

Our instruments have discovered the water pressure under the ice is higher than portrayed by computer models. The melting power of flowing water is less effective than we thought, and so the enormous pressure under the thick ice has the upper hand – the squeezing inhibits large channels from opening.

This does not necessarily mean the ice will move faster due to enhanced lubrication as more melt water reaches the bed. This is because we have also discovered ways the water flows in smaller channels and sheets much more quickly than we expected. Now we are retrofitting our computer models to include these physics.

Our ultimate goal is to improve simulations of Greenland’s future contributions to sea level. Our discoveries are not relevant to tomorrow’s sea level or even next year’s, but nailing down these processes is important for knowing what will happen over upcoming decades to centuries. Sea level rise has big societal consequences, so we will continue our nimble approach to investigating water at Greenland’s bed.

Joel T. Harper, Professor of Geosciences, The University of Montana

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

An official welcome to the Anthropocene epoch – but who gets to decide it’s here?

By Noel Castree, University of Wollongong.

It’s literally epoch-defining news. A group of experts tasked with considering the question of whether we have officially entered the Anthropocene – the geological age characterised by humans’ influence on the planet – has delivered its answer: yes.

The British-led Working Group on the Anthropocene (WGA) told a geology conference in Cape Town that, in its considered opinion, the Anthropocene epoch began in 1950 – the start of the era of nuclear bomb tests, disposable plastics and the human population boom.

The Anthropocene has fast become an academic buzzword and has achieved a degree of public visibility in recent years. But the more the term is used, the more confusion reigns, at least for those not versed in the niceties of the underpinning science.

Roughly translated, the Anthropocene means the “age of humans”. Geologists examine layers of rock called “strata”, which tell a story of changes to the functioning of Earth’s surface and near-surface processes, be these oceanic, biological, terrestrial, riverine, atmospheric, tectonic or chemical.

When geologists identify boundaries between layers that appear to be global, those boundaries become candidates for formal recognition by the International Commission on Stratigraphy (ICS). The commission produces the International Chronostratigraphic Chart, which delimits verified changes during the planet’s 4.5 billion-year evolution.

Earth’s history, spiralling towards the present.
USGS/Wikimedia Commons

The chart features a hierarchy of terms like “system” and “stage”; generally, the suffix “cene” refers to a geologically brief stretch of time and sits at the bottom of the hierarchy. We have spent the past 11,500 years or so living in the so-called Holocene epoch, the interglacial period during which Homo sapiens has flourished.

If the Holocene has now truly given way to the Anthropocene, it’s because a single species – us – has significantly altered the character of the entire hydrosphere, cryosphere, biosphere, lithosphere and atmosphere.

The end of an era?

Making this call is not straightforward, because the Anthropocene proposition is being investigated in different areas of science, using different methods and criteria for assessing the evidence. Despite its geological ring, the term Anthropocene was coined not by a geologist, but by the Nobel Prize-winning atmospheric chemist Paul Crutzen in 2000.

He and his colleagues in the International Geosphere-Biosphere Program have amassed considerable evidence about changes to everything from nutrient cycles to ocean acidity to levels of biodiversity across the planet.

Comparing these changes to those occurring during the Holocene, they concluded that we humans have made an indelible mark on our one and only home. We have altered the Earth system qualitatively, in ways that call into question our very survival over the coming few centuries.

Crutzen’s group talks of the post-1950 period as the “Great Acceleration”, when a range of factors – from human population numbers, to disposable plastics, to nitrogen fertiliser – began to increase exponentially. But their benchmark for identifying this as a significant change has nothing to do with geological stratigraphy. Instead, they ask whether the present period is qualitatively different to the situation during the Holocene.

Rocking out

Meanwhile, a small group of geologists has been investigating the stratigraphic evidence for the Anthropocene. A few years ago a subcommission of the ICS set up the Anthropocene working group, which has now suggested that human activity has left an indelible mark on the stratigraphic record.

The major problem with this approach is that any signal is not yet captured in rock. Humans have not been around long enough for any planet-wide impacts to be evident in Earth’s geology itself. This means that any evidence for a Holocene-Anthropocene boundary would necessarily be found in less permanent media like ice sheets, soil layers or ocean sediments.

The ICS has always considered evidence for boundaries that pertain to the past, usually the deep past. The WGA is thus working against convention by looking for present-day stratigraphic markers that might demonstrate humans’ planetary impact. Only in thousands of years’ time might future geologists (if there are any) confirm that these markers are geologically significant.

In the meantime, the group must be content to identify specific calendar years when significant human impacts have been evident. For example, one is 1945, when the Trinity atomic device was detonated in New Mexico. This and subsequent bomb tests have left global markers of radioactivity that ought still to be evident in 10,000 years.

Alternatively, geographers Simon Lewis and Mark Maslin have suggested that 1610 might be a better candidate for a crucial human-induced step change. That was the year when atmospheric carbon dioxide dipped markedly, suggesting a human fingerprint linked to the New World colonists’ impact on indigenous American agriculture, although this idea is contested.

Decision time

The fact that the WGA has picked a more recent date, 1950, suggests that it agrees with the idea of defining the Great Acceleration of the latter half of the 20th century as the moment we stepped into the Anthropocene.

It’s not a decision that is taken lightly. The ICS is extremely scrupulous about amending the International Chronostratigraphic Chart. The WGA’s suggestion will face a rigorous evaluation before it can be scientifically accepted by the commission. It may be many years before it is formally ratified.

Elsewhere, the term is fast becoming a widely used description of how people now relate to our planet, rather like the Iron Age or the Renaissance. These words describe real changes in history and enjoy widespread use in academia and beyond, without the need for rigorously defined “boundary markers” to delimit them from prior periods.

Does any of this really matter? Should we care that the jury is still out in geology, while other scientists feel confident that humans are altering the entire Earth system?

Writing on The Conversation, geologist James Scourse suggests not. He feels that the geological debate is “manufactured” and that humans’ impact on Earth is sufficiently well recognised that we have no need of a new term to describe it.

Clearly, many scientists beg to differ. A key reason, arguably, is the failure of virtually every society on the planet to acknowledge the sheer magnitude of the human impact on Earth. Only last year did we finally negotiate a truly global treaty to confront climate change.

In this light, the Anthropocene allows scientists to assemble a set of large-scale human impacts under one graphic conceptual banner. Its scientific status therefore matters a great deal if people worldwide are at long last to wake up to the environmental effects of their collective actions.

Gaining traction

But the scientific credibility of the Anthropocene proposition is likely to be called into question the more that scientists use the term informally or otherwise. Here the recent history of climate science in the public domain is instructive.

Even more than the concept of global warming, the Anthropocene is provocative because it implies that our current way of life, especially in wealthy parts of the world, is utterly unsustainable. Large companies that profit from environmental despoliation – oil multinationals, chemical companies, car makers and countless others – have much to lose if the concept becomes linked with political agendas devoted to things like degrowth and decarbonisation. When one considers the organised attacks on climate science in the United States and elsewhere, it seems likely that Anthropocene science will be challenged on ostensibly scientific grounds by non-scientists who dislike its implications.

Sadly, such attacks are likely to succeed. In geology, the WGA’s unconventional proclamation potentially leaves any ICS definition open to challenge. If accepted, it also means that all indicators of the Holocene would now have to be referred to as things of the past, despite evidence that the transition to a human-shaped world is not quite complete in some places.

Some climate contrarians still refuse to accept that researchers can truly distinguish a human signature in the climate. Similarly, scientists who address themselves to the Anthropocene will doubtless face questions about how much these changes to the planet are really beyond the range of natural variability.

If “Anthropocene sceptics” gain the same momentum as climate deniers have enjoyed, they will sow seeds of confusion into what ought to be a mature public debate about how humans can transform their relationship with the Earth. But we can resist this confusion by recognising that we don’t need the ICS’s imprimatur to appreciate that we are indeed waving goodbye to Earth as we have known it throughout human civilisation.

We can also recognise that Earth system science is not as precise as nuclear physics or geometry. This lack of precision does not mean that the Anthropocene is pure scientific speculation. It means that science knows enough to sound the alarm, without knowing all the details about the unfolding emergency.

The Anthropocene deserves to become part of our lexicon – a way we understand who we are, what we’re doing and what our responsibilities are as a species – so long as we remember that not all humans are equal contributors to our planetary maladies, with many being victims.

Noel Castree, Professor of Geography, University of Wollongong

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

Ancient Mummy Head Reconstructed with CT Scanning, 3D Printing, and Forensic Art

Researchers have produced a reconstruction of the head of an 18- to 25-year-old woman who lived at least 2,000 years ago in ancient Egypt.

They have named her Meritamun, which means beloved of the god Amun. Her face is only the start of the team’s work to answer questions about how she may have died, what diseases she had, when she lived, where she was from, and even what she ate.

“The idea of the project is to take this relic and, in a sense, bring her back to life by using all the new technology,” says Varsha Pilbrow, a biological anthropologist who teaches anatomy in the University of Melbourne’s department of anatomy and neuroscience.

“This way she can become much more than a fascinating object to be put on display. Through her, students will be able to learn how to diagnose pathology marked on our anatomy, and learn how whole population groups can be affected by the environments in which they live.”

CT scanning, 3D printing, forensic science, and art have brought Meritamun back to life. Sculpture by Jennifer Mann. (Credit: Paul Burston)

In the museum

How and why the University of Melbourne has a mummified Egyptian head in the basement of its medical building is a mystery. It may well have been part of the collection of Professor Frederic Wood Jones (1879-1954) who before becoming head of anatomy at the University in 1930 had undertaken archaeological survey work in Egypt.

Meritamun is housed in a purpose-built archival container alongside rows of human specimens preserved in glass jars and formaldehyde solution at the Harry Brookes Allen Museum of Anatomy and Pathology, in the School of Biomedical Sciences. She lies face up, as she would have been buried. Even though she is covered in tightly wound bandages and blackened by oil and embalming fluid, her delicate features are clear.

“Her face is kept upright because it is more respectful that way,” says museum curator Ryan Jefferies. “She was once a living person, just like all the human specimens we have preserved here, and we can’t forget that.”

The genesis of the project was Jefferies’ concern that the head, whose origin remains a mystery, could be decaying from the inside without anyone noticing. Removing the bandages wasn’t an option as it would have damaged the relic and further violated the individual who had been embalmed for the afterlife. But the scan revealed the skull to be in extraordinarily good condition. From there the opportunity to use technology to research the mystery of the head was one that was too good to resist.

“The CT scan opened up a whole lot of questions and avenues of enquiry and we realized it was a great forensic and teaching opportunity in collaborative research,” says Jefferies, a parasitologist.

Tooth decay and anemia

Meritamun was identified as ancient Egyptian by Janet Davey, a forensic Egyptologist from Monash University who is based at the Victorian Institute of Forensic Medicine, where the head was scanned.

Davey determined Meritamun’s sex from the bone structure of her face, noting markers such as the small size and angle of the jaw, the narrowness of the roof of her mouth, and the roundness of her eye sockets.

Davey estimates that Meritamun was probably about five feet, four inches tall, given the accepted view that ancient peoples were generally shorter than people today. If the researchers had a bone from her arm or leg, or even just her heel, they could have produced a more precise estimate.
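Stature estimation from long bones relies on published regression equations. A minimal sketch of how such an estimate works, using illustrative coefficients of the widely cited Trotter–Gleser type (the exact formula and values here are assumptions, not necessarily what Davey would use):

```python
def stature_from_femur_cm(femur_length_cm: float) -> float:
    """Estimate adult stature (cm) from maximum femur length (cm).

    Uses an illustrative female regression of the Trotter-Gleser type:
    stature ~= 2.47 * femur + 54.10, with a few centimetres of error.
    """
    return 2.47 * femur_length_cm + 54.10

# A hypothetical 44 cm femur suggests a stature of roughly 163 cm (~5'4").
print(round(stature_from_femur_cm(44.0), 1))
```

This is why even a single long bone yields a tighter estimate than the population-average reasoning the researchers had to fall back on.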

Dating her is also difficult. Meritamun has significant tooth decay, which by conventional thinking would place her in Greco-Roman times, when sugar reached Egypt after Alexander’s conquest in 331 BCE. But honey could also account for the decay. And while mummification became more widely accessible to Egyptians after Alexander’s time, the fineness of the linen bandages suggests Meritamun was of high enough status to have been embalmed in the earlier era of the Pharaohs.

Davey is now waiting on radiocarbon dating to give a better idea of when Meritamun lived, which she says could be as long ago as c. 1500 BCE. Radiocarbon dating works by measuring how much of the radioactive isotope carbon-14 remains in a tissue sample. Because carbon-14 decays at a known rate, the less of it a sample contains, the older the sample is.
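The decay relationship behind radiocarbon dating can be written down directly. A minimal sketch, assuming the modern (Cambridge) carbon-14 half-life of 5,730 years; real laboratories additionally apply calibration curves to convert this raw figure into calendar years:

```python
import math

HALF_LIFE_C14 = 5730.0  # years; the Cambridge half-life of carbon-14

def radiocarbon_age(fraction_remaining: float) -> float:
    """Uncalibrated age (years) from the fraction of carbon-14 remaining.

    Solves N/N0 = (1/2) ** (t / half_life) for t.
    """
    if not 0.0 < fraction_remaining <= 1.0:
        raise ValueError("fraction_remaining must be in (0, 1]")
    return -HALF_LIFE_C14 * math.log2(fraction_remaining)

# A sample retaining about 65% of its original carbon-14 dates to roughly
# 3,500 years ago - the right range for a c. 1500 BCE burial.
print(round(radiocarbon_age(0.65)))
```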

Biomedical science masters student Stacey Gorski, under the supervision of Pilbrow, is using forensic pathology to try to uncover how healthy Meritamun was and how she may have died.

Just from reading the CT scans, Gorski has been able to see that, in addition to two tooth abscesses, there are patches on the skull where the bone has pitted and thinned. This is a clear symptom of anemia – a lack of red blood cells that starves the body of oxygen. The thinning occurs when the bone marrow swells as it goes into overdrive in an effort to produce more red blood cells.

In Meritamun’s case, it may have been caused by parasitic infections such as malaria or schistosomiasis (a flatworm disease), both of which would have been hazards in the Nile Delta in ancient times. “The fact that she lived to adulthood suggests that she was infected later in life,” says Gorski. Nevertheless, the parasites and anemia would have left her pale and lethargic at the end.

“Anemia is a very common pathology that is found in bodies from ancient Egypt, but it usually isn’t very clear to see unless you can look directly at the skull,” says Gorski. “But it was completely clear from just looking at the images.”

Without having the rest of Meritamun’s body it will be impossible to know for sure how she died, but the anemia could certainly have been a predisposing factor, as could the abscesses if they had become seriously infected.

‘Giving back some of her identity’

It took 140 hours of printing time on a simple consumer-level 3D printer to produce the skull used to reconstruct Meritamun’s face, not counting the tweaking and design work of the department of anatomy and neuroscience’s imaging technician, Gavan Mitchell. Because the 3D printer builds from the bottom up and the print is always more detailed at the top, Mitchell had to print the skull in two sections to better capture the detail of the jaws and the base of the skull.

“It has been a hugely rewarding process to be able to transform the skull from CT data on a screen into a tangible thing that can be handled and examined,” says Mitchell.

The printed skull formed the base on which sculptor Jennifer Mann has used all her forensic and artistic skill to reconstruct Meritamun’s face.

“It is incredible that her skull is in such good condition after all this time, and the model that Gavan produced was beautiful in its details,” says Mann.

Mann learned the technique for facial reconstruction at the Forensic Anthropology Centre at Texas State University. She practiced on skull casts previously used in actual cases to reconstruct unidentified murder victims.

“It is really poignant work and extremely important for finally identifying these people who would otherwise have remained unknown.”

She cautions that any facial reconstruction can only be an approximation of what someone actually looked like in life, but the reconstructions she produced in Texas closely matched the murder victims who were eventually identified.

The methodology involves attaching plastic markers to the printed skull to indicate tissue depths at key points on the face, based on averages from population data. The data are derived from modern Egyptians and have been selected by reconstruction experts from around the world as the best available approximation for ancient Egyptians.

It was then about applying the clay according to the musculature of the face and known anatomical ratios based on the actual skull. For example, Meritamun’s nose is squashed almost flat by the tight bandaging, but Mann was able to estimate what her nose would have looked like using calculations based on the dimensions of the nasal cavity. The skull also displays a small overbite that Mann has reconstructed. Meritamun’s ears are based on the CT scan results.

“I have followed the evidence and an accepted methodology for reconstruction and out of that has emerged the face of someone who has come down to us from so long ago. It is an amazing feeling.”

The reconstruction was then cast in a polyurethane resin and painted. The researchers took a middle course in the long-running debate over the predominant skin color of ancient Egyptians, choosing a dark olive hue. The finishing touch was to reconstruct her hair, which has been modeled on that of an Egyptian woman, Lady Rai, who lived around 1570-1530 BCE and whose mummified body is now in the Egyptian Museum in Cairo. She wore her hair in tightly plaited thin braids on either side of her head. For the Meritamun reconstruction, replicating Lady Rai’s hair was an all-day job for an African hair salon in Melbourne.

“By reconstructing her we are giving back some of her identity,” says Davey.

Source: Republished from Futurity as a derivative work under the Attribution 4.0 International license. Original article posted to Futurity by U. Melbourne.

Now, Check Out:


Ancient DNA in lake mud sheds light on the mystery of how humans first reached America

By Suzanne McGowan, University of Nottingham.

Modern humans started spreading from Africa to Europe, Asia and Australia some 100,000 years ago – a process that took about 70,000 years. We also know that at some point in the past 25,000 years, a group managed to reach America from Siberia at the end of the last ice age.

However, exactly when this occurred and which route these early pioneers took has long been debated. Now new research based on ancient DNA and plant remains from lake deposits, published in Nature, is finally helping us to answer these questions.

The study investigated a 1,500km-long strip of land that was an “ice-free corridor” during the ice age, located in the British Columbia-Alberta region of Canada. For many years, scientists considered this region to be the only place where the two vast ice sheets which covered most of Canada during the last ice age did not meet. Theories of human migration therefore suggested that the earliest migrants from Siberia travelled across the Bering land bridge, exposed at that time by lower sea levels, through Alaska, and down this open corridor to colonise North America.

However, as new evidence has accumulated, scientists have started to question whether this is plausible. Radiocarbon dating, which is notoriously tricky to interpret, suggests that the ice sheets did in fact meet to make the corridor impassable for a period lasting from around 23,000 years ago until around 14-15,000 years ago. What’s more, new archaeological discoveries have revealed that the earliest human remains from America date back to 14,700 years ago – and they were discovered thousands of kilometres to the south in Chile. To get all the way to Chile by this time, these people must have arrived in the Americas much earlier – when it was impossible to pass through the ice.

The distribution of the early archaeological remains across North America also does not cluster around the ice-free corridor, suggesting there was no progressive southward movement of humans.

Tracing ancient climate

The study looked at the past environmental conditions in the corridor. If it was indeed a migration pathway for humans, it must have supported the plants and animals that humans require to survive. Archaeological evidence from other areas show that early North Americans hunted large animals such as bison and mammoth, as well as fish and waterfowl during the later stages of the ice age.

Laminated lake sediments (younger layers deposited on top of older ones) contain molecular and fossil evidence revealing the succession of plants and animals in the ice-free corridor. Mikkel Winther Pedersen

Lake sediments can help shed light on the plant and animal life of this period because their successive layers allow us to step back in time through a history of past environments. The researchers recovered sediment cores dating back almost 13,000 years from an area of the corridor thought to have been the last to become ice-free. Identifying the pollen grains and small plant fragments in the sediments is important in revealing how the vegetation developed.

Lake sediments encapsulate a cocktail of partially decomposed compounds and organic remains, including DNA from the tissues and excretions of organisms – a unique marker of their presence. As DNA ages it breaks down into ever smaller fragments, making meaningful sequences harder to recover. The researchers used “shotgun sequencing”, which screens the entire DNA cocktail for matches against known DNA databases.
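In spirit, assigning degraded environmental DNA fragments to species is a sequence-matching problem: break each read into short k-mers and ask which reference genome shares the most of them. A toy sketch with hypothetical sequences (real pipelines align reads against full genome databases):

```python
def kmers(seq: str, k: int) -> set[str]:
    """All overlapping substrings of length k in a DNA sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def best_match(fragment: str, references: dict[str, str], k: int = 8) -> str:
    """Assign a short, degraded fragment to the reference sharing the most
    k-mers with it - a crude stand-in for shotgun-read matching."""
    frag_kmers = kmers(fragment, k)
    return max(references,
               key=lambda name: len(frag_kmers & kmers(references[name], k)))

# Hypothetical reference snippets and an "ancient" fragment:
refs = {
    "bison": "ATGGCACGTACGTTAGCCTAGGATCCGTA",
    "sagebrush": "TTGACCGGTACCATGGGCTAAACGTTAGC",
}
print(best_match("CGTACGTTAGCCTAG", refs))  # -> bison
```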

These analyses show that around 12,900 years ago, a large lake covered this area, formed by glacial meltwaters. The surrounding vegetation was very sparse, comprising a few grasses and herbs. Around 12,700 years ago, steppe (known as prairie in North America) developed – with sagebrush, birch and willow. These enabled bison to roam the area by 12,600 years ago, followed by small mammals, mammoth, elk and bald eagles by 12,400 years ago.

The authors therefore argue that the corridor only became a viable passage for human travel around 12,700 years ago, meaning it couldn’t have been the first migration route into America. Instead, it became an alternative route slightly later on.

So where did the first humans enter the Americas? The currently favoured theory is that humans migrated via the Bering land bridge along the western Pacific coastline at a time when sea levels were lower, exposing an ice-free coastline for travel with the possibility for transport over water. The so-called “Kelp Highway Hypothesis” also suggests that marine resources were very abundant at this time, and easily capable of supporting migrant populations. Archaeologists have so far struggled to investigate this hypothesis thoroughly, however, because most remains are submerged under seas which are now around 120 metres higher than they were during the ice-age.

Map outlining the opening of the human migration routes in North America revealed by the results presented in this study.
Mikkel Winther Pedersen

The study has implications for later groups of Americans, including the “Clovis people”, who existed between 13,400 and 12,800 years ago. The new data suggest these people may not have made much use of the corridor either: the steppe didn’t develop until about 12,700 years ago. However, this is controversial, because another recent genetic analysis of bison in the area suggests these animals were roaming the corridor around 13,400 years ago – making it viable for humans.

The best way to tackle these conflicting strands of evidence would be to commission further studies incorporating palaeontology, archaeology and palaeoenvironmental work to resolve the question.

Suzanne McGowan, Head of School of Geography (UNMC), University of Nottingham

This article was originally published on The Conversation. Read the original article.



Plate tectonics: new findings fill out the 50-year-old theory that explains Earth’s landmasses

By Philip Heron, University of Toronto.

Fifty years ago, there was a seismic shift away from the longstanding belief that Earth’s continents were permanently stationary.

In 1966, J. Tuzo Wilson published Did the Atlantic Close and then Re-Open? in the journal Nature. The Canadian author introduced to the mainstream the idea that continents and oceans are in continuous motion over our planet’s surface. Known as plate tectonics, the theory describes the large-scale motion of the outer layer of the Earth. It explains tectonic activity (things like earthquakes and the building of mountain ranges) at the edges of continental landmasses (for instance, the San Andreas Fault in California and the Andes in South America).

At 50 years old, with a surge of interest in where the surface of our planet has been and where it’s going, scientists are reassessing what plate tectonics does a good job of explaining – and puzzling over where new findings might fit in.

Evidence for the theory

Although the widespread acceptance of the theory of plate tectonics is younger than Barack Obama, German scientist Alfred Wegener first advanced the hypothesis back in 1912.

A map of the original supercontinent, Pangaea, with modern continent outlines.
Kieff, CC BY-SA

He noted that the Earth’s current landmasses could fit together like a jigsaw puzzle. After analyzing fossil records that showed similar species once lived in now geographically remote locations, meteorologist Wegener proposed that the continents had once been fused. But without a mechanism to explain how the continents could actually “drift,” most geologists dismissed his ideas. His “amateur” status, combined with anti-German sentiment in the period after World War I, meant his hypothesis was deemed speculative at best.

In 1966, Tuzo Wilson built on earlier ideas to provide a missing link: the Atlantic ocean had opened and closed at least once before. By studying rock types, he found that parts of New England and Canada were of European origin, and that parts of Norway and Scotland were American. From this evidence, Wilson showed that the Atlantic Ocean had opened, closed and re-opened again, taking parts of its neighboring landmasses with it.

And there it was: proof our planet’s continents were not stationary.

The 15 major plates on our planet’s surface.

How plate tectonics works

Earth’s crust and top part of the mantle (the next layer in toward the core of our planet) run about 150 km deep. Together, they’re called the lithosphere and make up the “plates” in plate tectonics. We now know there are 15 major plates that cover the planet’s surface, moving at around the speed at which our fingernails grow.

Based on radiometric dating of rocks, we know that no ocean is more than 200 million years old, though our continents are much older. The oceans’ opening and closing process – called the Wilson cycle – explains how the Earth’s surface evolves.

A continent breaks up due to changes in the way molten rock in the Earth’s interior is flowing. That in turn acts on the lithosphere, changing the direction plates move. This is how, for instance, South America broke away from Africa. The next step is continental drift, sea-floor spreading, ocean formation – and hello, Atlantic Ocean. In fact, the Atlantic is still opening, generating new plate material in the middle of the ocean and making the flight from New York to London about an inch longer each year.
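The flight-length claim is easy to sanity-check. A back-of-the-envelope sketch, assuming the commonly quoted Mid-Atlantic Ridge full spreading rate of about 2.5 cm per year (the actual figure varies along the ridge):

```python
SPREADING_RATE_CM_PER_YR = 2.5  # assumed full rate for the Mid-Atlantic Ridge

def atlantic_widening_cm(years: float) -> float:
    """Total widening of the Atlantic (cm) over a span of years."""
    return SPREADING_RATE_CM_PER_YR * years

# Per year: about an inch (2.54 cm) added to a transatlantic flight path.
print(atlantic_widening_cm(1) / 2.54, "inches per year")
# Over 100,000 years that compounds to roughly 2.5 km.
print(atlantic_widening_cm(100_000) / 100_000, "km")
```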

A simplified ‘Wilson Cycle’.
Philip Heron, CC BY

Oceans close when their tectonic plate sinks beneath another, a process geologists call subduction. Off the Pacific Northwest coast of the United States, the ocean floor is slipping under the continent and into the mantle below the lithosphere, slowly building Mount St Helens and the Cascade mountain range.

In addition to undergoing spreading (construction) and subduction (destruction), plates can simply rub up against each other – usually generating large earthquakes. These interactions, also discovered by Tuzo Wilson back in the 1960s, are termed “conservative.” All three processes occur at the edges of plate boundaries.

But the conventional theory of plate tectonics stumbles when it tries to explain some things. For example, what produces mountain ranges and earthquakes that occur within continental interiors, far from plate boundaries?

Gone but not forgotten

The answer may lie in a map of ancient continental collisions my colleagues and I assembled.

Over the past 20 years, improved computer power and mathematical techniques have allowed researchers to more clearly look below the Earth’s crust and explore the deeper parts of our plates. Globally, we find many instances of scarring left over from the ancient collisions of continents that formed our present-day continental interiors.

Present-day plate boundaries (white) with hidden ancient plate boundaries that may reactivate to control plate tectonics (yellow). Regions of anomalous scarring beneath the crust are marked by yellow crosses.
Philip Heron, CC BY

A map of ancient continental collisions may represent regions of hidden tectonic activity. These old impressions below the Earth’s crust may still govern surface processes – despite being so far beneath the surface. If these deep scarred structures (more than 30 km down) were reactivated, they would cause devastating new tectonic activity.

It looks like previous plate boundaries (of which there are many) may never really disappear. These inherited structures contribute to geological evolution, and may be why we see geological activity within current continental interiors.

Mysterious blobs 2,900 km down

Modern geophysical imaging also shows two chemical “blobs” at the boundary of Earth’s core and mantle – thought to possibly stem from our planet’s formation.

These hot, dense piles of material lie beneath Africa and the Pacific. Located more than 2,900 km below the Earth’s surface, they’re difficult to study. And nobody knows where they came from or what they do. When these blobs of anomalous substance interact with cold ocean floor that has subducted from the surface down to the deep mantle, they generate hot plumes of mantle and blob material that cause super-volcanoes at the surface.

Does this mean plate tectonic processes control how these piles behave? Or is it that the deep blobs of the unknown are actually controlling what we see at the surface, by releasing hot material to break apart continents?

Answers to these questions have the potential to shake the very foundations of plate tectonics.

Arizona State seismology expert Ed Garnero’s summary of how far we have come in over 100 years of studying the interior of the Earth.
Ed Garnero, CC BY

Plate tectonics in other times and places

And the biggest question of all remains unsolved: How did plate tectonics even begin?

The early Earth’s interior had significantly hotter temperatures – and therefore different physical properties – than current conditions. Plate tectonics then may not be the same as what our conventional theory dictates today. What we understand of today’s Earth may have little bearing on its earliest beginnings; we might as well be thinking about an entirely different world.

In the coming years, we may be able to apply what we discover about how plate tectonics got started here to other worlds – the billions of exoplanets found in habitable zones across our galaxy.

Venus has some geologic features, but not plate tectonics.

So far, amazingly, Earth is the only planet we know of that has plate tectonics. In our solar system, for example, Venus is often considered Earth’s twin – just with a hellish climate and complete lack of plate tectonics.

Incredibly, a planet’s ability to support complex life is inextricably linked to plate tectonics. A gridlocked planetary surface has helped produce Venus’ uninhabitable, toxic atmosphere of 96 percent CO₂. On Earth, subduction helps push carbon down into the planet’s interior and out of the atmosphere.

It’s still difficult to explain how complex life exploded all over our world 500 million years ago, but the removal of carbon dioxide from the atmosphere is further helped by continental coverage. This exceptionally slow process starts with carbon dioxide mixing with rainwater to wear down continental rocks. The combination can form carbon-rich limestone that subsequently washes down to the ocean floor. This removal process, long even by geologic standards, eventually helped create a more breathable atmosphere. It took 3 billion years of plate tectonic processes to get the right carbon balance for life on Earth.

A theory works now, but what’s in the future?

Fifty years on from Wilson’s 1966 paper, geophysicists have progressed from believing continents never moved to thinking that every movement may leave a lasting memory on our Earth.

Life here would be vastly different if plate tectonics changed its style – as we know it can. A changing mantle temperature may affect the interaction of our lithosphere with the rest of the interior, stopping plate tectonics. Or those continent-sized chemical blobs could move from their relatively stable state, causing super-volcanoes as they release material from their deep reservoirs.

It’s hard to understand what our future holds if we don’t understand our beginning. By discovering the secrets of our past, we may be able to predict the motion of our plate tectonic future.

Philip Heron, Postdoctoral Fellow in Geodynamics, University of Toronto

This article was originally published on The Conversation. Read the original article.



Kennewick Man will be reburied, but quandaries around human remains won’t

By Samuel Redman, University of Massachusetts Amherst.

A mysterious set of 9,000-year-old bones, unearthed nearly 20 years ago in Washington, is finally going home. Following bitter disputes, five Native American groups in the Pacific Northwest have come together to facilitate the reburial of an individual they know as “Ancient One.” One of the most complete prehistoric human skeletons discovered in North America, “Kennewick Man” also became the most controversial.

Two teenagers searching out a better view of a Columbia River speedboat race in 1996 were the first to spot Kennewick Man’s remains. Since then, the bones have mostly been stored away from public view, carefully preserved in museum storerooms while subject to hotly contested legal battles.

Some anthropologists were eager to scientifically test the bones hoping for clues about who the first Americans were and where they came from. But many Native Americans hesitated to support this scientific scrutiny (including tests which permanently destroy or damage the original bone), arguing it was disrespectful to their ancient ancestor. They wanted him laid to rest.

Kennewick Man’s remains had rested in the Columbia River Gorge for millennia. Bleeding Skies, CC BY

This high-profile discovery served as an important, if maddening, test case for a significant new law, the Native American Graves Protection and Repatriation Act (NAGPRA). It aimed to address the problematic history behind museum human remains collections. First it mandated inventories – many museums, in fact, were unaware how large their skeletal collections really were. Then, in certain cases, it called for returning skeletons and mummies to their closest descendant group. Since NAGPRA passed in 1990, the National Park Service estimates over 50,000 sets of human remains have been repatriated in the United States.

The legal framework fits well in cases where ancestry could be determined – think remains found on a specific 19th-century battlefield – but other instances became more contentious. Scientists sometimes argued that very old remains, including Kennewick Man, represented earlier migrations into the Americas by groups who might have moved on long ago. This point of view often clashed with indigenous perspectives, particularly beliefs that their ancestors have lived in specific places since the dawn of time.

Drawn against this complex background, it’s no wonder it’s taken almost two decades to bring the Kennewick Man story into better focus.

Long history of scientizing some human remains

Museums in the U.S. and Europe have collected and studied human remains for well over a century, with the practice gaining considerable momentum after the Civil War. Archaeologists, anatomists and a mishmash of amateurs – influenced by an array of emergent sciences and pseudosciences – gathered bones by the thousands, shipping them in boxes to museums in an effort to systematically study race and, gradually, human prehistory.

Many museums in the United States store human remains collections in spaces colloquially known as ‘bone rooms.’ N Stjerna, CC BY

Museum “bone rooms,” organized to collect and study human remains, helped facilitate new scientific work in the late 19th and early 20th century. The skeletons provided better data about diseases and migration, as well as information about historic diet, with potential impact for living populations.

But building museum bone collections also represented major breaches in ethics surrounding traditional death and burial practices for many indigenous people across the Americas and around the world. For them, data gathering was simply not a priority. Instead, they sought to return their ancestors to the earth.

Considered in context, the concerns raised by many Native Americans are not particularly difficult to comprehend. For example, doing archival research for my book “Bone Rooms,” I learned of the case of several naturally mummified bodies discovered in the American Southwest in the 1870s. The dried corpses were paraded around San Francisco, before being exhibited for the public in Philadelphia and Chicago. Once the immense popularity of the exhibitions died down, the bodies were distributed to several museums across the country where they were put into storage.

Presenting human remains as purely scientific specimens and historical curiosities hurt living descendants by treating entire populations as scientific resources rather than human beings. And by focusing mainly on nonwhite groups, the practice reinforced in subtle and direct ways the scientific racism permeating the era. While some European American skeletons were collected by these museums for comparative purposes, their number was vastly outpaced by the number of Native American bodies collected during this same period.

Anthropologists and other scientists have worked to address some of these negative legacies. But the vestiges of past wrongdoings have left their mark on many museums across the country. Returning ancestral human remains, sacred artifacts and special objects considered to hold collective cultural value attempts to serve as partial redress for these problematic histories.

Kennewick Man’s odyssey

Inaccurate initial media reports muddled the Kennewick Man story. After the first anthropologist who looked at the skull proclaimed a resemblance to European Americans (specifically the actor Patrick Stewart), a New York Times headline in 1998 announced, “Old Skull Gets White Looks, Stirring Dispute.” Indeed, as the paper commented, the bogus reports leading people to believe Kennewick Man might be a white person “heightened an already bitter and muddled battle over the rights to Kennewick Man’s remains and his origins.”

Hidden away from public view, the prehistoric remains were anything but forgotten. Many indigenous people came to view Kennewick Man as a symbol for the failings of the new NAGPRA law.

Forensic anthropologists at the National Museum of Natural History examined Kennewick Man during 16 days of study in 2005 and 2006. Chip Clark, Smithsonian Institution

Some scientists, on the other hand, made impassioned arguments that the bones did not fall under the purview of the new rules. Their extreme age meant the remains were unlikely to be a direct ancestor of any living group. Following this logic, several influential scientists argued the bones should therefore be available for scientific study. Indeed, extensive scientific tests were carried out on the skeleton.

Two years after his discovery, Kennewick Man moved to the behind-the-scenes bone rooms at the Burke Museum on the campus of the University of Washington in Seattle. The long tradition of gathering and interpreting human bones in museums made the decision seem almost natural. Still, it proved a highly problematic (and temporary) “solution” for many Native Americans who wanted the remains buried.

Last year, genetic testing finally proved something many people had suggested for some time: Kennewick Man is more closely related to Native Americans than to any other living human group.

Reconciling scientific curiosity with scientific ethics

Should human remains – including the rare, ancient or abnormal bodies sometimes considered especially valuable for science – ever be made into scientific specimens without their approval or that of their descendants? If we do choose to collect and study them for science, who controls the knowledge drawn from these bodies?

These are big questions. I argue that the effort to scientize the dead brings about distinct and specific responsibilities unique to human remains collections. Careful consideration is necessary. Cultural and historical context simply cannot be ignored.

By some estimates, museums today house more than half a million individual Native American remains. Probably hundreds if not thousands of sets of skeletal remains will face these big questions in the coming decades.

Indicative of changing attitudes and ethical approaches to museum exhibition, recent calls to display Kennewick Man’s remains have largely been rebuked, despite potential for engaging large audiences. The prospect for new knowledge or effective popular education is tantalizing, but these objectives should never eclipse basic human and civil rights.

Museums across the country still have human remains in their bone rooms. Wonderlane, CC BY

Two-and-a-half decades after NAGPRA, museums in the United States – including the American Museum of Natural History, Smithsonian Institution and the Cleveland Museum of Natural History – join the Burke Museum in continuing to maintain sizable human remains collections. Kennewick Man may be among the most high-profile cases of human remains going under the microscope – both in terms of the scientific study he was subject to and the intensity of the debate surrounding him – but he is certainly far from alone.

Skeletons wait patiently while the living attempt to work these problems out, but this patience is granted only because the bones have no other choice.

Samuel Redman, Assistant Professor of History, University of Massachusetts Amherst

This article was originally published on The Conversation. Read the original article.

Featured Image Credit:  Brittney Tatchell, Smithsonian Institution


This Ancient Society may have Invented Authority [Video]

Authoritarianism is an issue of special gravitas this year, given claims of heavy-handedness in US presidential politics and widening conflicts against dictatorships in Syria and elsewhere. But why do we as a people let a single person or small group make decisions for everyone else?

A 3,000-year-old archaeological site in the Andes of Peru may hold the answer, says John Rick, associate professor of anthropology at Stanford University.

“More than 5,000 and certainly 10,000 years ago, nowhere in the world was anyone living under a concerted authority. Today we expect that. It is the essence of our organization. ‘Take me to your leader. Who’s in charge here?’ So where did that come from?”

Currently a fellow at the Stanford Humanities Center, Rick is drawing together evidence from more than two decades of fieldwork at the ancient site of Chavín de Huántar, where that culture developed from roughly 900 BCE to 200 BCE.

He will present his research on Chavín and how authority-minded systems arose in human society in an upcoming book, Innovation, Religion and the Development of the Andean Formative Period, which will explore the role of religion in shaping hierarchical societies in the New World, especially the Andes.


Chavín was a religious center run by an elaborate priesthood. Located north of Lima, in the Andes Mountains, it sat at the mouth of two large rivers that once held religious significance for the region. During its existence, the Chavín priesthood subjected visitors to an incredible range of routines, some of which involved manipulating light, water, and sound.

The priesthood deliberately worked with underground spaces, architectural stonework, a system of water canals, psychoactive drugs, and animal iconography to augment its demonstrations of authority.

“I was fascinated with the evidence we have for this idea of manipulation of people who went through ritual experiences in these structures,” he says.

The priesthood sought to increase its level of authority, Rick says. “They needed to create a new world, one in which the settings, objects, actions, and senses all argue for the presence of intrinsic authority—both from the religious leaders and from a realm of greater powers they portray themselves as related to.”

Prior archaeological research on Chavín has suggested that the site attracted people because it was a cult of devotion. However, Rick and colleagues believe something else was going on.

They found very little evidence that common people were involved in worshipping at Chavín. Instead they have surmised that visitors were elite pilgrims, local leaders from far-flung parts of the Central Andes. These people were looking for justification to elevate their own status and their positions of control in society.

After visiting Chavín, they would draw on those experiences to more adroitly disseminate messages of authority to their own people. “They’re basically in a process of developing a hierarchy, a real social structure that has strong political power at the top,” Rick says.


Architecture was critical to producing this effect. The researchers estimate that the site contains 2 kilometers of labyrinthine, gallery-like underground spaces, which were clearly designed to confine and manipulate those who entered them.

This was a removal from one world and the creation of another. As a result, the rituals were dramatic and effective in changing ideas about the nature of human authoritative relationships.

Stone was another key element. Leaders at Chavín often recorded their actions by engraving their deeds in stone. While other ancient sites used wood, papier-mâché or textiles, those at Chavín revealed their strategies in the ground and rock itself.

The priesthood also manipulated visitors with psychoactive drugs. Evidence is found in the portrayal of psychoactive plants in stone engravings, with clear illustrations of paraphernalia and the drugs’ effects on humans.

Rick believes Chavín was a place where human psychology was explored and experiments were being conducted to test how people would react to certain stimuli.

Yet another tool of authoritarian manipulation was water, channeled through a sophisticated hydraulic system and underground canals at the site. Despite the ever-present danger of flooding, the Chavín priesthood made a point of controlling water visibly.

“They were playing with this stuff,” Rick says. “They were using water pressure 3,000 years ago to elevate water, to bring it up where it shouldn’t be. They’re using it as an agent to wash away offerings,” he says.

Water control, Rick says, is a powerful demonstration of human agency over nature. He conjures up a picture of what it must have been like for pilgrims to visit Chavín and its dark, underground spaces, undergo strange experiences, and observe the seeming abilities of priests to wield supernatural powers.

Ancient places like Chavín reflect a major change in the way human beings would treat each other, Rick says. Such places gave rise to “complex, highly authoritarian, communications-driven, sometimes charismatically led societies” in human civilization.

Source: Republished from Futurity as a derivative work under the Attribution 4.0 International license.

Featured Image Credit: inyucho, via flickr, CC BY 2.0

Now, Check Out:

New Study: Volcanoes Ruled Out as Dinosaur Extinction Cause

Volcanic eruptions did not lead to the extinction of the dinosaurs, according to a new study. The research also suggests Earth’s oceans can absorb large amounts of carbon dioxide—as long as it’s released gradually over an extremely long time.

Scientists have long argued over the cause of the Cretaceous-Palaeogene extinction event, during which three-quarters of all plant and animal species, including the dinosaurs, went extinct roughly 65 million years ago. Most researchers favor the idea that a catastrophic, sudden mechanism such as an asteroid hit triggered the mass die-off, while others say a gradual rise in CO2 emissions from volcanoes in what is now India may have been the cause.

Now, researchers say they may have a more definitive answer.

“One way that has been suggested that volcanism could have caused extinction is by ocean acidification, where the ocean absorbs CO2 and becomes more acidic as a result, just as it is doing today with fossil fuel-derived CO2,” says Michael Henehan, a postdoctoral associate at Yale University.

“What we wanted to do was gather all the evidence that’s been collected from ocean sediments from this time and add a few new records of our own, and consider what evidence there is for ocean acidification at this time.”

For the study, researchers analyzed sediments from the deep sea, looking for signs of dissolution that would indicate more acidic oceans. The researchers found that the onset of volcanism did cause a brief ocean acidification event. Critically, though, the pH drop caused by CO2 release was effectively neutralized well before the mass extinction event.

“Combining this with temperature observations that others have made about this time, we think there is a conclusive case that although Deccan volcanism caused a short-lived global warming event and some ocean acidification, the effects were cancelled out by natural carbon cycling processes long before the mass extinction that killed the dinosaurs,” Henehan says.

This is not to say that CO2 released by volcanoes did not prompt climate effects, the researchers note. Rather, the gases were released over such a long timescale that their effect could not have caused a sudden species die-off.

The study also has implications for understanding modern climate change. The researchers say it adds to a growing body of work suggesting that restricting CO2 release to much slower rates and lower levels over thousands of years can allow the oceans to adapt and avoid the worst possible consequences of ocean acidification.

“However, if you cause big disturbances over rapid timescales, closer to the timescales of current human, post-industrial CO2 release, you can produce not only big changes in oceanic ecosystems, but also profound and long-lasting changes in the way the ocean stores and regulates CO2,” Henehan says.

The work also suggests that disruption of marine ecosystems can have profound effects on Earth’s climate.

“The direct effects of an asteroid impact, like massive tsunamis or widespread fires, would have lasted only for a relatively short time,” says postdoctoral associate and coauthor Donald Penman.

“However, the loss of ecologically important groups of organisms following impact caused changes to the global carbon cycle that took millions of years to recover. This could be seen as a warning for our future: We need to be careful not to drive key functional organisms to extinction, or we could be feeling the effects for a very long time.”

Other researchers from Yale, the University of St. Andrews, and the University of Bristol are coauthors of the work, which is published in the Philosophical Transactions of the Royal Society B.

Source: Republished from Futurity as a derivative work under the Attribution 4.0 International license.

Featured Image Credit: Dave Kryzaniak/Flickr