A new electronic material can heal all its functions automatically—even after breaking multiple times. This material could improve the durability of wearable electronics.
“Wearable and bendable electronics are subject to mechanical deformation over time, which could destroy or break them,” says Qing Wang, professor of materials science and engineering at Penn State. “We wanted to find an electronic material that would repair itself to restore all of its functionality, and do so after multiple breaks.”
Self-healable materials are those that, after withstanding physical deformation such as being cut in half, naturally repair themselves with little to no external influence.
In the past, researchers have been able to create self-healable materials that can restore one function after breaking, but restoring a suite of functions is critical for creating effective wearable electronics. For example, if a dielectric material retains its electrical resistivity after self-healing but not its thermal conductivity, that could put electronics at risk of overheating.
The material that Wang and his team created restores all properties needed for use as a dielectric in wearable electronics—mechanical strength, breakdown strength to protect against surges, electrical resistivity, thermal conductivity, and dielectric, or insulating, properties. They report their findings in Advanced Functional Materials.
Most self-healable materials are soft or “gum-like,” says Wang, but the material he and his colleagues created is very tough in comparison. His team added boron nitride nanosheets to a base material of plastic polymer. Like graphene, boron nitride nanosheets are two-dimensional, but instead of conducting electricity like graphene they resist and insulate against it.
“Most research into self-healable electronic materials has focused on electrical conductivity but dielectrics have been overlooked,” says Wang. “We need conducting elements in circuits but we also need insulation and protection for microelectronics.”
The material is able to self-heal because boron nitride nanosheets connect to one another with hydrogen bonding groups functionalized onto their surface. When two pieces are placed in close proximity, the electrostatic attraction naturally occurring between both bonding elements draws them close together. When the hydrogen bond is restored, the two pieces are “healed.” Depending on the percentage of boron nitride nanosheets added to the polymer, this self-healing may require additional heat or pressure, but some forms of the new material can self-heal at room temperature when placed next to each other.
Unlike other healable materials that use hydrogen bonds, boron nitride nanosheets are impermeable to moisture. This means that devices using this dielectric material can operate effectively within high humidity contexts such as in a shower or at a beach.
“This is the first time that a self-healable material has been created that can restore multiple properties over multiple breaks, and we see this being useful across many applications,” says Wang.
Additional researchers contributed from Penn State and Harbin Institute of Technology. The China Scholarship Council supported the work, published in Advanced Functional Materials.
With the world’s population expected to exceed nine billion by 2050, scientists are working to develop new ways to meet rising global demand for food, energy and water without increasing the strain on natural resources. Organizations including the World Bank and the U.N. Food and Agriculture Organization are calling for more innovation to address the links between these sectors, often referred to as the food-energy-water (FEW) nexus.
Nanotechnology – designing ultrasmall particles – is now emerging as a promising way to promote plant growth and development. This idea is part of the evolving science of precision agriculture, in which farmers use technology to target their use of water, fertilizer and other inputs. Precision farming makes agriculture more sustainable because it reduces waste.
We recently published results from research in which we used nanoparticles, synthesized in our laboratory, in place of conventional fertilizer to increase plant growth. In our study we successfully used zinc nanoparticles to increase the growth and yield of mung beans, which contain high amounts of protein and fiber and are widely grown for food in Asia. We believe this approach can reduce use of conventional fertilizer. Doing so will conserve natural mineral reserves and energy (making fertilizer is very energy-intensive) and reduce water contamination. It also can enhance plants’ nutritional value.
Impacts of fertilizer use
Fertilizer provides nutrients that plants need in order to grow. Farmers typically apply it through soil, either by spreading it on fields or mixing it with irrigation water. A major portion of fertilizer applied this way gets lost in the environment and pollutes other ecosystems. For example, excess nitrogen and phosphorus fertilizers become “fixed” in soil: they form chemical bonds with other elements and become unavailable for plants to take up through their roots. Eventually rain washes the nitrogen and phosphorus into rivers, lakes and bays, where it can cause serious pollution problems.
Fertilizer use worldwide is increasing along with global population growth. Currently farmers are using nearly 85 percent of the world’s total mined phosphorus as fertilizer, although plants can take up only an estimated 42 percent of the phosphorus that is applied to soil. If these practices continue, the world’s supply of phosphorus could run out within the next 80 years, worsening nutrient pollution problems in the process.
In contrast to conventional fertilizer use, which involves many tons of inputs, nanotechnology focuses on small quantities. Nanoscale particles measure between 1 and 100 nanometers in at least one dimension. A nanometer is equivalent to one billionth of a meter; for perspective, a sheet of paper is about 100,000 nanometers thick.
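The scale comparison above is just unit arithmetic, which can be sketched directly (a minimal illustration using only the figures quoted in the text, not part of the study itself):

```python
# Scale arithmetic for the nanoscale figures quoted above: nanoparticles
# measure 1-100 nm in at least one dimension, a nanometer is one billionth
# of a meter, and a sheet of paper is about 100,000 nm thick.

NM_PER_M = 1_000_000_000  # nanometers per meter

def m_to_nm(meters):
    """Convert meters to nanometers."""
    return meters * NM_PER_M

paper_thickness_nm = 100_000   # ~0.1 mm sheet of paper, per the text
largest_nano_nm = 100          # upper bound of the 1-100 nm nanoscale range

# A sheet of paper is about a thousand times thicker than the largest
# particle that still counts as nanoscale.
ratio = paper_thickness_nm / largest_nano_nm
print(ratio)         # 1000.0
print(m_to_nm(1))    # 1000000000 nanometers in a meter
```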
These particles have unique physical, chemical and structural features, which we can fine-tune through engineering. Many biological processes, such as the workings of cells, take place at the nano scale, and nanoparticles can influence these activities.
Scientists are actively researching a range of metal and metal oxide nanoparticles, also known as nanofertilizer, for use in plant science and agriculture. These materials can be applied to plants through soil or irrigation water, or sprayed onto their leaves. Studies suggest that applying nanoparticles to plant leaves is especially beneficial for the environment because they do not come in contact with soil. Since the particles are extremely small, plants also absorb them more efficiently this way than through soil. We synthesized the nanoparticles in our lab and sprayed them through a customized nozzle that delivered a precise and consistent concentration to the plants.
We chose to target zinc, which is a micronutrient that plants need to grow, but in far smaller quantities than phosphorus. By applying nano zinc to mung bean leaves after 14 days of seed germination, we were able to increase the activity of three important enzymes within the plants: acid phosphatase, alkaline phosphatase and phytase. These enzymes react with complex phosphorus compounds in soil, converting them into forms that plants can take up easily.
When we made these enzymes more active, the plants took up nearly 11 percent more of the phosphorus that was naturally present in the soil, without receiving any conventional phosphorus fertilizer. The plants that we treated with zinc nanoparticles increased their biomass (growth) by 27 percent and produced 6 percent more beans than plants that we grew using typical farm practices but no fertilizer.
Nanofertilizer also has the potential to increase plants’ nutritional value. In a separate study, we found that applying titanium dioxide and zinc oxide nanoparticles to tomato plants increased the amount of lycopene in the tomatoes by 80 to 113 percent, depending on the type of nanoparticle and the dosage concentration. This may happen because the nanoparticles increase plants’ photosynthesis rates and enable them to take up more nutrients.
Lycopene is a naturally occurring red pigment that acts as an antioxidant and may prevent cell damage in humans who consume it. Making plants more nutrient-rich in this way could help to reduce malnutrition. The quantities of zinc that we applied were within the U.S. government’s recommended limits for zinc in foods.
Next questions: health and environmental impacts of nanoparticles
Nanotechnology research in agriculture is still at an early stage and evolving quickly. Before nanofertilizers can be used on farms, we will need a better understanding of how they work and regulations to ensure they will be used safely. The U.S. Food and Drug Administration has already issued guidance for the use of nanomaterials in animal feed.
Manufacturers also are adding engineered nanoparticles to foods, personal care and other consumer products. Examples include silica nanoparticles in baby formula, titanium dioxide nanoparticles in powdered cake donuts, and other nanomaterials in paints, plastics, paper fibers, pharmaceuticals and toothpaste.
Many properties influence whether nanoparticles pose risks to human health, including their size, shape, crystal phase, solubility, type of material, and the exposure and dosage concentration. Experts say that nanoparticles in food products on the market today are probably safe to eat, but this is an active research area.
Addressing these questions will require further studies to understand how nanoparticles behave within the human body. We also need to carry out life cycle assessments of nanoparticles’ impact on human health and the environment, and develop ways to assess and manage any risks they may pose, as well as sustainable ways to manufacture them. However, as our research on nanofertilizer suggests, these materials could help solve some of the world’s most pressing resource problems at the food-energy-water nexus.
Satellites used to be the exclusive playthings of rich governments and wealthy corporations. But increasingly, as space becomes more democratized, these sophisticated technologies are coming within reach of ordinary people. Just like drones before them, miniature satellites are beginning to fundamentally transform our conceptions of who gets to do what up above our heads.
As a recent report from the National Academy of Sciences highlights, these satellites hold tremendous potential for making satellite-based science more accessible than ever before. However, as the cost of getting your own satellite in orbit plummets, the risks of irresponsible use grow.
The question here is no longer “Can we?” but “Should we?” What are the potential downsides of having a slice of space densely populated by equipment built by people not traditionally labeled as “professionals”? And what would the responsible and beneficial development and use of this technology actually look like?
Some of the answers may come from a nonprofit organization that has been building and launching amateur satellites for nearly 50 years.
The technology we’re talking about
Having your own personal satellite launched into orbit might sound like an idea straight out of science fiction. But over the past few decades a unique class of satellites has been created that fits the bill: CubeSats.
The “Cube” here simply refers to the satellite’s shape. The most common CubeSat (the so-called “1U” satellite) is a 10 cm (roughly 4 inches) cube, so small that a single CubeSat could easily be mistaken for a paperweight on your desk. These mini, modular satellites can fit in a launch vehicle’s formerly “wasted space.” Multiples can be deployed in combination for more complex missions than could be achieved by one CubeSat alone.
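The “1U” geometry lends itself to quick form-factor arithmetic (a small sketch; the roughly 1.33 kg per-unit mass cap mentioned in the comments is our addition, drawn from the widely used CubeSat design specification rather than from the text):

```python
# Form-factor arithmetic for CubeSats. The 10 cm "1U" edge length comes
# from the text; the ~1.33 kg per-unit mass limit is an assumption taken
# from the commonly cited CubeSat design specification.

edge_cm = 10.0
unit_volume_liters = edge_cm ** 3 / 1000.0  # 1000 cm^3 per liter
print(unit_volume_liters)  # 1.0 -- the "unit" in "1U"

def cubesat_volume_liters(units):
    """Volume of a stacked nU CubeSat (e.g. a 3U is 10 x 10 x 30 cm)."""
    return units * unit_volume_liters

print(cubesat_volume_liters(3))  # 3.0
```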
Within their compact bodies these minute satellites are able to house sensors and communications receivers/transmitters that enable operators to study the Earth from space, as well as space around the Earth.
They’re primarily designed for Low Earth Orbit (LEO) – an easily accessible region of space from around 200 to 800 miles above the Earth, where missions like the Hubble Space Telescope and the International Space Station (ISS) operate. But they can attain more distant orbits; NASA plans for most of its future Earth-escaping payloads (to the moon and Mars especially) to carry CubeSats.
Because they’re so small and light, it costs much less to get a CubeSat into Earth orbit than a traditional communication or GPS satellite. For instance, a research group here at Arizona State University recently claimed their developmental “femtosats” (especially small CubeSats) could cost as little as US$3,000 to put in orbit. This decrease in cost is allowing researchers, hobbyists and even elementary school groups to put simple instruments into LEO, by piggybacking onto rocket launches, or even having them deployed from the ISS.
Since the first CubeSats reached orbit in the early 2000s, NASA, the National Reconnaissance Office and even Boeing have all launched and operated them. There are more than 130 currently operational in orbit. The NASA Educational Launch of Nano Satellite (ELaNa) program, which offers free launches for educational groups and science missions, is now open to U.S. nonprofit corporations as well.
Clearly, satellites are not just for rocket scientists anymore.
Thinking inside the box
The National Academy of Sciences report emphasizes CubeSats’ importance in scientific discovery and the training of future space scientists and engineers. Yet it also acknowledges that widespread deployment of LEO CubeSats isn’t risk-free.
The greatest concern the authors raise is space debris – pieces of “junk” that orbit the Earth, with the potential to cause serious damage if they collide with operational units, including the ISS.
More broadly, the report authors focus on factors that might impede greater use of CubeSat technologies. These include regulations around Earth-space radio communications, possible impacts of International Traffic in Arms Regulations (which govern import and export of defense-related articles and services in the U.S.), and potential issues around extraterrestrial contamination.
But what about the rest of us? How can we be sure that hobbyists and others aren’t launching their own “spy” satellites, or (intentionally or not) placing polluting technologies into LEO, or even deploying low-cost CubeSat networks that could be hijacked and used nefariously?
As CubeSat researchers are quick to point out, these are far-fetched scenarios. But they suggest that now’s the time to ponder unexpected and unintended possible consequences of more people than ever having access to their own small slice of space. In an era when you can simply buy a CubeSat kit off the shelf, how can we trust the satellites over our heads were developed with good intentions by people who knew what they were doing?
Some “expert amateurs” in the satellite game could provide some inspiration for how to proceed responsibly.
Guidance from some experienced amateurs
In 1969, the Radio Amateur Satellite Corporation (AMSAT) was created in order to foster ham radio enthusiasts’ participation in space research and communication. It continued the efforts begun in 1961 by Project OSCAR – a U.S.-based group that built and launched the very first nongovernmental satellite just four years after Sputnik.
As an organization of volunteers, AMSAT was putting “amateur” satellites in orbit decades before the current CubeSat craze. And over time, its members have learned a thing or two about responsibility.
Here, open-source development has been a central principle. AMSAT has a philosophy of open sourcing everything – making technical data on all aspects of its satellites fully available to everyone in the organization and, when possible, to the public. According to a member of the team responsible for FOX 1-A, AMSAT’s first CubeSat:
This means that it would be incredibly difficult to sneak something by us … there’s no way to smuggle explosives or an energy emitter into an amateur satellite when everyone has access to the designs and implementation.
However, they’re more cautious about sharing info with nonmembers, as the organization guards against others developing the ability to hijack and take control of their satellites.
This form of “self-governance” is possible within long-standing amateur organizations that, over time, are able to build a sense of responsibility to community members, as well as society more generally.
How does responsible development evolve?
But what happens when new players emerge, who don’t have deep roots within the existing culture?
Hobbyist and student “new kids on the block” are gaining access to technologies without being part of a longstanding amateur establishment. They are still constrained by funders, launch providers and a tapestry of regulations – all of which rein in what CubeSat developers can and cannot do. But there is a danger they’re ill-equipped to think through potential unintended consequences.
What these unintended consequences might be is admittedly far from clear. Certainly, CubeSat developers would argue it’s hard to imagine these tiny satellites causing substantial physical harm. Yet we know innovators can be remarkably creative with taking technologies in unexpected directions. Think of something as seemingly benign as the cellphone – we have microfinance and text-based social networking at one end of the spectrum, improvised explosive devices at the other.
This is where a culture of social responsibility around CubeSats becomes important – not simply for ensuring that physical risks are minimized (and good practices are adhered to), but also to engage with a much larger community in anticipating and managing less obvious consequences of the technology.
This is not an easy task. Yet the evidence from AMSAT and other areas of technology development suggests that responsible amateur communities can and do emerge around novel technologies.
The challenge here, of course, is ensuring that what an amateur community considers to be responsible, actually is. Here’s where there needs to be a much wider public conversation that extends beyond government agencies and scientific communities to include students, hobbyists, and anyone who may potentially stand to be affected by the use of CubeSat technology.
On the microscale, granular materials interact in remarkably complex ways. That complexity makes them one of the least understood forms of matter.
Now scientists want to figure out how to take advantage of those interactions to design impact-absorbing materials. For example, these new materials might minimize vibrations in vehicles, better protect military convoys, or potentially make buildings safer during an earthquake.
As a first step, researchers have analyzed particle vibrations in very small 2D granular crystals. The results could ultimately help predict how these tiny arrays of particles behave as forces are applied.
‘Build better sandbags’
One of the more interesting characteristics of granular materials is that they are dynamically responsive—when you hit them harder, they react differently.
“You can take a pencil and push it through a sandbag, but at the same time it can stop a bullet,” says Nicholas Boechler, assistant professor of mechanical engineering at the University of Washington and senior author of the paper published in Physical Review Letters. “So in some ways what we’re trying to do is build better sandbags in an informed way.”
Boechler and colleagues discovered that microscale granular crystals—made of spheres smaller than a human blood cell—exhibit significantly different physical phenomena than granular materials with larger particles. Adhesive forces play a more important role, for instance. The array of tiny particles also resonates in complex patterns as forces are applied and the particles knock into each other, with combinations of up-and-down, horizontal, and rotational motion.
Tiny granular particles resonate in complex patterns as forces are applied and they knock into each other. (Credit: Samuel Wallen/University of Washington)
“This material has properties that we wouldn’t normally see in a solid material like glass or metal,” says Morgan Hiraiwa, lead author and mechanical engineering doctoral student. “You can think of it as all these different knobs we can turn to get the material to do what we want.”
The team manufactured the 2D ordered layer of micron-sized glass spheres through self-assembly—meaning the millions of particles assemble themselves into a larger functional unit under the right conditions.
Building large amounts of material composed of microscopic particles, such as a panel for a vehicle, is impractical using conventional manufacturing techniques because of the amount of time it would take, Boechler says. Self-assembly offers a scalable, faster, and less expensive way to manufacture microstructured materials.
The team then used laser ultrasonic techniques to observe the dynamics between microscale granular particles as they interact. That involves sending a laser-generated acoustic wave through the crystal and using a separate laser to pick up very small vibrations of the microscopic particles.
Researchers have studied the dynamics of granular crystals with large particles, but this is the first time such complex dynamics have been observed and analyzed in microscale crystals, which have advantages over their larger counterparts. Their small size makes it easier to integrate them into coatings or other materials, and they also resonate at higher frequencies, making them potentially useful for signal processing and other applications.
“The larger systems are really nice for modeling, but can be difficult to integrate into many potential products,” Boechler says.
So far, the team has conducted its experiments using low-amplitude waves. Next steps include exploring high-amplitude, nonlinear regimes in 3D crystals—in which the granular particles are moving more vigorously and even more interesting dynamics may occur.
“Ultimately, the goal is to use this knowledge to start designing materials with new properties,” Boechler says. “For instance, if you could design a coating that has unique impact-absorbing capabilities, it could have applications ranging from spacecraft micro-meteorite shielding to improved bulletproof vests.”
The National Science Foundation, the US Army Research Office, and the University of Washington Royalty Research Foundation funded the work. Additional researchers from the University of Washington and the Massachusetts Institute of Technology took part in the study.
Astronomers have discovered the faintest early-universe galaxy yet and say the new object—seen as it was about 13 billion years ago—could help researchers understand the “reionization epoch,” when stars became visible for the first time.
The team was able to see the incredibly faint object using gravitational lensing and a special instrument on the 10-meter telescope at the W.M. Keck Observatory on the summit of Mauna Kea, Hawaii.
Albert Einstein first predicted gravitational lensing. Because gravity can bend the path of light, it’s possible for a distant galaxy to be magnified through the “lens” created by the gravity of another object between it and the viewer.
In this case, the detected galaxy was behind the cluster MACS2129.4-0741, which is massive enough to create three different images of the object. The astronomers were able to show that the three images were of the same galaxy because they showed similar spectra.
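The magnification at work here can be made quantitative. For the idealized case of a point-mass lens (a simplification – a cluster like MACS2129.4-0741 requires a detailed mass model), a source directly behind the lens is imaged at the Einstein radius:

```latex
\theta_E = \sqrt{\frac{4GM}{c^2}\,\frac{D_{LS}}{D_L\,D_S}}
```

where $M$ is the lens mass and $D_L$, $D_S$, and $D_{LS}$ are the angular-diameter distances to the lens, to the source, and between lens and source. Background sources lying close to this radius can be strongly magnified and split into multiple images, which is how one faint galaxy can appear three times.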
It would not have been visible at all if its light had not been magnified by the gravitational lens, says Kuang-Han Huang, a postdoctoral researcher at the University of California, Davis.
Seen as it was 13 billion years ago, the galaxy lies near the end of the reionization epoch, during which most of the hydrogen gas between galaxies transitioned from being mostly neutral to being mostly ionized, and the stars appeared for the first time. The discovery shows how gravitational lensing can help us understand the faint galaxies that dominate this important period of the early universe, Huang says.
“This galaxy is exciting because the team infers a very low stellar mass, or only 1 percent of 1 percent of the Milky Way galaxy,” says Marc Kassis, staff astronomer at the Keck Observatory. “It’s a very, very small galaxy and at such a great distance, it’s a clue in answering one of the fundamental questions astronomy is trying to understand: What is causing the hydrogen gas at the very beginning of the universe to go from neutral to ionized about 13 billion years ago? That’s when stars turned on and matter became more complex.”
The galaxy’s magnified images were originally seen separately in both Keck Observatory and Hubble Space Telescope data. Researchers used the DEIMOS (DEep Imaging and Multi-Object Spectrograph) instrument on the 10-meter Keck II telescope to confirm that the three images were of the same object.
Other researchers from UC Davis and from the Keck Observatory, University of California, Los Angeles, and the Leibniz Institute for Astrophysics were coauthors of the study that is published in Astrophysical Journal Letters.
“Cure” is a word that’s dominated the rhetoric in the war on cancer for decades. But it’s a word that medical professionals tend to avoid. While the American Cancer Society reports that cancer treatment has improved markedly over the decades and the five-year survival rate is impressively high for many cancers, oncologists still refrain from declaring their cancer-free patients cured. Why?
Patients are declared cancer-free (also called complete remission) when there are no more signs of detectable disease.
However, minuscule clusters of cancer cells below the detection level can remain in a patient’s body after treatment. Moreover, such small clusters of straggler cells may undergo metastasis, where they escape from the initial tumor into the bloodstream and ultimately settle in a distant site, often a vital organ such as the lungs, liver or brain.
When a colony of these metastatic cells reaches a detectable size, the patient is diagnosed with recurrent metastatic cancer. About one in three breast cancer patients diagnosed with early-stage cancer later develop metastatic disease, usually within five years of initial remission.
By the time metastatic cancer becomes evident, it is much more difficult to treat than when it was originally diagnosed.
What if these metastatic cells could be detected earlier, before they established a “foothold” in a vital organ? Better yet, could these metastatic cancer cells be intercepted, preventing them from lodging in a vital organ in the first place?
Our team has developed an implant designed to do just that. The implant is a tiny porous polymer disc (basically a miniature sponge, no larger than a pencil eraser) that can be inserted just under a patient’s skin. Implantation triggers the immune system’s “foreign body response,” and the implant starts to soak up immune cells that travel to it. If the implant can catch mobile immune cells, then why not mobile metastatic cancer cells?
We gave implants to mice specially bred to model metastatic breast cancer. When the mice had palpable tumors but no evidence of metastatic disease, the implant was removed and analyzed.
Cancer cells were indeed present in the implant, while the other organs (potential destinations for metastatic cells) still appeared clean. This means that the implant can be used to spot previously undetectable metastatic cancer before it takes hold in an organ.
For patients with cancer in remission, an implant that can detect tumor cells as they move through the body would be a diagnostic breakthrough. But having to remove it to see if it has captured any cancer cells is not the most convenient or pleasant detection method for human patients.
Detecting cancer cells with noninvasive imaging
There could be a way around this, though: a special imaging method under development at Northwestern University called Inverse Spectroscopic Optical Coherence Tomography (ISOCT). ISOCT detects molecular-level differences in the way cells in the body scatter light. And when we scan our implant with ISOCT, the light scatter pattern looks different when it’s full of normal cells than when cancer cells are present. In fact, the difference is apparent when even as few as 15 out of the hundreds of thousands of cells in the implant are cancer cells.
There’s a catch – ISOCT cannot penetrate deep into tissue. That means it is not a suitable imaging technology for finding metastatic cells buried deep in internal organs. However, when the cancer cell detection implant is located just under the skin, it may be possible to detect cancer cells trapped in it using ISOCT. This could offer an early warning sign that metastatic cells are on the move.
This early warning could prompt doctors to monitor their patients more closely or perform additional tests. Conversely, if no cells are detected in the implant, a patient still in remission could be spared from unneeded tests.
The ISOCT results show that noninvasive imaging of the implant is feasible. But it’s a method still under development, and thus it’s not widely available. To make scanning easier and more accessible, we’re working to adapt more ubiquitous imaging technologies like ultrasound to detect tiny quantities of tumor cells in the implant.
Not just detecting, but quarantining cancer
Besides providing a way to detect tiny numbers of cancer cells before they can form new tumors in other parts of the body, our implant offers an even more intriguing possibility: diverting metastatic cells away from vital organs, and sequestering them where they cannot cause any damage.
In our mouse studies, we found that metastatic cells got caught in the implant before they were apparent in vital organs. When metastatic cells eventually made their way into the organs, the mice with implants still had significantly fewer tumor cells in their organs than implant-free controls. Thus, the implant appears to provide a therapeutic benefit, most likely by taking the metastatic cells it catches out of the circulation, preventing them from lodging anywhere vital.
Interestingly, we have not seen cancer cells leave the implant once trapped, or form a secondary tumor in the implant. Ongoing work aims to learn why this is. Whether the cells can stay safely immobilized in the implant or if it would need to be removed periodically will be important questions to answer before the implant could be used in human patients.
What the future may hold
For now, our work aims to make the implant more effective at drawing and detecting cancer cells. Since we tested the implant with metastatic breast cancer cells, we also want to see if it will work on other types of cancer. Additionally, we’re studying the cells the implant traps, and learning how the implant interacts with the body as a whole. This basic research should give us insight into the process of metastasis and how to treat it.
In the future (and it might still be far off), we envision a world where recovering cancer patients can receive a detector implant to stand guard for disease recurrence and prevent it from happening. Perhaps the patient could even scan their implant at home with a smartphone and get treatment early, when the disease burden is low and the available therapies may be more effective. Better yet, perhaps the implant could continually divert all the cancer cells away from vital organs on its own, like Iron Man’s electromagnet that deflects shrapnel from his heart.
This solution is still not a “cure.” But it would transform a formidable disease that one out of three cancer survivors would otherwise ultimately die from into a condition with which they could easily live.
A mysterious set of 9,000-year-old bones, unearthed nearly 20 years ago in Washington, is finally going home. Following bitter disputes, five Native American groups in the Pacific Northwest have come together to facilitate the reburial of an individual they know as “Ancient One.” One of the most complete prehistoric human skeletons discovered in North America, “Kennewick Man” also became the most controversial.
Two teenagers searching out a better view of a Columbia River speedboat race in 1996 were the first to spot Kennewick Man’s remains. Since then, the bones have mostly been stored away from public view, carefully preserved in museum storerooms while subject to hotly contested legal battles.
Some anthropologists were eager to scientifically test the bones hoping for clues about who the first Americans were and where they came from. But many Native Americans hesitated to support this scientific scrutiny (including tests which permanently destroy or damage the original bone), arguing it was disrespectful to their ancient ancestor. They wanted him laid to rest.
This high-profile discovery served as an important, if maddening, test case for a significant new law, the Native American Graves Protection and Repatriation Act (NAGPRA). It aimed to address the problematic history behind museum human remains collections. First it mandated inventories – many museums, in fact, were unaware how large their skeletal collections really were. Then, in certain cases, it called for returning skeletons and mummies to their closest descendant group. Since NAGPRA passed in 1990, the National Park Service estimates over 50,000 sets of human remains have been repatriated in the United States.
The legal framework fits well in cases where ancestry can be determined – think remains found on a specific 19th-century battlefield – but other instances became more contentious. Scientists sometimes argued that very old remains, including Kennewick Man, represented earlier migrations into the Americas by groups who might have moved on long ago. This point of view often clashed with indigenous perspectives, particularly beliefs that their ancestors have lived in specific places since the dawn of time.
Drawn against this complex background, it’s no wonder it’s taken almost two decades to bring the Kennewick Man story into better focus.
Long history of scientizing some human remains
Museums in the U.S. and Europe have collected and studied human remains for well over a century, with the practice gaining considerable momentum after the Civil War. Archaeologists, anatomists and a mishmash of amateurs – influenced by an array of emergent sciences and pseudosciences – gathered bones by the thousands, shipping them in boxes to museums in an effort to systematically study race and, gradually, human prehistory.
Museum “bone rooms,” organized to collect and study human remains, helped facilitate new scientific work in the late 19th and early 20th century. The skeletons provided better data about diseases and migration, as well as information about historic diet, with potential impact for living populations.
But building museum bone collections also represented major breaches in ethics surrounding traditional death and burial practices for many indigenous people across the Americas and around the world. For them, data gathering was simply not a priority. Instead, they sought to return their ancestors to the earth.
Considered in context, the concerns raised by many Native Americans are not particularly difficult to comprehend. For example, doing archival research for my book “Bone Rooms,” I learned of the case of several naturally mummified bodies discovered in the American Southwest in the 1870s. The dried corpses were paraded around San Francisco, before being exhibited for the public in Philadelphia and Chicago. Once the immense popularity of the exhibitions died down, the bodies were distributed to several museums across the country where they were put into storage.
Presenting human remains as purely scientific specimens and historical curiosities hurt living descendants by treating entire populations as scientific resources rather than human beings. And by focusing mainly on nonwhite groups, the practice reinforced in subtle and direct ways the scientific racism permeating the era. While some European American skeletons were collected by these museums for comparative purposes, their number was vastly outpaced by the number of Native American bodies collected during this same period.
Anthropologists and other scientists have worked to address some of these negative legacies. But the vestiges of past wrongdoings have left their mark on many museums across the country. Returning ancestral human remains, sacred artifacts and special objects considered to hold collective cultural value attempts to serve as partial redress for these problematic histories.
Kennewick Man’s odyssey
Inaccurate initial media reports muddled the Kennewick Man story. After the first anthropologist who looked at the skull proclaimed a resemblance to European Americans (specifically the actor Patrick Stewart), a New York Times headline in 1998 announced, “Old Skull Gets White Looks, Stirring Dispute.” Indeed, as the paper commented, the bogus reports leading people to believe Kennewick Man might be a white person “heightened an already bitter and muddled battle over the rights to Kennewick Man’s remains and his origins.”
Hidden away from public view, the prehistoric remains were anything but forgotten. Many indigenous people came to view Kennewick Man as a symbol for the failings of the new NAGPRA law.
Some scientists, on the other hand, made impassioned arguments that the bones did not fall under the purview of the new rules. Their extreme age meant the remains were unlikely to be a direct ancestor of any living group. Following this logic, several influential scientists argued the bones should therefore be available for scientific study. Indeed, extensive scientific tests were carried out on the skeleton.
Two years after his discovery, Kennewick Man moved to the behind-the-scenes bone rooms at the Burke Museum on the campus of the University of Washington in Seattle. The long tradition of gathering and interpreting human bones in museums made the decision seem almost natural. Still, it proved a highly problematic (and temporary) “solution” for many Native Americans who wanted the remains buried.
Reconciling scientific curiosity with scientific ethics
Should human remains – including the rare, ancient or abnormal bodies sometimes considered especially valuable for science – ever be made into scientific specimens without their approval or that of their descendants? If we do choose to collect and study them for science, who controls the knowledge drawn from these bodies?
These are big questions. I argue that the effort to scientize the dead brings about distinct and specific responsibilities unique to human remains collections. Careful consideration is necessary. Cultural and historical context simply cannot be ignored.
By some estimates, museums today house more than half a million individual Native American remains. Probably hundreds if not thousands of sets of skeletal remains will face these big questions in the coming decades.
Indicative of changing attitudes and ethical approaches to museum exhibition, recent calls to display Kennewick Man’s remains have largely been rebuffed, despite the potential for engaging large audiences. The prospect of new knowledge or effective popular education is tantalizing, but these objectives should never eclipse basic human and civil rights.
Two-and-a-half decades after NAGPRA, museums in the United States – including the American Museum of Natural History, Smithsonian Institution and the Cleveland Museum of Natural History – join the Burke Museum in continuing to maintain sizable human remains collections. Kennewick Man may be among the most high-profile cases of human remains going under the microscope – both in terms of the scientific study he was subject to and the intensity of the debate surrounding him – but he is certainly far from alone.
Skeletons wait patiently while the living attempt to work these problems out, but this patience is granted only because the bones have no other choice.
The way animals move their tails reveals a lot about their emotional state, particularly the frustration they feel when they can’t solve a problem.
“Our results demonstrate the universality of emotional responses across species,” says study lead author Mikel Delgado, a doctoral student in psychology at the University of California, Berkeley. “After all, what do you do when you put a dollar in a soda machine and don’t get your soda? Curse and try different tactics.”
For the study, published in the Journal of Comparative Psychology, researchers tracked 22 fox squirrels in their leafy habitats, putting them through a series of foraging tasks that had them puzzle their way into various open and locked containers to get to nuts or grains.
The more frustrated the squirrels became—especially if the container was locked—the more they flicked their bushy tails.
On a positive note, these stages of tail-flagging irritation, and even aggression, led fox squirrels to try new strategies, such as biting, flipping, and dragging the box in an attempt to land a reward. The results imply that acts of frustration may be necessary and beneficial to problem-solving, Delgado says.
“Animals in nature likely face situations that are frustrating in that they cannot always predict what will happen. Their persistence and aggression could lead them to try new behaviors while keeping competitors away. While not a direct intelligence test, we think these findings demonstrate some of the key building blocks to problem-solving in animals—persistence and trying multiple strategies.”
Frustration has been observed in chimpanzees, pigeons, and even fish, “but we don’t know much about what function it serves,” Delgado says.
NUTS! THE BOX IS LOCKED
To find out, she and psychology professor Lucia Jacobs trained campus fox squirrels to open containers to get walnuts.
After nine trials of being rewarded with easy-to-obtain walnuts, the squirrels were faced with the unexpected: Some found an empty box, others a locked box, and still others a piece of corn instead of a walnut. As predicted, their tail flicks increased with each disappointment. The locked box was the most irritating.
Researchers videotaped the squirrels’ foraging trials and found that once the critters got over their frustration, they tried new tactics, such as biting the box, flipping it, dragging it, and spending time puzzling over how to get it open.
“This study shows that squirrels are persistent when facing a challenge,” Delgado says. “When the box was locked, rather than giving up, they kept trying to open it, and tried multiple methods to do so.”
Overweight people who took a capsule for eight weeks that contained two compounds found in red grapes and oranges saw improvements in blood sugar levels and artery function, researchers report.
“This is an incredibly exciting development and could have a massive impact on our ability to treat these diseases,” says Paul Thornalley, a professor in systems biology at the University of Warwick Medical School. “As well as helping to treat diabetes and heart disease, it could defuse the obesity time bomb.”
When participants received both compounds—trans-resveratrol (tRES) in red grapes and hesperetin (HESP) in oranges—at pharmaceutical doses, the compounds acted in tandem to decrease blood glucose, improve the action of insulin, and boost the health of arteries.
After eight weeks on the treatment, researchers noted an improvement in insulin resistance in trial participants that was similar to improvements seen six months after bariatric surgery.
The compounds work by increasing a protein called glyoxalase 1 (Glo1) in the body that neutralizes a damaging sugar-derived compound called methylglyoxal (MG).
For the study, researchers increased Glo1 expression in cell culture and then tested the formulation in a randomized, placebo-controlled crossover clinical trial.
Thirty-two overweight and obese people between the ages of 18 and 80 who had a BMI between 25 and 40 took part in the trial. They were given the supplement in capsule form once a day for eight weeks. They were asked to maintain their usual diet, and their food intake was monitored via a dietary questionnaire. They were also asked not to alter their daily physical activity.
Changes to their blood sugar levels were assessed by blood samples, artery health was measured by artery wall flexibility, and other outcomes were tracked by analysis of blood markers.
It’s been almost a month since the paper I co-authored on the synthesis of research into the scientific consensus on climate change was published. Surveying the many studies into scientific agreement, we found that more than 90% of climate scientists agree that humans are causing global warming.
My co-authors and I even participated in an Ask Me Anything (AMA) session on the online forum Reddit, answering questions about the scientific consensus.
While my own research indicates that explaining the scientific consensus isn’t that effective with those who reject climate science, it does have a positive effect for people who are open to scientific evidence.
Among this “undecided majority” there was clearly much interest, with the session generating 154,000 page views and our AMA briefly featuring on the Reddit homepage (where it was potentially viewed by 14 million people).
Here is an edited selection of some of the questions posed by Reddit readers and our answers.
Q: Why is this idea of consensus so important in climate science? Science isn’t democracy or consensus, the standard of truth is experiment.
If this were actually true, wouldn’t every experiment have to reestablish every single piece of knowledge from first principles before moving on to something new? That’s obviously not how science actually functions.
Consensus functions as a scaffolding allowing us to continue to build knowledge by addressing things that are actually unknown.
Q: Does that 97% all agree to what degree humans are causing global warming?
Different studies use different definitions. Some use the phrase “humans are causing global warming” which carries the implication that humans are a dominant contributor to global warming. Others are more explicit, specifying that humans are causing most global warming.
Within some of our own research, several definitions are used for the simple reason that different papers endorse the consensus in different ways. Some are specific about quantifying the percentage of human contribution, others just say “humans are causing climate change” without specific quantification.
We found that no matter which definition you used, you always found an overwhelming scientific consensus.
Q: It’s very difficult to become/remain a well-respected climate scientist if you don’t believe in human-caused climate change. Your papers don’t get published, you don’t get funding, and you eventually move on to another career. The result being that experts either become part of the 97% consensus, or they cease to be experts.
Ask for evidence for this claim and enjoy the silence (since they won’t have any).
As a scientist, the pressure is actually mostly the reverse: you get rewarded if you prove an established idea wrong.
I’ve heard from contrarian scientists that they don’t have any trouble getting published and getting funded, but of course that also is only anecdotal evidence.
You can’t really disprove this thesis, since it has shades of conspiratorial thinking to it, but the bottom line is there’s no evidence for it and the regular scientific pressure is to be adversarial and critical towards other people’s ideas, not to just repeat what the others are saying.
Q: What’s the general reasoning of the other 3%?
Interesting question. It is important and diagnostic that there is no coherent theme among the reasoning of the other 3%. Some say “there is no warming”, others blame the sun, cosmic rays or the oceans.
Those opinions are typically mutually contradictory or incoherent: Stephan Lewandowsky has written elsewhere about a few of the contradictions.
Q: Do we have any insight on what non-climate scientists have to say about climate change being caused by CO2?
In a paper published last year, Stuart Carlton and colleagues surveyed biophysical scientists across many disciplines at major research universities in the US.
They found that about 92% of the scientists believed in anthropogenic climate change and about 89% of respondents disagreed with the statement: “Climate change is independent of CO2 levels”. In other words, about 89% of respondents felt that climate change is affected by CO2.
Q: It could be argued that climate scientists may be predisposed to seeing climate change as more serious, because they want more funding. What’s your perspective on that?
Any climate scientist who could convincingly argue that climate change is not a threat would get:
a Nobel prize
a squintillion dollars in funding
a dinner date with the Queen
the lifelong gratitude of billions of people.
So if there is any incentive, it’s for a scientist to show that climate change is not a threat.
Q: I was discussing politics with my boss the other day, and when I got to the topic of global warming he got angry, said it’s all bullshit, and that the climate of the planet has been changing for millennia. Where should I go to best understand all of the facts?
Facts are a good place to start, but often facts alone are not enough, especially when people are angry and emotional. The Skeptical Science team has made a free online course that addresses both the facts and the psychology of climate denial.