Moving beyond pro/con debates over genetically engineered crops

By Pamela Ronald, University of California, Davis.

Since the 1980s biologists have used genetic engineering to express novel traits in crop plants. Over the last 20 years, these crops have been grown on more than one billion acres in the United States and globally. Despite their rapid adoption by farmers, genetically engineered (GE) crops remain controversial among many consumers, who have sometimes found it hard to obtain accurate information.

Last month the U.S. National Academies of Sciences, Engineering, and Medicine released a review of 20 years of data regarding GE crops. The report largely confirms findings from previous National Academies reports and reviews produced by other major scientific organizations around the world, including the World Health Organization and the European Commission.

I direct a laboratory that studies rice, a staple food crop for half the world’s people. Researchers in my lab are identifying genes that control tolerance to environmental stress and resistance to disease. We use genetic engineering and other genetic methods to understand gene function.

I strongly agree with the NAS report that each crop, whether bred conventionally or developed through genetic engineering, should be evaluated on a case-by-case basis. Every crop is different, each trait is different and the needs of each farmer are different too. More progress in crop improvement can be made by using both conventional breeding and genetic engineering than using either approach alone.

Modern cultivated corn was domesticated from teosinte, an ancient grass, over more than 6,000 years through conventional breeding.
Nicole Rager Fuller, National Science Foundation

Convergence between biotech and conventional breeding

New molecular tools are blurring the distinction between genetic improvements made with conventional breeding and those made with modern genetic methods. One example is marker assisted breeding, in which geneticists identify genes or chromosomal regions associated with traits desired by farmers and/or consumers. Researchers then look for particular markers (patterns) in a plant’s DNA that are associated with these genes. Using these genetic markers, they can efficiently identify plants carrying the desired genetic fingerprints and eliminate plants with undesirable genetics.
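To make the selection step concrete, here is a minimal sketch in Python of how a breeder might screen seedlings against a marker fingerprint. The marker names, alleles and genotypes below are invented for illustration; they are not from any real breeding program:

```python
# Hypothetical illustration of marker-assisted selection: keep only
# seedlings whose DNA shows the marker alleles linked to the desired gene.

# Each seedling is represented by the alleles observed at a few marker loci.
seedlings = {
    "plant_01": {"marker_A": "A", "marker_B": "T"},
    "plant_02": {"marker_A": "G", "marker_B": "T"},
    "plant_03": {"marker_A": "A", "marker_B": "C"},
    "plant_04": {"marker_A": "A", "marker_B": "T"},
}

# Marker fingerprint associated with the desired gene (hypothetical values).
desired_fingerprint = {"marker_A": "A", "marker_B": "T"}

def carries_trait(genotype, fingerprint):
    """True if the seedling matches the marker pattern at every locus."""
    return all(genotype.get(locus) == allele for locus, allele in fingerprint.items())

selected = [name for name, genotype in seedlings.items()
            if carries_trait(genotype, desired_fingerprint)]
print(selected)  # ['plant_01', 'plant_04'] advance to the next breeding cycle
```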

Ten years ago my collaborators and I isolated a gene, called Sub1, that controls tolerance to flooding. Millions of rice farmers in South and Southeast Asia grow rice in flood-prone regions, so this trait is extremely valuable. Most varieties of rice will die after three days of complete submergence, but plants with the Sub1 gene can withstand two weeks of complete submergence. Last year, nearly five million farmers grew Sub1 rice varieties developed by my collaborators at the International Rice Research Institute using marker assisted breeding.

In another example, researchers identified genetic variants that are associated with hornlessness (referred to as “polled”) in cattle – a trait that is common in beef breeds but rare in dairy breeds. Farmers routinely dehorn dairy cattle to protect their handlers and prevent the animals from harming each other. Because this process is painful and frightening for the animals, veterinary experts have called for research into alternative options.

In a study published last month, scientists used genome editing and reproductive cloning to produce dairy cows that carried a naturally occurring mutation for hornlessness. This approach has the potential to improve the welfare of millions of cattle each year.

Reducing chemical insecticides and enhancing yield

In assessing how GE crops affect crop productivity, human health and the environment, the NAS study primarily focused on two traits that have been engineered into plants: resistance to insect pests and tolerance of herbicides.

The study found that farmers who planted crops engineered to contain the insect-resistant trait – based on genes from the bacterium Bacillus thuringiensis, or Bt – generally experienced fewer losses and applied fewer chemical insecticide sprays than farmers who planted non-Bt varieties. It also concluded that farms where Bt crops were planted had more insect biodiversity than farms where growers used broad-spectrum insecticides on conventional crops.

Genetically modified crops currently grown in the United States (IR=insect resistant, HT=herbicide tolerant, DT=drought tolerant, VR=virus resistant).
Colorado State University Extension

The committee found that herbicide-resistant (HR) crops contribute to greater yields because weeds can be controlled more easily. For example, farmers who planted HR canola reaped greater yields and returns, which led to wide adoption of this crop variety.

Another benefit of planting of HR crops is reduced tillage – the process of turning the soil. Before planting, farmers must kill the weeds in their fields. Before the advent of herbicides and HR crops, farmers controlled weeds by tilling. However, tilling causes erosion and runoff, and requires energy to fuel the tractors. Many farmers prefer reduced tillage practices because they enhance sustainable management. With HR crops, farmers can control weeds effectively without tilling.

The committee noted a clear association between the planting of HR crops and reduced-till agricultural practices over the last two decades. However, it is unclear if the adoption of HR crops resulted in decisions by farmers to use conservation tillage, or if farmers who were using conservation tillage adopted HR crops more readily.

In areas where planting of HR crops led to heavy reliance on the herbicide glyphosate, some weeds evolved resistance to the herbicide, making it difficult for farmers to control weeds using this herbicide. The NAS report concluded that sustainable use of Bt and HR crops will require use of integrated pest management strategies.

The report also discusses seven other GE food crops grown in 2015, including apple (Malus domestica), canola (Brassica napus), sugar beet (Beta vulgaris), papaya (Carica papaya), potato, squash (Cucurbita pepo) and eggplant (Solanum melongena).

Papaya is a particularly important example. In the 1950s, papaya ringspot virus wiped out nearly all papaya production on the Hawaiian island of Oahu. As the virus spread to other islands, many farmers feared that it would wipe out the Hawaiian papaya crop.

Papaya infected with ringspot virus. Scot Nelson/Flickr, CC BY-SA

In 1998 Hawaiian plant pathologist Dennis Gonsalves used genetic engineering to splice a small snippet of ringspot virus DNA into the papaya genome. The resulting genetically engineered papaya trees were immune to infection and produced 10-20 fold more fruit than infected crops. Dennis’ pioneering work rescued the papaya industry. Twenty years later, this is still the only method for controlling papaya ringspot virus. Today, despite protests by some consumers, 80 percent of the Hawaiian papaya crop is genetically engineered.

Scientists have also used genetic engineering to combat a pest called the fruit and shoot borer, which preys on eggplant in Asia. Farmers in Bangladesh often spray insecticides every 2-3 days, and sometimes as often as twice daily, to control it. The World Health Organization estimates that some three million cases of pesticide poisoning and more than 250,000 deaths occur worldwide every year.

To reduce chemical sprays on eggplant, scientists at Cornell University and in Bangladesh engineered Bt into the eggplant genome. Bt brinjal (eggplant) was introduced in Bangladesh in 2013. Last year 108 Bangladeshi farmers grew it and were able to drastically reduce insecticide sprays.

Feed the world in an ecologically based manner

Genetically improved crops have benefited many farmers, but it is clear that genetic improvement alone cannot address the wide variety of complex challenges that farmers face. Ecologically based farming approaches as well as infrastructure and appropriate policies are also needed.

Instead of worrying about the genes in our food, we need to focus on ways to help families, farmers and rural communities thrive. We must be sure that everyone can afford the food and we must minimize environmental degradation. I hope that the NAS report can help move the discussions beyond distracting pro/con arguments about GE crops and refocus them on using every appropriate technology to feed the world in an ecologically based manner.

Pamela Ronald, Professor of Plant Pathology, University of California, Davis

This article was originally published on The Conversation. Read the original article.


Breakthrough Microchip Uses Sound to Amplify Light Signals

Scientists have found a way to boost the intensity of light waves on a silicon microchip using the power of sound.

Writing in the journal Nature Photonics, a team led by Peter Rakich describes a new waveguide system that harnesses the ability to precisely control the interaction of light and sound waves. This work solves a long-standing problem of how to utilize this interaction in a robust manner on a silicon chip as the basis for powerful new signal-processing technologies.

The prevalence of silicon chips in today’s technology makes the new system particularly advantageous, the researchers note.

“Silicon is the basis for practically all microchip technologies,” says Rakich, who is an assistant professor of applied physics and physics at Yale University. “The ability to combine both light and sound in silicon permits us to control and process information in new ways that weren’t otherwise possible.”

Rakich says combining the two capabilities “is like giving a UPS driver an amphibious vehicle—you can find a much more efficient route for delivery when traveling by land or water.”

These opportunities have motivated numerous groups around the world to explore such hybrid technologies on a silicon chip. However, progress was stifled because those devices weren’t efficient enough for practical applications.

The Yale group lifted this roadblock using new device designs that prevent light and sound from escaping the circuits.

“Figuring out how to shape this interaction without losing amplification was the real challenge,” says Eric Kittlaus, a graduate student in Rakich’s lab and the study’s first author. “With precise control over the light-sound interaction, we will be able to create devices with immediate practical uses, including new types of lasers.”

The researchers say there are commercial applications for the technology in a number of areas, including fiber-optic communications and signal processing. The system is part of a larger body of research the Rakich lab has conducted for the past five years, focused on designing new microchip technologies for light.

The US Department of Defense’s Defense Advanced Research Projects Agency supported the project.

Source: Republished from Futurity.org as a derivative work under the Attribution 4.0 International license.

Featured Image Credit: Rudolf Getel/flickr


This Sex-switching Fish Mates for Life

For tiny hermaphrodite fish found in coral reefs off Panama, a lifelong monogamous relationship comes with a bit of give and take.

The pair switch reproductive roles at least 20 times a day.

The strategy allows individual fish to fertilize about as many eggs as it produces, giving the fish a reproductive edge.

“Our study indicates that animals in long-term partnerships are paying attention to whether their partner is contributing to the relationship fairly—something many humans may identify with from their own long-term relationships,” says Mary Hart, adjunct professor of biology at the University of Florida.

The duo motivate one another to contribute eggs to the relationship. If one partner lacks eggs, the other will simply match whatever it produces. The only way for a partner to convince its mate to produce more eggs is to pick up the slack and generate more itself, she says.

Scientists observed the short-lived chalk bass, Serranus tortugarum, for six months—and were surprised that every couple stayed together for the duration.

With only 3 to 5 percent of animals known to live monogamously, this is a rare find—and one of the first for a fish living in a high-density social group, says coauthor Andrew Kratter, an ornithologist with the Florida Museum of Natural History.

“I found it fascinating that fish with a rather unconventional reproductive strategy would end up being the ones who have these long-lasting relationships,” he says. “They live in large social groups with plenty of opportunities to change partners, so you wouldn’t necessarily expect this level of partner fidelity.”

Published in the journal Behavioral Ecology, the new research lays the groundwork for studies that investigate mechanisms that govern partnerships in the wild.

An occasional fling

Scientists have long studied cooperative behavior in animals, like primates that groom each other or vampire bats that regurgitate food for relatives in need of a blood meal. But it has remained a point of debate among scientists whether or not these animals are paying attention to the amount of resources being exchanged. For the chalk bass, matching reproductive chores helps partners succeed, even when there are opportunities to mate with other fish, Hart says.

“We initially expected individuals with partners that were producing less eggs would be more likely to switch partners over time—trading up, so to speak. Instead we found that partners matched egg production and remained in primary partnerships for the long term.”

For their entire adult lives, the fish mating partners come together for two hours each day before dusk in their refuge area, or spawning territory. They chase away other fish and begin with a half-hour foreplay ritual of nipping and hovering around each other, an activity that helps strengthen the partners’ bond. Eventually it becomes apparent which fish is going to take on the female role for the first of many spawning rounds.

Finding a new mate every evening is time-consuming and risky for a fish that only lives for about a year. Having a safe partner may help ensure that individuals get to fertilize a similar number of eggs as they produce, rather than risk ending up with a partner with fewer eggs.

But all of this doesn’t mean the chalk bass is completely opposed to an occasional fling.

If one partner has more eggs than the other, it may share the extra with other couples, an option that, while infrequent, can add stability to the system of simultaneous hermaphroditism paired with monogamy.

Scientists are only beginning to understand how mutually beneficial relationships among animals are maintained, much as humans in general still strive to determine what makes long-term relationships last.

“Not even one of the original pairs that I observed switched mates while its partner was still alive,” Hart says. “That strong matching between partners and the investment into the partnership was surprising.”

Source: Republished from Futurity.org as a derivative work under the Attribution 4.0 International license.

Featured Photo Credit:  Kevin Bryant/Flickr


Roots of opioid epidemic can be traced back to two key changes in pain management

By Theodore Cicero, Washington University in St. Louis and Matthew S. Ellis, Washington University in St. Louis.

Ascetics preparing and smoking opium outside a rural dwelling in India. Wellcome Library, London, CC BY

Abuse of opium products obtained from poppy plants dates back centuries, but today we are witnessing the first instance of widespread abuse of legal, prescribed drugs that, while structurally similar to illicit opioids such as heroin, are used for sound medical practices.

So how did we get here?

We can trace the roots of today’s epidemic back to two well-intentioned changes in how we treat pain: the push for early recognition and proactive treatment of pain, and the introduction of OxyContin, an extended-release opioid painkiller.

Pain as the fifth vital sign

Fifteen years ago, a report by the Joint Commission on Accreditation of Healthcare Organizations, a nationally recognized organization that accredits hospitals, stressed that pain was vastly undertreated in the United States. The report recommended that physicians routinely assess pain at every patient visit. It also suggested that opioids could be effectively and more broadly used without fear of addiction. This latter assumption was entirely mistaken, as we now understand. The report was part of a trend in medicine through the 1980s and 1990s toward treating pain more proactively.

The report was heavily publicized, and today it is widely acknowledged that it led to massive – and sometimes inappropriate – increases in the use of prescription opioid drugs to treat pain.

With more opioids being prescribed by well-meaning doctors, some were diverted from the legal supply chain – through theft from medicine cabinets or trade on the black market – to the street for illicit use. As more opioids leaked out, more people started to experiment with them for recreational purposes.

This increase in supply certainly explains a large part of the current opioid abuse epidemic, but it doesn’t explain all of it.

Introduction of OxyContin®

The second major factor was the introduction of an extended-release formulation of the potent opioid oxycodone in 1996. You may know this drug by its brand name, OxyContin. In fact, you might have been prescribed it after having surgery.

The drug was designed to provide 12-24 hours of pain relief, as opposed to just four hours or so for an immediate release formulation. It meant that patients in pain could just take one or two pills a day rather than having to remember to take an immediate release drug every four hours or so. This also meant that OxyContin tablets contained a large amount of oxycodone – far more than would be found in several individual immediate release tablets.

And within 48 hours of OxyContin’s release on the market, drug users realized that crushing the tablet could easily breach the extended-release formulation, making the pure drug available in large quantities, free from harmful additives such as acetaminophen, which most recreational and chronic abusers find irritating, particularly when injected intravenously. This made it an attractive option for those who wanted to snort or inject their drugs. Surprisingly, neither the manufacturer nor the Food and Drug Administration foresaw this possibility.

Prescription opioids are perceived as safer than drugs like heroin. Brendan McDermid/Reuters

Purdue, the company holding the patent for the drug, continued to market it as having low abuse potential, highlighting that patients needed to take fewer pills a day than with immediate-release formulations.

By 2012, OxyContin represented 30 percent of the painkiller market.

The change in pain treatment ushered in by the Joint Commission report led to an increase in the number of opioid prescriptions in the U.S., and the increase in prescriptions for this particular high-dose opioid helped to introduce an unprecedented amount of prescription drugs into the marketplace, generating a whole new population of opioid users.

What is it about prescription drugs?

Compared to heroin and the stigma it carries, prescription drugs are viewed as safe. They have a consistent purity and dose, and can be relatively easily obtained from drug dealers. There was, at least throughout the 1990s and 2000s, little social stigma attached to swallowing a medically provided, legal drug.

The irony here is that prescription opioid abuse has actually been associated with an increase in heroin users. People who are addicted to prescription opioids might try heroin because it is cheaper and more readily available, often using them interchangeably depending on which is easier to get. However, the number of people who convert to heroin exclusively is relatively small.

The majority of individuals who abuse opioid drugs swallow them whole. The remainder snort or inject these drugs, which is much riskier. Snorting, for instance, leads to destruction of nasal passages, among other problems, whereas IV injection – and the common practice of sharing needles – can transmit blood-borne pathogens such as HIV and hepatitis C (currently a national problem of epidemic proportions).

Although people can also get high by just swallowing the pills, the addictive potential of drugs injected or snorted is far greater. There is good evidence to indicate that drugs which deliver their impact on the brain quickly, through snorting and especially through IV injection, are much more addictive and harder to quit.

No OxyContin here.
jennifer durban/Flickr, CC BY-NC

What are authorities doing to stop the epidemic?

Government and regulatory agencies such as the Food and Drug Administration are trying to curb the epidemic, in part by tightening access to prescription opioids. The Centers for Disease Control and Prevention recently issued new guidelines for prescribing opioids to treat chronic pain, aimed at preventing abuse and overdoses. Whether these recommendations will be supported by major medical associations remains to be seen.

For example, there have been local and national crackdowns on unethical doctors who run “pill mills,” clinics whose sole purpose is to provide opioid prescriptions to users and dealers.

In addition, prescription monitoring programs have helped identify irregular prescribing practices.

In 2010 an abuse-deterrent formulation (ADF) of OxyContin was released, replacing the original formulation. The ADF prevents the full dose of the opioid from being released if the pill is crushed or dissolved in some solvent, reducing the incentive to snort or take the drugs intravenously. These formulations have cut down on abuse, but they alone won’t solve the epidemic. Most people who are addicted to prescription opioids swallow pills anyway instead of snorting or injecting them, and abuse-deterrent technology isn’t effective when the drug is swallowed whole.

And, as with the release of the original OxyContin formulation in the 1990s, drug users have populated websites with procedures for “defeating” the ADF mechanisms, although these are labor-intensive and take quite a bit more time.

Should we just restrict the use of opioid painkillers?

After reading all of this, you might be wondering why we don’t simply cut the use of opioids for pain management back to bare bones. This move would certainly help reduce the supply of opioids and slow the inevitable diversion for nontherapeutic purposes. However, it would come with a heavy price.

Millions of Americans suffer from either acute or chronic pain, and despite their potential for abuse, opioid drugs remain the most effective drugs on the market for treating pain, although there are some who disagree with their long-term use.

And most people who get a prescription for an opioid do not become addicted. Restricting therapeutic use to keep opioids away from the small fraction of individuals who would misuse them means that millions of people won’t get adequate pain management. This is an unacceptable trade-off.

New painkillers that can treat pain as well as opioids but don’t get people high would seem like the ideal solution.

For almost 100 years now there has been a concerted effort to develop a narcotic drug that has all of the efficacy of existing drugs, but without the potential for abuse. Unfortunately, this effort, it can be safely concluded, has failed. In short, it appears that the two properties – pain relief and abuse – are inextricably linked.

In the interest of public health, we must learn better ways to manage pain with these drugs, and particularly to recognize which individuals are likely to abuse their medications, before starting opioid therapy.

Theodore Cicero, Professor of Psychology, Washington University in St. Louis and Matthew S. Ellis, Clinical Lab Manager, Washington University in St. Louis

This article was originally published on The Conversation. Read the original article.


Newly Discovered “Hot Jupiter” is getting Cannibalized by its Star

A search for the galaxy’s youngest planets has turned up one unlike any other—a newborn “hot Jupiter” whose outer layers are being torn away by the star it orbits every 11 hours.

“A handful of known planets are in similarly small orbits, but because this star is only 2 million years old this is one of the most extreme examples,” says Christopher Johns-Krull, an astronomer at Rice University.

“We don’t yet have absolute proof this is a planet because we don’t yet have a firm measure of the planet’s mass, but our observations go a long way toward verifying this really is a planet,” says Johns-Krull. He is the lead author of a new study in the Astrophysical Journal that makes a case for a tightly orbiting gas giant around the star PTFO8-8695 in the constellation Orion.

“We compared our evidence against every other scenario we could imagine, and the weight of the evidence suggests this is one of the youngest planets yet observed.”

The suspected planet orbits a star about 1,100 light years from Earth and is at most twice the mass of Jupiter.

“We don’t know the ultimate fate of this planet,” Johns-Krull says. “It likely formed farther away from the star and has migrated in to a point where it’s being destroyed. We know there are close-orbiting planets around middle-aged stars that are presumably in stable orbits. What we don’t know is how quickly this young planet is going to lose its mass and whether it will lose too much to survive.”

Astronomers have discovered more than 3,300 exoplanets, but almost all of them orbit middle-aged stars like the sun. On May 26, Johns-Krull and colleagues announced the discovery of “CI Tau b,” the first exoplanet found to orbit a star so young that it still retains a disk of circumstellar gas.

Finding such young planets is challenging because there are relatively few candidate stars that are young enough and bright enough to view in sufficient detail with existing telescopes. The search is further complicated by the fact that young stars are often active, with visual outbursts and dimmings, strong magnetic fields and enormous starspots that can make it appear that planets exist where they do not.

Is the planet real?

PTFO8-8695 b was identified as a candidate planet in 2012 by the Palomar Transient Factory’s Orion survey. The planet’s orbit sometimes causes it to pass between its star and our line of sight from Earth, so astronomers can use a technique known as the transit method to determine both the presence and approximate radius of the planet based on how much the star dims when the planet “transits,” or passes in front of, the star.
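The geometry behind this is straightforward. As a rough guide (assuming a dark, fully transiting planet and ignoring effects such as limb darkening), the fractional dip in starlight equals the ratio of the planet’s projected area to the star’s:

$$\frac{\Delta F}{F} \approx \left(\frac{R_p}{R_\star}\right)^2,$$

so a planet with one-tenth of its star’s radius blocks roughly one percent of the light, and the measured depth of the dip gives an estimate of the planet’s radius.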

“In 2012, there was no solid evidence for planets around 2 million-year-old stars,” says Lisa Prato, an astronomer at the Lowell Observatory. “Light curves and variations of this star presented an intriguing technique to confirm or refute such a planet.

“The other thing that was very intriguing about it was that the orbital period was only 11 hours. That meant we wouldn’t have to come back night after night after night, year after year after year. We could potentially see something happen in one night. So that’s what we did. We just sat on the star for a whole night.”

A spectroscopic analysis of the light coming from the star revealed excess emission in the H-alpha spectral line, a type of light emitted from highly energized hydrogen atoms. The team found that the H-alpha light is emitted in two components, one that matches the very small motion of the star and another that seems to orbit it.

“We saw one component of the hydrogen emission start on one side of the star’s emission and then move over to the other side,” Prato says. “When a planet transits a star, you can determine the orbital period of the planet and how fast it is moving toward you or away from you as it orbits. So, we said, ‘If the planet is real, what is the velocity of the planet relative to the star?’ And it turned out that the velocity of the planet was exactly where this extra bit of H-alpha emission was moving back and forth.”

Transit observations revealed that the planet is only about 3 to 4 percent the size of the star, but the H-alpha emission from the planet appears to be almost as bright as the emission coming from the star, Johns-Krull says.

“There’s no way something confined to the planet’s surface could produce that effect. The gas has to be filling a much larger region where the gravity of the planet is no longer strong enough to hold on to it. The star’s gravity takes over, and eventually the gas will fall onto the star.”

Other researchers from Rice, the California Institute of Technology, the University of Texas at Austin, NASA, and Spain’s National Institute of Aerospace Technology are coauthors of the work, which was funded by NASA and the National Science Foundation and is published in the Astrophysical Journal.

Source: Republished from Futurity.org as a derivative work under the Attribution 4.0 International license.

Featured Image Credit: A. Passwaters/Rice, based on original by Skyhawk92/Wikimedia Commons (Artist’s impression)


Putting CO2 away for good by turning it into stone

By Martin Stute, Columbia University.

We seriously need to do something about CO2 emissions. Besides shifting to renewable energy sources and increasing energy efficiency, we need to start putting some of the CO2 away before it reaches the atmosphere. Perhaps the impacts of human-induced climate change will be so severe that we might even have to capture CO2 from the air and convert it into useful products such as plastic materials or put it someplace safe.

A group of scientists from several European countries and the United States including myself met in the middle, in Iceland, to figure out how CO2 could be put away safely – in the ground. In a recently published study, we demonstrated that two years after injecting CO2 underground at our pilot test site in Iceland, almost all of it has been converted into minerals.

The injection well that pumped waste CO2 and hydrogen sulfide gas from a geothermal well underground. Martin Stute, Author provided

Mineralization

Iceland is a very green country; almost all of its electricity comes from renewable sources including geothermal energy. Hot water from rocks beneath the surface is converted into steam which drives a turbine to generate electricity. However, geothermal power plants there do emit CO2 (much less than a comparable coal-fired power plant) because the hot steam from deep wells that runs the turbines also contains CO2 and sometimes hydrogen sulfide (H2S). Those gases usually just get released into the air.

Is there another place we could put these gases?

Conventional carbon sequestration deposits CO2 into deep saline aquifers or into depleted oil and natural gas reservoirs. CO2 is pumped under very high pressure into these formations and, since they have already held gases and fluids in place over millions of years, the probability of CO2 leaking out is minuscule, as many studies have shown.

In a place like Iceland with its daily earthquakes cracking the volcanic rocks (basalts), this approach would not work. The CO2 could bubble up through cracks and leak back into the atmosphere.

However, basalt also has a great advantage: it reacts with CO2 and converts it into carbonate minerals. These carbonates form naturally and can be found as white spots in the basalt. The reactions also have been demonstrated in laboratory experiments.
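In schematic terms (these are representative reactions, not the exact stoichiometry at the Icelandic site), the CO2 first dissolves in water to form a weak acid, the acid leaches calcium and magnesium out of the basalt, and those cations then lock the carbon up as solid carbonate:

$$\mathrm{CO_2 + H_2O \rightarrow H_2CO_3 \rightarrow H^+ + HCO_3^-}$$

$$\text{(Ca,Mg)-rich basalt} + \mathrm{H^+} \rightarrow \mathrm{Ca^{2+},\ Mg^{2+}} + \text{dissolved silica} + \mathrm{H_2O}$$

$$\mathrm{Ca^{2+} + HCO_3^- \rightarrow CaCO_3\ (calcite) + H^+}$$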

Dissolving CO2 in water

For the first test, we used pure CO2 and pumped it through a pipe into an existing well that tapped an aquifer containing fresh water at about 1,700 feet of depth. Six months later we injected a mixture of CO2 and hydrogen sulfide piped in from the turbines of the power plant. Through a separate pipe we also pumped water into the well.

In the well, we released the CO2 through a sparger – a device for introducing gases into liquids similar to a bubble stone in an aquarium – into water. The CO2 dissolved completely within a couple of minutes in the water because of the high pressure at depth. That mixture then entered the aquifer.

We also added tiny quantities of tracers (gases and dissolved substances) that allow us to differentiate the injected water and CO2 from what’s already in the aquifer. The CO2 dissolved in water was then carried away by the slowly flowing groundwater.

Downstream, we had installed monitoring wells that allowed us to collect samples to figure out what happened to the CO2. Initially, we saw some of the CO2 and tracers coming through. After a few months, though, the tracers kept arriving but very little of the injected CO2 showed up.

Where was it going? Our pump in the monitoring well stopped working periodically, and when we brought it to the surface, we noticed that it was covered by white crystals. We analyzed the crystals and found they contained some of the tracers we had added and, best of all, they turned out to be mostly carbonate minerals! We had turned CO2 into rocks.

The CO2 dissolved in water had reacted with the basalt in the aquifer and more than 95 percent of the CO2 precipitated out as solid carbonate minerals – and it all happened much faster than anticipated, in less than two years.

The fracture in this basalt rock shows the white calcium carbonate crystals that form from the injection of CO2 with water at the test site.
Annette K. Mortensen, CC BY

This is the safest way to put CO2 away. By dissolving it in water, we already prevent CO2 gas from bubbling up toward the surface through cracks in the rocks. Finally, we convert it into stone that cannot move or dissolve under natural conditions.

One downside of this approach is that water needs to be injected alongside the CO2. However, because of the very rapid removal of the CO2 from the water in mineral form, this water could be pumped back out of the ground downstream and reused at the injection site.

Will it work elsewhere?

Ours was a small-scale pilot study, and the question is whether these reactions would continue into the future or whether pores and cracks in the subsurface basalt would eventually clog up and no longer be able to convert CO2 to carbonate.

In the years since our experiment began, the geothermal power plant in Iceland has increased the amount of gas injected several times, using a different nearby location. No clogging has been encountered yet, and the plan is to soon inject almost all waste gases into the basalt. This process will also prevent the toxic and corrosive gas hydrogen sulfide from going into the atmosphere; it can currently still be detected at low levels near the power plant because of its characteristic rotten egg smell.

The very reactive rocks found in Iceland are quite common on Earth; about 10 percent of the continents and almost all of the ocean floors are made of basalt. This technology, in other words, is not limited to emissions from geothermal power plants but could also be used for other CO2 sources, such as fossil fuel power plants.

The commercial viability of the process still needs to be established in different locations. Carbon mineralization adds costs to a power plant’s operation, so this, like any form of carbon sequestration, needs an economic incentive to make it feasible.

People like to live near coasts, and many power plants have been built near their customers. Perhaps this technology could be used to put away CO2 emissions in coastal areas in nearby offshore basalt formations. Of course, there would be no shortage of water to co-inject with the CO2.

If we are forced to lower atmospheric CO2 levels in the future because we underestimate the damaging effects of climate change, we could perhaps use wind or solar-powered devices on an ocean platform to capture CO2 from the air and then inject the CO2 into basalt formations underneath.

Carbon mineralization, as demonstrated in Iceland, could be part of the solution of our carbon problem.

Martin Stute, Professor of Environmental Science, Columbia University

This article was originally published on The Conversation. Read the original article.


Why do only some people get ‘skin orgasms’ from listening to music?

By Mitchell Colver, Utah State University.

Have you ever been listening to a great piece of music and felt a chill run up your spine? Or goosebumps tickle your arms and shoulders?

The experience is called frisson (pronounced free-sawn), a French term meaning “aesthetic chills,” and it feels like waves of pleasure running all over your skin. Some researchers have even dubbed it a “skin orgasm.”

Listening to emotionally moving music is the most common trigger of frisson, but some feel it while looking at beautiful artwork, watching a particularly moving scene in a movie or having physical contact with another person. Studies have shown that roughly two-thirds of the population feels frisson, and frisson-loving Reddit users have even created a page to share their favorite frisson-causing media.

But why do some people experience frisson and not others?

Working in the lab of Dr. Amani El-Alayli, a professor of Social Psychology at Eastern Washington University, I decided to find out.

What causes a thrill, followed by a chill?

While scientists are still unlocking the secrets of this phenomenon, a large body of research over the past five decades has traced the origins of frisson to how we emotionally react to unexpected stimuli in our environment, particularly music.

Musical passages that include unexpected harmonies, sudden changes in volume or the moving entrance of a soloist are particularly common triggers for frisson because they violate listeners’ expectations in a positive way, similar to what occurred during the 2009 debut performance of the unassuming Susan Boyle on “Britain’s Got Talent.”

‘You didn’t expect that, did you?’

If a violin soloist is playing a particularly moving passage that builds up to a beautiful high note, the listener might find this climactic moment emotionally charged, and feel a thrill from witnessing the successful execution of such a difficult piece.

But science is still trying to catch up with why this thrill results in goosebumps in the first place.

Some scientists have suggested that goosebumps are an evolutionary holdover from our early (hairier) ancestors, who kept themselves warm through an endothermic layer of heat that they retained immediately beneath the hairs of their skin. Experiencing goosebumps after a rapid change in temperature (like being exposed to an unexpectedly cool breeze on a sunny day) temporarily raises and then lowers those hairs, resetting this layer of warmth.

Why do a song and a cool breeze produce the same physiological response?
EverJean/flickr, CC BY

Since we invented clothing, humans have had less of a need for this endothermic layer of heat. But the physiological structure is still in place, and it may have been rewired to produce aesthetic chills as a reaction to emotionally moving stimuli, like great beauty in art or nature.

Research regarding the prevalence of frisson has varied widely, with studies showing anywhere between 55 percent and 86 percent of the population being able to experience the effect.

Monitoring how the skin responds to music

We predicted that if a person were more cognitively immersed in a piece of music, then he or she might be more likely to experience frisson as a result of paying closer attention to the stimuli. And we suspected that whether or not someone would become cognitively immersed in a piece of music in the first place would be a result of his or her personality type.

To test this hypothesis, participants were brought into the lab and wired up to an instrument that measures galvanic skin response, a measure of how the electrical resistance of people’s skin changes when they become physiologically aroused.

Participants were then invited to listen to several pieces of music as lab assistants monitored their responses to the music in real time.

Each of the pieces used in the study contains at least one thrilling moment that is known to cause frisson in listeners (several have been used in previous studies). For example, in the Bach piece, the tension built up by the orchestra during the first 80 seconds is finally released by the entrance of the choir – a particularly charged moment that’s likely to elicit frisson.

As participants listened to these pieces of music, lab assistants asked them to report their experiences of frisson by pressing a small button, which created a temporal log of each listening session.

By comparing these data to the physiological measures and to a personality test that the participants had completed, we were, for the first time, able to draw some unique conclusions about why frisson might be happening more often for some listeners than for others.
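To give a sense of what that comparison looks like in practice, here is a hypothetical sketch (invented numbers, not the study’s data or analysis code) that counts each listener’s frisson reports and correlates those counts with their Openness to Experience scores:

```python
# Hypothetical sketch: relate how often listeners reported frisson
# (button presses) to their scores on a personality questionnaire.
# All numbers are invented for illustration.
from statistics import correlation  # Python 3.10+

# Total frisson button presses per listener, across all pieces.
frisson_counts = [0, 3, 1, 7, 2, 5, 4, 0, 6, 2]

# The same listeners' "Openness to Experience" scores (1-5 scale assumed).
openness_scores = [2.1, 3.8, 2.9, 4.6, 3.0, 4.1, 3.9, 2.4, 4.4, 3.2]

# A positive Pearson correlation would mirror the finding that more
# "open" listeners tend to report frisson more often.
r = correlation(frisson_counts, openness_scores)
print(f"Pearson r = {r:.2f}")
```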

This graph shows the reactions of one listener in the lab. The peaks of each line represent moments when the participant was particularly cognitively or emotionally aroused by the music. In this case, each of these peaks of excitement coincided with the participant reporting experiencing frisson in reaction to the music. This participant scored high on a personality trait called ‘Openness to Experience.’ Author provided

The role of personality

Results from the personality test showed that the listeners who experienced frisson also scored high for a personality trait called Openness to Experience.

Studies have shown that people who possess this trait have unusually active imaginations, appreciate beauty and nature, seek out new experiences, often reflect deeply on their feelings, and love variety in life.

Some aspects of this trait are inherently emotional (loving variety, appreciating beauty), and others are cognitive (imagination, intellectual curiosity).

While previous research had connected Openness to Experience with frisson, most researchers had concluded that listeners were experiencing frisson as a result of a deeply emotional reaction they were having to the music.

In contrast, the results of our study show that it’s the cognitive components of “Openness to Experience” – such as making mental predictions about how the music is going to unfold or engaging in musical imagery (a way of processing music that combines listening with daydreaming) – that are associated with frisson to a greater degree than the emotional components.

These findings, recently published in the journal Psychology of Music, indicate that those who intellectually immerse themselves in music (rather than just letting it flow over them) might experience frisson more often and more intensely than others.

And if you’re one of the lucky people who can feel frisson, the frisson Reddit group has identified Lady Gaga’s rendition of the Star-Spangled Banner at the 2016 Super Bowl and a fan-made trailer for the original Star Wars trilogy as especially chill-inducing.

Mitchell Colver, Ph.D. Student in Education, Utah State University

This article was originally published on The Conversation. Read the original article.


What are septic shock and sepsis? The facts behind these deadly conditions

By Hallie Prescott, University of Michigan and Theodore Iwashyna, University of Michigan.

Most Americans have never heard of it, but according to recent federal data, sepsis is the most expensive cause of hospitalization in the U.S., and is now the most common cause of ICU admission among older Americans.

Sepsis is a complication of infection that leads to organ failure. More than one million patients are hospitalized for sepsis each year. This is more than the number of hospitalizations for heart attack and stroke combined. People with chronic medical conditions, such as neurological disease, cancer, chronic lung disease and kidney disease, are at particular risk for developing sepsis.

And it is deadly. Between one in eight and one in four patients with sepsis will die during hospitalization – as most notably Muhammad Ali did in June 2016. In fact sepsis contributes to one-third to one-half of all in-hospital deaths. Despite these grave consequences, fewer than half of Americans know what the word sepsis means.

What is sepsis and why is it so dangerous?

Sepsis is a severe health problem sparked by your body’s reaction to infection. When you get an infection, your body fights back, releasing chemicals into the bloodstream to kill the harmful bacteria or viruses. When this process works the way it is supposed to, your body takes care of the infection and you get better. With sepsis, the chemicals from your body’s own defenses trigger inflammatory responses, which can impair blood flow to organs, like the brain, heart or kidneys. This in turn can lead to organ failure and tissue damage.

At its most severe, the body’s response to infection can cause dangerously low blood pressure. This is called septic shock.

Sepsis can result from any type of infection. Most commonly, it starts as a pneumonia, urinary tract infection or intra-abdominal infection such as appendicitis. It is sometimes referred to as “blood poisoning,” but this is an outdated term. Blood poisoning is an infection present in the blood, while sepsis refers to the body’s response to any infection, wherever it is.

Once a person is diagnosed with sepsis, she will be treated with antibiotics, IV fluids and support for failing organs, such as dialysis or mechanical ventilation. This usually means a person needs to be hospitalized, often in an ICU. Sometimes the source of the infection must be removed, as with appendicitis or an infected medical device.

It can be difficult to distinguish sepsis from other diseases that can make one very sick, and there is no lab test that can confirm sepsis. Many conditions can mimic sepsis, including severe allergic reactions, bleeding, heart attacks, blood clots and medication overdoses. Sepsis requires particularly prompt treatment, so getting the diagnosis right matters.


The revolving door of sepsis care

As recently as a decade ago, doctors believed that sepsis patients were out of the woods if they could just survive to hospital discharge. But that isn’t the case – 40 percent of sepsis patients go back into the hospital within just three months of heading home, creating a “revolving door” that gets costlier and riskier each time, as patients get weaker and weaker with each hospital stay. Sepsis survivors also have an increased risk of dying for months to years after the acute infection is cured.

If sepsis wasn’t bad enough, it can lead to another health problem: Post-Intensive Care Syndrome (PICS), a chronic health condition that arises from critical illness. Common symptoms include weakness, forgetfulness, anxiety and depression.

Post-Intensive Care Syndrome and frequent hospital readmissions mean that we have dramatically underestimated how much sepsis care costs. On top of the US$5.5 billion we now spend on initial hospitalization for sepsis, we must add untold billions in rehospitalizations, nursing home and professional in-home care, and unpaid care provided by devoted spouses and families at home.

Unfortunately, progress in improving sepsis care has lagged behind improvements in cancer and heart care, as attention has shifted to the treatment of chronic diseases. However, sepsis remains a common cause of death in patients with chronic diseases. One way to help reduce the death toll of these chronic diseases may be to improve our treatment of sepsis.

Rethinking sepsis identification

Raising public awareness increases the likelihood that patients will get to the hospital quickly when they are developing sepsis. This in turn allows prompt treatment, which lowers the risk of long-term problems.

Beyond increasing public awareness, doctors and policymakers are also working to improve the care of sepsis patients in the hospital.

For instance, a new sepsis definition was released by several physician groups in February 2016. The goal of this new definition is to better distinguish people with a healthy response to infection from those who are being harmed by their body’s response to infection.

As part of the sepsis redefinition process, the physician groups also developed a new prediction tool called qSOFA. This instrument identifies patients with infection who are at high risk of death or prolonged intensive care. The tool uses just three factors: thinking much less clearly than usual, quick breathing and low blood pressure. Patients with infection and two or more of these factors are at high risk of sepsis. In contrast to prior methods of screening patients at high risk of sepsis, the new qSOFA tool was developed through examining millions of patient records.
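To make the scoring concrete, here is a minimal sketch of how a qSOFA-style count could be computed, using the published cutoffs (respiratory rate of 22 breaths per minute or more, systolic blood pressure of 100 mmHg or less, and any acute change in mental status). This is a simplified illustration, not clinical software:

```python
# Minimal illustration of the qSOFA bedside score: one point each for
# altered mentation, rapid breathing, and low blood pressure.
# A total of 2 or more flags a patient at high risk.

def qsofa_score(altered_mentation: bool,
                respiratory_rate: float,    # breaths per minute
                systolic_bp: float) -> int:  # mmHg
    score = 0
    if altered_mentation:
        score += 1
    if respiratory_rate >= 22:
        score += 1
    if systolic_bp <= 100:
        score += 1
    return score

# Example: a confused patient breathing 24 times a minute with a systolic
# blood pressure of 92 mmHg scores 3, flagging high risk.
score = qsofa_score(altered_mentation=True, respiratory_rate=24, systolic_bp=92)
print(score, "-> high risk" if score >= 2 else "-> lower risk")
```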

Life after sepsis

Even with great inpatient care, some survivors will still have problems after sepsis, such as memory loss and weakness.

Doctors are wrestling with how to best care for the growing number of sepsis survivors in the short and long term. This is no easy task, but there are several exciting developments in this area.

The Society of Critical Care Medicine’s THRIVE initiative is now building a network of support groups for patients and families after critical illness. THRIVE will forge new ways for survivors to work with each other, much as cancer patients provide one another with advice and support.

As medical care is increasingly complex, many doctors contribute to a patient’s care for just a week or two. Electronic health records let doctors see how the sepsis hospitalization fits into the broader picture – which in turn helps doctors counsel patients and family members on what to expect going forward.

The high number of repeat hospitalizations after sepsis suggests another opportunity for improving care. We could analyze data about patients with sepsis to target the right interventions to each individual patient.


Better care through better policy

In 2012, New York state passed regulations to require every hospital to have a formal plan for identifying sepsis and providing prompt treatment. It is too early to tell if this is a strong enough intervention to make things better. However, it serves as a clarion call for hospitals to end the neglect of sepsis.

The Centers for Medicare & Medicaid Services (CMS) are also working to improve sepsis care. Starting in 2017, CMS will adjust hospital payments by quality of sepsis treatment. Hospitals with good report cards will be paid more, while hospitals with poor marks will be paid less.

To judge the quality of sepsis care, CMS will require hospitals to publicly report compliance with National Quality Forum’s “Sepsis Management Bundle.” This includes a handful of proven practices such as heavy-duty antibiotics and intravenous fluids.

While policy fixes are notorious for producing unintended consequences, the reporting mandate is certainly a step in the right direction. It would be even better if the mandate focused on helping hospitals work collaboratively to improve their detection and treatment of sepsis.

Right now, sepsis care varies greatly from hospital to hospital, and patient to patient. But as data, dollars and awareness converge, we may be at a tipping point that will help patients get the best care, while making the best use of our health care dollars.

This is an updated version of an article originally published on July 1, 2015. You can read the original version here.

Hallie Prescott, Assistant Professor in Internal Medicine, University of Michigan and Theodore Iwashyna, Associate Professor, University of Michigan

This article was originally published on The Conversation. Read the original article.


Here’s the Scientifically #1 Way to Prevent Dementia

Abnormalities in brain tissue begin several decades before the onset of cognitive decline, but little is known about the lifestyle factors that might slow the onset of decline in middle age.

However, a new 20-year longitudinal study from the University of Melbourne has found that regular exercise in middle age is the best lifestyle change a person can make to prevent cognitive decline in later years.

“The message from our study is very simple. Do more physical activity, it doesn’t matter what.”

As the incidence of Alzheimer’s disease diagnosis doubles every five years after 65, most longitudinal studies examining risk factors and cognitive disease are with adults who are over the age of 60 or 70.

The new study, published in the American Journal of Geriatric Psychiatry, tracked 387 Australian women from the Women’s Healthy Aging Project for two decades. The women were aged 45-55 when the study began in 1992.

Researchers were interested to find out how lifestyle and biomedical factors—such as weight, BMI, and blood pressure—affected memory 20 years down the track, says Cassandra Szoeke, associate professor at the University of Melbourne and director of the Women’s Healthy Aging Project.

“There are few research studies which have data on participants from midlife and have assessed cognition in all their participants in later life. This research is really important because we suspect half the cases of dementia worldwide are most likely due to some type of modifiable risk factor.

“Unlike muscle and vessels, which have the capacity to remodel and reverse atrophy and damage, neuronal cells are not nearly so versatile with damage and cell loss is irreversible.”

Over two decades, Szoeke and colleagues took a wide range of measurements from study participants, taking note of lifestyle factors—including exercise and diet, education, marital and employment status, number of children, physical activity, and smoking.

A slow moving freight train

They also measured hormone levels, cholesterol, height, weight, body mass index, and blood pressure at 11 points throughout the study. Hormone replacement therapy was factored in.

The women were given a Verbal Episodic Memory test in which they were asked to learn a list of 10 unrelated words and attempt to recall them 30 minutes later.

When measuring the amount of memory loss over 20 years, frequent physical activity, normal blood pressure, and high good cholesterol were all strongly associated with better recall.

Once dementia occurs, it is a slow moving freight train to permanent memory loss, Szoeke says. “In our study more weekly exercise was associated with better memory. We now know that brain changes associated with dementia take 20 to 30 years to develop.

“The evolution of cognitive decline is slow and steady, so we needed to study people over a long time period. We used a verbal memory test because that’s one of the first things to decline when you develop Alzheimer’s disease.”

Regular exercise of any type, from walking the dog to mountain climbing, emerged as the number one protective factor against memory loss.

In fact, the beneficial influence of physical activity and blood pressure together compensates for the negative influence of age on a person’s mental faculties.

The best effects come from cumulative exercise, that is, how much you do and how often over the course of your life, Szoeke says.

“The message from our study is very simple. Do more physical activity, it doesn’t matter what, just move more and more often. It helps your heart, your body, and prevents obesity and diabetes and now we know it can help your brain. It could even be something as simple as going for a walk, we weren’t restrictive in our study about what type.”

But the key is to start as soon as possible.

“We expected it was the healthy habits later in life that would make a difference but we were surprised to find that the effect of exercise was cumulative. So every one of those 20 years mattered.

“If you don’t start at 40, you could miss one or two decades of improvement to your cognition because every bit helps. That said, even once you’re 50 you can make up for lost time. There is no doubt that intervention is better late than never, but the results of our work indicate that an intervention after 65 will have missed at least 20 years of risk.”

The National Health and Medical Research Council and the Alzheimer’s Association funded the work, which was published in the American Journal of Geriatric Psychiatry.

Source: Republished from Futurity.org as a derivative work under the Attribution 4.0 International license.


Pluto’s “Heart” Slowly Bubbles Like a Lava Lamp

Like a cosmic lava lamp, a large section of Pluto’s icy surface is renewed by a process called convection that replaces older ices with fresher material.

Combining computer models with topographic and compositional data gathered by NASA’s New Horizons spacecraft last summer, New Horizons team members have been able to determine the depth of this layer of solid nitrogen ice within Pluto’s distinctive “heart” feature—a large plain informally known as Sputnik Planum—and how fast that ice is flowing.

Global view of Pluto reconstructed from images made during the July 14, 2015 flyby of the dwarf planet. The pristine “heart,” to the lower right, so unlike the features of other icy planets, begs for explanation. (Photo: NASA/Johns Hopkins University Applied Physics Laboratory/Southwest Research Institute)

Mission scientists used state-of-the-art computer simulations to show that the surface of Sputnik Planum is covered with icy, churning, convective “cells” 10-30 miles across, and less than a million years old. The findings offer additional insight into the unusual and active geology on Pluto and, perhaps, other bodies like it on the planetary outskirts of the solar system.

“For the first time, we can really determine what these strange welts of the icy surface of Pluto really are,” says William B. McKinnon, professor of earth and planetary sciences at Washington University in St. Louis, who led the study. “We found evidence that even on a distant cold planet billions of miles from Earth, there is sufficient energy for vigorous geological activity, as long as you have something as soft and pliable as nitrogen ice.” McKinnon is also deputy lead for geology, geophysics, and imaging for New Horizons.

Close-up of Sputnik Planum shows the slowly overturning cells of nitrogen ice. Boulders of water ice and methane debris (red) that have broken off hills surrounding the heart have collected at the boundaries of the cells. (Photo: NASA/Johns Hopkins University Applied Physics Laboratory/Southwest Research Institute)

McKinnon and colleagues believe the pattern of these cells stems from the slow thermal convection of the nitrogen-dominated ices that fill Sputnik Planum. In a reservoir that’s likely several miles deep in some places, the solid nitrogen is warmed by Pluto’s modest internal heat, becomes buoyant, and rises up in great blobs—think of a lava lamp—before cooling off and sinking again to renew the cycle.

The computer models show that ice need only be a few miles deep for this process to occur, and that the convection cells are very broad. The models also show that these blobs of overturning solid nitrogen can slowly evolve and merge over millions of years. Ridges that mark where cooled nitrogen ice sinks back down can be pinched off and abandoned, resulting in Y- or X-shaped features in junctions where three or four convection cells once met.

“I was very surprised that what I learned about convection during my recent PhD work at Washington University could be applied to Pluto, because nobody thought Pluto was so active (or convecting at all),” says Teresa Wong, a postdoctoral research associate at Washington University and a coauthor on the study.

These convective surface motions average only a few centimeters a year—about as fast as your fingernails grow—which means cells recycle their surfaces every 500,000 years or so. Slow on human clocks, but a rapid clip on geological timescales.
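That recycling time is consistent with a simple back-of-the-envelope division, taking a representative cell width of roughly 20 km (within the 10-30 mile range quoted above) and a surface speed of a few centimeters per year:

$$t \sim \frac{\text{cell width}}{\text{surface speed}} \approx \frac{2\times10^{6}\ \text{cm}}{4\ \text{cm/yr}} = 5\times10^{5}\ \text{years}.$$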

“This activity probably helps support Pluto’s atmosphere by continually refreshing the surface of ‘the heart,’” McKinnon says. “It wouldn’t surprise us to see this process on other dwarf planets in the Kuiper Belt. Hopefully, we’ll get a chance to find out someday with future exploration missions there.”

New Horizons also could potentially take a close-up look at a smaller, more ancient object much farther out in the Kuiper Belt: the disk-shaped region beyond the orbit of Neptune believed to contain comets, asteroids, and other small, icy bodies. New Horizons flew through the Pluto system on July 14, 2015, making the first close observations of Pluto and its family of five moons.

The spacecraft is on course for an ultra-close flyby of another Kuiper Belt object, 2014 MU69, on Jan. 1, 2019, should NASA approve funding for an extended mission.

This study is published in the journal Nature.

Source: Republished from Futurity.org as a derivative work under the Attribution 4.0 International license.

Featured Image Credit: Ged Carroll/flickr
