The global impact of air conditioning: big and getting bigger

By Lucas Davis, University of California, Berkeley.

With a heat wave pushing the heat index well above 100 degrees Fahrenheit (38 Celsius) through much of the U.S., most of us are happy to stay indoors and crank the air conditioning. And if you think it’s hot here, try 124°F in India. Globally, 2016 is poised to be another record-breaking year for average temperatures. This means more air conditioning. Much more.

In a paper published in the Proceedings of the National Academy of Sciences (PNAS), Paul Gertler and I examine the enormous global potential for air conditioning. As incomes rise around the world and global temperatures go up, people are buying air conditioners at alarming rates. In China, for example, sales of air conditioners have nearly doubled over the last five years. Each year now more than 60 million air conditioners are sold in China, more than eight times as many as are sold annually in the United States.

A ‘heat dome’ arrives in the U.S.
NOAA Forecast Daily Maximum Heat Index

This is mostly great news. People are getting richer, and air conditioning brings great relief on hot and humid days. However, air conditioning also uses vast amounts of electricity. A typical room air conditioner, for example, uses 10-20 times as much electricity as a ceiling fan.

Meeting this increased demand for electricity will require billions of dollars of infrastructure investment and result in billions of tons of additional carbon dioxide emissions. A new study by Lawrence Berkeley National Laboratory also points out that more ACs mean more refrigerants, which are themselves potent greenhouse gases.

Evidence from Mexico

To get an idea of the global impact of higher air conditioner use, we looked at Mexico, a country whose highly varied climate ranges from hot, humid tropics to arid deserts to high-altitude plateaus. Average year-round temperatures range from the high 50s Fahrenheit on the high-altitude plateaus to the low 80s in the Yucatan Peninsula.

Graphic shows the range of average temperatures in Fahrenheit in different parts of Mexico.
Davis and Gertler, PNAS, 2015. Copyright 2015 National Academy of Sciences, USA.

Patterns of air conditioning vary widely across Mexico. There is little air conditioning in cool areas of the country; even at high-income levels, penetration never exceeds 10 percent. In hot areas, however, the pattern is very different. Penetration begins low but then increases steadily with income to reach near 80 percent.


Davis and Gertler, PNAS, 2015. Copyright 2015 National Academy of Sciences, USA.

As Mexicans grow richer, many more will buy air conditioners. And as average temperatures increase, the reach of air conditioning will extend even to the relatively cool areas where saturation is currently low. Our model predicts that nearly 100 percent of households in the warm areas will have air conditioning within just a few decades.
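The income and climate pattern described above can be sketched as a saturating curve. The logistic form, income midpoint, and slope below are illustrative assumptions for intuition only, not the parameters Davis and Gertler actually estimated:

```python
import math

def ac_penetration(income, hot_climate):
    """Toy model of air conditioner adoption versus household income.

    Saturation levels mimic the article's description: near 100 percent
    in hot areas, capped around 10 percent in cool areas. The midpoint
    (x0) and steepness (k) are made-up illustrative values.
    """
    saturation = 1.0 if hot_climate else 0.10
    x0, k = 20.0, 0.3  # assumed income midpoint (thousands USD) and slope
    return saturation / (1.0 + math.exp(-k * (income - x0)))

# In cool areas penetration never exceeds about 10 percent, regardless of
# income; in hot areas it climbs toward full saturation as incomes rise.
print(round(ac_penetration(60, hot_climate=True), 2))   # near 1.0
print(round(ac_penetration(60, hot_climate=False), 2))  # capped near 0.1
```

The key qualitative feature, matching the Mexican data, is that climate sets the ceiling while income determines how quickly households approach it.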

Global air conditioning potential

We expect this pattern to hold not only in Mexico but around the world. When you look around, there are a lot of hot places where people are getting richer. In our study, we ranked countries in terms of air conditioning potential. We defined potential as the product of population and cooling degree days (CDDs), a unit used to determine the demand for energy to cool buildings.

Davis and Gertler, PNAS, 2015. Copyright 2015 National Academy of Sciences, USA.

Number one on the list is India. India is massive, with four times the population of the United States. It is also extremely hot: annual CDDs are 3,120, compared with only 882 in the United States. As a result, India’s total air conditioning potential is more than 12 times that of the United States.
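The arithmetic behind the ranking is simple: potential is population times CDDs. Using the CDD figures quoted above and rough 2015 population estimates (my approximations, not figures taken from the paper):

```python
# Air conditioning potential = population x cooling degree days (CDDs).
# CDD values are from the article; populations are rough 2015 estimates.
pop_india, cdd_india = 1.31e9, 3120
pop_us, cdd_us = 3.21e8, 882

ratio = (pop_india * cdd_india) / (pop_us * cdd_us)
print(f"India's AC potential is about {ratio:.1f}x that of the US")
```

With these round numbers the ratio comes out in the mid-teens, consistent with the article’s "more than 12 times" figure.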

Mexico ranks #12 but has fewer than half the CDDs experienced by India, Indonesia, the Philippines and Thailand. These countries currently have lower GDP per capita, but our research predicts rapid air conditioning adoption there over the next couple of decades.

Carbon cliff?

What does all this mean for carbon dioxide emissions? It depends on the pace of technological change, both for cooling equipment and for electricity generation.

Today’s air conditioners use only about half as much electricity as their 1990 counterparts, and continued advances in energy efficiency could reduce the energy consumption impacts substantially. Likewise, continued development of solar, wind and other low-carbon sources of electricity generation could mitigate the increase in carbon dioxide emissions.

ACs cranking in Shanghai, China. question_everything/flickr, CC BY-NC-ND

As an economist, my view is that the best way to get there is a carbon tax. Higher-priced electricity would slow the adoption and use of air conditioning while spurring innovation in energy efficiency. A carbon tax would also give a boost to renewable generating technologies, increasing their deployment. Low- and middle-income countries are anticipating large increases in energy demand over the next several decades, and carbon legislation along the lines of a carbon tax is the most efficient approach to meeting that demand with low-carbon technologies.

Pricing carbon would also lead to broader behavioral changes. Our homes and businesses tend to be very energy-intensive. In part, this reflects the fact that carbon emissions are free. Energy would be more expensive with a price on carbon, so more attention would go to building design. Natural shade, orientation, building materials, insulation and other considerations can have a big impact on energy consumption. We need efficient markets if we are going to stay cool without heating up the planet.

Lucas Davis, Associate Professor, University of California, Berkeley

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

The Universe’s 7 Biggest Unanswerable Questions [Video]

Have you ever wondered about life, the universe, and everything? According to Douglas Adams’ The Hitchhiker’s Guide to the Galaxy series, the answer is 42, but divide that by 6 and we can examine in depth 7 of the most perplexing questions about the universe that science will probably never answer.

Our gratitude and appreciation to the Hybrid Librarian YouTube channel for creating this thought-provoking video.

PS: Yes, we know our intro text to this one was a bit strange… Probably because after watching this video, our minds were a little blown!  😉

Now, Check Out:

X-Ray Study Obtains Clear View of How Lithium-ion Batteries Work

Despite decades of research, scientists haven’t been able to fully understand how batteries work at the smallest of scales.

In a paper published in the journal Science, researchers describe a way to peer as never before into the electrochemical reaction that fuels the most common rechargeable cell in use today: the lithium-ion battery.

By visualizing the fundamental building blocks of batteries—small particles typically measuring less than 1/100th of a human hair in size—the team from Stanford University has illuminated a process that is far more complex than once thought. Both the method they developed to observe the battery in real time and their improved understanding of the electrochemistry could have far-reaching implications for battery design, management, and beyond.

“It gives us fundamental insights into how batteries work,” says Jongwoo Lim, a co-lead author of the paper and postdoctoral researcher at the Department of Energy’s SLAC National Accelerator Laboratory. “Previously, most studies investigated the average behavior of the whole battery. Now, we can see and understand how individual battery particles charge and discharge.”

Make batteries last longer

At the heart of every lithium-ion battery is a simple chemical reaction in which positively charged lithium ions nestle in the lattice-like structure of a crystal electrode as the battery is discharging, receiving negatively charged electrons in the process. In reversing the reaction by removing electrons, the ions are freed and the battery is charged.

These basic processes—known as lithiation (discharge) and delithiation (charge)—are hampered by an electrochemical Achilles heel. Rarely do the ions insert uniformly across the surface of the particles.
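For the lithium iron phosphate electrode used in this study, those two processes are the forward and reverse of a single reaction, written here in simplified form (ignoring partial states of charge):

```latex
% Discharge (lithiation): lithium ions and electrons enter the crystal
\mathrm{FePO_4 + Li^+ + e^- \;\longrightarrow\; LiFePO_4}
% Charge (delithiation): the reaction runs in reverse, freeing the ions
\mathrm{LiFePO_4 \;\longrightarrow\; FePO_4 + Li^+ + e^-}
```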

Greatly magnified nanoscale particles are shown here charging (red to green) and discharging (green to red). The animation shows regions of faster and slower charge. (Credit: SLAC National Accelerator Laboratory)

Instead, certain areas take on more ions, and others fewer. These inconsistencies eventually lead to mechanical stress as areas of the crystal lattice become overburdened with ions and develop tiny fractures, sapping battery performance and shortening battery life.

“Lithiation and delithiation should be homogenous and uniform,” says Yiyang Li, a doctoral candidate and co-lead author of the paper. “In reality, however, they’re very non-uniform. In our better understanding of the process, this paper lays out a path toward suppressing the phenomenon.”

For researchers hoping to improve batteries, counteracting these detrimental forces could lead to batteries that charge faster and more fully, lasting much longer than today’s models.

This study visualizes the charge/discharge reaction in real-time—something scientists refer to as operando—at fine detail and scale. The team utilized brilliant X-rays and cutting-edge microscopes at Lawrence Berkeley National Laboratory’s Advanced Light Source.

“The phenomenon revealed by this technique, I thought would never be visualized in my lifetime. It’s quite game-changing in the battery field,” says Martin Bazant, a professor of chemical engineering and of mathematics at MIT who led the theoretical aspect of the study.

A transparent battery

The researchers fashioned a transparent battery using the same active materials as those found in smartphones and electric vehicles. It was designed and fabricated in collaboration with Hummingbird Scientific, and consists of two very thin, transparent silicon nitride “windows.”

The battery electrode, made of a single layer of lithium iron phosphate nanoparticles, sits on the membrane inside the gap between the two windows. A salty fluid, known as an electrolyte, flows in the gap to deliver the lithium ions to the nanoparticles.

Artist’s rendition shows lithium-ion battery particles under the illumination of a finely focused X-ray beam. (Credit: Courtesy Chueh Lab)

“This was a very, very small battery, holding ten billion times less charge than a smartphone battery,” says William Chueh, an assistant professor of materials science and engineering at Stanford and a faculty scientist at SLAC, who led the team. “But it allows us a clear view of what’s happening at the nanoscale.”

The researchers discovered that the charging process (delithiation) is significantly less uniform than discharge (lithiation). Intriguingly, the researchers also found that faster charging improves uniformity, which could lead to new and better battery designs and power management strategies.

“The improved uniformity lowers the damaging mechanical stress on the electrodes and improves battery cyclability,” Chueh says. “Beyond batteries, this work could have far-reaching impact on many other electrochemical materials.”

He points to catalysts, memory devices, and so-called smart glass, which transitions from translucent to transparent when electrically charged.

“What we’ve learned here is not just how to make a better battery, but offers us a profound new window on the science of electrochemical reactions at the nanoscale,” Bazant says.

The US Department of Energy, Office of Basic Energy Sciences, and the Ford-Stanford Alliance funded the work. Bazant was a visiting professor at Stanford and was supported by the Global Climate and Energy Project. The team’s work was published in the journal Science.

Source: Republished from Futurity.org as a derivative work under the Attribution 4.0 International license.

Featured Image Credit: Solomon203 via Wikimedia Commons

Now, Check Out:

Geomythology: Can geologists relate ancient stories of great floods to real events?

By David R. Montgomery, University of Washington.

Modern people have long wondered about ancient stories of great floods. Do they tell of real events in the distant past, or are they myths rooted in imagination? Most familiar to many of us in the West is the biblical story of Noah’s flood. But cultures around the world have passed down their own tales of devastating natural disasters.

New research recently published in Science by a group of mostly Chinese researchers led by Qinglong Wu reports geological evidence for an event they propose may be behind China’s story of a great flood. This new research delves into the field of geomythology, which relates oral traditions and folklore to natural phenomena like earthquakes, volcanic eruptions and floods.

A view of Jishi Gorge, upstream from the landslide dam researchers say unleashed a great flood in China almost 4,000 years ago. Gray silt deposits are visible dozens of meters above the water.
Wu Qinglong, CC BY-NC

“Great Yu controls the waters”

The story of Emperor Yu, the legendary founder of China’s first dynasty, centers on his ability to drain persistent floodwaters from lowland areas, bringing order to the land. It is a tale of the triumph of human ingenuity and labor over the chaotic forces of the natural world, strikingly different from other flood traditions in that its hero didn’t survive a world-destroying flood but rather pulled off feats of river engineering that paved the way for lowland agriculture. But was Emperor Yu a real historical person, and if so, what triggered the great flood so central to his story?

Diagram of the hypothesized dam outburst process in the Jishi Gorge. Wu Qinglong, CC BY-ND

In their new analysis, Wu and colleagues build on previous studies of landslides in the Jishi Gorge that dammed the Yellow River where it flows down off the Tibetan Plateau. They marshal geological and archaeological evidence to argue that when a landslide dam failed, a flood ripped down China’s Yellow River around 1920 B.C. They dated lake sediments trapped upstream of the landslide dam and flood sediments deposited downstream at elevations of up to 165 feet above river level. They estimated the landslide dam’s failure sent almost a half million cubic meters of water per second surging down the Yellow River and on across early China. They also note that the timing of this flood coincides with a major archaeological transition from the Neolithic to the Bronze Age in the downstream lowlands along the Yellow River.
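To put that reconstructed discharge in perspective, here is a back-of-the-envelope comparison with the modern river. The baseline flow of roughly 2,500 cubic meters per second is my assumed round number for the Yellow River today, not a figure from the study:

```python
# Peak outburst discharge: the article's "almost a half million" cubic
# meters per second. The typical modern Yellow River flow used for
# comparison is an assumed order-of-magnitude baseline.
outburst_m3s = 0.5e6
typical_m3s = 2.5e3

print(f"The outburst flood carried roughly {outburst_m3s / typical_m3s:.0f}x "
      f"the river's ordinary flow")
```

A flood a couple of hundred times the ordinary flow helps explain how sediments could be stranded 165 feet above river level.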

Detail of hanging scroll of Emperor Yu. Ma Lin

The Science study not only reports evidence of a great flood at the right time and place to be Yu’s flood, but also notes how it coincides with a previously identified shift in the course of the Yellow River to a new outlet across the North China plain. The researchers suggest the flood they identified may have breached the levees on the lowland river and triggered this shift.

And this, in turn, would help explain a unique aspect of the story of Yu’s flood. A large river rerouted to a new course could trigger persistent lowland flooding. A longer route to the sea would impose a gentler slope that would promote deposition of sediment, clogging the channel, and splitting flow into multiple channels – all of which would exacerbate flooding of lowland areas. This sounds like a pretty good setup for the story of Yu’s long labor to drain the floodwaters and channel them to the sea.

Flood stories from cultures around the globe

When I researched the potential geological origins of the world’s flood stories for my book “The Rocks Don’t Lie: A Geologist Investigates Noah’s Flood,” I was impressed with how the geography of seemingly curious details in many local myths was consistent with geological processes that cause disastrous floods in different regions. Even along the Nile, where the annual flood is quite predictable, the lack of flood stories is consistent with how droughts were the real danger in ancient Egypt. There, failure to flood would have been catastrophic.

Around the tsunami-prone Pacific, flood stories tell of disastrous waves that rose from the sea. Early Christian missionaries were perplexed as to why flood traditions from South Pacific islands didn’t mention the Bible’s 40 days and nights of rain, but instead told of great waves that struck without warning. A traditional story from the coast of Chile described how two great snakes competed to see which could make the sea rise more, triggering an earthquake and sending a great wave ashore. Native American stories from coastal communities in the Pacific Northwest tell of great battles between Thunderbird and Whale that shook the ground and sent great waves crashing ashore. These stories sound like prescientific descriptions of a tsunami: an earthquake-triggered wave that can catastrophically inundate shorelines without warning.

Glacial dams can give way unexpectedly, releasing massive amounts of water that had been held back by the ice.
Dominic Alves, CC BY

Other flood stories evoke the failure of ice and debris dams on the margins of glaciers that suddenly release the lakes they held back. A Scandinavian flood story, for example, tells of how Odin and his brothers killed the ice giant Ymir, causing a great flood to burst forth and drown people and animals. It doesn’t take a lot of imagination to see how this might describe the failure of a glacial dam.

While doing fieldwork in Tibet, I learned of a local story about a great guru draining a lake in the valley of the Tsangpo River on the edge of the Tibetan Plateau – after our team had discovered terraces made of lake sediments perched high above the valley floor. The 1,200-year-old carbon dates from wood fragments we collected from the lake sediments correspond to the time when the guru arrived in the valley and converted the local populace to Buddhism by defeating, so the story goes, the demon of the lake to reveal the fertile lake bottom that the villagers still farm.

The most deadly and disruptive floods would be talked about for years to come. Here Aztecs perform a ritual to appease the angry gods who had flooded their capital.

Don’t expect definitive proof

Of course, attempts to bring science to bear on relating ancient tales to actual events are fraught with speculation. But it is clear that stories of great floods are some of humanity’s oldest. And the global pattern of tsunamis, glacial outburst floods, and catastrophic flooding of lowlands fits rather well with unusual details within many flood stories.

And even though geological evidence put the idea of a global flood to rest almost two centuries ago, there are options for a rational explanation of the biblical flood. One is a catastrophic inundation that oceanographers Bill Ryan and Walter Pitman propose happened when the post-glacial rise in sea level breached the Bosporus and decanted the Mediterranean into a lowland freshwater valley, forming the Black Sea. Or perhaps it could relate to cataclysmic lowland flooding in estuarine Mesopotamia like that which inundated the Irrawaddy Delta in 2008, killing more than 130,000 people.

Does the new study by Wu and his colleagues prove that the great flood they reconstruct was in fact Emperor Yu’s flood? No, but it does make an intriguing case for the possibility. Yet previous researchers studying landslide dams in the Jishi Gorge have concluded that ancient lakes there drained slowly and dated to more than 1,000 years before the dates reported in this latest article. Was there more than one generation of landslide dams and floods? No doubt geologists will continue to argue about the evidence. That is, after all, what we do.

It’s always been part of human nature to be fascinated by and pay attention to the natural world. Great floods and other natural disasters were long seen as the work of angry deities or supernatural entities or powers. But now that we are learning that some stories once viewed as folklore and myth may be rooted in real events, scientists are paying a little more attention to the storytellers of old.

David R. Montgomery, Professor of Earth and Space Sciences, University of Washington

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

Traveling to Mars with immortal plasma rockets

By Gary Li, University of California, Los Angeles.

Nearly 50 years after landing on the moon, mankind has now set its sights on sending the first humans to Mars. The moon trip took three days; a Mars trip will likely take most of a year. The difference is in more than just time.

We’ll need many more supplies for the trip itself, and when we get to the Red Planet, we’re going to need to set up camp and stay for a while. Carrying all this material will require a revolutionary rocket technology.

Saturn V rocket drawn to scale with Statue of Liberty. Apollo spacecraft and the moon are not to scale.
CC BY-ND

The Saturn V was the largest rocket ever built. It consumed an enormous amount of fuel in explosive chemical reactions that propelled the Apollo spacecraft into orbit. After reaching orbit, Apollo ejected the empty fuel tanks and turned on its own chemical rockets that used even more fuel to get to the moon. It took nearly a million gallons of various fuels just to send a few people on a day trip to our nearest extraterrestrial body.

So how could we send a settlement to Mars, which is more than 100 times farther away than the moon? The Saturn-Apollo combination could deliver only the mass equivalent of one railroad boxcar to the moon; it would take dozens of those rockets just to build a small house on Mars. Sadly, there is no alternative to the chemical launch rocket; only powerful chemical explosions can provide enough force to overcome Earth’s gravity. But once in space, a new fuel-efficient rocket technology can take over: plasma rockets.

Gary Li’s University of California Grad Slam 2016 talk about his research.

The ‘electric vehicles’ of space

Plasma rockets are a modern technology that transforms fuel into a hot soup of electrically charged particles, known as plasma, and ejects it to push a spacecraft. Using plasma rockets instead of the traditional chemical rockets can reduce total in-space fuel usage by 90 percent. That means we could deliver 10 times the amount of cargo using the same fuel mass. NASA mission planners are already looking into using plasma rocket transport vehicles for ferrying cargo between Earth and Mars.
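The fuel saving follows from the Tsiolkovsky rocket equation: the propellant needed grows exponentially with the ratio of mission delta-v to exhaust velocity, so a higher-specific-impulse engine pays off dramatically. The specific impulses and the 6 km/s delta-v below are representative textbook-style assumptions, not figures from any NASA mission plan:

```python
import math

G0 = 9.81  # standard gravity, m/s^2

def propellant_per_kg_delivered(delta_v, isp):
    """Propellant mass needed per kilogram of delivered (final) mass,
    from the Tsiolkovsky rocket equation m0/mf = exp(delta_v / ve)."""
    ve = G0 * isp  # effective exhaust velocity, m/s
    return math.exp(delta_v / ve) - 1.0

delta_v = 6000.0  # assumed in-space delta-v for a Mars cargo run, m/s
chemical = propellant_per_kg_delivered(delta_v, isp=450)   # high-end chemical stage
plasma = propellant_per_kg_delivered(delta_v, isp=2000)    # typical plasma thruster

saving = 1.0 - plasma / chemical
print(f"chemical: {chemical:.2f} kg, plasma: {plasma:.2f} kg per kg delivered")
print(f"propellant reduction: {saving:.0%}")
```

Under these assumptions the plasma rocket needs a bit under a tenth as much propellant per kilogram delivered, in line with the roughly 90 percent saving cited above.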

6 kW Hall Thruster. NASA JPL

The main downside to plasma rockets is their low thrust. Thrust is a measure of how strong a “push” the rocket can supply to the spacecraft. The most powerful plasma rocket flown in space, called a Hall thruster, would produce only enough thrust to lift a piece of paper against Earth’s gravity. Believe it or not, a Hall thruster would take many years of continuous pushing to reach Mars.
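A rough thrust estimate illustrates why. For an electric thruster, jet power is half the thrust times the exhaust velocity, which gives T = 2*eta*P/ve. The 6 kW power matches the Hall thruster pictured above; the efficiency and specific impulse are assumed typical values, not specifications for that device:

```python
G0 = 9.81  # standard gravity, m/s^2

power_w = 6000.0   # input power, matching the pictured 6 kW thruster
efficiency = 0.5   # assumed overall thrust efficiency
isp_s = 1800.0     # assumed specific impulse for a Hall thruster, seconds

ve = G0 * isp_s                          # effective exhaust velocity, m/s
thrust_n = 2 * efficiency * power_w / ve # thrust, newtons

print(f"thrust ~ {thrust_n * 1000:.0f} mN")
```

That works out to a few tenths of a newton, the weight-force of a few tens of grams of material, which is why such a thruster must push continuously for months or years to build up speed.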

But don’t worry, weak thrust is not a deal breaker. Thanks to their revolutionary fuel efficiency, plasma rockets have enabled NASA to perform missions that would otherwise not be possible with chemical rockets. Just recently, the Dawn mission demonstrated the potential of plasma rockets by becoming the first spacecraft to orbit two different extraterrestrial bodies.

While the future of plasma rockets is bright, the technology still has unsolved problems. For example, what’s going to happen to a thruster that runs for the many years it takes to perform round-trip cargo missions to Mars? Most likely, it’ll break.

That’s where my research comes in. I need to find out how to make plasma rockets immortal.

Understanding plasma rockets

Model plasma rocket diagram, most similar to an ion thruster design. Author provided, CC BY-ND

To do this, we need to understand how a plasma rocket works. The rocket creates a plasma by injecting electrical energy into a gaseous fuel, stripping negatively charged electrons from the positively charged ions. The ions are then shot out the back of the rocket, pushing the spacecraft forward.

Unfortunately, all that energy in plasma does more than propel spaceships – it wants to destroy any material it comes into contact with. Electric forces from the negatively charged walls cause the ions to slam into the wall at very high speeds. These collisions break atoms off the wall, slowly weakening it over time. Eventually, enough ions hit the wall that the entire wall breaks, the thruster stops functioning and your spacecraft is now stuck in space.

It’s not enough to use tougher materials to withstand the bombardment: There will always be some amount of damage regardless of how strong the material is. We need a clever way of manipulating the plasma, and the wall material, to avoid damage.

A self-healing wall

Wouldn’t it be great if the chamber wall could repair itself? It turns out there are two physical effects that can allow this to happen.

Illustration of three possible scenarios for a wall atom that comes off: 1) it’s lost forever, 2) it intercepts a wall and deposits or 3) it becomes ionized and is accelerated by electric forces to deposit on the wall.
CC BY-ND

The first is known as ballistic deposition and is present in materials with microscopic surface variations, like spikes or columns. When an ion hits the wall and breaks a piece off one of these microfeatures, that piece can fly in any direction. Some of these pieces will hit nearby protruding parts of the surface and stick, leaving the wall effectively undamaged. However, some atoms will always fly away from the wall and be lost forever.

Microstructures on a material sample viewed under a Scanning Electron Microscope. Chris Matthes (UCLA), CC BY-ND

The second phenomenon is less intuitive and depends on the plasma conditions. Imagine the same scenario where the wall particle breaks off and flies into the plasma. However, instead of being lost forever, the particle suddenly turns around and goes straight back to the wall.

This is similar to how a baseball tossed straight up into the air turns around and drops back to your hand. With the baseball, gravity stops the ball from going up any higher and pulls it back down to the ground. In a thruster, the same role is played by the electric force between the negatively charged wall and the wall particle itself. The particle comes off neutrally charged, but can lose an electron in the plasma, becoming positively charged. The result is that the particle is pulled back toward the wall, in a phenomenon known as plasma redeposition. This process can be controlled by changing the density and temperature of the plasma.
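Whether a sputtered wall atom gets ionized and pulled back, or escapes forever, depends on how far it travels before ionization compared with the size of the plasma. The sputtered-atom energy, plasma density, and ionization rate coefficient below are rough order-of-magnitude assumptions, just to show the competition is plausible:

```python
import math

EV = 1.602e-19   # joules per electronvolt
AMU = 1.661e-27  # kilograms per atomic mass unit

# Assumed order-of-magnitude values for a Hall-thruster-like plasma:
atom_energy_ev = 2.0  # typical kinetic energy of a sputtered wall atom
atom_mass_amu = 11.0  # boron, a common wall-material constituent
n_e = 1e18            # electron density, m^-3
k_ion = 1e-13         # electron-impact ionization rate coefficient, m^3/s

# Speed of the sputtered atom, then its mean free path before ionization
v_atom = math.sqrt(2 * atom_energy_ev * EV / (atom_mass_amu * AMU))
mfp = v_atom / (n_e * k_ion)  # meters traveled before likely ionization

print(f"sputtered atom speed ~ {v_atom:.0f} m/s, "
      f"ionization mean free path ~ {mfp * 100:.1f} cm")
```

A mean free path of a few centimeters is comparable to the size of a thruster channel, so a meaningful fraction of sputtered atoms can be ionized, and redeposited, before they escape. Raising the plasma density or temperature shortens this path and strengthens redeposition, which is what "controlled by changing the density and temperature" means in practice.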

Testing different materials

Sample materials being assessed in the UCLA Plasma-interactions test facility.
CC BY-ND

Here at UCLA, I create a plasma and smash it into microfeatured materials to measure the effects of ballistic deposition and plasma redeposition. Remember, ballistic deposition depends on the wall’s surface structures, while plasma redeposition depends on the plasma. For my initial study, I adjusted the plasma conditions so there was no plasma redeposition and only ballistic deposition occurred.

Then I turned my attention from the plasma to the wall. The first microfeatured sample I tested had its damage reduced by 20 percent. By improving the design of the microfeatures, the damage can be reduced even further, potentially as much as 50 percent. Such a material on a thruster could make the difference between getting to Mars and getting stuck halfway. The next step is to include the effects of plasma redeposition and to determine whether a truly immortal wall can be achieved.

As plasma thrusters become ever more powerful, they become more able to damage their own walls, too. That increases the importance of a self-healing wall. My ultimate goal is to design a thruster using advanced materials that can last 10 times as long as any Mars mission requirement, making it effectively immortal. An immortal wall would solve this problem of thruster failure, and allow us to ferry the cargo we need to begin building mankind’s first outpost on Mars.

Gary Li, Ph.D. Candidate in Mechanical and Aerospace Engineering, University of California, Los Angeles

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

The future of genetic enhancement is not in the West

By G. Owen Schaefer, National University of Singapore.

Would you want to alter your future children’s genes to make them smarter, stronger or better-looking? As the state of the science brings prospects like these closer to reality, an international debate has been raging over the ethics of enhancing human capacities with biotechnologies such as so-called smart pills, brain implants and gene editing. This discussion has only intensified in the past year with the advent of the CRISPR-cas9 gene editing tool, which raises the specter of tinkering with our DNA to improve traits like intelligence, athleticism and even moral reasoning.

So are we on the brink of a brave new world of genetically enhanced humanity? Perhaps. And there’s an interesting wrinkle: It’s reasonable to believe that any seismic shift toward genetic enhancement will not be centered in Western countries like the U.S. or the U.K., where many modern technologies are pioneered. Instead, genetic enhancement is more likely to emerge out of China.

Attitudes toward enhancement

Numerous surveys of Western populations have found significant opposition to many forms of human enhancement. For example, a recent Pew study of 4,726 Americans found that most would not want to use a brain chip to improve their memory, and a plurality views such interventions as morally unacceptable.

Public expresses more worry than enthusiasm about each of these potential human enhancements.

A broader review of public opinion studies found significant opposition in countries like Germany, the U.S. and the U.K. to selecting the best embryos for implantation based on nonmedical traits like appearance or intelligence. There is even less support for editing genes directly to improve traits in so-called designer babies.

Opposition to enhancement, especially genetic enhancement, has several sources. The above-mentioned Pew poll found that safety is a big concern – in line with experts who say that tinkering with the human genome carries significant risks. These risks may be accepted when treating medical conditions, but less so for enhancing nonmedical traits like intelligence and appearance. At the same time, ethical objections often arise. Scientists can be seen as “playing God” and tampering with nature. There are also worries about inequality, creating a new generation of enhanced individuals who are heavily advantaged over others. “Brave New World” is a dystopia, after all.

However, those studies have focused on Western attitudes. There has been much less polling in non-Western countries. There is some evidence that opposition to enhancement in Japan is similar to that in the West. Other countries, such as China and India, are more positive toward enhancement. In China, this may be linked to more generally approving attitudes toward old-fashioned eugenics programs, such as selective abortion of fetuses with severe genetic disorders, though more research is needed to fully explain the difference. This has led Darryl Macer of the Eubios Ethics Institute to posit that Asia will be at the forefront of the expansion of human enhancement.

Restrictions on gene editing

In the meantime, the biggest barrier to genetic enhancement will be broader statutes banning gene editing. A recent study found that bans on germline genetic modification – that is, modification passed on to descendants – are in effect throughout Europe, Canada and Australia. China, India and other non-Western countries, however, have laxer regulatory regimes – restrictions, where they exist, often take the form of guidelines rather than statutes.

The U.S. may appear to be an exception to this trend. It lacks legal restrictions on gene editing; however, federal funding of germline gene editing research is prohibited. Because most geneticists rely on government grants for their research, this acts as a significant restriction on germline editing studies.

By contrast, it was Chinese government funding that led China to be the first to edit the genes of human embryos using the CRISPR-Cas9 tool in 2015. China has also been leading the way in using CRISPR-Cas9 for non-germline genetic modifications of human tissue cells for use in treatment of cancer patients.

There are, then, two primary factors contributing to the emergence of genetic enhancement technologies – research to develop the technologies and popular opinion to support their deployment. In both areas, Western countries are well behind China.

Different countries have different expectations about working with human genes.
Michael Dalder/Reuters

What makes China a probable petri dish

A further, more political factor may be at play. Western democracies are, by design, sensitive to popular opinion. Elected politicians will be less likely to fund controversial projects, and more likely to restrict them. By contrast, countries like China that lack direct democratic systems are less sensitive to popular opinion, and officials can play an outsize role in shaping it to align with government priorities – overriding residual opposition to human enhancement, were any present. International norms are arguably emerging against genetic enhancement, but in other arenas China has proven willing to reject international norms in order to promote its own interests.

Indeed, if we set ethical and safety objections aside, genetic enhancement has the potential to bring about significant national advantages. Even marginal increases in intelligence via gene editing could have significant effects on a nation’s economic growth. Certain genes could give some athletes an edge in intense international competitions. Other genes may have an effect on violent tendencies, suggesting genetic engineering could reduce crime rates.

Many of these potential benefits of enhancement are speculative, but as research advances they may move into the realm of reality. If further studies bear out the reliability of gene editing in improving such traits, China is well-poised to become a leader in the area of human enhancement.

Does this matter?

Aside from a preoccupation with being the best in everything, is there reason for Westerners to be concerned by the likelihood that genetic enhancement is apt to emerge out of China?

If the critics are correct that human enhancement is unethical, dangerous or both, then yes, emergence in China would be worrying. From this critical perspective, the Chinese people would be subject to an unethical and dangerous intervention – a cause for international concern. Given China’s human rights record in other areas, it is questionable whether international pressure would have much effect. In turn, enhancement of its population may make China more competitive on the world stage. An unenviable dilemma for opponents of enhancement could emerge – fail to enhance and fall behind, or enhance and suffer the moral and physical consequences.

Conversely, if one believes that human enhancement is actually desirable, this trend should be welcomed. As Western governments hem and haw, delaying development of potentially great advances for humanity, China leads the way forward. China's increased competitiveness, in turn, would pressure Western countries to relax restrictions and thereby allow humanity as a whole to progress – becoming healthier, more productive and more capable overall.

Either way, this trend is an important development. We will see if it is sustained – public opinion in the U.S. and other countries could shift, or funding could dry up in China. But for now, it appears that China holds the future of genetic enhancement in its hands.

G. Owen Schaefer, Research Fellow in Biomedical Ethics, National University of Singapore

This article was originally published on The Conversation. Read the original article.


The power of rewards and why we seek them out

By Rachel Grieve, University of Tasmania and Emily Lowe-Calverley, University of Tasmania.

Any dog owner will tell you that we can use a food reward as a motivation to change a dog’s behaviour. But humans are just as susceptible to rewards.

When we get a reward, special pathways in our brain become activated. Not only does this feel good, but the activation also leads us to seek out more rewarding stimuli.

Humans show these neurological responses to many types of rewards, including food, social contact, music and even self-affirmation.

But there is more to reward than physiology: differences in how often and when we get rewarded can also have a big impact on our experience of reward. In turn, this influences the likelihood that we will engage in that activity again. Psychologists describe these as schedules of reinforcement.

It’s not (just) what you do, it’s when you do it

The simplest type of reinforcement is continuous reinforcement, where a behaviour is rewarded every time it occurs. Continuous reinforcement is a particularly good way to train a new behaviour.

But intermittent reinforcement is the strongest way to maintain a behaviour. In intermittent reinforcement, the reward is delivered after some of the behaviours, but not all of them.

There are four main intermittent schedules of reinforcement, and some of these are more powerful than others.

Fixed Ratio

In the Fixed Ratio schedule of reinforcement, a specific number of actions must occur before the behaviour is rewarded. For example, your local coffee shop tells you that after you stamp your card nine times, your tenth drink is free.

Fixed Interval

Similarly, in the Fixed Interval schedule, a specific time must pass before the behaviour is rewarded. It is easy to think about this schedule in terms of work paid on an hourly basis – you are rewarded with money for every 60 minutes of work you complete.

Variable Ratio

For the Variable Ratio schedule, rewards are given after a varying number of behaviours – sometimes after four, sometimes five and other times 20 – making the reward more unpredictable.

This principle can be seen in poker (slot) machine gambling. The machine has an average win ratio, but that doesn’t guarantee a consistent rate of reward, so players continue in the hope that the next press of the button is the one that pays off.

Variable Interval

The Variable Interval schedule works on the same unpredictable principle, but in terms of time. So rewards are given after varying intervals of time – sometimes five minutes, sometimes 30 and sometimes after a longer period. So at work, when your boss drops in at random points of the day, your hard work is reinforced.

It is easy to see that rewards given on a variable ratio would reinforce behaviours far more effectively – if you don’t know when you will be rewarded, you continue to act, just in case!
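The four schedules above are really just decision rules about when a response earns a reward. As an illustrative sketch only (the ratios and intervals here are invented for the example, not taken from any study), they could be written as:

```python
import random

def fixed_ratio(response_count, ratio=10):
    """Fixed Ratio: reward every `ratio`-th response,
    like a free 10th coffee on a loyalty card."""
    return response_count % ratio == 0

def fixed_interval(elapsed, interval=60):
    """Fixed Interval: reward the first response once `interval`
    time units have passed, like hourly pay."""
    return elapsed >= interval

def variable_ratio(rng, mean_ratio=10):
    """Variable Ratio: reward each response with probability
    1/mean_ratio, so payouts arrive after an unpredictable number
    of responses -- like a slot machine that pays out, on average,
    once every `mean_ratio` plays but never on a fixed schedule."""
    return rng.random() < 1.0 / mean_ratio

# Simulate 1,000 plays on the "slot machine":
rng = random.Random(0)
wins = sum(variable_ratio(rng) for _ in range(1000))
print(f"{wins} unpredictably timed wins in 1000 plays")
```

The simulation averages about 100 wins per 1,000 plays, but the player can never tell which play pays off – which is exactly the unpredictability that makes the variable schedules so compelling.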

Psychologists describe this persistent behaviour as a resistance to extinction. Even after the reward is completely taken away, the behaviour will remain for a while because you aren’t sure if this is just a longer interval before the reward than usual.

We all respond to rewards, but only if they are rewarding enough.
Keith Williamson/Flickr, CC BY-NC-ND

Do rewards have a ‘dark side’?

You can certainly use these principles to shape someone’s behaviour. Loyalty cards for supermarkets, airlines, and restaurants all increase the likelihood of our continued use of those services.

Marketers can also use reward to their advantage. If you can make someone feel anxious because they don’t own a particular product – maybe the latest or greatest version of something they already have – when the person buys the new product, the reward comes from the reduction in anxiety.

Want more help around the house? Start off with praising your partner/kids every time they do the desired behaviour, and once they are doing it regularly, slip into a comfortable variable ratio mode.

And of course, sometimes rewards can result in addiction.

Addiction used to be seen in the context of substance use, and there is indeed substantial evidence for the role of reward pathways in alcohol and other drug addiction.

Obviously, the nature of addiction is complex. But more recently, there is evidence of addiction that can be based on behaviour, rather than ingesting a substance.

For example, people show addiction-like behaviours related to their mobile phone use, shopping and even love relationships.

Pokémon GO rewards

Recently the world has watched the introduction of the mobile game Pokémon GO. Cleverly, this game employs multiple schedules of reinforcement which ensure users continue to feel the need to “catch ‘em all”.

On the fixed ratio schedule, users know that if they catch enough Pokémon they will level up, or possess enough candy to evolve. The hatching of eggs also follows a fixed interval – though in this case, it’s distance walked rather than time.

Discovering a rare Pokémon can keep players hooked.
But on the variable ratio and interval schedules, users never know how far they need to wander before they will find a new Pokémon, or how long it will be before something other than a wild Pidgey appears!

So they continue to check the app regularly throughout the day. No wonder Pokémon GO is so addictive.

But it’s not just Pokémon masters who fall prey to online reward schedules.

Checking our emails at various points of the day is reinforced when there is something in our inbox – a variable interval schedule. This makes us more likely to check for emails again.

Our social media posts are reinforced with “likes” on a variable ratio schedule. You may be rewarded with likes on most posts (approaching continuous reinforcement), but occasionally (and importantly, unpredictably) a post will be rewarded with much more attention than the others, which encourages more posting in the future.

Now, if you will excuse us, we just need to click “refresh” on our inbox. Again.

Rachel Grieve, Senior Lecturer in Psychology, University of Tasmania and Emily Lowe-Calverley, PhD Candidate in Cyberpsychology, University of Tasmania

This article was originally published on The Conversation. Read the original article.


How do you know you’re not living in a computer simulation?

By Laura D’Olimpio, University of Notre Dame Australia.

Consider this: right now, you are not where you think you are. In fact, you happen to be the subject of a science experiment being conducted by an evil genius.

Your brain has been expertly removed from your body and is being kept alive in a vat of nutrients that sits on a laboratory bench.

The nerve endings of your brain are connected to a supercomputer that feeds you all the sensations of everyday life. This is why you think you’re living a completely normal life.

Do you still exist? Are you still even “you”? And is the world as you know it a figment of your imagination or an illusion constructed by this evil scientist?

Sounds like a nightmare scenario. But can you say with absolute certainty that it’s not true?

Could you prove to someone that you aren’t actually a brain in a vat?

Deceiving demons

The philosopher Hilary Putnam proposed this famous version of the brain-in-a-vat thought experiment in his 1981 book, Reason, Truth and History, but it is essentially an updated version of the French philosopher René Descartes’ notion of the Evil Genius from his 1641 Meditations on First Philosophy.

While such thought experiments might seem glib – and perhaps a little unsettling – they serve a useful purpose. They are used by philosophers to investigate what beliefs we can hold to be true and, as a result, what kind of knowledge we can have about ourselves and the world around us.

Descartes thought the best way to do this was to start by doubting everything, and building our knowledge from there. Using this sceptical approach, he claimed that only a core of absolute certainty will serve as a reliable foundation for knowledge. He said:

If you would be a real seeker after truth, it is necessary that at least once in your life you doubt, as far as possible, all things.

Descartes believed everyone could engage in this kind of philosophical thinking. In one of his works, he describes a scene where he is sitting in front of a log fire in his wooden cabin, smoking his pipe.

He asks if he can trust that the pipe is in his hands or that his slippers are on his feet. He notes that his senses have deceived him in the past, and that anything which has deceived him even once cannot be fully relied upon. Therefore he cannot be sure that his senses are reliable.

Perhaps you’re really just a brain in a vat?
Shutterstock

Down the rabbit hole

It is from Descartes that we get classical sceptical queries favoured by philosophers such as: how can we be sure that we are awake right now and not asleep, dreaming?

To take this challenge to our assumed knowledge further, Descartes imagines there exists an omnipotent, malicious demon that deceives us, leading us to believe we are living our lives when, in fact, reality could be very different to how it appears to us.

I shall suppose that some malicious demon of the utmost power and cunning has employed all his energies in order to deceive me.

The brain-in-a-vat thought experiment and the challenge of scepticism has also been employed in popular culture. Notable contemporary examples include the 1999 film The Matrix and Christopher Nolan’s 2010 film Inception.

By watching a screened version of a thought experiment, the viewer may imaginatively enter into a fictional world and safely explore philosophical ideas.

For example, while watching The Matrix, we identify with the protagonist, Neo (Keanu Reeves), who discovers the “ordinary” world is a computer-simulated reality and his atrophied body is actually suspended in a vat of life-sustaining liquid.

Even if we cannot be absolutely certain that the external world is how it appears to our senses, Descartes commences his second meditation with a small glimmer of hope.

At least we can be sure that we ourselves exist, because every time we doubt that, there must exist an “I” that is doing the doubting. This consolation results in the famous expression cogito ergo sum, or “I think therefore I am”.

So, yes, you may well be a brain in a vat and your experience of the world may be a computer simulation programmed by an evil genius. But, rest assured, at least you’re thinking!

Laura D’Olimpio, Senior Lecturer in Philosophy, University of Notre Dame Australia

This article was originally published on The Conversation. Read the original article.


How old is too old for a safe pregnancy?

By Hannah Brown, University of Adelaide.

This week, an Australian woman delivered a baby at the age of 62 after having in vitro fertilisation (IVF) abroad.

Few women can conceive a baby naturally later in life – and these are rarely first pregnancies. Women who do tend to go through menopause later, and have lower risks of heart disease, osteoporosis and dementia.

But does that mean that it’s safe to start a family later in life? Are there other risks and complications associated with pregnancy and childbirth in your 50s and 60s – or even your 40s?

Changing demographics

A woman’s reproductive capacity has a finite lifespan. Her eggs initially grow when she is inside her mother’s womb, and are stored inside her ovaries until she begins to menstruate. Each month, more than 400 eggs are lost by attrition until the four million she originally had are gone, and menopause begins.

Social and financial pressures are driving many Australian women who want to have children to wait until later in life. The number of women having babies in their 30s or later has almost doubled in the past 25 years in Australia, from 23% in 1991 to 43% in 2011.

Around one in 1,000 births occur to women 45 years or older. This rate is likely to increase as new technologies emerge, including egg donation.

What are the risks?

Women aged over 30 are more than twice as likely to suffer from life-threatening high blood pressure (pre-eclampsia) during pregnancy than under 30s (5% compared with 2%) and are twice as likely to have gestational diabetes (5-10% compared with 1-2.5%).

More than half of women aged over 40 will require their baby to be delivered by caesarean section.

Increasing maternal age raises the chance of dying during pregnancy or childbirth. Mothers in their 40s and 50s are also between three and six times more likely than their younger counterparts to die in the six weeks following the birth of the baby, from complications associated with the pregnancy such as bleeding and clots.

Mothers aged over 40 are more than twice as likely to suffer a stillbirth. And for a woman aged 40, the risk of miscarriage is greater than the chance of a live birth.

Finally, babies born to older mothers are 1.5-2 times more likely to be born too soon (before 36 weeks) and to be born small (low birthweight). Low birthweight and prematurity carry both immediate risks for babies, such as problems with lung development, and longer-term risks, such as obesity and diabetes in adulthood.

Postmenopausal pregnancy

Through advances in IVF, it is possible to use a donor egg or embryo from a younger, fertile woman to help a woman who has undergone menopause become pregnant.

But this comes with greater risks. Pregnancy puts extra stress and strain on the heart and blood vessels and emerging evidence suggests older mothers are more likely to suffer a stroke later in life.

When is pregnancy safest?

While there are no specific age cut-offs for IVF treatment in Australia, many clinics stop treatment at 50. At 30, the chance of conceiving each month (without IVF) is about 20%. At 40 it’s around 5% and this declines throughout the decade.
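Those monthly figures compound over time. As a rough, back-of-envelope illustration (treating each month as an independent trial, which is a simplification of real fertility dynamics):

```python
def chance_within_year(monthly_chance, months=12):
    """Probability of conceiving at least once over `months` cycles,
    assuming each month is an independent trial -- an illustrative
    simplification, not a clinical model."""
    return 1 - (1 - monthly_chance) ** months

# At 30: a ~20% monthly chance compounds to roughly 93% over a year.
print(f"Age 30: {chance_within_year(0.20):.0%}")
# At 40: a ~5% monthly chance compounds to roughly 46% over a year.
print(f"Age 40: {chance_within_year(0.05):.0%}")
```

The gap between the two yearly figures is much starker than the monthly numbers alone suggest, which is part of why conception becomes markedly harder through the 40s.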

A wealth of scientific knowledge says that risks to the baby and mother during pregnancy are lowest in your 20s. Women in their 20s are less likely to have health risks and conditions such as obesity and diabetes which negatively influence pregnancy.

As a woman ages, her egg quality also declines. Poor egg quality is directly associated with genetic errors that result in both miscarriage and birth defects.

So while it’s possible to conceive later in life, it’s a risky decision.

Hannah Brown, Post-doctoral Fellow; Reproductive Epigenetics, University of Adelaide

This article was originally published on The Conversation. Read the original article.
