Bikini islanders still deal with fallout of US nuclear tests, 70 years later [Video]

By Timothy J. Jorgensen, Georgetown University.

In 1946, French fashion designer Jacques Heim released a woman’s swimsuit he called the “Atome” (French for “atom”) – a name selected to suggest its design would be as shocking to people that summer as the atomic bombings of Japan had been the summer before.

The scandalous "Bikini" was small enough to fit in a matchbox.

Not to be outdone, competitor Louis Réard raised the stakes, quickly releasing an even more skimpy swimsuit. The Vatican found Réard’s swimsuit more than shocking, declaring it to actually be “sinful.” So what did Réard consider an appropriate name for his creation? He called it the “Bikini” – a name meant to shock people even more than “Atome.” But why was this name so shocking?

In the summer of 1946, “Bikini” was all over the news. It’s the name of a small atoll – a circular group of coral islands – within the remote mid-Pacific island chain called the Marshall Islands. The United States had assumed control of the former Japanese territory after the end of World War II, just a few months earlier.

The United States soon came up with some very big plans for the little atoll of Bikini. After forcing the 167 residents to relocate to another atoll, they started to prepare Bikini as an atomic bomb test site. Two test bombings scheduled for that summer were intended to be very visible demonstrations of the United States’ newly acquired nuclear might. Media coverage of the happenings at Bikini was extensive, and public interest ran very high. Who could have foreseen that even now – 70 years later – the Marshall Islanders would still be suffering the aftershocks from the nuclear bomb testing on Bikini Atoll?

The big plan for tiny Bikini

According to the testing schedule, the U.S. plan was to demolish a 95-vessel fleet of obsolete warships on June 30, 1946 with an airdropped atomic bomb. Reporters, U.S. politicians, and representatives from the major governments of the world would witness events from distant observation ships. On July 24, a second bomb, this time detonated underwater, would destroy any surviving naval vessels.

These two sequential tests were intended to allow comparison of air-detonated versus underwater-detonated atomic bombs in terms of destructive power to warships. The very future of naval warfare in the advent of the atomic bomb was in the balance. Many assumed the tests would clearly show that naval ships were now obsolete, and that air forces represented the future of global warfare.

Slow motion film of atomic bomb airdropped on Bikini Atoll.

But when June 30 arrived, the airdrop bombing didn't go as planned. The bomber crew missed the target by more than a third of a mile, so the bomb caused much less ship damage than anticipated.

Color film of underwater atomic bomb near Marshall Islands.

The subsequent underwater bomb detonation didn’t go so well either. It unexpectedly produced a spray of highly radioactive water that extensively contaminated everything it landed on. Naval inspectors couldn’t even return to the area to assess ship damage because of the threat of deadly radiation doses from the bomb’s “fallout” – the radioactivity produced by the explosion. All future bomb testing was canceled until the military could evaluate what had gone wrong and come up with another testing strategy.

And even more bombings to follow

The United States did not, however, abandon little Bikini. It had even bigger plans with bigger bombs in mind. Ultimately, there would be 23 Bikini test bombings, spread over 12 years, comparing different bomb sizes, before the United States finally moved nuclear bomb testing to other locations, leaving Bikini to recover as best it could.

1956 Operation Redwing bombing at Enewetak Atoll.
National Nuclear Security Administration / Nevada Field Office

The most dramatic change in the testing at Bikini occurred in 1954, when the bomb designs switched from fission to fusion mechanisms. Fission bombs – the type dropped on Japan – explode when heavy elements like uranium split apart. Fusion bombs, in contrast, explode when light atoms like deuterium join together. Fusion bombs, often called “hydrogen” or “thermonuclear” bombs, can produce much larger explosions.

The United States military learned about the power of fusion energy the hard way, when they first tested a fusion bomb on Bikini. Based on the expected size of the explosion, a swath of the Pacific Ocean the size of Wisconsin was blockaded to protect ships from entering the fallout zone.

On March 1, 1954, the bomb detonated just as planned – but still there were a couple of problems. The bomb turned out to be 1,100 times larger than the Hiroshima bomb, rather than the expected 450 times. And the prevailing westerly winds turned out to be stronger than meteorologists had predicted. The result? Widespread fallout contamination to islands hundreds of miles downwind from the test site and, consequently, high radiation exposures to the Marshall Islanders who lived on them.

Dealing with the fallout, for decades

Three days after the detonation of the bomb, radioactive dust had settled on the ground of downwind islands to depths up to half an inch. Natives from badly contaminated islands were evacuated to Kwajalein – an upwind, uncontaminated atoll that was home to a large U.S. military base – where their health status was assessed.

Residents of the Rongelap Atoll – Bikini’s downwind neighbor – received particularly high radiation doses. They had burns on their skin and depressed blood counts. Islanders from other atolls did not receive doses high enough to induce such symptoms. However, as I explain in my book “Strange Glow: The Story of Radiation,” even those who didn’t have any radiation sickness at the time received doses high enough to put them at increased cancer risk, particularly for thyroid cancers and leukemia.

A Marshall Islands resident has his body levels of radioactivity checked in a U.S. government lab. Argonne National Laboratory, CC BY

What happened to the Marshall Islanders next is a sad story of their constant relocation from island to island, trying to avoid the radioactivity that lingered for decades. Over the years following the testing, the Marshall Islanders living on the fallout-contaminated islands ended up breathing, absorbing, drinking and eating considerable amounts of radioactivity.

In the 1960s, cancers started to appear among the islanders. For almost 50 years, the United States government studied their health and provided medical care. But the government study ended in 1998, and the islanders were then expected to find their own medical care and submit their radiation-related health bills to a Nuclear Claims Tribunal, in order to collect compensation.

Marshall Islanders still waiting for justice

By 2009, the Nuclear Claims Tribunal – funded by Congress and overseen by Marshall Islands judges to pay compensation for radiation-related health and property claims – had exhausted its allocated funds, with US$45.8 million in personal injury claims still owed to the victims. At present, about half of the valid claimants have died waiting for their compensation. Congress shows no inclination to replenish the empty fund, so it's unlikely the remaining survivors will ever see their money.

Ten years after bombing ended, the U.S. government assured Marshall Islanders a safe return.
Department of Energy

But if the Marshall Islanders cannot get financial compensation, perhaps they can still win a moral victory. They hope to force the United States and eight other nuclear weapons states into keeping another broken promise, this one made via the Treaty on the Non-Proliferation of Nuclear Weapons.

This international agreement between 191 sovereign nations entered into force in 1970 and was renewed indefinitely in 1995. It aims to prevent the spread of nuclear weapons and work toward disarmament.

In 2014, the Marshall Islands claimed that the nine nuclear-armed nations – China, Britain, France, India, Israel, North Korea, Pakistan, Russia and the United States – have not fulfilled their treaty obligations. The Marshall Islanders are seeking legal action in the United Nations International Court of Justice in The Hague. They’ve asked the court to require these countries to take substantive action toward nuclear disarmament. Despite the fact that India, North Korea, Israel and Pakistan are not among the 191 nations that are signatories of the treaty, the Marshall Islands’ suit still contends that these four nations “have the obligation under customary international law to pursue [disarmament] negotiations in good faith.”

The process is currently stalled due to jurisdictional squabbling. Regardless, experts in international law say the prospects for success through this David versus Goliath approach are slim.

But even if they don't win in the courtroom, the Marshall Islands might shame these nations in the court of public opinion and draw new attention to the dire human consequences of nuclear weapons. That in itself can be counted as a small victory for a people who have seldom been on the winning side of anything. Time will tell how this all turns out, but 70 years after the first bomb test, the Marshall Islanders are well accustomed to waiting.

Timothy J. Jorgensen, Director of the Health Physics and Radiation Protection Graduate Program and Associate Professor of Radiation Medicine, Georgetown University

This article was originally published on The Conversation. Read the original article.

How do food manufacturers pick those dates on their product packaging – and what do they mean?

By Londa Nwadike, Kansas State University.

No one wants to serve spoiled food to their families. At the same time, consumers don't want to throw food away unnecessarily – but we certainly do. The United States Department of Agriculture estimates Americans toss out the equivalent of US$162 billion in food every year, at the retail and consumer levels. Plenty of that food is discarded while still safe to eat.

Part of these losses stems from consumer confusion about the "use-by" and "best before" dates on food packaging. Most U.S. consumers report checking the date before purchasing or consuming a product, even though we don't seem to have a very good sense of what the dates are telling us. "Sell by," "best if used by," "use by" – they all mean different things. Contrary to popular impression, the current system of food product dating isn't really designed to help us figure out when something from the fridge has passed the line from edible to inedible.

For now, food companies are not required to use a uniform system to determine which type of date to list on their food product, how to determine the date to list or even if they need to list a date on their product at all. The Food Date Labeling Act of 2016, now before Congress, aims to improve the situation by clearly distinguishing between foods that may be past their peak but still ok to eat and foods that are unsafe to consume.

Aside from the labeling issues, how are these dates even generated? Food producers, particularly small-scale companies just entering the food business, often have a difficult time knowing what dates to put on their items. But manufacturers have a few ways – both art and science – to figure out how long their foods will be safe to eat.

Dates can be about rotating product, not necessarily when it’s safe to eat the food.
MdAgDept, CC BY

Consumer confusion

One study estimated 20 percent of food wasted in U.K. households is due to misinterpretation of date labels. Extending the same estimate to the U.S., the average household of four is losing $275-455 per year on needlessly trashed food.

Out of a mistaken concern for food safety, 91 percent of consumers occasionally throw food away based on the “sell by” date – which isn’t really about product safety at all. “Sell by” dates are actually meant to let stores know how to rotate their stock.

A survey conducted by the Food Marketing Institute in 2011 found that among their actions to keep food safe, 37 percent of consumers reported discarding food “every time” it’s past the “use by” date – even though the date only denotes “peak quality” as determined by the manufacturer.

The most we can get from the dates currently listed on food products is a general idea of how long that particular item has been in the marketplace. They don’t tell consumers when the product shifts from being safe to not safe.

Here’s how producers come up with those dates in the first place.

Figuring out when food’s gone foul

A lot of factors determine the usable life of a food product, both in terms of safety and quality. What generally helps foods last longer? Lower moisture content, higher acidity, higher sugar or salt content. Producers can also heat-treat or irradiate foods, use other processing methods or add preservatives such as benzoates to help products maintain their safety and freshness longer.

But no matter the ingredients, additives or treatments, no food lasts forever. Companies need to determine the safe shelf life of a product.

Larger food companies may conduct microbial challenge studies on food products. Researchers add a pathogenic microorganism (one that could make people sick) that's a concern for that specific product. For example, they could add Listeria monocytogenes to refrigerated packaged deli meats. This bacterium causes listeriosis, a serious infection of particular concern for pregnant women, older adults and young children.

The researchers then store the contaminated food in conditions it’s likely to experience in transportation, in storage, at the store, and in consumers’ homes. They’re thinking about temperature, rough handling and so on.

Every harmful microorganism has a different infective dose, or amount of that organism that would make people sick. After various lengths of storage time, the researchers test the product to determine at what point the level of microorganisms present would likely be too high for safety.

Based on the shelf life determined in a challenge study, the company can then label the product with a "use by" date that ensures people will consume the product long before it's no longer safe. Companies usually set the date at least several days earlier than product testing indicated the product would no longer be safe. But there's no standard for the length of this "safety margin"; it's set at the manufacturer's discretion.
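To make the arithmetic behind such a study concrete, here is a minimal sketch – with entirely invented numbers, representing no real product, organism or company procedure – of fitting an exponential growth curve to hypothetical plate counts, finding when the count would cross an assumed unsafe level, and subtracting a safety margin:

```python
import math

# Hypothetical challenge-study data: (storage day, measured CFU per gram).
# Real studies sample many more timepoints and conditions; these numbers
# are purely illustrative.
samples = [(0, 1e2), (7, 1e3), (14, 1e4), (21, 1e5)]

# Assume simple exponential growth: N(t) = N0 * exp(r * t).
# Estimate the growth rate r from the first and last samples.
t0, n0 = samples[0]
t1, n1 = samples[-1]
r = math.log(n1 / n0) / (t1 - t0)

# Assumed unsafe level for this hypothetical organism (CFU/g).
unsafe_level = 1e6

# Days of storage until the count reaches the unsafe level.
days_to_unsafe = math.log(unsafe_level / n0) / r

# Label the product several days earlier than the failure point;
# as the article notes, the margin is set at the manufacturer's discretion.
safety_margin_days = 5
use_by_days = int(days_to_unsafe) - safety_margin_days

print(f"growth rate r = {r:.3f} per day")
print(f"count reaches unsafe level after ~{days_to_unsafe:.1f} days")
print(f"candidate 'use by': {use_by_days} days after production")
```

Real microbial growth is rarely this tidy (there are lag phases, plateaus and temperature swings); the point is only that the labeled date is placed well before the estimated failure point.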

Do you even know what the manufacturer meant by this date?
Sascha Grant, CC BY-NC-ND

Another option for food companies is to use mathematical modeling tools that have been developed based on the results of numerous earlier challenge studies. The company can enter information such as the specific type of product, moisture content and acidity level, and expected storage temperatures into a “calculator.” Out comes an estimate of the length of time the product should still be safe under those conditions.
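The article doesn't name a specific tool, but many such calculators build on predictive microbiology models. One classic example is the Ratkowsky "square root" model, in which the square root of the microbial growth rate increases linearly with temperature above a minimum growth temperature. The parameters below are illustrative assumptions, not values fitted to any real organism:

```python
def growth_rate(temp_c, b=0.03, t_min_c=-2.0):
    """Ratkowsky square-root model: sqrt(rate) = b * (T - Tmin).

    b and t_min_c are made-up illustrative parameters. Returns a
    relative growth rate; zero at or below the minimum temperature.
    """
    if temp_c <= t_min_c:
        return 0.0
    return (b * (temp_c - t_min_c)) ** 2

# Warmer storage means faster growth and a shorter safe shelf life.
for temp in (4, 10, 25):
    print(f"{temp} C -> relative growth rate {growth_rate(temp):.4f}")
```

A commercial calculator layers many such relationships (moisture, acidity, preservatives) on top of each other, but the basic idea is the same: predict the growth rate from the product's properties, then work out how long until an unsafe level is reached.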

Companies may also perform what’s called a static test. They store their product for an extended period of time under typical conditions the product may face in transport, in storage, at the store, and in consumer homes. This time they don’t add any additional microorganisms.

They just sample the product periodically to check its safety and quality, including physical, chemical, microbiological and sensory (taste and smell) changes. Once the company has established the longest time the product can be stored safely and at acceptable quality, it labels the product with a considerably earlier date, to be sure it's consumed long before it's no longer safe or at its best.

Companies may also store the product in special storage chambers that control temperature, oxygen concentration and other factors to speed up its deterioration, so the estimated shelf life can be determined more quickly – an approach called accelerated testing. The company then calculates the actual shelf life from the accelerated estimate, using formulas based on the testing conditions.
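One common form of such a conversion is the Q10 rule of thumb: deterioration rates roughly double (or more) for every 10°C rise in temperature, so a shelf life measured at an elevated temperature can be scaled back to normal storage conditions. The Q10 value of 2 used below is a textbook default, not a measured property of any real product:

```python
def real_shelf_life(accel_days, accel_temp_c, normal_temp_c, q10=2.0):
    """Estimate shelf life at normal_temp_c from an accelerated test.

    Q10 model: shelf life scales by a factor of q10 for every 10 C
    of cooling. q10=2.0 is a common default; any real product would
    need its own validated value.
    """
    return accel_days * q10 ** ((accel_temp_c - normal_temp_c) / 10.0)

# A product surviving 30 days at 40 C suggests roughly
# 30 * 2^((40 - 20) / 10) = 120 days at 20 C under these assumptions.
print(real_shelf_life(30, 40, 20))
```

The appeal is speed: a few weeks in a hot chamber stands in for months on a shelf, at the cost of trusting the extrapolation.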

Smaller companies may list a date on their product based on the length of shelf life they have estimated their competitors are using, or they may use reference materials or ask food safety experts for advice on the date to list on their product.

Sometimes it’s an obvious call.
Steven Depolo, CC BY

Even the best dates are only guidelines

Consumers themselves hold a big part of food safety in their own hands. They need to handle food safely after they purchase it, including storing foods under sanitary conditions and at the proper temperature. For instance, don't allow food that should be refrigerated to be above 40°F for more than two hours.

If a product has a use-by date on the package, consumers should follow that date to determine when to use or freeze it. If it has a “sell-by” or no date on the package, consumers should follow storage time recommendations for foods kept in the refrigerator or freezer and cupboard.

And use your common sense. If something has visible mold, off odors, the can is bulging or other similar signs, this spoilage could indicate the presence of dangerous microorganisms. In such cases, use the “If in doubt, throw it out” rule. Even something that looks and smells normal can potentially be unsafe to eat, no matter what the label says.

Londa Nwadike, Assistant Professor of Food Safety, Extension Food Safety Specialist at University of Missouri, Kansas State University

Roots of opioid epidemic can be traced back to two key changes in pain management

By Theodore Cicero, Washington University in St Louis and Matthew S. Ellis, Washington University in St Louis.

Ascetics preparing and smoking opium outside a rural dwelling in India. Wellcome Library, London, CC BY

Abuse of opium products obtained from poppy plants dates back centuries. But today we are witnessing the first instance of widespread abuse of legal, prescribed drugs that, while structurally similar to illicit opioids such as heroin, are used in sound medical practice.

So how did we get here?

We can trace the roots of today’s epidemic back to two well-intentioned changes in how we treat pain: early recognition and proactive treatment of pain and the introduction of OxyContin, the first extended release opioid painkiller.

Pain as the fifth vital sign

Fifteen years ago, a report by the Joint Commission on Accreditation of Healthcare Organizations, a nationally recognized organization that accredits hospitals, stressed that pain was vastly undertreated in the United States. The report recommended that physicians routinely assess pain at every patient visit. It also suggested that opioids could be effectively and more broadly used without fear of addiction. This latter assumption was entirely mistaken, as we now understand. The report was part of a trend in medicine through the 1980s and 1990s toward treating pain more proactively.

The report was heavily publicized, and today it is widely acknowledged that it led to massive – and sometimes inappropriate – increases in the use of prescription opioid drugs to treat pain.

With more opioids being prescribed by well-meaning doctors, some were diverted from the legal supply chain – through theft from medicine cabinets or trade on the black market – to the street for illicit use. As more opioids leaked out, more people started to experiment with them for recreational purposes.

This increase in supply certainly explains a large part of the current opioid abuse epidemic, but it doesn’t explain all of it.

Introduction of OxyContin®

The second major factor was the introduction of an extended-release formulation of the potent opioid oxycodone in 1996. You may know this drug by its brand name, OxyContin. In fact, you might have been prescribed it after having surgery.

The drug was designed to provide 12-24 hours of pain relief, as opposed to just four hours or so for an immediate release formulation. It meant that patients in pain could just take one or two pills a day rather than having to remember to take an immediate release drug every four hours or so. This also meant that OxyContin tablets contained a large amount of oxycodone – far more than would be found in several individual immediate release tablets.

And within 48 hours of OxyContin's release on the market, drug users realized that crushing the tablet could easily breach the extended-release formulation, making the pure drug available in large quantities. The crushed drug was also free of additives such as acetaminophen, which most recreational and chronic abusers find irritating, particularly if they inject it intravenously. This made OxyContin an attractive option for those who wanted to snort or inject their drugs. Surprisingly, neither the manufacturer nor the Food and Drug Administration foresaw this possibility.

Prescription opioids are perceived as safer than drugs like heroin. Brendan McDermid/Reuters

Purdue, the company holding the patent for the drug, continued to market it as having low abuse potential, highlighting that patients needed to take fewer pills a day than with immediate-release formulations.

By 2012, OxyContin represented 30 percent of the painkiller market.

The change in pain treatment ushered in by the Joint Commission report led to an increase in the number of opioid prescriptions in the U.S. The rise in prescriptions for this particular high-dose opioid helped introduce an unprecedented amount of prescription drugs into the marketplace, generating a whole new population of opioid users.

What is it about prescription drugs?

Compared to heroin and the stigma it carries, prescription drugs are viewed as safe. They have a consistent purity and dose, and can be relatively easily obtained from drug dealers. There was, at least throughout the 1990s and 2000s, little social stigma attached to swallowing a medically provided, legal drug.

The irony here is that prescription opioid abuse has actually been associated with an increase in heroin users. People who are addicted to prescription opioids might try heroin because it is cheaper and more readily available, often using them interchangeably depending on which is easier to get. However, the number of people who convert to heroin exclusively is relatively small.

The majority of individuals who abuse opioid drugs swallow them whole. The remainder snort or inject these drugs, which is much riskier. Snorting, for instance, leads to destruction of nasal passages, among other problems, whereas IV injection – and the common practice of sharing needles – can transmit blood-borne pathogens such as HIV and hepatitis C (currently a national problem of epidemic proportions).

Although people can also get high by just swallowing the pills, the addictive potential of drugs injected or snorted is far greater. There is good evidence to indicate that drugs which deliver their impact on the brain quickly, through snorting and especially through IV injection, are much more addictive and harder to quit.

No OxyContin here.
jennifer durban/Flickr, CC BY-NC

What are authorities doing to stop the epidemic?

Government and regulatory agencies such as the Food and Drug Administration are trying to curb the epidemic, in part by tightening access to prescription opioids. The Centers for Disease Control and Prevention recently issued new guidelines for prescribing opioids to treat chronic pain, aimed at preventing abuse and overdoses. Whether these recommendations will be supported by major medical associations remains to be seen.

For example, there have been local and national crackdowns on unethical doctors who run “pill mills,” clinics whose sole purpose is to provide opioid prescriptions to users and dealers.

In addition, prescription monitoring programs have helped identify irregular prescribing practices.

In 2010 an abuse-deterrent formulation (ADF) of OxyContin was released, replacing the original formulation. The ADF prevents the full dose of the opioid from being released if the pill is crushed or dissolved in some solvent, reducing the incentive to snort or take the drugs intravenously. These formulations have cut down on abuse, but they alone won’t solve the epidemic. Most people who are addicted to prescription opioids swallow pills anyway instead of snorting or injecting them, and abuse-deterrent technology isn’t effective when the drug is swallowed whole.

And, as with the release of the original OxyContin formulation in the 1990s, drug users have populated websites with the procedures necessary to "defeat" the ADF mechanisms, although these are labor-intensive and take quite a bit more time.

Should we just restrict the use of opioid painkillers?

After reading all of this, you might be wondering why we don't simply cut the use of opioids for pain management back to the bare bones. This move would certainly help reduce the supply of opioids and slow the inevitable diversion for nontherapeutic purposes. However, it would come with a heavy price.

Millions of Americans suffer from either acute or chronic pain, and despite their potential for abuse, opioid drugs remain the most effective drugs on the market for treating pain, although there are some who disagree with their long-term use.

And most people who get a prescription for an opioid do not become addicted. Restricting therapeutic use to keep these drugs from the small fraction of individuals who would misuse them means that millions of people won't get adequate pain management. This is an unacceptable trade-off.

New painkillers that can treat pain as well as opioids but don’t get people high would seem like the ideal solution.

For almost 100 years now there has been a concerted effort to develop a narcotic drug that has all of the efficacy of existing drugs, but without the potential for abuse. Unfortunately, it can safely be concluded that this effort has failed. In short, it appears that the two properties – pain relief and abuse potential – are inextricably linked.

In the interest of public health, we must learn better ways to manage pain with these drugs, and particularly to recognize which individuals are likely to abuse their medications, before starting opioid therapy.

Theodore Cicero, Professor of Psychology, Washington University in St Louis and Matthew S. Ellis, Clinical Lab Manager, Washington University in St Louis

Why do only some people get ‘skin orgasms’ from listening to music?

By Mitchell Colver, Utah State University.

Have you ever been listening to a great piece of music and felt a chill run up your spine? Or goosebumps tickle your arms and shoulders?

The experience is called frisson (pronounced free-sawn), a French term meaning “aesthetic chills,” and it feels like waves of pleasure running all over your skin. Some researchers have even dubbed it a “skin orgasm.”

Listening to emotionally moving music is the most common trigger of frisson, but some feel it while looking at beautiful artwork, watching a particularly moving scene in a movie or having physical contact with another person. Studies have shown that roughly two-thirds of the population feels frisson, and frisson-loving Reddit users have even created a page to share their favorite frisson-causing media.

But why do some people experience frisson and not others?

Working in the lab of Dr. Amani El-Alayli, a professor of Social Psychology at Eastern Washington University, I decided to find out.

What causes a thrill, followed by a chill?

While scientists are still unlocking the secrets of this phenomenon, a large body of research over the past five decades has traced the origins of frisson to how we emotionally react to unexpected stimuli in our environment, particularly music.

Musical passages that include unexpected harmonies, sudden changes in volume or the moving entrance of a soloist are particularly common triggers for frisson because they violate listeners’ expectations in a positive way, similar to what occurred during the 2009 debut performance of the unassuming Susan Boyle on “Britain’s Got Talent.”

‘You didn’t expect that, did you?’

If a violin soloist is playing a particularly moving passage that builds up to a beautiful high note, the listener might find this climactic moment emotionally charged, and feel a thrill from witnessing the successful execution of such a difficult piece.

But science is still trying to catch up with why this thrill results in goosebumps in the first place.

Some scientists have suggested that goosebumps are an evolutionary holdover from our early (hairier) ancestors, who kept themselves warm through an endothermic layer of heat that they retained immediately beneath the hairs of their skin. Experiencing goosebumps after a rapid change in temperature (like being exposed to an unexpectedly cool breeze on a sunny day) temporarily raises and then lowers those hairs, resetting this layer of warmth.

Why do a song and a cool breeze produce the same physiological response?
EverJean/flickr, CC BY

Since we invented clothing, humans have had less of a need for this endothermic layer of heat. But the physiological structure is still in place, and it may have been rewired to produce aesthetic chills as a reaction to emotionally moving stimuli, like great beauty in art or nature.

Research regarding the prevalence of frisson has varied widely, with studies showing anywhere between 55 percent and 86 percent of the population being able to experience the effect.

Monitoring how the skin responds to music

We predicted that if a person were more cognitively immersed in a piece of music, then he or she might be more likely to experience frisson as a result of paying closer attention to the stimuli. And we suspected that whether or not someone would become cognitively immersed in a piece of music in the first place would be a result of his or her personality type.

To test this hypothesis, participants were brought into the lab and wired up to an instrument that measures galvanic skin response, a measure of how the electrical resistance of people’s skin changes when they become physiologically aroused.

Participants were then invited to listen to several pieces of music as lab assistants monitored their responses to the music in real time.

Examples of pieces used in the study include:

Each of these pieces contains at least one thrilling moment that is known to cause frisson in listeners (several have been used in previous studies). For example, in the Bach piece, the tension built up by the orchestra during the first 80 seconds is finally released by the entrance of the choir – a particularly charged moment that’s likely to elicit frisson.

As participants listened to these pieces of music, lab assistants asked them to report their experiences of frisson by pressing a small button, which created a temporal log of each listening session.

By comparing these data to the physiological measures and to a personality test that the participants had completed, we were, for the first time, able to draw some unique conclusions about why frisson might be happening more often for some listeners than for others.
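At its core, that comparison amounts to correlating, across participants, a count of frisson reports with a personality score. The numbers below are fabricated purely to show the computation; they are not data from the study:

```python
import math

# Fabricated per-participant data: frisson button presses per listening
# session, and Openness to Experience scores from a personality inventory.
frisson_counts = [0, 1, 1, 2, 3, 4, 5, 6]
openness_scores = [2.1, 2.5, 3.0, 3.2, 3.8, 4.0, 4.4, 4.9]

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Positive r means more-open listeners reported more frisson,
# which is the direction of association the study found.
r = pearson_r(frisson_counts, openness_scores)
print(f"Pearson r = {r:.2f}")
```

The real analysis also folds in the physiological measures and multiple pieces of music, but a simple correlation like this is the basic building block.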

This graph shows the reactions of one listener in the lab. The peaks of each line represent moments when the participant was particularly cognitively or emotionally aroused by the music. In this case, each of these peaks of excitement coincided with the participant reporting experiencing frisson in reaction to the music. This participant scored high on a personality trait called ‘Openness to Experience.’ Author provided

The role of personality

Results from the personality test showed that the listeners who experienced frisson also scored high for a personality trait called Openness to Experience.

Studies have shown that people who possess this trait have unusually active imaginations, appreciate beauty and nature, seek out new experiences, often reflect deeply on their feelings, and love variety in life.

Some aspects of this trait are inherently emotional (loving variety, appreciating beauty), and others are cognitive (imagination, intellectual curiosity).

While previous research had connected Openness to Experience with frisson, most researchers had concluded that listeners were experiencing frisson as a result of a deeply emotional reaction they were having to the music.

In contrast, the results of our study show that it’s the cognitive components of “Openness to Experience” – such as making mental predictions about how the music is going to unfold or engaging in musical imagery (a way of processing music that combines listening with daydreaming) – that are associated with frisson to a greater degree than the emotional components.

These findings, recently published in the journal Psychology of Music, indicate that those who intellectually immerse themselves in music (rather than just letting it flow over them) might experience frisson more often and more intensely than others.

And if you’re one of the lucky people who can feel frisson, the frisson Reddit group has identified Lady Gaga’s rendition of the Star-Spangled Banner at the 2016 Super Bowl and a fan-made trailer for the original Star Wars trilogy as especially chill-inducing.

Mitchell Colver, Ph.D. Student in Education, Utah State University

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

The 7 Ways to Time Travel [Video]

Time travel is a staple of science fiction; however, there are currently seven known ways to time travel, at least theoretically. Although so many of today’s technologies are yesterday’s sci-fi, an interesting question to ponder is: will time travel ever make it out of the world of fiction and become real? If it does, then perhaps that future technology will be based on one of the seven ways to time travel explored in this super-interesting video:

Many thanks to the Hybrid Librarian YouTube channel for creating this awesome video. 

Now, Check Out:

 

How nanotechnology can help us grow more food using less energy and water

By Ramesh Raliya, Washington University in St Louis and Pratim Biswas, Washington University in St Louis.

With the world’s population expected to exceed nine billion by 2050, scientists are working to develop new ways to meet rising global demand for food, energy and water without increasing the strain on natural resources. Organizations including the World Bank and the U.N. Food and Agriculture Organization are calling for more innovation to address the links between these sectors, often referred to as the food-energy-water (FEW) nexus.

Nanotechnology – designing ultrasmall particles – is now emerging as a promising way to promote plant growth and development. This idea is part of the evolving science of precision agriculture, in which farmers use technology to target their use of water, fertilizer and other inputs. Precision farming makes agriculture more sustainable because it reduces waste.

We recently published results from research in which we used nanoparticles, synthesized in our laboratory, in place of conventional fertilizer to increase plant growth. In our study we successfully used zinc nanoparticles to increase the growth and yield of mung beans, which contain high amounts of protein and fiber and are widely grown for food in Asia. We believe this approach can reduce use of conventional fertilizer. Doing so will conserve natural mineral reserves and energy (making fertilizer is very energy-intensive) and reduce water contamination. It also can enhance plants’ nutritional values.

Applying fertilizer the conventional way can waste resources and contribute to water pollution.
Fotokostic/Shutterstock.com

Impacts of fertilizer use

Fertilizer provides nutrients that plants need in order to grow. Farmers typically apply it through soil, either by spreading it on fields or mixing it with irrigation water. A major portion of fertilizer applied this way gets lost in the environment and pollutes other ecosystems. For example, excess nitrogen and phosphorus fertilizers become “fixed” in soil: they form chemical bonds with other elements and become unavailable for plants to take up through their roots. Eventually rain washes the nitrogen and phosphorus into rivers, lakes and bays, where it can cause serious pollution problems.

Fertilizer use worldwide is increasing along with global population growth. Currently farmers are using nearly 85 percent of the world’s total mined phosphorus as fertilizer, although plants can take up only an estimated 42 percent of the phosphorus that is applied to soil. If these practices continue, the world’s supply of phosphorus could run out within the next 80 years, worsening nutrient pollution problems in the process.

Phosphate mine near Flaming Gorge, Utah.
Jason Parker-Burlingham/Wikipedia, CC BY

In contrast to conventional fertilizer use, which involves many tons of inputs, nanotechnology focuses on small quantities. Nanoscale particles measure between 1 and 100 nanometers in at least one dimension. A nanometer is equivalent to one billionth of a meter; for perspective, a sheet of paper is about 100,000 nanometers thick.
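For readers who want to check the scale arithmetic, a minimal sketch using only the figures quoted above:

```python
# A quick check of the scale comparison above. The figures (1 nm = one
# billionth of a meter; a sheet of paper ~100,000 nm thick) come from the
# text; everything else is simple arithmetic.
NANOMETERS_PER_METER = 1e9
paper_thickness_nm = 100_000

paper_thickness_m = paper_thickness_nm / NANOMETERS_PER_METER
print(paper_thickness_m)  # 0.0001 m, i.e. a tenth of a millimeter

# Even the largest nanoparticles (100 nm) are a thousandth of that thickness.
print(paper_thickness_nm / 100)  # 1000.0
```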

These particles have unique physical, chemical and structural features, which we can fine-tune through engineering. Many biological processes, such as the workings of cells, take place at the nano scale, and nanoparticles can influence these activities.

Scientists are actively researching a range of metal and metal oxide nanoparticles, also known as nanofertilizer, for use in plant science and agriculture. These materials can be applied to plants through soil irrigation and/or sprayed onto their leaves. Studies suggest that applying nanoparticles to plant leaves is especially beneficial for the environment because they do not come in contact with soil. Since the particles are extremely small, plants absorb them more efficiently than via soil. We synthesized the nanoparticles in our lab and sprayed them through a customized nozzle that delivered a precise and consistent concentration to the plants.

We chose to target zinc, which is a micronutrient that plants need to grow, but in far smaller quantities than phosphorus. By applying nano zinc to mung bean leaves after 14 days of seed germination, we were able to increase the activity of three important enzymes within the plants: acid phosphatase, alkaline phosphatase and phytase. These enzymes react with complex phosphorus compounds in soil, converting them into forms that plants can take up easily.

Algae bloom in Lake Erie in 2011, caused by phosphorus in runoff from surrounding farms.
NASA Earth Observatory/Flickr, CC BY

When we made these enzymes more active, the plants took up nearly 11 percent more of the phosphorus that was naturally present in the soil, without receiving any conventional phosphorus fertilization. The plants that we treated with zinc nanoparticles increased their biomass (growth) by 27 percent and produced 6 percent more beans than plants that we grew using typical farm practices but no fertilizer.

Nanofertilizer also has the potential to increase plants’ nutritional value. In a separate study, we found that applying titanium dioxide and zinc oxide nanoparticles to tomato plants increased the amount of lycopene in the tomatoes by 80 to 113 percent, depending on the type of nanoparticles and the dosage concentration. This may happen because the nanoparticles increase plants’ photosynthesis rates and enable them to take up more nutrients.

Lycopene is a naturally occurring red pigment that acts as an antioxidant and may prevent cell damage in humans who consume it. Making plants more nutritious in this way could help to reduce malnutrition. The quantities of zinc that we applied were within the U.S. government’s recommended limits for zinc in foods.

Next questions: health and environmental impacts of nanoparticles

Nanotechnology research in agriculture is still at an early stage and evolving quickly. Before nanofertilizers can be used on farms, we will need a better understanding of how they work and regulations to ensure they will be used safely. The U.S. Food and Drug Administration has already issued guidance for the use of nanomaterials in animal feed.

Manufacturers also are adding engineered nanoparticles to foods, personal care and other consumer products. Examples include silica nanoparticles in baby formula, titanium dioxide nanoparticles in powdered cake donuts, and other nanomaterials in paints, plastics, paper fibers, pharmaceuticals and toothpaste.

Many properties influence whether nanoparticles pose risks to human health, including their size, shape, crystal phase, solubility, type of material, and the exposure and dosage concentration. Experts say that nanoparticles in food products on the market today are probably safe to eat, but this is an active research area.

Addressing these questions will require further studies to understand how nanoparticles behave within the human body. We also need to carry out life cycle assessments of nanoparticles’ impact on human health and the environment, and develop ways to assess and manage any risks they may pose, as well as sustainable ways to manufacture them. However, as our research on nanofertilizer suggests, these materials could help solve some of the world’s most pressing resource problems at the food-energy-water nexus.

Ramesh Raliya, Research Scientist, Washington University in St Louis and Pratim Biswas, Chairman, Department of Energy, Environmental and Chemical Engineering, Washington University in St Louis

This article was originally published on The Conversation. Read the original article.

Featured Image Credit: Chad Zuber/Shutterstock.com

Now, Check Out:

Science Rocks My Week: Our Most Popular Stories of the Week

It was another mixed bag of science this week, including the resurgence of an amazing article (with video) on a breakthrough from a neuroscience research team that grew implantable neurons in the lab.

Other surprising breakthroughs and first discoveries this week included:

  • The first fossilized heart was discovered
  • A scorpion turns out to have venom that may be beneficial to humans
  • A robotic diver that is getting close to being a real avatar
  • More very interesting news about what’s happening at Chernobyl 30 years later.

And now, without further ado, here are this week’s most popular stories on Science Rocks My World, as voted by your clicks:

Neuroscience Breakthrough: Artificial, Implantable Neurons [Video]

Scientists at McGill University have achieved a huge breakthrough in neuroscience – they have discovered how to make artificial neurons that are indistinguishable from normal human neurons and can be implanted to make new connections in the nervous system.

This is the first time scientists have managed to create new functional connections between neurons…

READ MORE…


The first fossilised heart ever found in a prehistoric animal

Palaeontologists and the famous Tin Man in The Wizard of Oz were once in search of the same thing: a heart. But in our case, it was the search for a fossilised heart. And now we’ve found one.

A new discovery, announced today in the journal eLife, shows the perfectly preserved 3D fossilised heart in a 113-119 million-year-old fish from Brazil called Rhacolepis.

This is the first definite fossilised heart found in any prehistoric animal…

READ MORE…


This Scorpion’s Sting May Be Good for You

Because of their venomous sting, scorpions are usually avoided at all costs. But a new discovery suggests the toxins found in some venom might actually have a unique benefit.

Published in the Proceedings of the National Academy of Sciences, the findings show that when a toxin produced by Scorpio maurus—a scorpion species found in North Africa and the Middle East—permeates the cell membrane it loses its potency and may actually become healthful.

“This is the first time a toxin has been shown to chemically reprogram once inside a cell, becoming something that may be beneficial…”

READ MORE…


How Astronomers Determined Whether This Object is an Exoplanet or a Brown Dwarf

Our galaxy may have billions of free-floating planets that exist in the darkness of space without any companion planets or even a host sun.

But scientists say it’s possible that some of those lonely worlds aren’t actually planets but rather lightweight stars called brown dwarfs.

Take, for example, the newfound object called WISEA 1147. It’s estimated to be between roughly five to 10 times the mass of Jupiter…

READ MORE…


Losing your virginity: how we discovered that genes could play a part

As far as big life decisions go, choosing when to lose your virginity or the best time to start a family are probably right up there for most people. It may seem that such decisions are mostly driven by social factors, such as whether you’ve met the right partner, social pressure or even your financial situation. But scientists are increasingly realising that such sexual milestones are also influenced by our genes…

READ MORE…


To fight Zika, let’s genetically modify mosquitoes – the old-fashioned way

An Aedes aegypti mosquito is seen in a lab of the International Training and Medical Research Training Center (CIDEIM) in Cali, Colombia.
The near panic caused by the rapid spread of the Zika virus has brought new urgency to the question of how best to control mosquitoes that transmit human diseases. Aedes aegypti mosquitoes bite people across the globe, spreading three viral diseases: dengue, chikungunya and Zika. There are no proven effective vaccines or specific medications to treat patients after contracting these viruses.

Mosquito control is the only way, at present, to limit them. But that’s no easy task…

READ MORE…


This Robot ‘Mermaid’ can Grab Shipwreck Treasures [Video]

A robot called OceanOne with artificial intelligence and haptic feedback systems gives human pilots an unprecedented ability to explore the depths of the oceans.

Oussama Khatib held his breath as he swam through the wreck of La Lune, over 300 feet below the Mediterranean. The flagship of King Louis XIV sank here in 1664, 20 miles off the southern coast of France, and no human had touched the ruins—or the countless treasures and artifacts the ship once carried—in the centuries since…

READ MORE…


At Chernobyl and Fukushima, radioactivity has seriously harmed wildlife

The largest nuclear disaster in history occurred 30 years ago at the Chernobyl Nuclear Power Plant in what was then the Soviet Union. The meltdown, explosions and nuclear fire that burned for 10 days injected enormous quantities of radioactivity into the atmosphere and contaminated vast areas of Europe and Eurasia. The International Atomic Energy Agency estimates that Chernobyl released 400 times more radioactivity into the atmosphere than the bomb dropped on Hiroshima in 1945…

READ MORE…


Also Trending This Week, Check Out:

At Chernobyl and Fukushima, radioactivity has seriously harmed wildlife

By Timothy A. Mousseau, University of South Carolina.

The largest nuclear disaster in history occurred 30 years ago at the Chernobyl Nuclear Power Plant in what was then the Soviet Union. The meltdown, explosions and nuclear fire that burned for 10 days injected enormous quantities of radioactivity into the atmosphere and contaminated vast areas of Europe and Eurasia. The International Atomic Energy Agency estimates that Chernobyl released 400 times more radioactivity into the atmosphere than the bomb dropped on Hiroshima in 1945.

Radioactive cesium from Chernobyl can still be detected in some food products today. And in parts of central, eastern and northern Europe many animals, plants and mushrooms still contain so much radioactivity that they are unsafe for human consumption.

The first atomic bomb exploded at Alamogordo, New Mexico more than 70 years ago. Since then, more than 2,000 atomic bombs have been tested, injecting radioactive materials into the atmosphere. And over 200 small and large accidents have occurred at nuclear facilities. But experts and advocacy groups are still fiercely debating the health and environmental consequences of radioactivity.

However, in the past decade population biologists have made considerable progress in documenting how radioactivity affects plants, animals and microbes. My colleagues and I have analyzed these impacts at Chernobyl, Fukushima and naturally radioactive regions of the planet.

Our studies provide new fundamental insights about consequences of chronic, multigenerational exposure to low-dose ionizing radiation. Most importantly, we have found that individual organisms are injured by radiation in a variety of ways. The cumulative effects of these injuries result in lower population sizes and reduced biodiversity in high-radiation areas.

Broad impacts at Chernobyl

Radiation exposure has caused genetic damage and increased mutation rates in many organisms in the Chernobyl region. So far, we have found little convincing evidence that many organisms there are evolving to become more resistant to radiation.

Organisms’ evolutionary history may play a large role in determining how vulnerable they are to radiation. In our studies, species that have historically shown high mutation rates, such as the barn swallow (Hirundo rustica), the icterine warbler (Hippolais icterina) and the Eurasian blackcap (Sylvia atricapilla), are among the most likely to show population declines in Chernobyl. Our hypothesis is that species differ in their ability to repair DNA, and this affects both DNA substitution rates and susceptibility to radiation from Chernobyl.

Much like human survivors of the Hiroshima and Nagasaki atomic bombs, birds and mammals at Chernobyl have cataracts in their eyes and smaller brains. These are direct consequences of exposure to ionizing radiation in air, water and food. Like some cancer patients undergoing radiation therapy, many of the birds have malformed sperm. In the most radioactive areas, up to 40 percent of male birds are completely sterile, with no sperm or just a few dead sperm in their reproductive tracts during the breeding season.

Tumors, presumably cancerous, are obvious on some birds in high-radiation areas. So are developmental abnormalities in some plants and insects.

Chernobyl reactor No. 4 building, encased in steel and concrete to limit radioactive contamination.
Vadim Mouchkin, IAEA/Flickr, CC BY-SA

Given overwhelming evidence of genetic damage and injury to individuals, it is not surprising that populations of many organisms in highly contaminated areas have shrunk. In Chernobyl, all major groups of animals that we surveyed were less abundant in more radioactive areas. This includes birds, butterflies, dragonflies, bees, grasshoppers, spiders and large and small mammals.

Not every species shows the same pattern of decline. Many species, including wolves, show no effects of radiation on their population density. A few species of birds appear to be more abundant in more radioactive areas. In both cases, higher numbers may reflect the fact that there are fewer competitors or predators for these species in highly radioactive areas.

Moreover, vast areas of the Chernobyl Exclusion Zone are not presently heavily contaminated, and appear to provide a refuge for many species. One report published in 2015 described game animals such as wild boar and elk as thriving in the Chernobyl ecosystem. But nearly all documented consequences of radiation in Chernobyl and Fukushima have found that individual organisms exposed to radiation suffer serious harm.

Map of the Chernobyl region of Ukraine. Note the highly heterogeneous deposition patterns of radioactivity in the region. Areas of low radioactivity provide refuges for wildlife in the region. Shestopalov, V.M., 1996. Atlas of Chernobyl exclusion zone. Kiev: Ukrainian Academy of Science.

There may be exceptions. For example, substances called antioxidants can defend against the damage to DNA, proteins and lipids caused by ionizing radiation. The levels of antioxidants that individuals have available in their bodies may play an important role in reducing the damage caused by radiation. There is evidence that some birds may have adapted to radiation by changing the way they use antioxidants in their bodies.

Parallels at Fukushima

Recently we have tested the validity of our Chernobyl studies by repeating them in Fukushima, Japan. The 2011 power loss and core meltdown at three nuclear reactors there released about one-tenth as much radioactive material as the Chernobyl disaster.

Overall, we have found similar patterns of declines in abundance and diversity of birds, although some species are more sensitive to radiation than others. We have also found declines in some insects, such as butterflies, which may reflect the accumulation of harmful mutations over multiple generations.

Our most recent studies at Fukushima have benefited from more sophisticated analyses of radiation doses received by animals. In our most recent paper, we teamed up with radioecologists to reconstruct the doses received by about 7,000 birds. The parallels we have found between Chernobyl and Fukushima provide strong evidence that radiation is the underlying cause of the effects we have observed in both locations.

Some members of the radiation regulatory community have been slow to acknowledge how nuclear accidents have harmed wildlife. For example, the U.N.-sponsored Chernobyl Forum promoted the notion that the accident has had a positive impact on living organisms in the exclusion zone because of the lack of human activities. A more recent report of the United Nations Scientific Committee on the Effects of Atomic Radiation predicts minimal consequences for the biota (animal and plant life) of the Fukushima region.

Unfortunately these official assessments were largely based on predictions from theoretical models, not on direct empirical observations of the plants and animals living in these regions. Based on our research, and that of others, it is now known that animals living under the full range of stresses in nature are far more sensitive to the effects of radiation than previously believed. Although field studies sometimes lack the controlled settings needed for precise scientific experimentation, they make up for this with a more realistic description of natural processes.

Our emphasis on documenting radiation effects under “natural” conditions using wild organisms has provided many discoveries that will help us to prepare for the next nuclear accident or act of nuclear terrorism. This information is absolutely needed if we are to protect the environment not just for man, but also for the living organisms and ecosystem services that sustain all life on this planet.

There are currently more than 400 nuclear reactors in operation around the world, with 65 new ones under construction and another 165 on order or planned. All operating nuclear power plants are generating large quantities of nuclear waste that will need to be stored for thousands of years to come. Given this, and the probability of future accidents or nuclear terrorism, it is important that scientists learn as much as possible about the effects of these contaminants in the environment, both for remediation of the effects of future incidents and for evidence-based risk assessment and energy policy development.

Timothy A. Mousseau, Professor of Biological Sciences, University of South Carolina

This article was originally published on The Conversation. Read the original article.

Featured Photo Credit:  Tim Mousseau, Author provided. Taken near Chernobyl.

Now, Check Out:

Science Rocks My Week: Our Most Popular Stories of the Week

From exploring octopus consciousness to an amazing breakthrough in nanotube technology; from the first fossilized heart ever found to a dark galaxy discovered by the ALMA telescope; and from the reclassification of a type of thyroid cancer so it isn’t cancer anymore to climatologists clashing over the effect of climate change on the weather and people’s attitudes about it…

Yes, it’s been yet another fascinating and amazing week in the world of science!

And here are this week’s most popular stories on Science Rocks My World as voted by your clicks:

Octopuses are super-smart… but are they conscious?

Inky the wild octopus has escaped from the New Zealand National Aquarium. Apparently, he made it out of a small opening in his tank, and suction cup prints indicate he found his way to a drain pipe that emptied to the ocean.

Nice job Inky. Your courage gives us the chance to reflect on just how smart cephalopods really are. In fact, they are real smart…

READ MORE…


How Nanotubes Can Self-Assemble into Tiny Wires [Video]

The strong force field emitted by a Tesla coil causes carbon nanotubes to self-assemble into long wires, a phenomenon scientists are calling “Teslaphoresis.”

Chemist Paul Cherukuri of Rice University, who led the team that made the discovery, thinks the research sets a clear path toward scalable assembly of nanotubes from the bottom up…

READ MORE…


The first fossilised heart ever found in a prehistoric animal

Palaeontologists and the famous Tin Man in The Wizard of Oz were once in search of the same thing: a heart. But in our case, it was the search for a fossilised heart. And now we’ve found one.

A new discovery, announced today in the journal eLife, shows the perfectly preserved 3D fossilised heart in a 113-119 million-year-old fish from Brazil called Rhacolepis.

This is the first definite fossilised heart found in any prehistoric animal…

READ MORE…


Astronomers Find a ‘Dark Galaxy’ Lurking in This Image

New analysis of an image taken by the ALMA telescope in Chile reveals evidence that a dwarf dark galaxy—a tiny halo companion of a much larger galaxy—is lurking nearly 4 billion light-years away.

This discovery paves the way for ALMA to find many more such objects, which could help astronomers address important questions on the nature of dark matter…

READ MORE…


This Noninvasive Thyroid ‘Cancer’ isn’t Cancer Anymore

The reclassification of a noninvasive type of thyroid cancer that has a low risk of recurrence is expected to reduce the fears and the unnecessary interventions that come with a cancer diagnosis, experts say.

The incidence of thyroid cancer has been rising partly due to early detection of tumors that are indolent or non-progressing, despite the presence of certain cellular abnormalities that are traditionally considered cancerous, says senior investigator Yuri Nikiforov, professor of pathology at the University of Pittsburgh.

“This phenomenon is known as overdiagnosis,” Nikiforov says. “To my knowledge, this is the first time in the modern era a type of cancer is being reclassified as a non-cancer…”

READ MORE…


Has climate change really improved U.S. weather?

According to a new report published in Nature on April 20, 2016 by Patrick Egan and Megan Mullin, weather conditions have “improved” for the vast majority of Americans over the past 40 years. This, they argue, explains why there has been little public demand so far for a policy response to climate change.

Egan and Mullin do note that this trend is projected to reverse over the course of the coming century, and that Americans will become more concerned about climate change as they perceive more negative impact from weather. However, they estimate that such a shift may not occur in time to spur policy responses that could avert catastrophic impacts…

READ MORE…


[Video] This Octopus has an Odd Way of Grabbing a Meal

Unlike most octopuses, which tackle their prey with all eight arms, a rediscovered tropical octopus subtly taps its prey on the shoulder and startles it into its arms.

“I’ve never seen anything like it,” says Roy Caldwell, professor of integrative biology at the University of California, Berkeley. “Octopuses typically pounce on their prey or poke around in holes until they find something…

READ MORE…


Also Trending This Week, Check Out:

Paris climate deal signing ceremony: what it means and why it matters

By Damon Jones, University of Cologne and Bill Hare, Potsdam Institute for Climate Impact Research.

The world took a collective sigh of relief in the last days of 2015, when countries came together to adopt the historic Paris agreement on climate change.

The international treaty was a much-needed victory for multilateralism, and surprised many with its more-ambitious-than-expected agreement to pursue efforts to limit global warming to 1.5°C.

The next step in bringing the agreement into effect happens in New York on Friday 22 April, with leaders and dignitaries from more than 150 countries attending a high-level ceremony at the United Nations to officially sign it.

The New York event will be an important barometer of political momentum leading into the implementation phase – one that requires domestic climate policies to be drawn up, as well as further international negotiations.

It comes a week after scientists took a significant step to assist with the process. On April 13 in Nairobi, the Intergovernmental Panel on Climate Change agreed to prepare a special report on the impacts of global warming of 1.5°C above pre-industrial levels. This will provide scientific guidance on the level of ambition and action needed to implement the Paris agreement.

The heady days of the agreement in Paris.
Stephane Mahe/Reuters

Why the ceremony?

The signing ceremony in New York sets in motion the formal, legal processes required for the Paris Agreement to “enter into force”, so that it can become legally binding under international law.

Although the agreement was adopted on December 12, 2015 in Paris, it has not yet entered into force. That will happen automatically 30 days after two conditions are met: ratification by at least 55 countries, and ratification by countries representing at least 55% of global greenhouse gas emissions. Both conditions have to be satisfied before the agreement is legally binding.
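The double threshold can be expressed as a simple check. This is a minimal sketch: only the 55-country and 55%-of-emissions thresholds come from the agreement as described above, while the country data is entirely hypothetical:

```python
# A sketch of the double threshold described above: the Paris agreement
# enters into force 30 days after BOTH conditions hold. The 55-country and
# 55%-of-emissions thresholds are from the text; the country data below is
# entirely hypothetical.

def enters_into_force(ratifiers):
    """ratifiers: list of (country, percent_of_global_emissions) tuples."""
    enough_countries = len(ratifiers) >= 55
    enough_emissions = sum(share for _, share in ratifiers) >= 55.0
    return enough_countries and enough_emissions

# Sixty tiny emitters clear the country count but not the emissions share.
small_states = [("state_%d" % i, 0.05) for i in range(60)]
print(enters_into_force(small_states))  # False

# Two large (hypothetical) emitters tip the emissions share past 55%.
print(enters_into_force(small_states + [("A", 38.0), ("B", 15.0)]))  # True
```

Note how many small-emitter ratifications can satisfy the country count while barely moving the emissions total, which is why large emitters are decisive.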

So, contrary to some concerns after Paris, the world does not have to wait until 2020 for the agreement to enter into force. It could happen as early as this year.

Signing vs ratification

When a country signs the agreement, it is obliged to refrain from acts that would defeat its object and purpose. The next step, ratification, signifies intent to be legally bound by the terms of the treaty.

The decision on timing for ratification by each country will largely be determined by domestic political circumstances and legislative requirements for international agreements.

Those countries that have already completed their domestic processes for international agreements can choose to sign and ratify on the same day in New York.

Who is going to sign and ratify in New York?

Early adopter: the Maldives.
Nattu, CC BY

It is perhaps no surprise that the countries which are particularly vulnerable to the impacts of climate change and who championed the need for high ambition in Paris will be first out of the gate to ratify in New York.

Thirteen Small Island Developing States (SIDS) from the Caribbean, Indian Ocean and Pacific have signalled their intent to sign and ratify in New York: Barbados, Belize, Fiji, Grenada, Maldives, Marshall Islands, Nauru, Palau, Samoa, Saint Lucia, Saint Vincent and the Grenadines, the Seychelles and Tuvalu.

While these countries make up about a quarter of the 55 countries needed, they account for only 0.02% of the emissions that count towards the required 55% global emissions total.

Bringing the big emitters on board

China and the United States have recently jointly announced their intentions to sign in New York and to take the necessary domestic steps to formally join the agreement by ratifying it later this year. Given that they make up nearly 40% of the agreed set of global emissions for entry into force, that will go a significant way to meeting the 55% threshold.

We can expect more announcements of intended ratification schedules on April 22. Canada (1.95%) has signalled its intent to ratify this year and there are early signs for many others. Unfortunately the European Union, long a leader on climate change, seems unlikely to be amongst the first movers due to internal political difficulties, including the intransigence of the Polish government.

The double threshold means that even if all of the SIDS and Least Developed Countries (LDCs) ratified, accounting for more than 75 countries but only around 4% of global emissions, the agreement would not enter into force until countries with a further 51% of global emissions also ratified.

Consequently, many more of the large emitters will need to ratify to ensure that the Paris agreement enters into force. This was a key design feature – it means a small number of major emitters cannot force a binding agreement on the rest of the world, and a large number of smaller countries cannot force a binding agreement on the major emitters.

The 55% threshold was set in order to ensure that it would be hard for a blocking coalition to form – a group of countries whose failure to ratify could ensure that an emissions threshold could not be met in practice. A number much above 60% of global emissions could indeed have led to such a situation.

The countries that appear likely to ratify this year, including China, the USA, Canada, many SIDS and LDCs, members of the Climate Vulnerable Forum along with several Latin American and African countries – around 90 in all – still fall about 5-6% short of the 55% emissions threshold.

It will take one more large emitter, such as the Russian Federation (7.53%), or two, such as India (4.10%) and Japan (3.79%), to get the agreement over the line. The intent of these countries is not yet known.
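The arithmetic in the last two paragraphs can be verified directly. Using the percentage shares quoted above, and assuming a gap at the midpoint of the article's "about 5-6% short" estimate, a short check confirms that Russia alone, or India and Japan together, would close it:

```python
# Emissions shares quoted in the article for the remaining large emitters.
russia, india, japan = 7.53, 4.10, 3.79

# Assumed: likely 2016 ratifiers fall "about 5-6% short" of the 55%
# emissions threshold; 5.5 percentage points is the midpoint.
gap = 5.5

print(russia >= gap)                 # Russia alone suffices: True
print(india >= gap or japan >= gap)  # neither India nor Japan alone: False
print(india + japan >= gap)          # together they cross the line: True
```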

Why is early action important?

The Paris agreement may be ambitious, but it will only be as good as its implementation. That will depend on the political momentum gained in Paris being maintained. Early entry into force for the treaty would be a powerful signal in this direction.

We know from the Climate Action Tracker analyses that the present commitments are far from adequate. If all countries fully implement the national emission reduction targets brought to the climate negotiations last year, we are still on track for temperature increases of around 2.7°C. Worse, we also know that current policies adopted by countries are insufficient to meet these targets and are heading to around 3.6°C of global warming.

With average global annual temperature increase tipping over 1°C above pre-industrial levels for the first time last year, it is clear that action to reduce emissions has never been more urgent.

We are already seeing more evidence this year: increases in the monthly global averages of February and March 2016 far exceeded 1°C, record coral reef bleaching, heatwaves, and unprecedented early melting of the Greenland ice sheet this northern spring.

Huge swathes of the Great Barrier Reef have suffered coral bleaching in 2016.
University of Oregon, CC BY-SA

Early entry into force will unlock the legally binding rights and obligations for parties to the agreement. These go beyond obligations to deliver emissions reductions through countries' Nationally Determined Contributions, extending to critical issues such as adaptation, climate finance, loss and damage, and transparency in reporting on and reviewing action and support.

The events in New York this week symbolise the collective realisation that rapid, transformative action is required to decarbonise the global economy by 2050.

Climate science tells us that action must increase significantly within the next decade if we are to rein in the devastating impacts of climate change, which the most vulnerable countries are already acutely experiencing.

For an up-to-date picture of which countries have ratified the Paris Agreement, see our Ratification Tracker.

Damon Jones, Lecturer, University of Cologne and Bill Hare, Visiting scientist, Potsdam Institute for Climate Impact Research

This article was originally published on The Conversation. Read the original article.
