Learning to live with wildfires: how communities can become ‘fire-adapted’

By Susan J. Prichard, University of Washington.

In recent years wildfire seasons in the western United States have become so intense that many of us who make our home in dry, fire-prone areas are grappling with how to live with fire.

When I moved to a small town in eastern Washington in 2004, I thought I was prepared for the reality of wildfires. As a fire ecologist, I had studied climate change and knew the predictions of hotter, drier and longer fire seasons.

But the severity and massive size of recent wildfires in our area have highlighted the importance of making our communities more resilient to fire.

In addition to better preparing for the inevitability of fire, my research and related studies have shown that prescribed burns and proactive thinning can make our neighboring forests less susceptible to large fire events.

A history of frequent fire

The valley where I live in eastern Washington is so special that I hesitate to share its name. In spite of record-breaking wildfire seasons in recent years, many people are still moving here to build cabins in the woods.

The Methow Valley is stunningly beautiful, with shrub steppe and ponderosa pine lowlands grading into mixed conifer forests at higher elevations, topped by high mountain peaks. Our valley was named by Native Americans for the balsamroot sunflower blossoms that wash the springtime hillsides in brilliant gold.

Warmer and drier springs are contributing to more extreme fire events, such as the Tripod Complex fire of 2006, which was the largest in 50 years.
US Forest Service

The native plants here depend on fire for growing space and regeneration. The arrowleaf balsamroot, for example, is deeply rooted and easily resprouts following fire. Ponderosa pine trees have thick, deeply grooved bark, and can shed their lower branches. If surface fires burn them, thick bark insulates their living tissue, and the lack of lower branches can prevent fires from spreading to crowns.

Historically, most semi-arid landscapes of western North America evolved with frequent fire. Ever-changing patterns of forest and rangeland vegetation were created by past burns. Grasslands, shrublands, open-grown and closed-canopy forests were all part of the patchwork.

Prior wildfire patterns constrained future fire spread through a mosaic of forest and nonforest vegetation that, in general, did not let fire burn contagiously across vast areas. While fires burned frequently, they were small to medium in size. Large fires, those of more than 10,000 acres, were infrequent by comparison and occurred during prolonged droughts, often under hot and windy conditions.

Today, in the absence of frequent fire, the same semi-arid landscapes have much more continuous forest cover. And fires, when they do burn, tend to be larger and more severe. My community lived through two such fire events in the past two summers.

How the forests have changed

Despite recent wildfires, semi-arid forests in my valley and across the inland West are still under a chronic fire deficit, resulting from a variety of historical factors. Fire suppression, displacement of native people, railroad and road building, and livestock grazing all contributed to the lack of fire.

It is difficult to convey how excluding fires from forests can so radically change them. Imagine if we replaced days of rain and snow with sunshine: the absence of precipitation would quickly shift all existing vegetation to sparse desert. Similarly, the near absence of fire over the past century has dramatically altered semi-arid landscapes, gradually replacing varied burn mosaics, characterized by forests of varying ages, shrublands and grasslands, with dense, multi-layered forests.

Markedly different wildfire behavior accompanies these changes. Wildfires are now able to contagiously burn vast areas of flammable vegetation, and severe fires, including crown fires that consume forest canopies, are increasingly common.

A rapidly warming climate is also contributing to large and severe wildfires.

It was after an early and dry spring in 2006 that the largest wildfire in 50 years, the Tripod Complex fire, raged north of our small town of Winthrop, Washington.

I remember watching it start – awestruck by the smoke plume, which resembled the aftermath of a bomb explosion. As the plume collapsed and smoke settled into our valley, the reality of living through a major wildfire sank in. I wasn’t prepared for this kind of fire. None of us was.

Eight years later, the 2014 Carlton Complex fire swept down our valley, and in two days became the largest wildfire in state history. Lightning strikes had started many small fires, and when high winds arrived on July 17, those fire starts exploded into firestorms, coalescing to burn over 160,000 acres and traveling nearly 40 miles in just nine hours.

If you asked anyone in our valley who lived through the Carlton Complex fires, you would need to prepare for a long story. Evacuations of everyone downwind of the fires. Night skies filled with ember showers. A total of 310 homes destroyed. Loss of pets and livestock. Properties so blackened and charred that owners chose to move. Wide-ranging opinions about the firefighting response, from profound gratitude to second-guessing about what might have been done differently. Massive flood and mudslide events that followed. Heroic acts by tight-knit neighborhoods and communities as we pulled together and helped each other recover and rebuild.

Recovery had just begun when the 2015 wildfire season struck. Drought continued across the region and set the stage for a second, fire-filled summer. In mid-July, lightning storms ignited the Okanogan Complex, the latest record-holding wildfire in state history. One hundred and twenty homes were destroyed, many in neighboring communities to the north and south. In our valley, three firefighters lost their lives, and a fourth was badly burned. After all that we have been through, the loss and injury of these young people is the most devastating.

Evidence for thinning and prescribed burns

As we face another dry summer, our community is coming to terms with the continuing reality of wildfires. By my estimate, since 1990 over one-third of our watershed has burned. We are beginning to discuss what it means to be fire-adapted: making our homes less penetrable to burning embers, reducing fuels and thinning vegetation around our properties, and choosing better places to live and build. We can also create safe access for firefighters, plan emergency evacuation routes, and manage dry forests to be more resilient.

After decades of fire exclusion, dense and dry forests with heavy accumulations of fuel and understory vegetation often need to be treated with a combination of thinning and prescribed burning. Restoring landscape patterns will take time and careful management to mitigate how future wildfires burn across landscapes.

Parts of the Tripod fire in 2006 burned in a mosaic pattern of trees of different ages, which can prevent large-scale, contiguous burns. It’s evidence that prescribed burning and thinning can make forests more resilient.
U.S. Forest Service

From our research, we know that fuel reduction in dry forests can mitigate the effects of wildfires. After the 2006 Tripod fires, we studied how past forest thinning and prescribed burning treatments influenced subsequent wildfire severity. We found that tree mortality was high in untreated or recently thinned forests, but lower in forests that had been recently thinned and prescribed burned. Our results, along with other studies in the western United States, provide compelling evidence that thinning, in combination with prescribed burning, can make forests more resilient.

On average, one-quarter of mature trees died in forests that had been thinned and prescribed burned, compared to 60-65 percent of trees in untreated or thinned-only forests. On a driving tour of the Tripod burn after the wildfire, areas that were prescribed burned stand out as green islands amid a gray sea of standing dead trees.

In ongoing research, we hope to learn how restoration treatments can be strategically placed to create more fire-resistant landscapes.

Self-regulating?

Wildfires also have a critical role in restoration. The 2014 Carlton and 2015 Okanogan Complex fires burned the borders of the Tripod fire and of other recent wildfires, but sparse fuels on the margins of these prior burned areas did not support fire spread.

As more fires burn across dry forests, they are creating vast puzzle-piece mosaics, and in time may become more self-regulating – limiting the size and spread of subsequent fires.

Unmanaged stands on the left compared to an adjacent plot that’s been thinned to reduce vulnerability to severe fire.
Susan J Prichard, Author provided

However, the imprints of recent fires are large, and it will take many small to medium wildfires to restore the diverse mosaic these landscapes once supported and still need. Managing naturally ignited wildfires that burn in the late season or under favorable weather conditions, in combination with prescribed burning, will be essential to restore self-regulating landscapes.

Recent summers have taught us that we can’t permanently exclude fire from our valley or other fire-prone areas. This is difficult to accept for a community so recently devastated by fire and sick of the smoke that comes with it. However, summers are getting hotter and drier, and more wildfires are on the way. We have to adapt the way we live with fire and learn ways to promote resilience – within our homes, communities and neighboring forests.

Native peoples, less than 150 years ago, proactively burned the landscapes we currently inhabit – for personal safety, food production and enhanced forage for deer and elk. In some places, people still maintain and use traditional fire knowledge. As we too learn to be more fire-adapted, we need to embrace fire not only as an ongoing problem but also as an essential part of the solution.

Susan J. Prichard, Research Scientist of Forest Ecology, University of Washington

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

Should parents ask their children to apologize?

Craig Smith, University of Michigan

Have you ever felt deserving of an apology and been upset when you didn’t get one? Have you ever found it hard to deliver the words “I’m sorry”?

Such experiences show how much apologies matter. The importance placed on apologies is shared by many cultures. Diverse cultures even share a great deal in common when it comes to how apologies are communicated.

When adults feel wronged, apologies have been shown to help in a variety of ways:
Apologies can reduce retaliation; they can bring about forgiveness and empathy for wrongdoers; and they can aid in the repair of broken trust. Further, sincere apologies have the physiological effect of lowering blood pressure more quickly, especially among those who are prone to hold on to anger.

How do children view and experience apologies? And what do parents think about when to prompt their young ones to apologize?

How children understand apologies

Research shows that children as young as age four grasp the emotional implications of apology. They understand, for example, that an apology can improve the feelings of someone who’s been upset. Preschoolers also judge apologizing wrongdoers to be more likable, and more desirable as partners for interaction and cooperation.

Children as young as four understand the emotional meaning of an apology. Funkyah, CC BY-NC-ND

Recent studies have tested the actual impact of apologies on children. In one such study, a group of four- to seven-year-olds received an apology from a child who failed to share, while another group did not get an apology. The participants who received the apology felt better and viewed the offending child as nicer as well as more remorseful.

Another study exposed children to a more distressing event: A person knocked over a tower that six- to seven-year-olds were building. Some children got an apology, some did not. In this case, a spontaneous apology did not improve children’s upset feelings. However, the apology still had an impact. Children who got an apology were willing to share more of their attractive stickers with the person who knocked over the tower compared to those who did not get an apology.

This finding suggests that an apology led to forgiveness in children, even if sadness about the incident understandably lingered. Notably, children did feel better when the other person offered to help rebuild their toppled towers. In other words, for children, both remorseful words and restorative actions make a difference.

When does a child’s apology matter to parents?

Although apologies carry meaning for children, views on whether parents should ask their children to apologize vary. A recent caution against apology prompting was based on the mistaken notion that young children have limited social understanding. In fact, young children understand a great deal about others’ viewpoints.

When and why parents prompt their children to apologize has not been systematically studied. In order to gain better insight into this question, I recently conducted a study with my colleagues Jee Young Noh and Michael Rizzo at the University of Maryland and Paul Harris at Harvard University.

We surveyed 483 parents of three- to 10-year-old children. Most participants were mothers, but there was a sizable group of fathers as well. Parents were recruited via online parenting discussion groups and came from communities all around the U.S. The discussion groups had a variety of orientations toward parenting.

In order to account for the possibility that parents might want to show themselves in the best light, we took a measure of “social desirability bias” from each parent. The results reported here emerged after we statistically corrected for the influence of this bias.

A card from daughter to mother.
Todd Ehlers, CC BY-ND

We asked parents to imagine their children committing what they would consider to be “transgressions.” We then asked them how likely they would be to prompt an apology in each scenario. We also asked parents to rate how important they felt it was for their children to learn to apologize in a variety of situations. Finally, we asked the parents about their general approaches to parenting.

The large majority of parents (96 percent) felt that it was important for their children to learn to apologize following an incident in which children upset another person on purpose. Further, 88 percent felt it was important for their children to learn to apologize in the aftermath of upsetting someone by mistake.

Fewer than five percent of the parents surveyed endorsed the view that apologies are empty words. However, parents were sensitive to context.

Parents reported being especially likely to prompt apologies following their children’s intentional and accidental “moral transgressions.” Moral transgressions involve issues of welfare, justice, and rights, such as stealing from or hurting another person.

Parents viewed apologies as relatively less important following their children’s transgressions of social convention (e.g., breaking a rule in a game, interrupting a conversation).

Apology as a way to mend rifts

It’s noteworthy that parents were very likely to anticipate prompting apologies following incidents in which their children upset others on purpose and by mistake.

This suggests that a focus for many parents, when prompting apologies, is addressing the outcomes of their children’s social missteps. Our data suggest that parents use apology prompts to teach their children how to manage difficult social situations, regardless of underlying intentions.

Parents may prompt an apology to mend an interpersonal rift.
Girl image via www.shutterstock.com

For example, 88 percent of parents indicated that they would typically prompt an apology if their child broke a peer’s toy by mistake (in the event that the child did not apologize spontaneously).

Indeed, parents especially anticipated prompting apologies following accidental mishaps that involved their children’s peers (and not parents themselves as the wronged parties). When a child’s peer is a victim, parents likely recognize that apologies can quickly mend potential interpersonal rifts that may otherwise linger.

We also asked parents why they viewed apology prompts as important for their children. In the case of moral transgressions, parents saw these prompts as tools for helping children take responsibility. In addition, they used apology prompts for promoting empathy, teaching about harm, helping others feel better and clearing up confusing situations.

However, not all parents viewed the importance of apology prompting in the same way. There was a subset of parents who were relatively permissive: warm and caring but not overly inclined to provide discipline or expect mature behavior from their children.

Most of these parents were not wholly dismissive of the importance of apologies, but they consistently indicated being less likely to provide prompting to their children, compared to the other parents in the study.

When to prompt an apology

Overall, most parents in our study viewed apologies as important in the lives of children. And the child development research described above indicates that many children share this view.

But are there more and less effective ways to prompt a child to apologize? I argue that parents should consider whether a child will offer a prompted apology willingly and sincerely. A recently completed study sheds some light on why.

When should parents prompt an apology? Zvi Kons, CC BY-NC

In this study – currently under review – we asked four- to nine-year-old children to evaluate two types of apologies that were prompted by an adult. One apology was willingly given to the victim after the apology prompt; the other apology was given only after additional adult coercion (“You need to say you’re sorry!”).

We found that 90 percent of the children viewed the recipient of the prompted, “willingly given” apology as feeling better. However, only 22 percent of the children connected a coerced apology to improved feelings in the victim.

So, as parents ponder the merits of prompting apologies from children, it seems important to refrain from pushing one’s child to apologize when he or she is not ready, or is simply not remorseful. Most young children don’t view coerced apologies as effective.

In such cases, interventions aimed at calming down, increasing empathy and making amends may be more constructive than pushing a resistant child to deliver an apology. And, of course, components like making amends can accompany willingly given apologies as well.

Finally, to arguments that apologies are merely empty words that young children parrot, it’s worth noting that we have many rituals that involve rather scripted verbal exchanges, such as when two people in love say “I do” at a wedding or commitment ceremony.

Just as these scripted words carry deep cultural and personal meaning, so too can other culturally valued verbal scripts, such as the words in an apology. Thoughtfully teaching young children about apologizing is one aspect of teaching them how to be caring and well-regarded members of their communities.

Craig Smith, Research Investigator, University of Michigan

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

Why are people starting to believe in UFOs again?

By Joseph P. Laycock, Texas State University.

The 1990s were a high-water mark for public interest in UFOs and alien abduction. Shows like “The X-Files” and Fox’s “alien autopsy” hoax were prime-time events, while MIT even hosted an academic conference on the abduction phenomenon.

But in the first decade of the 21st century, interest in UFOs began to wane. Fewer sightings were reported, and established amateur research groups like the British Flying Saucer Bureau disbanded.

In 2006 historian Ben Macintyre suggested in The Times that the internet had “chased off” the UFOs. The web’s free-flowing, easy exchange of ideas and information had allowed UFO skeptics to prevail, and, to Macintyre, people were no longer seeing UFOs because they no longer believed in them.

Data seemed to back up Macintyre’s argument that, when it came to belief in UFOs, reason was winning out. A 1990 Gallup poll found that 27 percent of Americans believed “extraterrestrial beings have visited Earth at some time in the past.” That number rose to 33 percent in 2001, before dropping back to 24 percent in 2005.

But now “The X-Files” is back, and Hillary Clinton has even pledged to disclose what the government knows about aliens if elected president. Meanwhile, a recent Boston Globe article by Linda Rodriguez McRobbie suggests that belief in UFOs may be growing.

She points to a 2015 Ipsos poll, which reported that 45 percent of Americans believe extraterrestrials have visited the Earth.

So much for reason.

Why does Western society continue to be fascinated with the paranormal? If science doesn’t automatically kill belief in UFOs, why do reports of UFOs and alien abductions go in and out of fashion?

To some extent, this is political. Even though government agents like “Men in Black” may be the stuff of folklore, powerful people and institutions can influence the level of stigma surrounding these topics.

Sociologists of religion have also suggested that skepticism is countered by a different societal trend, something they’ve dubbed “re-enchantment.” They argue that while science can temporarily suppress belief in mysterious forces, these beliefs will always return – that the need to believe is ingrained in the human psyche.

A new mythology

The narrative of triumphant reason dates back, at least, to German sociologist Max Weber’s 1918 speech “Science as a Vocation,” in which he argued that the modern world takes for granted that everything is reducible to scientific explanations.

“The world,” he declared, “is disenchanted.”

As with many inexplicable events, UFOs were initially treated as an important topic of scientific inquiry. The public wondered what was going on; scientists studied the issue and then “demystified” the topic.

Modern UFOlogy – the study of UFOs – is typically dated to a sighting made by a pilot named Kenneth Arnold. While flying over Mount Rainier on June 24, 1947, Arnold described nine disk-like objects that the media dubbed “flying saucers.”

A few weeks later the Roswell Daily Record reported that the military had recovered a crashed flying saucer. By the end of 1947, Americans had reported an additional 850 sightings.

The front page of the July 6, 1947, edition of the Roswell Daily Record.
Wikimedia Commons

During the 1950s, people started reporting that they’d made contact with the inhabitants of these craft. Frequently, the encounters were erotic.

For example, one of the first “abductees” was a mechanic from California named Truman Bethurum. Bethurum was taken aboard a spaceship from Planet Clarion, which he said was captained by a beautiful woman named Aura Rhanes. (Bethurum’s wife eventually divorced him, citing his obsession with Rhanes.) In 1957, Antonio Villas-Boas of Brazil reported a similar encounter in which he was taken aboard a ship and forced to breed with a female alien.

Psychologists and sociologists proposed a few theories about the phenomenon. In 1957, psychoanalyst Carl Jung theorized that UFOs served a mythological function that helped 20th-century people adapt to the stresses of the Cold War. (For Jung, this did not preclude the possibility that UFOs might be real.)

Furthermore, American social mores were rapidly changing in the mid-20th century, especially around issues of race, gender and sexuality. According to historian W. Scott Poole, stories of sex with aliens could have been a way of processing and talking about these changes. For example, when the Supreme Court finally declared laws banning interracial marriage unconstitutional in 1967, the country had already been talking for years about Betty and Barney Hill, an interracial couple who claimed to have been probed by aliens.

Contactee lore also started applying “scientific ideas” as a way to repackage some of the mysterious forces associated with traditional religions. Folklore expert Daniel Wojcik has termed belief in benevolent space aliens “techno-millennarianism.” Instead of God, some UFO believers think forms of alien technology will be what redeems the world. Heaven’s Gate – whose members famously committed mass suicide in 1997 – was one of several religious groups awaiting the arrival of the aliens.

You’re not supposed to talk about it

Despite some dubious stories from contactees, the Air Force took UFO sightings seriously, organizing a series of studies, including Project Blue Book, which ran from 1952 to 1969.

In 1966, the Air Force tapped a team of University of Colorado scientists headed by physicist Edward Condon to investigate reports of UFOs. Even though the team failed to identify 30 percent of the 91 sightings it examined, its 1968 report concluded that it wouldn’t be useful to continue studying the phenomenon. Condon added that schoolteachers who allowed their students to read UFO-related books for classroom credit were doing a grave disservice to the students’ critical faculties and ability to think scientifically.

Basing its decision on the report, the Air Force terminated Project Blue Book, and Congress ended all funding for UFO research.

As religion scholar Darryl Caterine explained in his book “Haunted Ground,” “With civil rights riots, hippie lovefests and antiwar protests raging throughout the nation, Washington gave its official support to a rational universe.”

While people still believed in UFOs, expressing too much interest in the subject now came with a price. In 2010, sociologists Christopher D. Bader, F. Carson Mencken and Joseph O. Baker found that 69 percent of Americans reported belief in at least one paranormal subject (astrology, ghosts, UFOs, etc.).

But their findings also suggested that the more status and social connections someone has, the less likely he or she is to report paranormal belief. Single people report more paranormal beliefs than married people, and those with low incomes report more paranormal belief than those with high incomes. It may be that people with “something to lose” have reason not to believe in the paranormal (or at least not to talk about it).

In 1973, the American Institute of Aeronautics and Astronautics surveyed its membership about UFOs. Several scientists reported that they had seen unidentified objects and a few even answered that UFOs are extraterrestrial or at least “real.” However, physicist Peter A. Sturrock suggested that scientists felt comfortable answering these questions only because their anonymity was guaranteed.

Harvard psychiatrist John Mack came to symbolize the stigma of UFO research. Mack worked closely with abductees, whom he dubbed “experiencers.” While he remained cagey about whether aliens actually existed, he advocated for the experiencers and argued that their stories should be taken seriously.

John Mack’s appearance on ‘Oprah.’

His bosses weren’t happy. In 1994, Harvard Medical School opened an investigation into his research – an unprecedented action against a tenured professor. In the end, Harvard dropped the case and affirmed Mack’s academic freedom. But the message was clear: Being open-minded about aliens was bad for one’s career.

Reason and re-enchantment

So if taking an interest in UFOs carries such a stigma, why is Hillary Clinton, a presidential candidate, talking about them?

Part of the answer may be that the Clintons have ties to a network of influential people who have lobbied the government to disclose the truth about UFOs. This includes the late millionaire Laurance Rockefeller (who funded John Mack’s research) and John Podesta, the chairman of Clinton’s campaign and a long-time disclosure advocate.

But there may also be a broader cultural cycle at work. Sociologists such as Christopher Partridge have suggested that disenchantment leads to re-enchantment. While secularization may have weakened the influence of traditional churches, this doesn’t mean that people have become disenchanted skeptics. Instead, many have explored alternate spiritualities that churches had previously stigmatized as “superstitions” (everything from holistic healing to Mayan prophecies). The rise of scientific authority may have paradoxically paved the way for UFO mythology.

A similar change may be happening in the political sphere where the language of critical thinking has been turned against the scientific establishment. In the 1960s, Congress deferred to the Condon Report. Today, conservative politicians regularly challenge ideas like climate change, evolution and the efficacy of vaccines. These dissenters never frame their claims as “anti-science” but rather as courageous examples of free inquiry.

Donald Trump may have been the first candidate to discover that weird ideas are now an asset instead of a liability. In a political climate where the language of reason is used to attack the authority of science, musing over the possibility of UFOs simply doesn’t carry the stigma that it used to.

Joseph P. Laycock, Assistant Professor of Religious Studies, Texas State University

This article was originally published on The Conversation. Read the original article.

Now, Check Out:


Will the health dangers of climate change get people to care? The science says: maybe

By Matthew Nisbet, Northeastern University.

Climate change is a major public health threat, already making existing problems like asthma, exposure to extreme heat, food poisoning, and infectious disease more severe, and posing new risks from climate change-related disasters, including death or injury.

Those were the alarming conclusions of a new scientific assessment report released by the Obama administration this week, drawing on input from eight federal agencies and more than 100 relevant experts.

“As far as history is concerned this is a new kind of threat that we are facing,” said U.S. Surgeon General Vivek Murthy at a White House event. Pregnant women, children, low-income people and communities of color are among the most at risk.

Despite ever more urgent warnings from scientists, Americans still tend to view climate change as a scientific or environmental issue, but not as a problem that currently affects them personally, or one that connects to issues that they already perceive as important.

Yet research suggests that as federal agencies, experts, and societal leaders increasingly focus on the public health risks of climate change, this reframing may be able to overcome longstanding public indifference on the issue. The new communication strategy, however, faces several hurdles and uncertainties.

Putting a public health focus to the test

In a series of studies that I conducted with several colleagues in 2010 and 2011, we examined how Americans respond to information about climate change when the issue is reframed as a public health problem.

In line with the findings of the recent Obama administration report, the messages we tested with Americans stressed scientific findings that link climate change to an increase in the incidence of infectious diseases, asthma, allergies, heat stroke and other health problems – risks that particularly impact children, the elderly and the poor.

We evaluated not only story lines that highlighted these risks, but also presentations that focused on the benefits to public health if actions were taken to curb greenhouse gas emissions.

In an initial study, we conducted in-depth interviews with 70 respondents from 29 states, recruiting subjects from six previously defined audience segments. These segments ranged on a continuum from those individuals deeply alarmed by climate change to those who were deeply dismissive of the problem.

Across all six audience segments, when asked to read a short essay that framed climate change in terms of public health, individuals said that the information was both useful and compelling, particularly at the end of the essay when locally focused policy actions were presented with specific benefits to public health.

Effects of climate change, including higher temperatures, have direct effects on public health, but historically it’s largely been framed as an environmental issue.
anoushdehkordi/flickr, CC BY

In a follow-up study, we conducted a nationally representative online survey. Respondents from each of the six audience segments were randomly assigned to three different experimental conditions in which they read brief essays about climate change discussed as either an environmental problem, a public health problem or a national security problem. This allowed us to evaluate their emotional reactions to strategically framed messages about the issue.

In comparison to messages that defined climate change in terms of either the environment or national security, talking about climate change as a public health problem generated greater feelings of hope among subjects. Research suggests that fostering a sense of hope, specifically a belief that actions to combat climate change will be successful, is likely to promote greater public involvement and participation on the issue.

Among subjects who tended to doubt or dismiss climate change as a problem, the public health focus also helped diffuse anger in reaction to information about the issue, creating the opportunity for opinion change.

A recent study by researchers at Cornell University built on our findings to examine how to effectively reframe the connections between climate change and ocean health.

In this study involving 500 subjects recruited from among passengers on a Seattle-area ferry boat, participants were randomly assigned to two frame conditions in which they read presentations that defined the impact of climate change on oceans.

For a first group of subjects, the consequences of climate change were framed in terms of their risks to marine species such as oysters. For the second group, climate change was framed in terms of risks to humans who may eat contaminated oysters.

The framing of ocean impacts in terms of risks to human health appeared to depoliticize perceptions. In this case, the human health framing condition had no discernible impact on the views of Democrats and independents, but it did influence the outlook of Republicans. Right-leaning people, when information emphasized the human health risks, were significantly more likely to support various proposed regulations of the fossil fuel industry.

In two other recent studies, the Cornell team of researchers has found that communications about climate change are more persuasive among political conservatives when framed in terms of localized, near-term impacts and when they feature compassion appeals for the victims of climate change disasters, such as drought.

Challenges to reframing climate change

To date, a common weakness in studies testing different framing approaches to climate change is that they do not evaluate the effects of the tested messages in the context of competing arguments.

In real life, most people hear about climate change by way of national news outlets, local TV news, conversations, social media and political advertisements. In these contexts, people are likely to also encounter arguments by those opposed to policy action who misleadingly emphasize scientific uncertainty or who exaggerate the economic costs of action.

Thus our studies and others may overestimate framing effects on attitude change, since they do not correspond to how most members of the public encounter information about climate change in the real world.

The two studies that have examined the effects of novel frames in the presence of competing messages have found mixed results. A third recent study finds no influence on attitudes when reframing action on climate change in terms of benefits to health or the economy, even in the absence of competing frames. In light of their findings, the authors recommend that communication efforts remain focused on emphasizing the environmental risks of inaction.

Communicating about climate change as a public health problem also faces barriers from how messages are shared and spread online, suggests another recent study.

In past research on Facebook sharing, messages that are perceived to be conventional are more likely to be passed on than those that are considered unconventional. Scholars theorize that this property of Facebook sharing relates closely to how cultures typically tend to reinforce status quo understandings of social problems and to marginalize unconventional perspectives.

In an experiment designed like a game of three-way telephone in which subjects were asked to select and pass on Facebook messages about climate change, the authors found that a conventional framing of climate change in terms of environmental risks was more likely to be shared, compared to less conventional messages emphasizing the public health and economic benefits to action.

In all, these results suggest that efforts to employ novel framing strategies on climate change that involve an emphasis on public health will require sustained, well-resourced, and highly coordinated activities in which such messages are repeated and emphasized by a diversity of trusted messengers and opinion leaders.

That’s why the new federal scientific assessment, which was promoted via the White House media and engagement offices, is so important. As these efforts continue, they will also need to be localized and tailored to specific regions, cities, or states and periodically evaluated to gauge success and refine strategy.

Matthew Nisbet, Associate Professor of Communication, Northeastern University

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

Freaks, geeks, norms and mores: why people use the status quo as a moral compass

By Christina Tworek, University of Illinois at Urbana-Champaign.

The Binewskis are no ordinary family. Arty has flippers instead of limbs; Iphy and Elly are Siamese twins; Chick has telekinetic powers. These traveling circus performers see their differences as talents, but others consider them freaks with “no values or morals.” However, appearances can be misleading: The true villain of the Binewski tale is arguably Miss Lick, a physically “normal” woman with nefarious intentions.

Much like the fictional characters of Katherine Dunn’s “Geek Love,” everyday people often mistakenly treat normality as a criterion for morality. Yet freaks and norms alike may find themselves anywhere along the good/bad continuum. Still, people use what’s typical as a benchmark for what’s good, and are often averse to behavior that goes against the norm. Why?

In a series of studies, psychologist Andrei Cimpian and I investigated why people use the status quo as a moral codebook – a way to decipher right from wrong and good from bad. Our inspiration for the project was philosopher David Hume, who pointed out that people tend to allow the status quo (“what is”) to guide their moral judgments (“what ought to be”). Just because a behavior or practice exists, that doesn’t mean it’s good – but that’s exactly how people often reason. Slavery and child labor, for example, were and still are practiced in some parts of the world, but their existence doesn’t make them right or OK. We wanted to understand the psychology behind the reasoning that prevalence is grounds for moral goodness.

To examine the roots of such “is-to-ought inferences,” we turned to a basic element of human cognition: how we explain what we observe in our environments. From a young age, we try to understand what’s going on around us, and we often do so by explaining. Explanations are at the root of many deeply held beliefs. Might people’s explanations also influence their beliefs about right and wrong?

Quick shortcuts to explain our environment

When coming up with explanations to make sense of the world around us, the need for efficiency often trumps the need for accuracy. (People don’t have the time and cognitive resources to strive for perfection with every explanation, decision or judgment.) Under most circumstances, they just need to quickly get the job done, cognitively speaking. When faced with an unknown, an efficient detective takes shortcuts, relying on simple information that comes to mind readily.

More often than not, what comes to mind first tends to involve “inherent” or “intrinsic” characteristics of whatever is being explained.

Separate bathrooms reflect the natural order of things?
Bart Everson, CC BY

For example, if I’m explaining why men and women have separate public bathrooms, I might first say it’s because of the anatomical differences between the sexes. The tendency to explain using such inherent features often leads people to ignore other relevant information about the circumstances or the history of the phenomenon being explained. In reality, public bathrooms in the United States became segregated by gender only in the late 19th century – not as an acknowledgment of the different anatomies of men and women, but rather as part of a series of political changes that reinforced the notion that women’s place in society was different from that of men.

Testing the link

We wanted to know if the tendency to explain things based on their inherent qualities also leads people to value what’s typical.

To test whether people’s preference for inherent explanations is related to their is-to-ought inferences, we first asked our participants to rate their agreement with a number of inherent explanations: For example, girls wear pink because it’s a dainty, flower-like color. This served as a measure of participants’ preference for inherent explanations.

In another part of the study, we asked people to read mock press releases that reported statistics about common behaviors. For example, one stated that 90 percent of Americans drink coffee. Participants were then asked whether these behaviors were “good” and “as it should be.” That gave us a measure of participants’ is-to-ought inferences.

These two measures were closely related: People who favored inherent explanations were also more likely to think that typical behaviors are what people should do.

We tend to see the commonplace as good and how things should be. For example, if I think public bathrooms are segregated by gender because of the inherent differences between men and women, I might also think this practice is appropriate and good (a value judgment).

This relationship was present even when we statistically adjusted for a number of other cognitive or ideological tendencies. We wondered, for example, if the link between explanation and moral judgment might be accounted for by participants’ political views. Maybe people who are more politically conservative view the status quo as good, and also lean toward inherence when explaining? This alternative was not supported by the data, however, and neither were any of the others we considered. Rather, our results revealed a unique link between explanation biases and moral judgment.
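
For readers unfamiliar with what “statistically adjusting” means in practice, the short sketch below illustrates the general idea on made-up data: the covariate is entered into the same regression model, so the coefficient on inherence bias reflects its link to is-to-ought judgments after the covariate’s contribution is accounted for. The variable names and numbers are invented for illustration; this is the generic technique, not the authors’ actual analysis or dataset.

```python
import numpy as np

# Illustrative sketch of "statistically adjusting for" a covariate using
# synthetic data; not the authors' analysis or their real measurements.
rng = np.random.default_rng(0)
n = 500

conservatism = rng.normal(size=n)                          # hypothetical covariate
inherence_bias = 0.4 * conservatism + rng.normal(size=n)   # preference for inherent explanations
is_to_ought = 0.5 * inherence_bias + 0.2 * conservatism + rng.normal(size=n)

# Regress the is-to-ought measure on inherence bias while including the
# covariate, so the inherence coefficient is the link *after* adjustment.
X = np.column_stack([np.ones(n), inherence_bias, conservatism])
coef, *_ = np.linalg.lstsq(X, is_to_ought, rcond=None)

print(f"adjusted effect of inherence bias: {coef[1]:.2f}")
print(f"effect of political conservatism:  {coef[2]:.2f}")
```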

What goes into even young children’s assumptions that ‘what is’ is ‘what ought to be’?
Girl image via www.shutterstock.com.

A built-in bias affecting our moral judgments

We also wanted to find out at what age the link between explanation and moral judgment develops. The earlier in life this link is present, the greater its influence may be on the development of children’s ideas about right and wrong.

From prior work, we knew that the bias to explain via inherent information is present even in four-year-old children. Preschoolers are more likely to think that brides wear white at weddings, for example, because of something about the color white itself, and not because of a fashion trend people just decided to follow.

Does this bias also affect children’s moral judgment?

Indeed, as we found with adults, 4- to 7-year-old children who favored inherent explanations were also more likely to see typical behaviors (such as boys wearing pants and girls wearing dresses) as being good and right.

If what we’re claiming is correct, changes in how people explain what’s typical should change how they think about right and wrong. When people have access to more information about how the world works, it might be easier for them to imagine the world being different. In particular, if people are given explanations they may not have considered initially, they may be less likely to assume “what is” equals “what ought to be.”

Queen Victoria started the trend in 1840 with her at-the-time unusual white wedding dress.
Franz Xaver Winterhalter

Consistent with this possibility, we found that by subtly manipulating people’s explanations, we could change their tendency to make is-to-ought inferences. When we put adults in what we call a more “extrinsic” (and less inherent) mindset, they were less likely to think that common behaviors are necessarily what people should do. For instance, even children were less likely to view the status quo (brides wear white) as good and right when they were provided with an external explanation for it (a popular queen long ago wore white at her wedding, and then everyone started copying her).

Implications for social change

Our studies reveal some of the psychology behind the human tendency to make the leap from “is” to “ought.” Although there are probably many factors that feed into this tendency, one of its sources seems to be a simple quirk of our cognitive systems: the early emerging bias toward inherence that’s present in our everyday explanations.

This quirk may be one reason why people – even very young ones – have such harsh reactions to behaviors that go against the norm. For matters pertaining to social and political reform, it may be useful to consider how such cognitive factors lead people to resist social change.

Christina Tworek, Ph.D. Student in Developmental Psychology, University of Illinois at Urbana-Champaign

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

Here’s How Octopuses See Color Differently than Any Other Animal [Video]

Biologists have puzzled for decades over the paradox of octopus vision. Despite their brilliantly colored skin and ability to rapidly change color to blend into the background, cephalopods like octopuses and squid have eyes with only one type of light receptor—which basically means they see only black and white.

Why would a male risk flashing his bright colors during a mating dance if the female can’t even see him—but a nearby fish can, and quickly gulps him down? And how could these animals match the color of their skin with their surroundings as camouflage if they can’t actually see the colors?

A new study shows that cephalopods may actually be able to see color—just differently from any other animal.

Their secret? An unusual pupil—U-shaped, W-shaped, or dumbbell-shaped—that allows light to enter the eye through the lens from many directions, rather than just straight into the retina.

Chromatic aberration

Humans and other mammals have eyes with round pupils that contract to pinholes to give us sharp vision, with all colors focused on the same spot. But as anyone who’s been to the eye doctor knows, dilated pupils not only make everything blurry, but create colorful fringes around objects—what is known as chromatic aberration.

This is because the transparent lens of the eye—which in humans changes shape to focus light on the retina—acts like a prism and splits white light into its component colors. The larger the pupillary area through which light enters, the more the colors are spread out. The smaller our pupil, the less the chromatic aberration. Camera and telescope lenses similarly suffer from chromatic aberration, which is why photographers stop down their lenses to get the sharpest image with the least color blurring.

Cephalopods, however, evolved wide pupils that accentuate the chromatic aberration and might have the ability to judge color by bringing specific wavelengths to a focus on the retina, much the way animals like chameleons judge distance by using relative focus. They focus these wavelengths by changing the depth of their eyeball, altering the distance between the lens and the retina, and by moving the pupil around to change its off-axis location and thus the amount of chromatic blur.
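
The proposed mechanism can be illustrated with a toy thin-lens model: because focal length varies with wavelength, each accommodation state (each lens-to-retina distance) brings a different color into sharpest focus, so the setting that minimizes blur carries color information. The numbers below for aperture, focal length and dispersion are invented placeholders; this is a rough illustration of the idea, not the researchers’ actual simulation.

```python
# Toy thin-lens model of wavelength-dependent focus. All parameter values are
# illustrative assumptions, not measured cephalopod-eye properties.

APERTURE_MM = 8.0        # wide pupil accentuates chromatic aberration
F_REF_MM = 10.0          # focal length at a reference wavelength of 550 nm
DISPERSION = 0.04        # assumed fractional focal-length change across the visible range

def focal_length_mm(wavelength_nm):
    """Shorter (bluer) wavelengths refract more strongly and focus closer to the lens."""
    return F_REF_MM * (1.0 + DISPERSION * (wavelength_nm - 550.0) / 300.0)

def blur_spot_mm(wavelength_nm, lens_to_retina_mm):
    """Blur-spot diameter on the retina for a distant point source of one color."""
    f = focal_length_mm(wavelength_nm)
    return APERTURE_MM * abs(lens_to_retina_mm - f) / f

# Sweep the lens-to-retina distance, as if the animal were changing the depth
# of its eyeball; each setting brings a different color into sharpest focus.
for step in range(9):
    d = 9.6 + 0.1 * step
    blurs = {w: blur_spot_mm(w, d) for w in (450.0, 550.0, 650.0)}  # blue, green, red
    sharpest = min(blurs, key=blurs.get)
    detail = ", ".join(f"{w:.0f} nm: {b:.3f} mm" for w, b in blurs.items())
    print(f"retina at {d:4.1f} mm -> sharpest color ~{sharpest:.0f} nm ({detail})")
```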

“We propose that these creatures might exploit a ubiquitous source of image degradation in animal eyes, turning a bug into a feature,” says Alexander Stubbs, a graduate student at the University of California, Berkeley. “While most organisms evolve ways to minimize this effect, the U-shaped pupils of octopus and their squid and cuttlefish relatives actually maximize this imperfection in their visual system while minimizing other sources of image error, blurring their view of the world but in a color-dependent way and opening the possibility for them to obtain color information.”

How U-shaped pupils work

Stubbs came up with the idea that cephalopods could use chromatic aberration to see color after photographing lizards that display with ultraviolet light, and noticing that UV cameras suffer from chromatic aberration. He teamed up with his father, Christopher Stubbs, professor of physics and of astronomy at Harvard University, to develop a computer simulation to model how cephalopod eyes might use this to sense color. Their findings appear in the Proceedings of the National Academy of Sciences.

They concluded that a U-shaped pupil like that of squid and cuttlefish would allow the animals to determine an object’s color based on whether or not it comes into focus on the retina. The dumbbell-shaped pupils of many octopuses work similarly, since they’re wrapped around the eyeball in a U shape and produce a similar effect when looking down. This may even be the basis of color vision in jumping spiders and in dolphins, which have U-shaped pupils when contracted.

“Their vision is blurry, but the blurriness depends on the color,” Stubbs says. “They would be comparatively bad at resolving white objects, which reflect all wavelengths of light. But they could fairly precisely focus on objects that are purer colors, like yellow or blue, which are common on coral reefs and rocks and algae. It seems they pay a steep price for their pupil shape but may be willing to live with reduced visual acuity to maintain chromatically-dependent blurring, and this might allow color vision in these organisms.”

“We carried out extensive computer modeling of the optical system of these animals, and were surprised at how strongly image contrast depends on color,” says Christopher Stubbs. “It would be a shame if nature didn’t take advantage of this.”

Not enough contrast

Alexander Stubbs extensively surveyed 60 years of studies of color vision in cephalopods, and discovered that, while some biologists had reported an ability to distinguish colors, others reported the opposite.


The negative studies, however, often tested the animal’s ability to see solid colors or edges between two colors of equal brightness, which is hard for this type of eye because, as with a camera, it’s hard to focus on a solid color with no contrast. Cephalopods are best at distinguishing the edges between dark and bright colors, and in fact, their display patterns are typically regions of color separated by black bars.

“We believe we have found an elegant mechanism that could allow these cephalopods to determine the color of their surroundings, despite having a single visual pigment in their retina,” he says. “This is an entirely different scheme than the multi-color visual pigments that are common in humans and many other animals. We hope this study will spur additional behavioral experiments by the cephalopod community.”


UC Berkeley’s Museum of Vertebrate Zoology, a Graduate Research Fellowship Program grant to Alexander Stubbs, and Harvard University supported the work.

Source: Republished from Futurity.org as a derivative work under the Attribution 4.0 International license. Original article posted to Futurity by Robert Sanders, UC Berkeley.

Now, Check Out:

Reducing water pollution with microbes and wood chips

By Laura Christianson, University of Illinois at Urbana-Champaign.

Beneath fields of corn and soybeans across the U.S. Midwest lies an unseen network of underground pipes. These systems, which are known as tile drainage networks, channel excess water out of soil and carry it to lakes, streams and rivers. There are over 38 million acres of tile drainage in the Corn Belt states.

These networks play a vital role in farm production. They allow farmers to drive tractors into fields that would otherwise be too wet and make it possible to plant early in spring. And they boost crop growth and yield by preventing fields from becoming waterlogged.

But drainage systems are also major contributors to water pollution. The water they remove from fields contains nitrogen, which comes both from organic matter in rich Midwestern soil and from fertilizer. This nitrogen over-fertilizes downstream water bodies, causing blooms of algae. When the algae die, bacteria decompose them, using oxygen in the water as fuel.

The result is hypoxic zones, also known as dead zones, where nothing can live. Some of these zones, such as the one that forms in the Gulf of Mexico every year, fed by Midwestern farm drainage water, cover thousands of square miles.

The Gulf of Mexico dead zone forms every summer, fed by drainage from midwestern farms.
NASA/NOAA via Wikipedia

Across the Midwest and in many other areas, we need to reduce nitrogen pollution on a very large scale to improve water quality. My research focuses on woodchip bioreactors – simple trenches that can be constructed on farms to clean the water that flows out of tile drains. This is a proven practice that is ready for broad-scale implementation. Nevertheless, there is still great potential, through additional research and engagement, to improve how well woodchip bioreactors work and to convince farmers to use them.

Removing nitrogen from farm runoff

Researchers studying ways to improve agricultural water quality have shown that we can use a natural process called denitrification to treat subsurface drainage water on farms. It relies on bacteria found in soil around the world to convert nitrate – the form of nitrogen in farm drainage water – to nitrogen gas, which is environmentally benign and makes up more than three-fourths of the air we breathe.

These bacteria use carbon as a food source. In oxygen-free conditions, such as wetlands or soggy soils, they are fueled by carbon in the surrounding soil, and inhale nitrate while exhaling nitrogen gas. Bioreactors are engineered environments that take advantage of their work on a large scale.

Denitrifying bioreactors on farms are surprisingly simple. To make them we dig trenches between farm fields and the outlets where water flows from tile drains into ditches or streams. We fill them with wood chips, which are colonized by native bacteria from the surrounding soil, and then route water from farm drainage systems through the trenches. The bacteria “eat” the carbon in the wood chips, “inhale” the nitrate in the water, and “exhale” nitrogen gas. In the process, they reduce nitrogen pollution in water flowing off of the farm by anywhere from 15 percent to over 90 percent.
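
For readers who want the chemistry spelled out, the textbook version of the denitrification pathway and its overall stoichiometry is sketched below, with a generic carbohydrate (CH2O) standing in for the carbon supplied by the wood chips. This is the standard simplification found in water-quality texts, not a reaction balance taken from this article.

```latex
% Stepwise reduction carried out by denitrifying bacteria:
\mathrm{NO_3^-} \;\longrightarrow\; \mathrm{NO_2^-} \;\longrightarrow\; \mathrm{NO} \;\longrightarrow\; \mathrm{N_2O} \;\longrightarrow\; \mathrm{N_2}

% Overall stoichiometry with generic organic carbon (CH2O) as the electron donor:
5\,\mathrm{CH_2O} + 4\,\mathrm{NO_3^-} + 4\,\mathrm{H^+} \;\longrightarrow\; 2\,\mathrm{N_2} + 5\,\mathrm{CO_2} + 7\,\mathrm{H_2O}
```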

A denitrifying woodchip bioreactor removing nitrate from a tile-drained corn field
Christianson and Helmers/Iowa State Extension

Although denitrifying bioreactors are relatively new, they have moved beyond proof of concept. A new special collection of papers in the Journal of Environmental Quality, which I co-edited with Dr. Louis Schipper of the University of Waikato in New Zealand, demonstrates that these systems can now be considered an effective tool to reduce pollution in nitrate-laden waters. Researchers are using these systems in an expanding range of locations, applications, and environmental conditions.

Making bioreactors work for farmers

Woodchip bioreactors can be installed without requiring farmers to take land out of production, and require very little annual maintenance. These are important selling points for farmers. The Clean Water Act does not regulate nitrogen pollution from diffuse agricultural sources such as farm runoff, but states across the Midwest are working with federal regulators to set targets for reducing nitrogen pollution. They also are developing water quality strategies that call for installing tens of thousands of denitrifying bioreactors to help reach those targets.

So far, wood chips have proven to be the most practical bioreactor fill. Lab-scale research has also explored using farm residues such as corn cobs instead, and in those studies agricultural residues consistently provide much higher nitrate removal rates than wood chips. However, they need to be replaced more frequently than wood chips, which have an estimated design life of about 10 years in a bioreactor.

Laboratory studies have also helped us understand how other factors influence nitrate removal in bioreactors, including water temperature and the length of time that water remains inside the bioreactor – which, in turn, depends on the flow rate and the size of the bioreactor. Another challenge is that bioreactors work best in late summer, when drainage flow rates are low and the water flowing from fields is warm, but most nitrogen flows from fields in drainage water in spring, when conditions are cool and wet. Researchers are working to design bioreactors that can overcome this disconnect.
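As a rough illustration of the residence-time idea, the sketch below computes hydraulic residence time as the water-filled volume of the trench divided by the drainage flow rate. The trench dimensions, porosity and flow rates are assumptions chosen only to show how high spring flows shorten the contact time.

```python
# Hydraulic residence time (HRT) = water-filled volume / flow rate.
# Dimensions, porosity and flow rates are illustrative assumptions.

length_m, width_m, saturated_depth_m = 30.0, 6.0, 0.9
drainable_porosity = 0.7                      # fraction of the trench water can fill

water_volume_m3 = length_m * width_m * saturated_depth_m * drainable_porosity

for season, flow_m3_per_hr in [("late summer", 1.0), ("spring", 8.0)]:
    hrt_hours = water_volume_m3 / flow_m3_per_hr
    print(f"{season}: about {hrt_hours:.0f} hours of contact with the wood chips")

# Higher spring flows cut the residence time sharply, one reason nitrate
# removal drops just when most of the nitrogen is leaving the fields.
```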

Installing a denitrifying woodchip bioreactor
L. Christianson /Iowa Soybean Association Environmental Programs and Services

We have also carried out tests to see whether bioreactors can treat aquaculture wastewater, which typically contains much higher levels of nitrate and other water pollutants than tile drainage water. Our study showed that bioreactors could be a viable low-cost water treatment option for fish farms.

And researchers from New Zealand recently showed that denitrifying bioreactors may be an effective option for treating some small sources of municipal wastewater. Their work provided the first indication that woodchip bioreactors may be able to remove microbial contaminants like E. coli and viruses, which can be hazardous to human health, from water. The exact process by which the E. coli and viruses were removed is not yet known.

One difficult challenge in designing denitrifying bioreactors is testing novel designs at the field scale. We need to build and test large bioreactors so that we can provide useful information to farmers, landowners, crop advisors, drainage contractors, conservation staff, and state and federal agencies. They want to know practical facts, such as how long the wood chips last (approximately 7-15 years), how much it costs to install a field-scale bioreactor ($8,000-$12,000), and whether bioreactors back up water in tile drainage systems (no). To refine what we know, we plan to continue installing full-size bioreactors either on research farms or by collaborating with private farmers who want to be at the cutting edge of water-quality solutions.

We all play a role in agriculture because we all eat, and at the same time, we all need clean water. Simple technologies like woodchip bioreactors can help meet both goals by helping farmers maintain good drainage and providing cleaner water downstream.

Laura Christianson, Research Assistant Professor of Crop Sciences, University of Illinois at Urbana-Champaign

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

Can next-generation bomb ‘sniffing’ technology outdo dogs on explosives detection?

By David Atkinson, Pacific Northwest National Laboratory.

With each terrorist attack on another airport, train station or other public space, the urgency to find new ways to detect bombs before they’re detonated ratchets up.

Chemical detection of explosives is a cornerstone of aviation security. Typically called “trace detection,” this approach can find minuscule amounts of residue left behind after someone handles an explosive. A form of this technology called ion mobility spectrometry is what Transportation Security Administration officers are using when they swab and test your laptop, hands or other items at the airport. In a few seconds, a sample is vaporized, and the resulting chemical ions are separated by molecular size and shape, triggering an alarm if an explosive compound is detected.

Contact sampling can be an effective approach, but the terrorist must be screened before reaching an intended target.
Jason Reed/Reuters

But this method is labor-intensive and slow for large volumes of stuff, and its effectiveness can depend on the sampling skill of the officer. It relies on contact sampling, which requires security personnel to have access to surfaces where residue may have been left. That’s not useful if a bomber has no intention of going through a security line and having his personal effects searched.

Bomb-sniffing canines are security partners at airports. Rebecca Cook/Reuters

Some security teams rely on dogs, which can be trained to sniff out explosives using their exquisite sense of smell. But the logistics and training involved with the routine deployment of canines can be arduous, and there are cultural barriers to using dogs to directly screen people.

What researchers have wanted to develop for a long time is a new chemical detection technology that could “sniff” for explosives vapor, much like a canine does. Many efforts over the years have fallen short because they weren’t sensitive enough. My research team has been working on this problem for nearly two decades – and we’re making good headway.

More and more sensitive

The one big hurdle to engineering some kind of technology to rival a dog’s nose is the extremely low vapor pressures of most explosives. What we call the “equilibrium vapor pressure” of a material is basically a measure of how much of it is in the air, available for detection, under perfect conditions at a specific temperature.

Commonly used by military forces around the world, nitro-organic explosives such as TNT, RDX and PETN have equilibrium vapor pressures in the parts per trillion range. To reliably sniff out related vapors in operational environments, like a busy check-in area of an airport, the detection capability would need to be well below that – down into the parts per quadrillion range for many explosives.

These levels have been beyond the capability of trace detection instrumentation. Achieving a 325 parts per quadrillion level of detection is analogous to finding one specific tree on the entire planet Earth.
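That analogy can be checked with a little arithmetic. The roughly three trillion global tree count is an outside estimate (Crowther et al., 2015), not a figure from this article.

```python
# Sanity check of the "one tree on Earth" analogy.
trees_on_earth = 3.0e12                    # ~3 trillion trees (outside estimate)
one_tree_fraction = 1 / trees_on_earth     # about 3.3e-13
parts_per_quadrillion = one_tree_fraction * 1e15

print(f"One tree among all of Earth's trees: about {parts_per_quadrillion:.0f} ppq")
# About 333 parts per quadrillion, the same order as the 325 ppq figure above.
```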

But recent research has pushed the detection envelope into that part-per-quadrillion range. In 2008, an international team used an advanced ionization technique, called secondary electrospray ionization mass spectrometry, to get better than part per trillion level detection of TNT and PETN.

In 2012, our research team at Pacific Northwest National Laboratory (PNNL) achieved direct, real-time detection of RDX vapors at levels below 25 parts per quadrillion using atmospheric flow tube mass spectrometry (AFT-MS).

Schematic diagram of the elegant simplicity of the AFT-MS device.
PNNL, CC BY

Sensitivity for a mass spectrometer is related to how many of the target molecules can be ionized and transferred into the mass spectrometer for detection. The more complete that process is, the better the sensitivity will be. Our AFT-MS scheme is different because it uses time to maximize collisions between explosive vapor molecules and air ions created by the ion source; it is the extent of reaction between those ions and the explosive molecules that defines the sensitivity. Using AFT-MS, we’ve now expanded the capability to detect a suite of explosives at single-digit parts-per-quadrillion levels.

Next step: putting it into practice

So we’ve moved the state of the art of chemical-based explosives detection into a realm where contact sampling is no longer necessary and instruments can “sniff” for explosives in a manner similar to canines.

PNNL research scientist Robert Ewing presenting a trace vapor sample to the detector. PNNL, CC BY

Instruments that have the vapor detection capability of canines and can also operate continuously open up exciting new security screening possibilities. Trace detection wouldn’t need to rely on direct access to suspicious items for sampling. Engineers could create a noninvasive walk-through explosive detection device, similar to a metal detector.

The real innovation is in the direct detection of the vapor plume, enabled by the extreme sensitivity. There is no longer a need to collect explosive particles for vaporization – as is the case in past trace detection technologies that use loud air jets to dislodge particles from people. Instead, the greater sensitivity means the air could simply be constantly sampled for explosives molecules as people pass through.

This approach would certainly make airport checkpoints less onerous, improving throughput and the passenger experience. These types of devices could also be set up at entrances to airport terminals and other public facilities. Being able to detect explosives as they enter a building, rather than only at a checkpoint, would be a major security leap.

Making two measurements – vapor detection via mass spectrometer and visual image via currently deployed body scanner – in the same time and space. PNNL, CC BY

A deployed vapor detection capability would also increase safety by adding a second independent form of information to what scanners have available. Currently, most screening techniques, such as x-ray and millimeter wave imaging, are based on spotting anomalies – a TSA operator notices a strange shape in the image. A vapor detection technology would add to their toolkit the ability to identify specific chemicals.

Pairing the two allows for a two-pronged approach to finding explosives: spotting them on an image and sniffing them out in the vapor plume emitted by a checked bag or a person. It’s like recognizing a person you know but haven’t seen in a long time: seeing a recent picture and hearing their voice together may be enough to identify them when either piece of information alone would not.

Inspired by the tremendous detection capabilities of dogs, we’ve made remarkable advances toward developing technology that can follow in their footsteps. Deploying vapor analysis for explosives can both enhance security levels and provide a less intrusive screening environment. Continuing research aims to hone the technology and lower its costs so it can be deployed at an airport near you.

David Atkinson, Senior Research Scientist, Pacific Northwest National Laboratory

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

There’s more than practice to becoming a world-class expert

By D. Zachary Hambrick, Michigan State University, and Fredrik Ullén, Karolinska Institute.

Some people are dramatically better at activities like sports, music and chess than other people. Take the basketball great Stephen Curry. This past season, breaking the record he set last year by over 40 percent, Curry made an astonishing 402 three-point shots – 126 more than his closest challenger.

What explains this sort of exceptional performance? Are experts “born,” endowed with a genetic advantage? Are they entirely “made” through training? Or is there some of both?

What earlier studies show

This question is the subject of a long-running debate in psychology, and is the focus of the new book “Peak: The New Science of Expertise” by Florida State University psychologist Anders Ericsson and science writer Robert Pool.

In a 1993 study, Ericsson and his colleagues recruited violinists from an elite Berlin music academy and asked them to estimate the amount of time they had spent engaging in “deliberate practice” across their musical careers.

Deliberate practice, as Ericsson and his colleagues have defined it, includes training activities that are specifically designed to improve a person’s performance in an endeavor like playing an instrument. These activities require a high level of concentration and aren’t inherently enjoyable. Consequently, the amount of deliberate practice even experts can engage in is limited to a few hours a day.

Researchers found that skill level correlated with deliberate practice.
Elliot Margolies, CC BY-NC-ND

Ericsson and his colleagues’ major discovery was that there was a positive correlation between the skill level of the violinists and the amount of deliberate practice they had accumulated. As deliberate practice increased, skill level increased.

For example, by age 20, the most accomplished group of violinists had accumulated an average of about 10,000 hours of deliberate practice – or about 5,000 hours more than the average for the least accomplished group. In a second study, Ericsson and colleagues replicated the finding in pianists.
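To put that figure in perspective, here is a quick, purely illustrative calculation; the starting age and daily practice amount are assumptions, not data from the study.

```python
# Illustrative only: how roughly 10,000 hours could accumulate by age 20.
starting_age = 6          # assumed age when serious practice begins
current_age = 20
hours_per_day = 2.0       # assumed average daily deliberate practice

total_hours = (current_age - starting_age) * 365 * hours_per_day
print(f"About {total_hours:,.0f} hours of practice")   # about 10,220 hours
```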

On the basis of these studies, the researchers concluded that deliberate practice, rather than talent, is the determining factor for expert performance. They wrote,

We reject any important role for innate ability.

In a recent interview, Ericsson further explained that

we can’t find any sort of limiting factors that people really can’t surpass with the right kind of training. With the exception of body size: You can’t train to be taller.

Is it all about training?

Based on this evidence, the writer Malcolm Gladwell came up with his “10,000-hour rule” – the maxim that it takes 10,000 hours of practice to become an expert in a field. In the scientific literature, however, Ericsson’s views have been highly controversial from the start.

In an early critique, Harvard psychologist and multiple intelligence theorist Howard Gardner commented that Ericsson’s view required a “blindness” to earlier research on skill acquisition. Developmental psychologist Ellen Winner added that “Ericsson’s research demonstrated the importance of hard work but did not rule out the role of innate ability.” Renowned giftedness researcher Françoys Gagné noted that Ericsson’s view “misses many significant variables.” Cognitive neuroscientist Gary Marcus observed,

Practice does indeed matter – a lot and in surprising ways. But it would be a logical error to infer from the importance of practice that talent is somehow irrelevant, as if the two were in mutual opposition.

How important is training?

For our part, working with colleagues around the world, we have focused on empirically testing Ericsson and colleagues’ theory to find out more about the relationship between deliberate practice and performance in various domains.

A 2014 study led by Case Western Reserve University psychologist Brooke Macnamara used a statistical tool called “meta-analysis” to aggregate the results of 88 earlier studies involving over 11,000 participants, including studies that Ericsson and colleagues had used to argue for the importance of deliberate practice.

Each study included a measure of some activity that could be interpreted as deliberate practice, as well as a measure of skill level in a domain such as music, chess or sports.
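For readers unfamiliar with the technique, the sketch below shows how a meta-analysis typically pools correlations across studies using Fisher’s z-transform. The correlations and sample sizes are invented for illustration; they are not the values from the Macnamara study.

```python
import math

# Hypothetical (invented) study results: (correlation r, sample size n).
studies = [(0.45, 120), (0.30, 80), (0.55, 40), (0.25, 200)]

# Pool via Fisher's z-transform, weighting each study by n - 3.
weights = [n - 3 for _, n in studies]
pooled_z = sum(math.atanh(r) * w for (r, _), w in zip(studies, weights)) / sum(weights)
pooled_r = math.tanh(pooled_z)

print(f"Pooled correlation: {pooled_r:.2f}")
print(f"Share of skill variance linked to practice: {pooled_r**2:.0%}")
```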

It isn’t all about practice.
Ahd Photography, CC BY-NC

The study revealed that deliberate practice and skill level correlated positively with each other. In other words, the higher the skill level, the greater the amount of deliberate practice. However, the correlation wasn’t so strong as to warrant the claim that differences in skill level are largely due to deliberate practice.

In concrete terms, a key implication of this discovery is that people may require vastly different amounts of deliberate practice to reach the same level of skill.

A more recent study synthesized the results of 33 studies to understand the relationship between deliberate practice and performance in sports at a more detailed level.

One important finding was that deliberate practice lost its predictive power at the highest levels of skill. That is, on average, there was almost no difference in accumulated amount of deliberate practice between elite-level athletes, such as Olympians, and subelite athletes, such as contestants in national championships.

Training isn’t the only factor

As we discuss in a recent review article with behavioral geneticist Miriam Mosing, this evidence tells us that expertise – like virtually all phenomena that psychologists study – is determined by multiple factors.

Training history is certainly an important factor in explaining why some people are more successful than others. No one becomes a world-class performer without practice. People aren’t literally born with the sort of specialized knowledge that underpins skill in domains like music and chess. However, it now seems clear that training isn’t the only important factor in acquiring expertise. Other factors must matter, too.

What might these other factors be? There are likely many, including basic abilities and capacities that are known to be influenced by genes.

In a 2010 study with psychologist Elizabeth Meinz, 57 pianists ranging in skill from beginner to professional estimated the amount of deliberate practice they had accumulated across their musical careers, and took tests of “working memory capacity.” Working memory capacity is the ability to focus one’s attention on information critical to performing a task by filtering out distractions.

Working memory capacity made a difference while sight-reading.
woodleywonderworks, CC BY

The pianists then attempted to sight-read pieces of music (that is, to play the pieces without preparation) on a piano in the lab. The major finding was that working memory capacity was a factor in the pianists’ success in the sight-reading task, even among those with thousands of hours of deliberate practice.

Our research on twins further reveals that the propensity to practice music is influenced by genetic factors. This research compares identical twins, who share 100 percent of their genes, with fraternal twins, who on average share only 50 percent. A key finding of this work is that identical twins are typically more similar to each other in their practice histories, as well as in their scores on tests of basic music aptitude, than fraternal twins are to each other. For example, a pair of identical twins is more likely than a pair of fraternal twins to have both accumulated over 10,000 hours of practice.

This discovery indicates that, while extensive practice is necessary to become a highly skilled musician, genetic factors influence our willingness to put in that practice. More generally, this research suggests that we gravitate toward and persist at those activities that we have an aptitude for from the outset.
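For those curious how such twin comparisons are turned into numbers, here is a minimal sketch of Falconer’s classic formula, which converts identical-twin and fraternal-twin similarity into rough estimates of genetic and environmental influence. The correlations below are hypothetical, not results from this research.

```python
# Falconer's formula with hypothetical twin correlations (not from this study).
r_identical = 0.70   # similarity of identical twins' practice histories (assumed)
r_fraternal = 0.45   # similarity of fraternal twins' practice histories (assumed)

heritability = 2 * (r_identical - r_fraternal)        # genetic influence,  ~0.50
shared_env = 2 * r_fraternal - r_identical            # family environment, ~0.20
unique_env = 1 - r_identical                          # everything else,    ~0.30

print(f"heritability ~{heritability:.2f}, shared env ~{shared_env:.2f}, "
      f"unique env ~{unique_env:.2f}")
```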

Research by other scientists is beginning to link expert performance to specific genes. In a groundbreaking series of molecular genetic studies, the University of Sydney geneticist Kathryn North and her colleagues found that the ACTN3 gene, which is expressed in fast-twitch muscle fibers, correlates with high-level success in sprinting events. Based on these findings, North and her colleagues have called ACTN3 a possible “gene for speed.”

How can people excel?

In view of this evidence, we have argued that the richness and complexity of expertise can never be fully understood by focusing on “nature” or “nurture.”

For us, the days of the “experts are born versus made” debate are over. The task before us is to understand the myriad ways that experts are born and made by developing and testing models of expertise that take into account all relevant factors, including not only training but also genetic influences.

From a practical perspective, we believe that this research will provide a scientific foundation for developing sound principles and procedures for helping people develop skills. As sports science research is already starting to demonstrate, it may one day be possible to give people accurate information about the activities in which they are likely to excel, and develop highly individualized training regimens to maximize people’s potential.

Far from discouraging people from following their dreams, this research promises to bring expert performance within the reach of a greater number of people than is currently the case.

D. Zachary Hambrick, Professor of Psychology, Michigan State University and Fredrik Ullén, Professor of Cognitive Neuroscience, Karolinska Institute

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

Breakthrough Device Can Split Signals on Terahertz Wavelengths

One of the most basic components of any communications network is a power splitter that allows a signal to be sent to multiple users and devices. Researchers have now developed just such a device for terahertz radiation—a range of frequencies that may one day enable data transfer up to 100 times faster than current cellular and Wi-Fi networks.

“One of the big thrusts in terahertz technology is wireless communications,” says Kimberly Reichel, a postdoctoral researcher in Brown University’s School of Engineering who led the device’s development. “We believe this is the first demonstration of a variable broadband power splitter for terahertz, which would be a fundamental device for use in a terahertz network.”

The device could have numerous applications, including as a component in terahertz routers that would send data packets to multiple computers, just like the routers in current Wi-Fi networks.

Today’s cellular and Wi-Fi networks rely on microwaves, but the amount of data that microwaves can carry is limited by the bandwidth available at those frequencies. Terahertz waves (which span from about 100 to 10,000 GHz on the electromagnetic spectrum) have much higher frequencies, and therefore the potential to carry far more data. Until recently, however, terahertz technology received little attention from researchers, so many of the basic components for a terahertz communications network simply don’t exist.
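A hedged back-of-envelope comparison helps explain the “up to 100 times faster” figure: all else being equal, link capacity scales with available bandwidth, and a terahertz channel could be far wider than a microwave Wi-Fi channel. The channel widths below are assumptions, not numbers from the researchers.

```python
# Rough bandwidth comparison; both channel widths are assumptions.
wifi_channel_ghz = 0.16        # a wide 160 MHz Wi-Fi channel
terahertz_channel_ghz = 20.0   # a plausible slice of the 100-10,000 GHz band

print(f"Raw bandwidth ratio: ~{terahertz_channel_ghz / wifi_channel_ghz:.0f}x")
# ~125x, the same order of magnitude as the "up to 100 times faster" estimate.
```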

Daniel Mittleman, a professor in Brown’s School of Engineering, has been working to develop some of those key components. His lab recently developed the first system for terahertz multiplexing and demultiplexing—a method of sending multiple signals through a single medium and then separating them back out on the other side. Mittleman’s lab has also produced a new type of lens for focusing terahertz waves.

Each of the components Mittleman has developed makes use of parallel-plate waveguides—metal sheets that can constrain terahertz waves and guide them in particular directions.

“We’re developing a family of waveguide tools that could be integrated to create the appropriate signal processing that one would need to do networking,” says Mittleman, who was a coauthor of the new paper along with Reichel and Brown research professor Rajind Mendis. “The power splitter is another member of that family.”

The new device consists of several waveguides arranged to form a T-junction. A signal going into the leg of the T is split by a triangular septum at the junction, which sends a portion of the signal down each of the two arms. The septum’s triangular shape minimizes the amount of radiation that reflects back down the leg of the T, reducing signal loss. The septum can be moved right or left to vary the amount of power sent down either arm.

“We can go from an equal 50/50 split up to a 95/5 split, which is quite a range,” Reichel says.
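To put that range in the units engineers usually quote, the short sketch below converts a split fraction into per-arm power in decibels relative to the input, ignoring insertion and reflection losses. It is purely illustrative, not a model of the device.

```python
import math

def per_arm_db(fraction):
    """Power delivered to one arm, in dB relative to the input signal."""
    return 10 * math.log10(fraction)

for arm_a, arm_b in [(0.50, 0.50), (0.95, 0.05)]:
    print(f"{arm_a:.0%}/{arm_b:.0%} split -> "
          f"{per_arm_db(arm_a):.1f} dB / {per_arm_db(arm_b):.1f} dB")
# 50%/50% -> -3.0 dB in each arm; 95%/5% -> -0.2 dB and -13.0 dB.
```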

For this proof-of-concept device, the septum is manipulated manually, but Mittleman says that process could easily be motorized to enable dynamic switching of power output to each channel. That could enable the device to be incorporated in a terahertz router.

“It’s reasonable to think that we could operate this at sub-millisecond timescales, which would be fast enough to do data packet switching,” Mittleman says. “So this is a component that could be used to enable routing in the manner of the microwave routers we use today.”

The researchers plan to continue to work with the new device. A next step, they say, would be to start testing error rates in data streams sent through the device.

“The goal of this work was to demonstrate that you can do variable power switching with a parallel-plate waveguide architecture,” Mittleman says. “We wanted to demonstrate the basic physics and then refine the design.”

The National Science Foundation and the W. M. Keck Foundation funded the work. The new device is described in the journal Scientific Reports.

Source: Republished from Futurity.org as a derivative work under the Attribution 4.0 International license. Original article posted to Futurity.
Featured Image Credit: Tony Webster, via Wikimedia Commons

Now, Check Out: