There’s more than practice to becoming a world-class expert

D. Zachary Hambrick, Michigan State University and Fredrik Ullén, Karolinska Institute

Some people are dramatically better at activities like sports, music and chess than other people. Take the basketball great Stephen Curry. This past season, breaking the record he set last year by over 40 percent, Curry made an astonishing 402 three-point shots – 126 more than his closest challenger.

What explains this sort of exceptional performance? Are experts “born,” endowed with a genetic advantage? Are they entirely “made” through training? Or is there some of both?

What earlier studies show

This question is the subject of a long-running debate in psychology, and is the focus of the new book “Peak: Secrets from the New Science of Expertise” by Florida State University psychologist Anders Ericsson and science writer Robert Pool.

In a 1993 study, Ericsson and his colleagues recruited violinists from an elite Berlin music academy and asked them to estimate the amount of time they had spent engaging in “deliberate practice” across their musical careers.

Deliberate practice, as Ericsson and his colleagues have defined it, includes training activities that are specifically designed to improve a person’s performance in an endeavor like playing an instrument. These activities require a high level of concentration and aren’t inherently enjoyable. Consequently, the amount of deliberate practice even experts can engage in is limited to a few hours a day.


Ericsson and his colleagues’ major discovery was that there was a positive correlation between the skill level of the violinists and the amount of deliberate practice they had accumulated. As deliberate practice increased, skill level increased.

For example, by age 20, the most accomplished group of violinists had accumulated an average of about 10,000 hours of deliberate practice – or about 5,000 hours more than the average for the least accomplished group. In a second study, Ericsson and colleagues replicated the finding in pianists.

On the basis of the studies, these researchers concluded that deliberate practice, rather than talent, is the determining factor for expert performance. They wrote,

We reject any important role for innate ability.

In a recent interview, Ericsson further explained that

we can’t find any sort of limiting factors that people really can’t surpass with the right kind of training. With the exception of body size: You can’t train to be taller.

Is it all about training?

Based on this evidence, the writer Malcolm Gladwell came up with his “10,000-hour rule” – the maxim that it takes 10,000 hours of practice to become an expert in a field. In the scientific literature, however, Ericsson’s views have been highly controversial from the start.

In an early critique, Harvard psychologist and multiple intelligence theorist Howard Gardner commented that Ericsson’s view required a “blindness” to earlier research on skill acquisition. Developmental psychologist Ellen Winner added that “Ericsson’s research demonstrated the importance of hard work but did not rule out the role of innate ability.” Renowned giftedness researcher Françoys Gagné noted that Ericsson’s view “misses many significant variables.” Cognitive neuroscientist Gary Marcus observed,

Practice does indeed matter – a lot and in surprising ways. But it would be a logical error to infer from the importance of practice that talent is somehow irrelevant, as if the two were in mutual opposition.

How important is training?

For our part, working with colleagues around the world, we have focused on empirically testing Ericsson and colleagues’ theory to find out more about the relationship between deliberate practice and performance in various domains.

A 2014 study led by Case Western Reserve University psychologist Brooke Macnamara used a statistical tool called “meta-analysis” to aggregate the results of 88 earlier studies involving over 11,000 participants, including studies that Ericsson and colleagues had used to argue for the importance of deliberate practice.

Each study included a measure of some activity that could be interpreted as deliberate practice, as well as a measure of skill level in a domain such as music, chess or sports.


The study revealed that deliberate practice and skill level correlated positively with each other. In other words, the higher the skill level, the greater the amount of deliberate practice. However, the correlation wasn’t so strong as to warrant the claim that differences in skill level are largely due to deliberate practice.

In concrete terms, a key implication of this discovery is that people may require vastly different amounts of deliberate practice to reach the same level of skill.

A more recent study synthesized the results of 33 studies to understand the relationship between deliberate practice and performance in sports at a more detailed level.

One important finding was that deliberate practice lost its predictive power at the highest levels of skill. That is, on average, there was almost no difference in accumulated amount of deliberate practice between elite-level athletes, such as Olympians, and subelite athletes, such as contestants in national championships.

Training isn’t the only factor

As we discuss in a recent review article with behavioral geneticist Miriam Mosing, this evidence tells us that expertise – like virtually all phenomena that psychologists study – is determined by multiple factors.

Training history is certainly an important factor in explaining why some people are more successful than others. No one becomes a world-class performer without practice. People aren’t literally born with the sort of specialized knowledge that underpins skill in domains like music and chess. However, it now seems clear that training isn’t the only important factor in acquiring expertise. Other factors must matter, too.

What might these other factors be? There are likely many, including basic abilities and capacities that are known to be influenced by genes.

In a 2010 study with psychologist Elizabeth Meinz, 57 pianists ranging in skill from beginner to professional estimated the amount of deliberate practice they had accumulated across their musical careers, and took tests of “working memory capacity.” Working memory capacity is the ability to focus one’s attention on information critical to performing a task by filtering out distractions.


The pianists then attempted to sight-read pieces of music (that is, to play the pieces without preparation) on a piano in the lab. The major finding was that working memory capacity was a factor in the pianists’ success in the sight-reading task, even among those with thousands of hours of deliberate practice.

Our research on twins further reveals that the propensity to practice music is influenced by genetic factors. This research compares identical twins, who share 100 percent of their genes, to fraternal twins, who on average share only 50 percent of their genes. A key finding of this work is that identical twins are typically more similar to each other in their practice histories, as well as their scores on tests of basic music aptitude, than fraternal twins are to each other. For example, a pair of identical twins is more likely than a pair of fraternal twins to have both accumulated over 10,000 hours of practice.

This discovery indicates that, while extensive practice is necessary to become a highly skilled musician, genetic factors influence our willingness to put in that practice. More generally, this research suggests that we gravitate toward and persist at those activities that we have an aptitude for from the outset.

Research by other scientists is beginning to link expert performance to specific genes. In a groundbreaking series of molecular genetic studies, the University of Sydney geneticist Kathryn North and her colleagues found that the ACTN3 gene, which is expressed in fast-twitch muscle fibers, correlates with high-level success in sprinting events. Based on these findings, North and her colleagues have called ACTN3 a possible “gene for speed.”

How can people excel?

In view of this evidence, we have argued that the richness and complexity of expertise can never be fully understood by focusing on “nature” or “nurture” alone.

For us, the days of the “experts are born versus made” debate are over. The task before us is to understand the myriad ways that experts are born and made by developing and testing models of expertise that take into account all relevant factors, including not only training but also genetic influences.

From a practical perspective, we believe that this research will provide a scientific foundation for developing sound principles and procedures for helping people develop skills. As sports science research is already starting to demonstrate, it may one day be possible to give people accurate information about the activities in which they are likely to excel, and develop highly individualized training regimens to maximize people’s potential.

Far from discouraging people from following their dreams, this research promises to bring expert performance within the reach of a greater number of people than is currently the case.

D. Zachary Hambrick, Professor of Psychology, Michigan State University and Fredrik Ullén, Professor of Cognitive Neuroscience, Karolinska Institute

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

Personal beliefs versus scientific innovation: getting past a flat Earth mentality

By Igor Juricevic, Indiana University South Bend.

The history of science is also a history of people resisting new discoveries that conflict with conventional wisdom.

When Galileo promoted Copernicus’ theory that the Earth revolves around the sun – counter to church doctrine about the Earth being the center of the universe – he wound up condemned by the Roman Inquisition in 1633. Charles Darwin’s theory of evolution – that new species develop as a result of natural selection on inherited traits – ran into opposition because it contradicted long-held scientific, political and religious beliefs. Alfred Wegener’s 1912 proposal that Earth’s continents move relative to each other – the theory of continental drift – was rejected for decades, in part because scientists held fast to the traditional theories they’d spent careers developing.

These kinds of examples aren’t only historical, unfortunately. We’re used to hearing about how the general public can be dense about science. You might expect some portion of everyday folks to take their time coming around on truly groundbreaking ideas that run counter to what they’ve always thought.

But scientists, too, hold their own personal beliefs – by definition, based on old ways of thinking – that may be holding back the innovation that’s at the heart of science. And that’s a problem. It’s one thing for an average Joe to resist evolving scientific theories. It’s quite another if a scientist’s preconceived notions hold us back from discovering the new and unknown – whether that’s a cure for Zika or a cutting-edge technology to combat climate change.

Personal beliefs as publication roadblocks

Real scientific progress occurs when laboratory or field research is reported to the public. With luck, the finding is accepted and put into practice, cures are developed, social policies are instituted, educational practices are improved and so on.

This usually occurs through publication of the research in scientific journals. There’s an important step between the lab and publication that laypeople may not know about – the evaluation of the research by other scientists. These other scientists are peers of the researcher, typically working in a closely related area. This middle step is commonly referred to as peer review.

In a perfect world, peer review is supposed to determine if the study is solid, based on the quality of the research. It’s meant to be an unbiased evaluation of whether the findings should be reported via journal publication. This important step prevents sloppy research from reaching the public.

However, in the real world, scientists are human beings and are often biased. They let their own beliefs influence their peer reviews. For example, numerous reports indicate that scientists rate research more favorably if the findings agree with their prior beliefs. Worst of all, these prior beliefs often have nothing to do with science but are simply the scientists’ personal views.

‘But that’s counter to what I thought…’

How is this a problem for scientific innovation? Let’s look at how some personal beliefs could prevent innovative science from reaching the public.


“Minorities aren’t good at STEM.” The stereotype that “women are not good at math” is commonly held – and also happens to be incorrect. If a scientist holds this personal belief, then he is likely to judge any research done by women in STEM (Science, Technology, Engineering and Mathematics) more negatively – not because of its quality, but because of his own personal belief.

For instance, some studies have shown that female STEM applicants in academia are judged more harshly than their male counterparts. Because of this gender bias, it may take a female STEM researcher more time and effort before her work reaches the public.

Some racial minorities face similar kinds of bias. For example, one study found that black applicants are less likely to receive research funding from the U.S. National Institutes of Health than equivalently qualified whites. That’s a major roadblock to these researchers advancing their work.

“Comic books are low-brow entertainment for kids.” Here’s an example from my own area of expertise.


Comic book research is a relatively recent area of study. Perhaps because of this, innovative findings in psychology have been discovered by analyzing comic book images.

However, people often believe that comic books are just low-brow entertainment for kids. If a scientist holds this personal belief, then she’s likely to judge any psychology research using comic books more negatively. Because of this, scientists like me who focus on comic books may not be able to publish in the most popular psychology journals. As a result, fewer people will ever see this research.

“The traditional ways are the best ways.” A final example is a personal belief that directly counters scientific innovation. Often, scientists believe that traditional methods and techniques are better than any newly proposed approaches.

The history of psychology supplies one example. Behaviorism was psychology’s dominant school of thought for the first part of the 20th century, relying on observed behavior to provide insights. Its devotees rejected new techniques for studying psychology. During behaviorism’s reign, any talk of internal processes of the mind was considered taboo. One of the pioneers of the subsequent cognitive revolution, George A. Miller, said “using ‘cognitive’ was an act of defiance.” Luckily for us, he was defiant and published one of the most highly cited papers in psychology.

If a scientist believes the way we’ve always done things in the lab is best, then she’ll judge any research done using novel approaches more negatively. Because of this, highly innovative work is rarely published in the best scientific journals and is often recognized only after considerable delay.


How is this a problem for scientific progress?

Almost by definition, the most important and innovative scientific findings often go against people’s existing beliefs. If research that conforms to personal beliefs is favored, then any research that is based on new ideas runs the risk of being passed over. It takes a leap to imagine a round Earth when everyone’s always believed it to be flat.

When old ideas rule the day, scientific progress stalls. And as our world changes at an ever faster pace, we need innovative thinking to face the coming challenges.

How can scientists stop their personal beliefs from impeding scientific progress? Completely removing personal beliefs from these contexts is impossible. But we can work to change our beliefs so that, instead of hampering scientific progress, they encourage it. Many studies have outlined possible ways to modify beliefs. It’s up to scientists, and indeed society as well, to begin to examine their own beliefs and change them for the better.

After all, we don’t want to delay the next revolutionary idea in climate science, pioneering cure for cancer, or dazzling discovery in astronomy just because we can’t see past our original beliefs.

Igor Juricevic, Assistant Professor of Psychology (Perception and Cognition), Indiana University South Bend

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

Schizophrenia Makes the External World Unreliable

New research may offer support for the idea that schizophrenia is a sensory disorder and that individuals with the condition are impaired in their ability to process stimuli from the outside world.

The findings may also point to a new way to identify the disease at an early stage and before symptoms become acute.

Because one of the hallmarks of the disease is auditory hallucinations, such as hearing voices, researchers have long suspected a link between auditory processing and schizophrenia. The new study provides evidence that the filtering of incoming visual information, as well as of simple touch inputs, is also severely compromised in individuals with the condition.

“When we think about schizophrenia, the first things that come to mind are the paranoia, the delusions, the disorganized thinking,” says John Foxe, the chair of the University of Rochester Medical Center’s department of neuroscience and senior author of the study. “But there is increasing evidence that there is something fundamentally wrong with the way these patients hear, the way they feel things through their sense of touch, and in the way in which they see the environment.”

As reported in Translational Psychiatry, researchers conducted a series of experiments in which they presented visual and touch stimuli to 15 schizophrenia patients and 15 controls while they recorded the brain’s response via electrodes placed on the surface of the scalp. What scientists have known for years is that when encountering a series of inputs, such as successive flashes of light, the brain’s initial response is large and strong. However, as the flash is repeated the reaction quickly fades in intensity.

This response reduction is known as sensory “adaptation” and is an essential mechanism that enables the brain to filter out repetitive and irrelevant information. Researchers believe that adaptation allows the brain to free itself up to respond to new events and stimuli that may be more important.

The research team found that adaptation was substantially weaker in the patients with schizophrenia, and this was the case for both repeated visual stimulation and for repeated touch stimulation.

“If you can’t properly filter the information at the basic sensory input stage, then it is not too hard to imagine how the external world could begin to be experienced as bizarre and unreliable,” says Gizely Andrade, of the Albert Einstein College of Medicine and a coauthor of the study. “A fundamental aspect of the way our minds operate is that they can rely on the fact that the external world remains constant. If it doesn’t, then reality itself could become distorted.”

The team is hopeful that this discovery might lead to simple and basic measures of sensory adaptation that could be used to diagnose schizophrenia or identify individuals at risk of developing the condition before the disease has had a chance to fully establish itself.

“A key point with this study is that we find these dramatic differences in patients who are already suffering from full-blown schizophrenia,” says Foxe. “Schizophrenia is a disease that typically strikes during late adolescence or early adulthood, but what we also know is that long before a person has their first major psychotic episode, there are subtle changes occurring that precede the full manifestation of the disease. Our hope is that these new measures can allow us to pick up on these people before they ever become seriously ill.”

Additional coauthors of the study are from the Albert Einstein College of Medicine and the Dublin Institute of Technology in Ireland. The National Institute of Child Health and Human Development supported the work.

Source: Republished from Futurity.org as a derivative work under the Attribution 4.0 International license. Original article posted to Futurity.


Now, Check Out:

Why U.S. Politicians Have Some of the Biggest Grins

You may not have noticed, if you live in the United States, that our politicians tend to smile more and smile bigger than politicians from other countries around the world. Psychology researchers at Stanford noticed this and decided to look into why that is.

It turns out that the answer is: the more a nation values excitement, the bigger its politicians smile.

For example, in the United States, the smiles of President Barack Obama and other politicians tend to be big and wide. In East Asian countries like China and Taiwan, they are much more modest.

“Often people think that when they are viewing a candidate’s official photo, they are learning about the candidate’s unique traits,” says Jeanne Tsai, associate professor of psychology at Stanford University. “But our findings suggest that they are also learning about the candidate’s culture and the emotions it values.”

This smiling effect may have ramifications in the world of politics, Tsai says. Culturally different emotions and expressions may create misunderstandings between leaders from different nations involved in negotiations or crises.

Tsai directs the Culture and Emotion Lab at Stanford and has previously published research on how Americans tend to embrace positive feelings while Chinese people do not as much; how people from different cultures express sympathy; and how culture factors into why we like or don’t like people.

How a political leader smiles in official photos – left, Japanese Prime Minister Shinzo Abe; right, US President Barack Obama – reflects their particular country’s cultural values related to how people express themselves, says Stanford psychologist Jeanne Tsai. (Credit: AKIRA/ITOH; Pete Souza)

The new study, published in the journal Emotion, suggests that the size of a political leader’s smile relates to his or her nation’s “ideal affect” – the emotions a culture values and how its people want to feel. Different countries, such as China and the United States, diverge in their ideal affect.

“It is significant that although democratic and developed nations were more likely to have leaders who smiled in their photos, it was the nation’s ideal affect that uniquely determined whether leaders’ smiles were more excited or calm,” Tsai says.

For the research, Tsai and colleagues conducted three studies. In one they compared the smiles of top-ranked American and Chinese government leaders, chief executive officers, and university presidents in their official photos. In the second study, they compared the smiles of winning vs. losing political candidates and higher vs. lower ranking chief executive officers and university presidents in the United States, Taiwan, and China.

In both studies, American leaders showed more excited smiles than their East Asian counterparts, regardless of election outcome or ranking.

The third study involved self-reported measures of ideal affect among college students from 10 different nations. Eight years later, researchers coded the smiles that legislators from those nations showed in their official photos and found that the more nations valued excitement and other high-arousal positive emotional states, the more their leaders showed excited smiles—and the more nations valued calm and other low-arousal positive states, the more their leaders showed calm smiles.

High-arousal positive states are emotions that feel good and energizing; low-arousal positive states are emotions that feel good and soothing.

Tsai was surprised the results held across all occupations—beyond politics—and regardless of whether leaders were of higher or lower rank. “I thought they might be more pronounced in occupations that are more visible to the public, like government.”

Tsai also expected the findings might be more pronounced for higher ranked than lower ranked leaders, because higher ranked ones might have more knowledge about how their culture views smiling. But they weren’t – leaders of every rank showed the same pattern in their smiles.

“I think the fact that the cultural differences emerged regardless of occupation or rank speaks to how pervasive cultural values regarding emotion are.”

Republished from Futurity.org as a derivative work under the Attribution 4.0 International license. Original article posted to Futurity.

Next, Check Out:

One Question that Doctors Really Should Ask Patients

To better help their patients—and find more meaning in their work—physicians should turn toward suffering, according to a recent essay.

When patients suffer, doctors tend to want to fix things and, if they cannot, many doctors then withdraw emotionally.

Suffering doesn’t often fit neatly within the hurried, fragmented world of clinical care, says University of Rochester professor Ronald M. Epstein, coauthor of the essay in the Journal of the American Medical Association with oncologist Anthony L. Back of the University of Washington.

Back and Epstein conducted a literature review on how doctors address suffering. Despite the ubiquity of suffering, they discovered few articles in the medical literature—most of which were published in journals rarely read by practicing clinicians.

“Physicians can have a pivotal role in addressing suffering if they can expand how they work with patients,” the article states. “Some people can do this instinctively but most physicians need training in how to respond to suffering—yet this kind of instruction is painfully lacking.”

The authors provide an example of how doctors can address suffering more effectively using a story of a patient who went years without a diagnosis, despite pain and disability. Surgery and medical treatments were not enough. Only after her physicians became truly curious about her experience, listening to her, looking at her, and bearing witness, could they help the patient heal.

Epstein and Back offer two clinical approaches to suffering to complement the familiar “diagnosing and treating.” They call these “turning toward” and “refocusing and reclaiming,” and the authors suggest that doctors use these approaches routinely.

Turning toward suffering means to, first, recognize it. It requires physicians to ask patients about their experience of suffering, through questions such as “what’s the worst part of this for you?” Sometimes doctors feel helpless in the face of suffering, and their own discomfort in those situations can be a useful wake-up call.

To refocus and reclaim involves helping patients reconnect with what’s important and meaningful in their lives, especially when suffering and its underlying causes cannot be eliminated. Sometimes that requires physicians to be supportive of a patient’s efforts to become more whole. In the case described, the patient separated from her spouse and re-established a professional identity. By making those changes she saw past her suffering and again viewed herself as a complete human being.

Asking physicians to engage as whole persons in order to address patients as whole persons “is a tall order,” Epstein and Back write, “yet, it strikes us as more feasible than ever because of evidence that programs promoting mindfulness, emotional intelligence, and self-regulation make a difference.”

 

Republished from Futurity.org as a derivative work under the Attribution 4.0 International license. Original article published on Futurity.

Research essay published in Journal of the American Medical Association


Could ‘The Hunger Games’ Turn Your Teen Into a Revolutionary?

Tom van Laer, City University London

As a fan of The Hunger Games trilogy, I cannot wait to see the final part of the film series.

The Hunger Games novels and films have fascinated me for more than seven years.

And I’m not alone.

The popular books by Suzanne Collins are the most visible example of a genre of stories today’s teens are reading voraciously: young adult dystopian fiction.

Dystopian fiction is set in a world where people lead dehumanized and often fearful lives. Typically, these worlds are environmentally degraded or governed by totalitarian regimes.

My favorite example is George Orwell’s 1984, a hugely ambitious novel that deals with themes of both personal threat and universal oppression. Orwell’s vision is expressed in phrases like Big Brother, doublethink and Thought Police that are now part of everyday speech.

Even though they may have read 1984 as kids, some of today’s parents worry their teens’ obsession with dark fiction means they’ll grow up and overthrow the government – like Katniss Everdeen in Hunger Games or Tris Prior in Divergent.

How real is this concern?

Why is dystopian fiction popular?


I am a narratologist, meaning I study the causes and effects of exposure to stories.

My colleagues and I have found that the extent to which a reader loses herself in a story is a major cause of the extent to which her attitudes change.

Losing oneself in a story is also known as narrative transportation.

Narratologists have uncovered three consequences of narrative transportation that explain the current popularity of young adult dystopian fiction.

  1. Narrative transportation feels good. People like to empathize with story characters and to suspend reality.
  2. Narrative transportation teaches people about story themes, such as poverty. In turn, this knowledge intensifies the narrative transportation experience and makes people care about these themes.
  3. Narrative transportation teaches people about story symbols. For instance, the fictional mockingjay bird represents defiance in the Hunger Games. In turn, this knowledge makes it easier to follow the story, allows people to brag about being in the know, and strengthens in-group identity.

All three consequences make people want to repeat narrative transportation, which is most easily done with sequels – like the four movies based on the three Hunger Games books – or stories in the same genre.

Dystopian themes are egalitarian

There is no doubt that storytelling is powerful, and can have an effect on readers.

Young adult dystopian fiction tends to include at least one key learning point or moral. When teens absorb the moral, it can change their attitudes and durably weave the story into teens’ life choices. Narratologists call this the cultivation effect.

The cultivation effect of Harry Potter is reported to have played an important role in galvanizing Millennials’ political opinions. As Anthony Gierzynski and Kathryn Eddy note:

Harry Potter fans are more open to diversity and are more politically tolerant than nonfans; fans are also less authoritarian, less likely to support the use of deadly force or torture, more politically active, and more likely to have had a negative view of the Bush administration.

A similar effect could occur with teens and dystopian fiction. Many young adult dystopian stories tackle themes such as oppression, poverty, starvation, and war, among others. Relating to the story characters allows teens to explore and learn to care about these issues. Attitudes towards justice and responsibility can be changed when teens empathize with Tris or Katniss.

But that doesn’t mean dystopian-loving teens will act out the story plots when they grow up – say, by starting a revolution.

Rather, the cultivation effect predicts that today’s teens will grow up with less acceptance of oppression, poverty, starvation, and war. If government officials do not take these concerns seriously though, who knows what might happen?

Nothing is less innocent than a story – except maybe a teen who has taken its message to heart.


Tom van Laer, Senior Lecturer in Marketing, City University London

This article was originally published on The Conversation. Read the original article.


Hearing Ghost Voices Relies on Pseudoscience and Fallibility of Human Perception

Michael Nees, Lafayette College

Nontrivial numbers of Americans believe in the paranormal. These beliefs have spawned thousands of groups dedicated to investigating paranormal phenomena and a proliferation of ghost-hunting entries in the reality television market. Anecdotal evidence even suggests that ghost-hunting reality shows have increased public openness to paranormal research, which usually entails a small group traipsing through reportedly haunted locales at night with various ghost-hunting technologies.

Audio recorders figure prominently in paranormal researchers’ toolkits. Microphones capture ambient sounds during the investigation. Later, the audio recordings are scoured in search of messages from spirits. The premise is that audio recording devices can register otherwise inaudible communications from discarnate entities.

These purported communications have been dubbed electronic voice phenomena (EVP). The sounds are generally brief – most examples consist of single words or short phrases. Perceived contents of EVP range from threatening (“You’re going to hell”) to bizarre (“Egypt Air”).


Part of the attraction of the audio recorder for paranormal researchers is its apparent objectivity. How could a skeptic refute the authenticity of a spirit captured by an unbiased technical instrument? To the believers, EVP seem like incontrovertible evidence of communications from beyond. But recent research in my lab suggested that people don’t agree much about what, if anything, they hear in the EVP sounds – a result readily explained by the fallibility of human perception. Despite the technological trappings, EVP research bears several characteristics of pseudoscience.

What are the EVP sounds?

The chain of evidence for most purported EVP makes hoaxes difficult to rule out, but let’s assume that many of these sounds are not deliberate fraud. In some instances, alleged EVP are the voices of the investigators or interference from radio transmissions – problems that indicate shoddy data collection practices. Other research, however, has suggested that EVP have been captured under acoustically controlled circumstances in recording studios. What are the possible explanations for these sounds?

The critical leap in EVP research is the point at which odd sounds are interpreted as voices that communicate with intention. Paranormal investigators typically decode the content of EVP by arriving at consensus among themselves. EVP websites advise paranormal researchers to ask themselves, “Is it a voice…are you sure?” or to “Share results among fellow investigators and try to prevent investigator bias when reviewing data.” Therein lies a methodological difficulty.
