Young children are terrible at hiding – psychologists have a new theory why

By Henrike Moll, University of Southern California – Dornsife College of Letters, Arts and Sciences and Allie Khalulyan, University of Southern California – Dornsife College of Letters, Arts and Sciences.

Young children across the globe enjoy playing games of hide and seek. There’s something highly exciting for children about escaping someone else’s gaze and making oneself “invisible.”

However, developmental psychologists and parents alike continue to witness that before school age, children are remarkably bad at hiding. Curiously, they often cover only their face or eyes with their hands, leaving the rest of their bodies visibly exposed.

For a long time, this ineffective hiding strategy was interpreted as evidence that young children are hopelessly “egocentric” creatures. Psychologists theorized that preschool children cannot distinguish their own perspective from someone else’s. Conventional wisdom held that, unable to transcend their own viewpoint, children falsely assume that others see the world the same way they themselves do. So psychologists assumed children “hide” by covering their eyes because they conflate their own lack of vision with that of those around them.

But research in cognitive developmental psychology is starting to cast doubt on this notion of childhood egocentrism. We brought young children between the ages of two and four into our Minds in Development Lab at USC so we could investigate this assumption. Our surprising results contradict the idea that children’s poor hiding skills reflect their allegedly egocentric nature.

Who can see whom?

Each child in our study sat down with an adult who covered her own eyes or ears with her hands. We then asked the child whether or not she could see or hear the adult, respectively. Surprisingly, children denied that they could. The same thing happened when the adult covered her own mouth: Now children denied that they could speak to her.

A number of control experiments ruled out the possibility that the children were confused or misunderstood what they were being asked. The results were clear: Our young subjects comprehended the questions and knew exactly what was asked of them. Their negative responses reflected their genuine belief that the other person could not be seen, heard, or spoken to when her eyes, ears, or mouth were obstructed. Despite the fact that the person in front of them was in plain view, they flat-out denied being able to perceive her. So what was going on?

It seems like young children consider mutual eye contact a requirement for one person to be able to see another. Their thinking appears to run along the lines of “I can see you only if you can see me, too” and vice versa. Our findings suggest that when a child “hides” by putting a blanket over her head, this strategy is not a result of egocentrism. In fact, children deem this strategy effective when others use it.

Built into their notion of visibility, then, is the idea of bidirectionality: Unless two people make eye contact, it is impossible for one to see the other. Contrary to egocentrism, young children simply insist on mutual recognition and regard.

An expectation of mutual engagement

Children’s demand for reciprocity demonstrates that they are not at all egocentric. Not only can preschoolers imagine the world as seen from another’s point of view; they even apply this capacity in situations where it’s unnecessary or leads to wrong judgments, such as when they are asked to report their own perception. These faulty judgments – saying that others whose eyes are covered cannot be seen – reveal just how much children’s perception of the world is colored by others.

The seemingly irrational way in which children try to hide from others and the negative answers they gave in our experiment show that children feel unable to relate to a person unless the communication flows both ways – not only from me to you but also from you to me, so we can communicate with each other as equals.

We are planning to investigate children’s hiding behavior directly in the lab and test if kids who are bad at hiding show more reciprocity in play and conversation than those who hide more skillfully. We would also like to conduct these experiments with children who show an atypical trajectory in their early development.

Children want to interact with the people around them.
Eye contact image via www.shutterstock.com.

Our findings underscore children’s natural desire and preference for reciprocity and mutual engagement between individuals. Children expect and strive to create situations in which they can be reciprocally involved with others. They want to encounter people who are not only looked at but who can return another’s gaze; people who not only listen but are also heard; and people who are not just spoken to but who can reply and thus enter a mutual dialogue.

At least in this respect, young children understand and treat other human beings in a manner that is not at all egocentric. On the contrary, their insistence on mutual regard is remarkably mature and can be considered inspirational. Adults may want to turn to these preschoolers as role models when it comes to perceiving and relating to other humans. These young children seem exquisitely aware that we all share a common nature as people who are in constant interaction with others.

Henrike Moll, Assistant Professor in Developmental Psychology, University of Southern California – Dornsife College of Letters, Arts and Sciences and Allie Khalulyan, Ph.D. Student in Developmental Psychology, University of Southern California – Dornsife College of Letters, Arts and Sciences

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

Confirmation bias: A psychological phenomenon that helps explain why pundits got it wrong

By Ray Nickerson, Tufts University.

As post mortems of the 2016 presidential election began to roll in, fingers started pointing to what psychologists call the confirmation bias as one reason many of the polls and pundits were wrong in their predictions of which candidate would end up victorious.

Confirmation bias is usually described as a tendency to notice or search out information that confirms what one already believes, or would like to believe, and to avoid or discount information that’s contrary to one’s beliefs or preferences. It could help explain why many election-watchers got it wrong: in the run-up to the election, they saw only what they expected, or wanted, to see.

Psychologists put considerable effort into discovering how and why people sometimes reason in less than totally rational ways. The confirmation bias is one of the better-known of the biases that have been identified and studied over the past few decades. A large body of psychological literature reports how confirmation bias works and how widespread it is.

The role of motivation

Confirmation bias can appear in many forms, but for present purposes, we may divide them into two major types. One is the tendency, when trying to determine whether to believe something is true or false, to look for evidence that it is true while failing to look for evidence that it is false.

Imagine four cards on a table, each one showing either a letter or a number on its visible side. Let’s say the cards show A, B, 1 and 2. Suppose you are asked to indicate which card or cards you would have to turn over in order to determine whether the following statement is true or false: If a card has A on its visible side, it has 1 on its other side. The correct answer is the card showing A and the one showing 2. But when people are given this task, a large majority choose to turn either the card showing A alone or both the card showing A and the one showing 1. Relatively few see the card showing 2 as relevant, even though finding A on its other side would prove the statement false. One possible explanation for people’s poor performance on this task is that they look for evidence that the statement is true and fail to look for evidence that it is false.
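To see why A and 2 are the only informative choices, consider a minimal sketch (an illustration, not from the original article) that brute-forces the task: a card is worth flipping only if some possible hidden side could falsify the rule.

```python
# Wason selection task, brute-forced. Rule under test:
# "If a card has A on its visible side, it has 1 on its other side."
LETTERS, NUMBERS = {"A", "B"}, {"1", "2"}

def falsifies(letter, number):
    # The rule fails exactly when a card pairs A with a number other than 1.
    return letter == "A" and number != "1"

def must_flip(face):
    # A flip is informative only if some possible hidden side could break the rule.
    if face in LETTERS:  # the hidden side is a number
        return any(falsifies(face, n) for n in NUMBERS)
    return any(falsifies(l, face) for l in LETTERS)  # the hidden side is a letter

for face in ["A", "B", "1", "2"]:
    print(face, "must be flipped" if must_flip(face) else "is irrelevant")
# Prints that only A and 2 must be flipped; B and 1 can tell us nothing.
```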

Another type of confirmation bias is the tendency to seek information that supports one’s existing beliefs or preferences or to interpret data so as to support them, while ignoring or discounting data that argue against them. It may involve what is best described as case building, in which one collects data to lend as much credence as possible to a conclusion one wishes to confirm.

At the risk of oversimplifying, we might call the first type of bias unmotivated, inasmuch as it doesn’t involve the assumption that people are driven to preserve or defend their existing beliefs. The second type of confirmation bias may be described as motivated, because it does involve that assumption. It may go a step further than just focusing on details that support one’s existing beliefs; it may involve intentionally compiling evidence to confirm some claim.

It seems likely that both types played a role in shaping people’s election expectations.

A proper venue for leaving out conflicting evidence.
Clyde Robinson, CC BY

Case building versus unbiased analysis

An example of case building and the motivated type of confirmation bias is clearly seen in the behavior of attorneys arguing a case in court. They present only evidence that they hope will increase the probability of a desired outcome. Unless obligated by law to do so, they don’t volunteer evidence that’s likely to harm their client’s chances of a favorable verdict.

Another example is a formal debate. One debater attempts to convince an audience that a proposition should be accepted, while another attempts to show that it should be rejected. Neither wittingly introduces evidence or ideas that will bolster the adversary’s position.

In these contexts, it is proper for protagonists to behave in this fashion. We generally understand the rules of engagement. Lawyers and debaters are in the business of case building. No one should be surprised if they omit information likely to weaken their own argument. But case building occurs in contexts other than courtrooms and debating halls. And often it masquerades as unbiased data collection and analysis.

Where confirmation bias becomes problematic

One sees the motivated confirmation bias in stark relief in commentary by partisans on controversial events or issues. Television and other media remind us daily that events evoke different responses from commentators depending on the positions they’ve taken on politically or socially significant issues. Politically liberal and conservative commentators often interpret the same event and its implications in diametrically opposite ways.

Anyone who followed the daily news reports and commentaries regarding the election should be keenly aware of this fact and of the importance of political orientation as a determinant of one’s interpretation of events. In this context, the operation of the motivated confirmation bias makes it easy to predict how different commentators will spin the news. It’s often possible to anticipate, before a word is spoken, what specific commentators will have to say regarding particular events.

Here the situation differs from that of the courtroom or the debating hall in one very important way: Partisan commentators attempt to convince their audience that they’re presenting a balanced, factual – unbiased – view. Presumably, most commentators truly believe they are unbiased and responding to events as any reasonable person would. But the fact that different commentators present such disparate views of the same reality makes it clear that they cannot all be correct.

Reporters in the media center watched a presidential debate, but might have seen something different.
AP Photo/John Locher

Selective attention

Motivated confirmation bias expresses itself in selectivity: selectivity in the data one pays attention to and selectivity with respect to how one processes those data.

When one listens only to radio stations or watches only TV channels that express opinions consistent with one’s own, one is demonstrating the motivated confirmation bias. When one interacts only with people of like mind, one is exercising the motivated confirmation bias. When one asks for critiques of one’s opinion on some issue of interest, but is careful to ask only people who are likely to give a positive assessment, one is doing so as well.

This presidential election was undoubtedly the most contentious of any in the memory of most voters, including most pollsters and pundits. Extravagant claims and counterclaims were made. Hurtful things were said. Emotions were much in evidence. Civility was hard to find. Sadly, “fallings out” within families and among friends have been reported.

The atmosphere was one in which the motivated confirmation bias would find fertile soil. There is little doubt that it did just that and little evidence that arguments among partisans changed many minds. That most pollsters and pundits predicted that Clinton would win the election suggests that they were seeing in the data what they had come to expect to see – a Clinton win.

None of this is to suggest that the confirmation bias is unique to people of a particular partisan orientation. It is pervasive. I believe it to be active independently of one’s age, gender, ethnicity, level of intelligence, education, political persuasion or general outlook on life. If you think you’re immune to it, it is very likely that you’ve neglected to consider the evidence that you’re not.

Ray Nickerson, Research Professor of Psychology, Tufts University

This article was originally published on The Conversation. Read the original article.

Now, Check Out: 

Science deconstructs humor: What makes some things funny?

By Alex Borgella, Tufts University.

Think of the most hilarious video you’ve ever seen on the internet. Why is it so funny?

As a researcher who investigates some of the potential side effects of humor, I spend a fair bit of time verifying the funniness of the jokes, photos and videos we present to participants in our studies. Quantifying the perception of humor is paramount in ensuring our findings are valid and reliable. We often rely on pretesting – that is, trying out jokes and other potential stimuli on different samples of people – to give us a sense of whether they might work in our studies.

To make predictions on how our funny materials will be perceived by study subjects, we also turn to a growing body of humor theories that speculate on why and when certain situations are considered funny. From ancient Greece to today, many thinkers from around the world have yearned to understand what makes us laugh. Whether their reasons for studying humor were strategic (like some of Plato’s thoughts on using humor to manipulate people’s political views) or simply inquisitive, their insights have been crucial to the development of humor research today.

Take the following video as an example of a funny stimulus one might use in humor research:

Man vs. Moose in Sweden.

To summarize: A man and his female companion are enjoying a pleasant day observing a moose in one of Sweden’s forests. The woman makes a sudden movement, causing the moose to charge the couple. The man stands his ground, causing the moose to stop in his tracks. After a few feints with a large stick and several caveman-ish grunts by the man, the defeated moose retreats while the man proclaims his victory (with more grunting).

The clip has been viewed on YouTube almost three million times, and the comments make it clear that many folks who watch it are LOLing. But why is this funny?

Superiority theory: Dumb moose

It is the oldest of all humor theories: Philosophers such as Aristotle and Plato alluded to the idea behind the superiority theory thousands of years ago. It suggests that all humor is derived from the misfortunes of others – and therefore, our own relative superiority. Thomas Hobbes also alluded to this theory in his book “Leviathan,” suggesting that humor arises in any situation where there’s a sudden realization of how much better we are than our direct competition.

Taking this theory into consideration, it seems like the retreating moose is the butt of the joke in this scenario. Charles Gruner, the late expert on superiority theory, suggested that all humor is derived from competition. In this case, the moose lost that competition.

Relief theory: Nobody died

The relief theory of humor stems from Sigmund Freud’s assertion that laughter lets us relieve tension and release “psychic energy.” In other words, Freud and other relief theorists believe that some buildup of tension is inherent to all humorous scenarios and the perception of humor is directly related to the release of that tension.

Freud used this idea to explain our fascination with taboo topics and why we might find it humorous to acknowledge them. For example, my own line of research deals with humor in interracial interactions and how it can be used to facilitate these commonly tense situations. Many comedians have tackled this topic as well, focusing on how language is used in interracial settings and using it as an example of how relief can be funny.

A comedy clip focused on interracial interactions gets some of its humor from the relief when a tense situation is resolved.

Interestingly, this theory has served as the rationale behind many studies documenting the psychological and physiological benefits of laughter. In both cases, the relief of tension (physiological tension, in the case of laughing) can lead to positive health outcomes overall, including decreased stress, anxiety and even physical pain.

In the case of our moose video: Once the moose charges, the tension builds as the man and the animal face off for an extended period of time. The tension is released when the moose gives up his ground, lowers his ears and eventually scurries away. The video would probably be far less humorous if the tension had been resolved with violence – for instance, the moose trampling the man, or alternatively ending up with a stick in its eye.

Incongruity theory: It’s unexpected

The incongruity theory of humor suggests that we find fundamentally incompatible concepts or unexpected resolutions funny. Basically, we find humor in the incongruity between our expectations and reality.

Resolving incongruity can contribute to the perception of humor as well. This concept is known as the “incongruity-resolution” theory, and primarily refers to written jokes. When identifying what makes a humorous situation funny, this theory can be applied broadly; it can account for the laughs found in many different juxtaposed concepts.

Take the following one-liners as examples:

“I have an Epi-Pen. My friend gave it to me as he was dying. It seemed very important to him that I have it.”

“Remains to be seen if glass coffins become popular.”

The humor in both of these examples relies on incongruous interpretations: In the first, a person has clearly misinterpreted his friend’s dying wish. In the second, the phrase “remains to be seen” is a play on words that takes on two very different meanings depending on how you read the joke.

In the case of our moose video, the incongruity results from the false expectation that the interaction between man and moose would result in some sort of violence. When we see our expectations foiled, it results in the perception of humor.

The safety of being in the audience at a comedy show frees you to let loose.
Mark Schiefelbein/AP

Benign violations theory: It’s bad, but harmless

Incongruity is also a fundamental part of the benign violations theory of humor (BVT), one of the most recently developed explanations. Derived from the linguist Thomas Veatch’s “violation theory,” which describes various ways for incongruity to be funny, BVT attempts to create one global theory to unify all previous theories of humor and account for issues with each.

Broadly, benign violations theory asserts that all humor derives from three necessary conditions:

  1. The presence of some sort of norm violation, be it a moral norm violation (robbing a retirement home), social norm violation (breaking up with a long-term boyfriend via text message) or physical norm violation (purposefully sneezing directly on a child).
  2. A “benign” or “safe” context in which the violation takes place (this can take many forms).
  3. The interpretation of the first two points simultaneously. In other words, one must view, read or otherwise interpret a violation as relatively harmless.

Thus far, researchers studying BVT have demonstrated a few different scenarios in which the perception of a benign violation could take place – for example, when there is weak commitment to the norm being violated.

Take the example of a church raffling off a Hummer SUV. Researchers found this scenario is much less funny to churchgoers (with their strong commitment to the norm that the church is sacred and embodies values of humility and stewardship) than it is to non-churchgoers (with relatively weak norm commitment about the church). While both groups found the concept of the church’s choice of fundraiser disgusting, only the non-churchgoers simultaneously appraised the situation as also amusing. Hence, a benign violation is born.

In the case of our moose video, the violation is clear; there’s a moose about to charge two people, and we’re not sure what exactly is about to go down. The benign part of the situation could be credited to a number of different sources, but it’s likely due to the fact that we’re psychologically (and physically, and temporally) distant from the individuals in the video. They’re far away in Sweden, and we’re comfortably watching their dilemma on a screen.

Homing in on funny

At one point or another, we’ve all wondered why some phrase or occurrence has caused us to erupt with laughter. In many ways, this type of inquiry is what drove me to research the limits and consequences of humor in the first place. People are unique and often find different things amusing. In order to examine the effects of humor, it is our job as researchers to try to select and craft the stimuli we present to affect the widest range of people. The outcomes of good science stem from both the validity and reliability of our stimuli, which is why it’s important to think critically about the reasons why we’re laughing.

The application of this still-growing body of humor research and theory is seen everywhere, influencing everything from political speeches to advertising campaigns. And while “laughter is the best medicine” may be an overstatement (penicillin is probably better, for one), psychologists and medical professionals have started to lend credence to the idea that humor and laughter might have some positive effects for health and happiness. These applications underscore the importance of developing the best understanding of humor we can.

Alex Borgella, Ph.D. Candidate in Psychology, Tufts University

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

Memetics and the science of going viral

By Shontavia Johnson, Drake University.

WHO LET THE DOGS OUT? WHO-WHO-WHO-WHO-WHO? WHO LET THE DOGS OUT? WHO-WHO-WHO-WHO-WHO?

If you’ve ever heard the Baha Men’s 2000 hit “Who Let the Dogs Out,” you probably have also experienced its somewhat-annoying-but-very-catchy hook being stuck in your head for several hours.

The official video for ‘Who Let the Dogs Out’ by the Baha Men.

As you went about your day quietly humming it, perhaps someone else heard you and complained minutes later that you’d gotten the tune stuck in their head. The song’s hook seems to have the ability to jump from one brain to another. And perhaps, to jump from the web browser you are using right now to your brain. In fact, you may be singing the hook to yourself right now.

Something similar happens on the internet when things go viral – seeming to follow no rhyme or reason, people are compelled to like, share, retweet or participate in things online.

Meme world domination: When the leader of the free world impersonates Grumpy Cat. Gary Cameron/Reuters

For example, Grumpy Cat’s photo was shared so many times online that it went on to receive the “single greatest internet achievement of the year” in 2013. The owner of Grumpy Cat (the cat’s real name is Tardar Sauce) has said she did not know the photo, originally posted to Reddit, would be anything special.

Out of the blue, some social media challenges take off to such an extent that people seem powerless to ignore them. In 2014, more than 3 million people mentioned and shared #IceBucketChallenge videos in less than three weeks. After going viral, the challenge raised more than US$100 million for the ALS Association.

In other instances, however, digital media languishes. Funny cat photos and #XYZChallenges go ignored, unshared and without retweets.

Why, and how, are we compelled to repeat and share certain cultural elements like songs, videos, words and pictures? Is it just the luck of the draw and the pity of internet strangers? The reason may have less to do with random chance and more to do with a controversial field called memetics, which focuses on how little bits of culture spread among us.

As the director of Drake University Law School’s Intellectual Property Law Center, I’ve studied memetics and its relationship to viral media. It’s hard to ignore the connection between memetics and the question of what makes certain media get shared and repeated millions of times. Companies and individuals would be well-served to understand whether there is actually a science to going viral and how to use that science in campaigns.

Idea of ‘memes’ is based on genes

The term “memetics” was first proposed by evolutionary biologist Richard Dawkins in his popular 1976 book “The Selfish Gene.” He offered a theory regarding how cultural information evolves and is transmitted from person to person.

Just as a gene is a discrete packet of hereditary information, the idea is that a meme is a similar packet of cultural information. According to Dawkins, when one person imitates another, a meme is passed to the imitator, similar to the way blue eyes are passed from parents to children through genes.

Like male elk battling for supremacy, memes also fight to be on top.
Jake Bellucci, CC BY-ND

Memetics borrows from the theory of Darwinian evolution. It suggests that memes compete, reproduce and evolve just as genes do. Only the strongest survive. So memes fiercely vie for space and advantages in our brains and behaviors. The ones that succeed through widespread imitation have best evolved for repetition and communication. A meme is not controllable by any one individual – many people can simultaneously serve as hosts for it.

It can be difficult to further explain what might fall under the heading of “meme.” Commonly, however, scientists note a meme may be a phrase, catchy jingle, or behavior. Dawkins hesitated to strictly define the term, but he noted that tunes, ideas, catch-phrases, clothes fashions, and ways of making pots or building arches could all be memes. Memetics suggests that memes have existed for as long as human beings have been on the planet.

One common illustration is the spoked wheel meme. According to philosopher Daniel C. Dennett:

A wagon with spoked wheels carries not only grain or freight from place to place; it carries the brilliant idea of a wagon with spoked wheels from mind to mind.

The first person whose brain transported the spoked wheel meme builds one spoke-wheeled wagon. Others will see this first wagon, replicate the same meme, and continue to build more wagons until there are hundreds, thousands, or millions of wagons with spoked wheels. In the earliest days of human existence, such a meme could replicate quickly in a universe where alternatives were few and far between.
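As a rough illustration of that replication dynamic (an invented toy model, not Dennett’s), imitation compounds: each carrier of the meme exposes a few new minds per step, and a fraction of those minds copy it.

```python
# Toy model of meme replication: each carrier exposes a few new minds per
# step, and a fraction of those minds imitate the meme. All numbers invented.
def spread(carriers=1.0, contacts_per_step=3, imitation_rate=0.5, steps=10):
    for step in range(1, steps + 1):
        carriers += carriers * contacts_per_step * imitation_rate
        print(f"step {step:2d}: ~{int(carriers):,} carriers")

spread()  # one wagon-builder's idea reaches thousands within ten "generations"
```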

Memetics is about more than just what makes a thing popular. The strongest memes – those that replicate in the most minds – are the ones responsible for creating human culture.

A strong meme is going to go places.
Richard Walker, CC BY

Enter the internet

Today, the internet meme (what most people now just call a meme) is a piece of media that is copied and quickly spread online. One of the first uses of the internet meme idea arose in 1994, when Mike Godwin, an American attorney and internet law expert, used the word “meme” to characterize the rapid spread of ideas online. He had noticed that, in disparate newsgroups and virtual communities, certain posters were labeled as “similar to the Nazis” or “Hitler-like” when they made unpopular comments. Godwin dubbed this the Nazi-comparison meme. It would pop up again and again, in different kinds of discussions with posters from around the world, and Godwin marveled at the meme’s “peculiar resilience.”

More than 20 years later, the word “meme” has become a regular part of our lexicon, and has been used to describe everything from the Ermahgerd Girl to Crying Jordan to Gangnam Style.

In today’s world, any one meme has lots of competition. Americans spend, on average, 11 hours per day interacting with digital media. Australians spend 10 hours per day on internet-connected devices. Latin Americans spend more than 12 hours consuming some sort of media daily.

Around the world, people constantly receive thousands of photos, videos and other messages. Determining which of these items captures the most attention could translate into significant advantages for digital content creators.

Manipulating memes to go viral?

The internet meme and the scientific meme are not identical. The internet meme is typically deliberately altered by human ingenuity, while the scientific meme (invented before the internet became a thing) involves random change and accurate copying. In addition, internet memes are traceable through their presence on social media, while scientific memes are (at least right now) untraceable because they have no physical form or footprint. Dawkins, however, has stated that the internet meme and the scientific meme are clearly related and connected.

What causes one meme to replicate more successfully than another? Some researchers say that memes develop characteristics called “Good Tricks” to provide them with competitive advantages, including:

  1. being genuinely useful to a human host;
  2. being easily imitated by human brains; and
  3. answering questions that the human brain finds of interest.

First, if a meme is genuinely useful to humans, it is more likely to spread. Spoked-wheel wagons replicated quickly because early humans needed to transport lots of freight easily. Second, memes that are easy to copy have a competitive advantage over those that aren’t – a catchy hook like “WHO LET THE DOGS OUT” is easier to replicate than the lines to U2’s “Numb” (called one of the toughest pop songs to understand). Third, memes that answer pressing questions are likely to replicate. Peruse any bookstore aisle and you will find numerous books about finding your purpose, figuring out the meaning of life, or losing weight quickly and effectively – all topics of immense interest to many people.

Memetics suggests that there are real benefits to pairing a strong meme (using Dawkins’ original definition) with digital and other content. If there is a scientific explanation for strong replication, marketing and advertising strategies coupled with strong memes can unlock the share-and-repeat secrets of viral media.

The answer to such secrets may be found in songs like “WHO LET THE DOGS OUT? WHO-WHO-WHO-WHO-WHO?” Are you humming it yet?

Shontavia Johnson, Professor of Intellectual Property Law, Drake University

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

Getting serious about funny: Psychologists see humor as a character strength

By Janet M. Gibson, Grinnell College.

Humor is observed in all cultures and at all ages. But only in recent decades has experimental psychology respected it as an essential, fundamental human behavior.

Historically, psychologists framed humor negatively, suggesting it demonstrated superiority, vulgarity, Freudian id conflict or a defense mechanism to hide one’s true feelings. In this view, an individual used humor to demean or disparage others, or to inflate one’s own self-worth. As such, it was treated as an undesirable behavior to be avoided. And psychologists tended to ignore it as worthy of study.

But research on humor has come into the sunlight of late, with humor now viewed as a character strength. Positive psychology, a field that examines what people do well, notes that humor can be used to make others feel good, to gain intimacy or to help buffer stress. Along with gratitude, hope and spirituality, a sense of humor belongs to the set of strengths positive psychologists call transcendence; together they help us forge connections to the world and provide meaning to life. Appreciation of humor correlates with other strengths, too, such as wisdom and love of learning. And humor activities or exercises result in increased feelings of emotional well-being and optimism.

For all these reasons, humor is now welcomed into mainstream experimental psychology as a desirable behavior or skill researchers want to understand. How do we comprehend, appreciate and produce humor?

What it takes to get a joke

Understanding and creating humor require a sequence of mental operations. Cognitive psychologists favor a three-stage theory of humor. To be in on the joke you need to be able to:

  1. Mentally represent the set up of the joke.
  2. Detect an incongruity in its multiple interpretations.
  3. Resolve the incongruity by inhibiting the literal, nonfunny interpretations and appreciating the meaning of the funny one.

An individual’s knowledge is organized in mental memory structures called schemas. When we see or think of something, it activates the relevant schema; our body of knowledge on that particular topic immediately comes to mind.

For example, when we see cows in a Far Side cartoon, we activate our bovine schema (stage 1). But when we notice the cows are inside a car while human beings are out in the pasture grazing, there are now two mental representations in our conscious mind: what our preexisting schema mentally represented about cows and what we imagined from the cartoon (stage 2). By inhibiting the real-world representation (stage 3), we find the idea of cows driving through a countryside of grazing people funny. “I know about cows” becomes “wait, cows should be the ones in the field, not people” becomes an appreciation of the humor in an implausible situation.

Funny is the subjective experience that comes from the resolution of at least two incongruous schemas. In verbal jokes, the second schema is often activated at the end, in a punchline.

That’s not funny

There are at least two reasons that we sometimes don’t get the joke. First, the punchline must create a different mental representation that conflicts with the one set up by the joke; timing and laugh tracks help signal the listener that a different representation of the punchline is possible. Second, you must be able to inhibit the initial mental representation.

You need some new material. Special Collections at Johns Hopkins University, CC BY-NC

When jokes perpetuate a stereotype that we find offensive (as in ethnic, racist or sexist jokes), we may refuse to inhibit the offensive representation. Violence in cartoons is another example; in Roadrunner cartoons, when an anvil hits the coyote, animal lovers may be unable to inhibit the animal cruelty meaning and focus instead on the funny meaning of yet another inevitable failure.

This incongruity model can explain why older adults do not comprehend jokes as frequently as younger adults. Due to declines tied to the aging process, older adults may not have the cognitive resources needed to create multiple representations, to simultaneously hold multiple ones in order to detect the incongruity, or to inhibit the first one that was activated. Getting the joke relies on working memory capacity and control functions. However, when older adults succeed in their efforts to do these things, they typically show greater appreciation of the joke than younger adults do and report greater life satisfaction than those who don’t see the humor.

Advancing age can set the stage for an appreciation of humor.
Ann Fisher, CC BY-NC-ND

There may be other aspects to humor, though, where older adults hold the advantage. Wisdom is a form of reasoning that increases with age and is correlated with subjective well-being. Humor is linked with wisdom – a wise person knows how to use humor or when to laugh at oneself.

Additionally, intuition is a form of decision-making that may develop with the expertise and experience that come with aging. Like humor, intuition is enjoying a bit of a renaissance within psychology research now that it’s been reframed as a major form of reasoning. Intuition aids humor in schema formation and incongruity resolution, and we perceive and appreciate humor more through speedy first impressions rather than logical analysis.

Traveling through time

It’s a uniquely human ability to parse time, to reflect on our past, present and future, and to imagine details in these mental representations. As with humor, time perspective is fundamental to human experience. Our ability to enjoy humor is enmeshed with this mental capacity for time travel and subjective well-being.

People vary greatly in the ability to detail their mental representations of the past, present and future. For example, some people may have what psychologists call a negative past perspective – frequently thinking about bygone mistakes that don’t have anything to do with the present environment, even reliving them in vivid detail despite the present or future being positive.

Time perspective is related to feelings of well-being. People report a greater sense of well-being depending on the quality of the details of their past or present recollections. When study participants focused on “how” details, which tend to be vivid, they were more satisfied with life than when they focused on “why” details, which tend to be abstract. For example, when remembering a failed relationship, those focusing on events that led to the breakup were more satisfied than those dwelling on abstract causal explanations concerning love and intimacy.

The way you think about the past is tied up with your sense of humor.
Pensive image via www.shutterstock.com.

One study found that people who use humor in positive ways held positive past time perspectives, and those using self-defeating humor held negative past time perspectives. This kind of study contributes to our understanding of how we think about and interpret social interactions. Such research also suggests that attempts to use humor in a positive way may improve the emotional tone of details in our thoughts and thereby our moods. Clinical psychologists are using humor as a treatment to increase subjective well-being.

In ongoing recent work, my students and I analyzed college students’ scores on a few common scales that psychologists use to assess humor, time perspective and the need for humor – a measure of how an individual produces or seeks humor in their daily lives. Our preliminary results suggest those high in humor character strength tend to concentrate on the positive aspects of their past, present and future. Those who seek humor in their lives appear in our study sample also to focus on the pleasant aspects of their current lives.

Though our investigation is still in the early phase, our data support a connection between the cognitive processes needed to mentally time-travel and to appreciate humor. Further research on time perspectives may help explain individual differences in detecting and resolving incongruities that result in funny feelings.

Learning to respect laughter

Experimental psychologists are rewriting the book on humor as we learn its value in our daily lives and its relationship to other important mental processes and character strengths. As the joke goes, how many psychologists does it take to change a light bulb? Just one, but the light bulb has to want to change.

Studying humor allows us to investigate theoretical processes involved in memory, reasoning, time perspective, wisdom, intuition and subjective well-being. And it’s a behavior of interest in and of itself as we work to describe, explain, control and predict humor across age, genders and cultures.

Whereas we may not agree on what’s funny and what isn’t, there’s more consensus than ever among experimental psychologists that humor is serious and relevant to the science of behavior. And that’s no laughing matter.

Janet M. Gibson, Professor of Cognitive Psychology, Grinnell College

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

Virtual bodyswapping reduces bias against other races

By Manos Tsakiris, Royal Holloway.

In 1959, John Howard Griffin, a white American writer, underwent medical treatments to change his skin appearance and present himself as a black man. He then traveled through the segregated US south to experience the racism endured daily by millions of black Americans. This unparalleled life experiment provided invaluable insights into how the change in Griffin’s own skin color triggered negative and racist behaviors from his fellow Americans.

But what about the changes that Griffin himself might have experienced? What does it mean to become someone else? How does this affect one’s self? And how can this affect one’s stereotypes, beliefs and racial attitudes? That was the key question that my colleagues and I set out to answer in a series of psychological experiments that looked at the link between our bodies and our sense of who we are.

Is this my body?

Can I trick you into thinking that a fake hand is part of your body? When I ask my students or friends, the immediate reaction I get is one of disbelief, followed by a definite “no.” However, we’ve learned from experimental psychology that it’s actually quite easy to trick your brain into thinking that a fake hand is indeed part of you.

Watch a subject experiencing the rubber hand illusion.

The Rubber Hand Illusion, as it came to be known, is based on a simple mechanism of sensory integration that shows how malleable the sense of our body is. As a participant in my experiment, I would ask you to sit in front of a table and place your right hand behind a screen so that you cannot directly see it. I would then place a prosthetic rubber hand in front of you and ask you to look at it. Now the trick is that with two paintbrushes I start gently stroking your own hand, while simultaneously stroking the rubber hand at exactly the same location. In this way, you feel touch in your own hand, which you cannot see, and you see touch on the fake hand.

A light-skinned participant experiences a dark hand as part of his own body. Trends in Cognitive Sciences, Maister et al.,  CC BY

This situation creates a conflict for the brain, precisely because you feel something in one location and see something very similar – the touch on the fake hand – in a different location. Brains don’t like conflicts, and your brain will try to solve the conflict by using the sensory information available. Since we tend to put more stock in what we see, your brain will start creating the illusion that the sensation you feel in your own hand is actually caused by the touch you see on the fake hand. If this is the origin of your sensation, then the fake hand must be yours!

The illusion is strong and some of its effects are remarkable. For example, it has been shown that once you experience the illusion, the skin temperature on your hand drops, suggesting that the brain downregulates the homeostasis of your own hand since it now has a new hand to take care of. Interestingly, we also found that people experience the illusion independent of differences in skin color between their own hand and the rubber hand. These striking findings inspired us to ask some important questions about the ways in which we relate socially to other people.

Changing bodies

Walking down the street, our attention is constantly and automatically attracted to other people, especially their faces and appearance; these are very salient social stimuli. Additionally, we’re capable of making split-second decisions about others – whether we like them or not, whether we would trust them or not, whether they are similar to us or not, and by extension whether they belong to the same group as us or not.

Such decisions often influence and to a certain extent bias our behavior towards them. For example, we tend to place more trust in people we perceive to be physically similar to us. The same goes for perceived similarity in personality traits. It seems that our brain constantly computes the perceived physical or psychological similarity between self and others to gauge our behavior.

What if you could, for a moment, have the body of a race, sex or age different from your own? Would that make you perceive people of another race, sex or age as more similar to you? Would that change the way you feel about yourself or the way that you stereotype different social groups? By combining illusions – including the Rubber Hand Illusion – that change the way our brain represents our body, we were able to test whether a change in your self would result in a change in your implicit racial bias.

We did not want to ask our participants explicitly whether they were racist, because we could easily anticipate their answers. Instead, we used a well-known social psychological test, the Implicit Association Test, or IAT for short. It’s designed to measure the strength of association between different categories, such as Black or White people and pleasant or unpleasant concepts.

One screen participants would see in an IAT about race. They’re asked to sort the face to the left or the right.
Manos Tsakiris, Author provided

In the typical IAT procedure, the word “Black” appears in the top left corner of the screen and the word “White” appears in the top right corner. In the middle of the screen a “Black” or a “White” face appears and participants must sort the face into the appropriate category by pressing the appropriate left or right key. In addition to faces, other positive or negative attributes can also be used.

We can measure how fast people are at categorizing black faces when these are paired with unpleasant or pleasant concepts. If people hold negative implicit attitudes towards black people, they should have strong associations between unpleasant concepts and black faces. As a result, they should be faster at categorizing black faces when these are paired with unpleasant concepts, and should be slower when black faces are paired with pleasant concepts. We can therefore measure people’s performance in the IAT and estimate how negatively or positively biased they are against black people.
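As a concrete illustration (not the studies’ actual analysis code; real IAT scoring uses Greenwald and colleagues’ more elaborate D-score algorithm, with error penalties and trial filtering), a simplified bias index can be computed by comparing mean reaction times across the two pairing conditions:

```python
from statistics import mean, stdev

# Invented reaction times (milliseconds) for one hypothetical participant.
rt_black_unpleasant = [612, 655, 598, 630, 641]  # black faces + unpleasant words
rt_black_pleasant = [702, 748, 725, 690, 731]    # black faces + pleasant words

# A D-score-like index: difference between condition means, scaled by pooled SD.
pooled_sd = stdev(rt_black_unpleasant + rt_black_pleasant)
bias = (mean(rt_black_pleasant) - mean(rt_black_unpleasant)) / pooled_sd

# Positive values mean faster responses when black faces were paired with
# unpleasant concepts, i.e. a negative implicit bias on this toy measure.
print(f"bias index: {bias:.2f}")
```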

In a series of studies that we ran in my lab as well as in the lab of Prof Mel Slater, we first used this simple test to measure the implicit racial bias in large samples of white Caucasian adult participants. As expected, they showed small but nevertheless negative biases towards black people. Next, we used different kinds of bodily illusions to make people experience that they have a body of dark skin color. For example, participants experienced that their hand, their face or their whole body in a virtual reality environment was black.

A woman feels the sensation she sees happening to a different face.

Once they experienced the illusion of having a different body, we gave them again the same test of implicit bias. For white people who were made to feel that they had black bodies, their negative biases against black people diminished. In similar experiments, adults who felt as if they had children’s bodies processed perceptual information and aspects of themselves as being more child-like.

Changing minds

One basic function that underlies many of our social interactions is computing the perceived physical or psychological similarity between ourselves and others. By changing how people represent themselves internally, we probably allowed them to experience others as being more similar to them. This in turn resulted in a reduction in their negative implicit biases.

In other words, the integration of different sensory signals can allow the brain to update its model of the body and cause people to change their attitudes about others.

Often formed at an early age, negative racial attitudes are thought to remain relatively stable throughout adulthood. Few studies have looked into whether implicit social biases can change. The converging evidence that we report shows that we can positively alter such biases by exploiting the way the brain integrates sensory information from our bodies. Such findings can motivate new research into how self-identity is constructed and how the boundaries between ingroups and outgroups might be altered.

Immersive virtual reality enhances the illusion of embodying a different body.
Trends in Cognitive Sciences, Maister et al., CC BY

From a societal point of view, our methods and findings might help us understand how to approach phenomena such as racism, racial and religious hatred, and gender inequality and discrimination. There is no simple cure for racism, of course. But together with the increased accessibility of virtual reality technologies, our experiments can be easily transformed into engaging educational tools that could allow participants to experience the world from the perspective of someone different from themselves.

This feeling of being a different person or a member of a different group allows us to understand that “we are more alike… than we are unalike,” as Maya Angelou famously wrote. How can such changes be effected in society? This is a fundamental political question, one that has not been answered for thousands of years, but experiencing the world through someone else’s body might be a small but important step towards more integration.

Manos Tsakiris, Professor of Psychology, Royal Holloway

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

Why you shouldn’t want to always be happy

By Frank T. McAndrew, Knox College.

In the 1990s, a psychologist named Martin Seligman led the positive psychology movement, which placed the study of human happiness squarely at the center of psychology research and theory. It continued a trend that began in the 1960s with humanistic and existential psychology, which emphasized the importance of reaching one’s innate potential and creating meaning in one’s life, respectively.

Since then, thousands of studies and hundreds of books have been published with the goal of increasing well-being and helping people lead more satisfying lives.

So why aren’t we happier? Why have self-reported measures of happiness stayed stagnant for over 40 years?

Perversely, such efforts to improve happiness could be a futile attempt to swim against the tide, as we may actually be programmed to be dissatisfied most of the time.

You can’t have it all

Part of the problem is that happiness isn’t just one thing.

Jennifer Hecht is a philosopher who studies the history of happiness. In her book “The Happiness Myth,” Hecht proposes that we all experience different types of happiness, but these aren’t necessarily complementary. Some types of happiness may even conflict with one another. In other words, having too much of one type of happiness may undermine our ability to have enough of the others – so it’s impossible for us to simultaneously have all types of happiness in great quantities.

For example, a satisfying life built on a successful career and a good marriage is something that unfolds over a long period of time. It takes a lot of work, and it often requires avoiding hedonistic pleasures like partying or going on spur-of-the-moment trips. It also means you can’t while away too much of your time spending one pleasant lazy day after another in the company of good friends.

On the other hand, keeping your nose to the grindstone demands that you cut back on many of life’s pleasures. Relaxing days and friendships may fall by the wayside.

As happiness in one area of life increases, it’ll often decline in another.

A rosy past, a future brimming with potential

This dilemma is further confounded by the way our brains process the experience of happiness.

By way of illustration, consider the following examples.

We’ve all started a sentence with the phrase “Won’t it be great when…” (I go to college, fall in love, have kids, etc.). Similarly, we often hear older people start sentences with the phrase “Wasn’t it great when…”

Think about how seldom you hear anyone say, “Isn’t this great, right now?”

Surely, our past and future aren’t always better than the present. Yet we continue to think that this is the case.

These beliefs are the bricks that wall off harsh reality from the part of our mind that thinks about past and future happiness. Entire religions have been constructed from them. Whether we’re talking about our ancestral Garden of Eden (when things were great!) or the promise of unfathomable future happiness in Heaven, Valhalla, Jannah or Vaikuntha, eternal happiness is always the carrot dangling from the end of the divine stick.

There’s evidence for why our brains operate this way; most of us possess something called the optimistic bias, which is the tendency to think that our future will be better than our present.

To demonstrate this phenomenon to my classes, at the beginning of a new term I’ll tell my students the average grade received by all students in my class over the past three years. I then ask them to anonymously report the grade that they expect to receive. The demonstration works like a charm: Without fail, the expected grades are far higher than one would reasonably expect, given the evidence at hand.

And yet, we believe.

Cognitive psychologists have also identified something called the Pollyanna Principle. It means that we process, rehearse and remember pleasant information from the past more than unpleasant information. (An exception to this occurs in depressed individuals who often fixate on past failures and disappointments.)

For most of us, however, the reason that the good old days seem so good is that we focus on the pleasant stuff and tend to forget the day-to-day unpleasantness.

Our memories of the past are often distorted, viewed through rose-colored glasses.
U.S. 97, South of Klamath Falls, Oregon, July 21, 1973. Chromogenic color print. © Stephen Shore.

Self-delusion as an evolutionary advantage?

These delusions about the past and the future could be an adaptive part of the human psyche, with innocent self-deceptions actually enabling us to keep striving. If our past is great and our future can be even better, then we can work our way out of the unpleasant – or at least, mundane – present.

All of this tells us something about the fleeting nature of happiness. Emotion researchers have long known about something called the hedonic treadmill. We work very hard to reach a goal, anticipating the happiness it will bring. Unfortunately, after a brief fix we quickly slide back to our baseline, ordinary way-of-being and start chasing the next thing we believe will almost certainly – and finally – make us happy.

My students absolutely hate hearing about this; they get bummed out when I imply that however happy they are right now, that’s probably about how happy they will be 20 years from now. (Next time, perhaps I will reassure them that in the future they’ll remember being very happy in college!)

Nevertheless, studies of lottery winners and other individuals at the top of their game – those who seem to have it all – regularly throw cold water on the dream that getting what we really want will change our lives and make us happier. These studies found that positive events like winning a million bucks and unfortunate events such as being paralyzed in an accident do not significantly affect an individual’s long-term level of happiness.

Assistant professors who dream of attaining tenure and lawyers who dream of making partner often find themselves wondering why they were in such a hurry. After finally publishing a book, it was depressing for me to realize how quickly my attitude went from “I’m a guy who wrote a book!” to “I’m a guy who’s only written one book.”

But this is how it should be, at least from an evolutionary perspective. Dissatisfaction with the present and dreams of the future are what keep us motivated, while warm fuzzy memories of the past reassure us that the feelings we seek can be had. In fact, perpetual bliss would completely undermine our will to accomplish anything at all; among our earliest ancestors, those who were perfectly content may have been left in the dust.

This shouldn’t be depressing; quite the contrary. Recognizing that happiness exists – and that it’s a delightful visitor that never overstays its welcome – may help us appreciate it more when it arrives.

Furthermore, understanding that it’s impossible to have happiness in all aspects of life can help you enjoy the happiness that has touched you.

Recognizing that no one “has it all” can cut down on the one thing psychologists know impedes happiness: envy.

Frank T. McAndrew, Cornelia H. Dudley Professor of Psychology, Knox College

This article was originally published on The Conversation. Read the original article.


The power of rewards and why we seek them out

By Rachel Grieve, University of Tasmania and Emily Lowe-Calverley, University of Tasmania.

Any dog owner will tell you that we can use a food reward as a motivation to change a dog’s behaviour. But humans are just as susceptible to rewards.

When we get a reward, special pathways in our brain become activated. Not only does this feel good, but the activation also leads us to seek out more rewarding stimuli.

Humans show these neurological responses to many types of rewards, including food, social contact, music and even self-affirmation.

But there is more to reward than physiology: differences in how often and when we get rewarded can also have a big impact on our experience of reward. In turn, this influences the likelihood that we will engage in that activity again. Psychologists describe these as schedules of reinforcement.

It’s not (just) what you do, it’s when you do it

The simplest type of reinforcement is continuous reinforcement, where a behaviour is rewarded every time it occurs. Continuous reinforcement is a particularly good way to train a new behaviour.

But intermittent reinforcement is the strongest way to maintain a behaviour. In intermittent reinforcement, the reward is delivered after some of the behaviours, but not all of them.

There are four main intermittent schedules of reinforcement, and some of these are more powerful than others.

Fixed Ratio

In the Fixed Ratio schedule of reinforcement, a specific number of actions must occur before the behaviour is rewarded. For example, your local coffee shop tells you that after you stamp your card nine times, your tenth drink is free.

Fixed Interval

Similarly, in the Fixed Interval schedule, a specific time must pass before the behaviour is rewarded. It is easy to think about this schedule in terms of work paid on an hourly basis – you are rewarded with money for every 60 minutes of work you complete.

Variable Ratio

For the Variable Ratio schedule, rewards are given after a varying number of behaviours – sometimes after four, sometimes five and other times 20 – making the reward more unpredictable.

This principle can be seen in poker (slot) machine gambling. The machine has an average win ratio, but that doesn’t guarantee a consistent rate of reward, so players continue in the hope that the next press of the button is the one that pays off.

Variable Interval

The Variable Interval schedule works on the same unpredictable principle, but in terms of time. So rewards are given after varying intervals of time – sometimes five minutes, sometimes 30 and sometimes after a longer period. So at work, when your boss drops in at random points of the day, your hard work is reinforced.

It is easy to see that rewards given on a variable ratio would reinforce behaviours far more effectively – if you don’t know when you will be rewarded, you continue to act, just in case!

Psychologists describe this persistent behaviour as a resistance to extinction. Even after the reward is completely taken away, the behaviour will remain for a while because you aren’t sure if this is just a longer interval before the reward than usual.
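
These schedules are simple enough to simulate. The Python sketch below is our own illustration, not anything from the research literature: each rule decides whether the current response earns a reward, using arbitrary placeholder thresholds (a 10-response ratio, an hour-long interval, and so on).

import random

# One illustrative rule per schedule of reinforcement. Each rule takes the
# number of responses (n) and the seconds elapsed (t) since the last reward,
# and says whether the current response is rewarded. All thresholds are
# arbitrary values chosen for this sketch.
SCHEDULES = {
    "continuous":        lambda n, t: True,                   # every response rewarded
    "fixed_ratio":       lambda n, t: n >= 10,                # every 10th response (loyalty card)
    "fixed_interval":    lambda n, t: t >= 3600,              # first response after an hour (hourly pay)
    "variable_ratio":    lambda n, t: random.random() < 0.1,  # about 1 in 10, unpredictably (poker machine)
    "variable_interval": lambda n, t: random.random() < t / 1800,  # likelier as time passes, unpredictably
}

def simulate(schedule, presses=200, seconds_per_press=30):
    """Count the rewards earned over a run of responses under one schedule."""
    rule = SCHEDULES[schedule]
    n = t = rewards = 0
    for _ in range(presses):
        n += 1
        t += seconds_per_press
        if rule(n, t):
            rewards += 1
            n = t = 0  # counters reset after each reward
    return rewards

for name in SCHEDULES:
    print(f"{name:18} {simulate(name):3d} rewards in 200 presses")

Run it a few times: the fixed schedules pay out identically on every run, while the variable ones fluctuate from run to run, and it is exactly that unpredictability that keeps the behaviour going.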

We all respond to rewards, but only if they are rewarding enough.
Keith Williamson/Flickr, CC BY-NC-ND

Do rewards have a ‘dark side’?

You can certainly use these principles to shape someone’s behaviour. Loyalty cards for supermarkets, airlines, and restaurants all increase the likelihood of our continued use of those services.

Marketers can also use reward to their advantage. If you can make someone feel anxious because they don’t own a particular product – maybe the latest or greatest version of something they already have – then buying the new product delivers its own reward: the reduction of that anxiety.

Want more help around the house? Start off with praising your partner/kids every time they do the desired behaviour, and once they are doing it regularly, slip into a comfortable variable ratio mode.

And of course, sometimes rewards can result in addiction.

Addiction used to be seen in the context of substance use, and there is indeed substantial evidence for the role of reward pathways in alcohol and other drug addiction.

Obviously, the nature of addiction is complex. But more recently, evidence has emerged that addiction can be based on a behaviour rather than on ingesting a substance.

For example, people show addiction-like behaviours related to their mobile phone use, shopping and even love relationships.

Pokémon GO rewards

Recently the world has watched the introduction of the mobile game Pokémon GO. Cleverly, this game employs multiple schedules of reinforcement which ensure users continue to feel the need to “catch ‘em all”.

On the fixed ratio schedule, users know that if they catch enough Pokémon they will level up, or possess enough candy to evolve. The hatching of eggs follows a similar fixed schedule; in this case, the measure is distance walked rather than time elapsed.

Discovering a rare Pokémon can keep players hooked.
But on the variable ratio and interval schedules, users never know how far they need to wander before they will find a new Pokémon, or how long it will be before something other than a wild Pidgey appears!

So they continue to check the app regularly throughout the day. No wonder Pokémon GO is so addictive.

But it’s not just Pokémon masters who fall prey to online reward schedules.

Checking our emails at various points of the day is reinforced when there is something in our inbox – a variable interval schedule. This makes us more likely to check for emails again.

Our social media posts are reinforced with “likes” on a variable ratio schedule. Most of your posts may be rewarded with a few likes, but occasionally (and importantly, unpredictably) a post will attract much more attention than the others, which encourages more posting in the future.

Now, if you will excuse us, we just need to click “refresh” on our inbox. Again.

Rachel Grieve, Senior Lecturer in Psychology, University of Tasmania and Emily Lowe-Calverley, PhD Candidate in Cyberpsychology, University of Tasmania

This article was originally published on The Conversation. Read the original article.


Why are people starting to believe in UFOs again?

By Joseph P. Laycock, Texas State University.

The 1990s were a high-water mark for public interest in UFOs and alien abduction. Shows like “The X-Files” and Fox’s “alien autopsy” hoax were prime-time events, while MIT even hosted an academic conference on the abduction phenomenon.

But in the first decade of the 21st century, interest in UFOs began to wane. Fewer sightings were reported, and established amateur research groups like the British Flying Saucer Bureau disbanded.

In 2006 historian Ben Macintyre suggested in The Times that the internet had “chased off” the UFOs. The web’s free-flowing, easy exchange of ideas and information had allowed UFO skeptics to prevail, and, to Macintyre, people were no longer seeing UFOs because they no longer believed in them.

Data seemed to back up Macintyre’s argument that, when it came to belief in UFOs, reason was winning out. A 1990 Gallup poll found that 27 percent of Americans believed “extraterrestrial beings have visited Earth at some time in the past.” That number rose to 33 percent in 2001, before dropping back to 24 percent in 2005.

But now “The X-Files” is back, and Hillary Clinton has even pledged to disclose what the government knows about aliens if elected president. Meanwhile, a recent Boston Globe article by Linda Rodriguez McRobbie suggests that belief in UFOs may be growing.

She points to a 2015 Ipsos poll, which reported that 45 percent of Americans believe extraterrestrials have visited the Earth.

So much for reason.

Why does Western society continue to be fascinated with the paranormal? If science doesn’t automatically kill belief in UFOs, why do reports of UFOs and alien abductions go in and out of fashion?

To some extent, this is political. Even though government agents like “Men in Black” may be the stuff of folklore, powerful people and institutions can influence the level of stigma surrounding these topics.

Sociologists of religion have also suggested that skepticism is countered by a different societal trend, something they’ve dubbed “re-enchantment.” They argue that while science can temporarily suppress belief in mysterious forces, these beliefs will always return – that the need to believe is ingrained in the human psyche.

A new mythology

The narrative of triumphant reason dates back, at least, to German sociologist Max Weber’s 1918 speech “Science as a Vocation,” in which he argued that the modern world takes for granted that everything is reducible to scientific explanations.

“The world,” he declared, “is disenchanted.”

As with many inexplicable events, UFOs were initially treated as an important topic of scientific inquiry. The public wondered what was going on; scientists studied the issue and then “demystified” the topic.

Modern UFOlogy – the study of UFOs – is typically dated to a sighting made by a pilot named Kenneth Arnold. While flying over Mount Rainier on June 24, 1947, Arnold described nine disk-like objects that the media dubbed “flying saucers.”

A few weeks later the Roswell Daily Record reported that the military had recovered a crashed flying saucer. By the end of 1947, Americans had reported an additional 850 sightings.

The front page of the July 6, 1947, edition of the Roswell Daily Record.
Wikimedia Commons

During the 1950s, people started reporting that they’d made contact with the inhabitants of these craft. Frequently, the encounters were erotic.

For example, one of the first “abductees” was a mechanic from California named Truman Bethurum. Bethurum was taken aboard a spaceship from Planet Clarion, which he said was captained by a beautiful woman named Aura Rhanes. (Bethurum’s wife eventually divorced him, citing his obsession with Rhanes.) In 1957, Antonio Villas-Boas of Brazil reported a similar encounter in which he was taken aboard a ship and forced to breed with a female alien.

Psychologists and sociologists proposed a few theories about the phenomenon. In 1957, psychoanalyst Carl Jung theorized that UFOs served a mythological function that helped 20th-century people adapt to the stresses of the Cold War. (For Jung, this did not preclude the possibility that UFOs might be real.)

Furthermore, American social mores were rapidly changing in the mid-20th century, especially around issues of race, gender and sexuality. According to historian W. Scott Poole, stories of sex with aliens could have been a way of processing and talking about these changes. For example, when the Supreme Court finally declared laws banning interracial marriage unconstitutional in 1967, the country had already been talking for years about Betty and Barney Hill, an interracial couple who claimed to have been probed by aliens.

Contactee lore also started applying “scientific ideas” as a way to repackage some of the mysterious forces associated with traditional religions. Folklore expert Daniel Wojcik has termed belief in benevolent space aliens “techno-millennarianism.” Instead of God, some UFO believers think forms of alien technology will be what redeems the world. Heaven’s Gate – whose members famously committed mass suicide in 1997 – was one of several religious groups awaiting the arrival of the aliens.

You’re not supposed to talk about it

Despite some dubious stories from contactees, the Air Force took UFO sightings seriously, organizing a series of studies, including Project Blue Book, which ran from 1952 to 1969.

In 1966, the Air Force tapped a team of University of Colorado scientists headed by physicist Edward Condon to investigate reports of UFOs. Even though the team failed to identify 30 percent of the 91 sightings it examined, its 1968 report concluded that it wouldn’t be useful to continue studying the phenomenon. Condon added that schoolteachers who allowed their students to read UFO-related books for classroom credit were doing a grave disservice to the students’ critical faculties and ability to think scientifically.

Basing its decision on the report, the Air Force terminated Project Blue Book, and Congress ended all funding for UFO research.

As religion scholar Darryl Caterine explained in his book “Haunted Ground,” “With civil rights riots, hippie lovefests and antiwar protests raging throughout the nation, Washington gave its official support to a rational universe.”

While people still believed in UFOs, expressing too much interest in the subject now came with a price. In 2010, sociologists Christopher D. Bader, F. Carson Mencken and Joseph O. Baker found that 69 percent of Americans reported belief in at least one paranormal subject (astrology, ghosts, UFOs, etc.).

But their findings also suggested that the more status and social connections someone has, the less likely he or she is to report paranormal belief. Single people report more paranormal beliefs than married people, and those with low incomes report more paranormal belief than those with high incomes. It may be that people with “something to lose” have reason not to believe in the paranormal (or at least not to talk about it).

In 1973, the American Institute of Aeronautics and Astronautics surveyed its membership about UFOs. Several scientists reported that they had seen unidentified objects and a few even answered that UFOs are extraterrestrial or at least “real.” However, physicist Peter A. Sturrock suggested that scientists felt comfortable answering these questions only because their anonymity was guaranteed.

Harvard psychiatrist John Mack came to symbolize the stigma of UFO research. Mack worked closely with abductees, whom he dubbed “experiencers.” While he remained cagey about whether aliens actually existed, he advocated for the experiencers and argued that their stories should be taken seriously.

John Mack’s appearance on ‘Oprah.’

His bosses weren’t happy. In 1994, Harvard Medical School opened an investigation into his research – an unprecedented action against a tenured professor. In the end, Harvard dropped the case and affirmed Mack’s academic freedom. But the message was clear: Being open-minded about aliens was bad for one’s career.

Reason and re-enchantment

So if Hillary Clinton is running for president, why is she talking about UFOs?

Part of the answer may be that the Clintons have ties to a network of influential people who have lobbied the government to disclose the truth about UFOs. This includes the late millionaire Laurance Rockefeller (who funded John Mack’s research) and John Podesta, the chairman of Clinton’s campaign and a long-time disclosure advocate.

But there may also be a broader cultural cycle at work. Sociologists such as Christopher Partridge have suggested that disenchantment leads to re-enchantment. While secularization may have weakened the influence of traditional churches, this doesn’t mean that people have become disenchanted skeptics. Instead, many have explored alternate spiritualities that churches had previously stigmatized as “superstitions” (everything from holistic healing to Mayan prophecies). The rise of scientific authority may have paradoxically paved the way for UFO mythology.

A similar change may be happening in the political sphere where the language of critical thinking has been turned against the scientific establishment. In the 1960s, Congress deferred to the Condon Report. Today, conservative politicians regularly challenge ideas like climate change, evolution and the efficacy of vaccines. These dissenters never frame their claims as “anti-science” but rather as courageous examples of free inquiry.

Donald Trump may have been the first candidate to discover that weird ideas are now an asset instead of a liability. In a political climate where the language of reason is used to attack the authority of science, musing over the possibility of UFOs simply doesn’t carry the stigma that it used to.

Joseph P. Laycock, Assistant Professor of Religious Studies, Texas State University

This article was originally published on The Conversation. Read the original article.


Freaks, geeks, norms and mores: why people use the status quo as a moral compass

By Christina Tworek, University of Illinois at Urbana-Champaign.

The Binewskis are no ordinary family. Arty has flippers instead of limbs; Iphy and Elly are Siamese twins; Chick has telekinetic powers. These traveling circus performers see their differences as talents, but others consider them freaks with “no values or morals.” However, appearances can be misleading: The true villain of the Binewski tale is arguably Miss Lick, a physically “normal” woman with nefarious intentions.

Much like the fictional characters of Katherine Dunn’s “Geek Love,” everyday people often treat normality as a criterion for morality. Yet freaks and norms alike may find themselves anywhere along the good/bad continuum. Still, people use what’s typical as a benchmark for what’s good, and are often averse to behavior that goes against the norm. Why?

In a series of studies, psychologist Andrei Cimpian and I investigated why people use the status quo as a moral codebook – a way to decipher right from wrong and good from bad. Our inspiration for the project was philosopher David Hume, who pointed out that people tend to allow the status quo (“what is”) to guide their moral judgments (“what ought to be”). Just because a behavior or practice exists, that doesn’t mean it’s good – but that’s exactly how people often reason. Slavery and child labor, for example, were and still are prevalent in some parts of the world, but their existence doesn’t make them right or OK. We wanted to understand the psychology behind the reasoning that prevalence is grounds for moral goodness.

To examine the roots of such “is-to-ought inferences,” we turned to a basic element of human cognition: how we explain what we observe in our environments. From a young age, we try to understand what’s going on around us, and we often do so by explaining. Explanations are at the root of many deeply held beliefs. Might people’s explanations also influence their beliefs about right and wrong?

Quick shortcuts to explain our environment

When we come up with explanations to make sense of the world around us, the need for efficiency often trumps the need for accuracy. (We don’t have the time and cognitive resources to strive for perfection with every explanation, decision or judgment.) Under most circumstances, we just need to quickly get the job done, cognitively speaking. Faced with an unknown, an efficient detective takes shortcuts, relying on simple information that comes to mind readily.

More often than not, what comes to mind first tends to involve “inherent” or “intrinsic” characteristics of whatever is being explained.

Separate bathrooms reflect the natural order of things?
Bart Everson, CC BY

For example, if I’m explaining why men and women have separate public bathrooms, I might first say it’s because of the anatomical differences between the sexes. The tendency to explain using such inherent features often leads people to ignore other relevant information about the circumstances or the history of the phenomenon being explained. In reality, public bathrooms in the United States became segregated by gender only in the late 19th century – not as an acknowledgment of the different anatomies of men and women, but rather as part of a series of political changes that reinforced the notion that women’s place in society was different from that of men.

Testing the link

We wanted to know if the tendency to explain things based on their inherent qualities also leads people to value what’s typical.

To test whether people’s preference for inherent explanations is related to their is-to-ought inferences, we first asked our participants to rate their agreement with a number of inherent explanations: For example, girls wear pink because it’s a dainty, flower-like color. This served as a measure of participants’ preference for inherent explanations.

In another part of the study, we asked people to read mock press releases that reported statistics about common behaviors. For example, one stated that 90 percent of Americans drink coffee. Participants were then asked whether these behaviors were “good” and “as it should be.” That gave us a measure of participants’ is-to-ought inferences.

These two measures were closely related: People who favored inherent explanations were also more likely to think that typical behaviors are what people should do.

We tend to see the commonplace as good and how things should be. For example, if I think public bathrooms are segregated by gender because of the inherent differences between men and women, I might also think this practice is appropriate and good (a value judgment).

This relationship was present even when we statistically adjusted for a number of other cognitive or ideological tendencies. We wondered, for example, if the link between explanation and moral judgment might be accounted for by participants’ political views. Maybe people who are more politically conservative view the status quo as good, and also lean toward inherence when explaining? This alternative was not supported by the data, however, and neither were any of the others we considered. Rather, our results revealed a unique link between explanation biases and moral judgment.
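
To make that kind of check concrete, here is a minimal sketch of the logic in Python, using entirely invented data; the variable names, sample size and coefficients are ours, not the study’s. The idea is to regress the is-to-ought scores on the inherence scores while including the candidate confound, and see whether the inherence effect survives.

import numpy as np

rng = np.random.default_rng(0)
n = 200  # hypothetical participants; every number below is made up

# Fabricated scores: inherence bias and moral judgment share a direct link,
# and both are mildly related to conservatism (the would-be confound).
conservatism = rng.normal(size=n)
inherence = 0.3 * conservatism + rng.normal(size=n)
is_to_ought = 0.5 * inherence + 0.2 * conservatism + rng.normal(size=n)

# Regress is-to-ought scores on inherence while adjusting for conservatism.
# If the inherence coefficient stays substantial after adjustment, the
# confound cannot account for the link (the pattern the authors report).
X = np.column_stack([np.ones(n), inherence, conservatism])
coef, *_ = np.linalg.lstsq(X, is_to_ought, rcond=None)
print(f"inherence effect after adjusting for conservatism: {coef[1]:.2f}")

With these invented numbers, the adjusted coefficient lands close to the 0.5 we built in, which is the shape of the result described above: the explanation-judgment link is not just political ideology in disguise.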

What goes into even young children’s assumptions that ‘what is’ is ‘what ought to be’?
Girl image via www.shutterstock.com.

A built-in bias affecting our moral judgments

We also wanted to find out at what age the link between explanation and moral judgment develops. The earlier in life this link is present, the greater its influence may be on the development of children’s ideas about right and wrong.

From prior work, we knew that the bias to explain via inherent information is present even in four-year-old children. Preschoolers are more likely to think that brides wear white at weddings, for example, because of something about the color white itself, and not because of a fashion trend people just decided to follow.

Does this bias also affect children’s moral judgment?

Indeed, as we found with adults, 4- to 7-year-old children who favored inherent explanations were also more likely to see typical behaviors (such as boys wearing pants and girls wearing dresses) as being good and right.

If what we’re claiming is correct, changes in how people explain what’s typical should change how they think about right and wrong. When people have access to more information about how the world works, it might be easier for them to imagine the world being different. In particular, if people are given explanations they may not have considered initially, they may be less likely to assume “what is” equals “what ought to be.”

Queen Victoria started the trend in 1840 with her at-the-time unusual white wedding dress.
Franz Xaver Winterhalter

Consistent with this possibility, we found that by subtly manipulating people’s explanations, we could change their tendency to make is-to-ought inferences. When we put adults in what we call a more “extrinsic” (and less inherent) mindset, they were less likely to think that common behaviors are necessarily what people should do. For instance, even children were less likely to view the status quo (brides wear white) as good and right when they were provided with an external explanation for it (a popular queen long ago wore white at her wedding, and then everyone started copying her).

Implications for social change

Our studies reveal some of the psychology behind the human tendency to make the leap from “is” to “ought.” Although there are probably many factors that feed into this tendency, one of its sources seems to be a simple quirk of our cognitive systems: the early emerging bias toward inherence that’s present in our everyday explanations.

This quirk may be one reason why people – even very young ones – have such harsh reactions to behaviors that go against the norm. For matters pertaining to social and political reform, it may be useful to consider how such cognitive factors lead people to resist social change.

Christina Tworek, Ph.D. Student in Developmental Psychology, University of Illinois at Urbana-Champaign

This article was originally published on The Conversation. Read the original article.
