Here’s why ‘baby talk’ is good for your baby

By Catherine E. Laing, Duke University.

When we read, it’s very easy for us to tell individual words apart: In written language, spaces are used to separate words from one another. But this is not the case with spoken language – speech is a stream of sound, from which the listener has to separate words to understand what the speaker is saying.

This task isn’t difficult for adults who are familiar with the words of their language. But what about babies, who have almost no linguistic experience? How do they even begin to separate, or “segment,” individual words from the stream of language that they hear all around them all of the time?

As a researcher interested in early language production, I am fascinated by how babies begin acquiring knowledge of their language, and how parents and other caregivers can support them in this task.

Babies first start learning language by listening not to individual words, but to the rhythm and intonation of the speech stream – that is, the changes between high and low pitch, and the rhythm and loudness of syllables in speech. Parents often exaggerate these features of the language when talking with their infants, and this is important for early language learning.

Nevertheless, some may feel that using this exaggerated speech style is condescending, or unrealistic in comparison to adult speech, and as such does not give babies a good start.

Is “baby talk” really good for babies?

How babies learn

Even before a baby is born, the process of learning language has already begun. In the third trimester of pregnancy, when the infant’s ears are sufficiently developed, the intonation patterns of the mother’s speech are transmitted through the fluids in the womb.

This is thought to be like listening to someone talking in a swimming pool: It’s difficult to make out the individual sounds, but the rhythm and intonation are clear. This has an important effect on language learning. By the time an infant is born, she already has a preference for her mother’s language. At this stage the infant is able to identify language through its intonation patterns.

For example, French and Russian speakers place emphasis on different parts of a word or sentence, so the rhythm of these two languages sounds different. Even at four days old, babies can use this information to distinguish their own language from an unfamiliar one.

This means that the newly born infant is ready to start learning the language that surrounds her; she already has an interest in her mother’s language, and as her attention is drawn to this language she begins to learn more about the features and patterns within it.

Using a singsong voice

Intonation is also very important to infants’ language development in the first months of life. Adults tend to speak to babies using a special type of register that we know as “baby talk” or “motherese.” This typically involves a higher pitch than regular speech, with wide, exaggerated intonation changes.

Research has shown that babies prefer listening to this exaggerated “baby talk” style of speech over typical adult-directed speech: They pay more attention when a parent’s speech has a higher pitch and a wider pitch range than adult-directed speech with less exaggerated pitch features.

For example, a mother might say the word “baby” in an exaggerated “singsong” voice, which holds an infant’s attention longer than it would in a monotone adult-style voice. Words produced in this way also stand out more from the speech stream, making it easier for babies to pick out smaller chunks of language.

Across the vast stream of language that babies hear around them every day, these distinctive pitch features in baby talk help babies to “tune in” to a small part of the input, making language processing a more manageable task.

How infants process speech

Baby talk tends to be spoken at a slower rate, and key words often appear at the end of a phrase. For example, the sentence, “Can you see the doggie?” is preferable to “The doggie is eating a bone”: Babies will learn the word “doggie” more easily when it appears at the end of the phrase.

For the same reasons, words produced in isolation – separated from the rest of the phrase by pauses – are also easier for infants to learn. Research has shown that the first words that infants produce are often those that are heard most frequently in isolation in early development. Babies hear isolated words such as “bye bye” and “mummy” very frequently, and these are often some of the earliest words that they learn to produce.

When a word is produced separately from running speech, the infant does not have to segment it from a stream of sounds, and so it is easier to determine where the word begins and where it ends.

Furthermore, infants have been found to recognize words more easily when they are produced more slowly than in typical adult speech. This is because when speech is slower, it is easier for infants to pick out the individual words and sounds, which may be produced more clearly than in faster speech. In addition, infants process language much more slowly than adults, and so it is believed that slower speech gives infants more time to process what they hear.

How reduplication helps

Word repetition is also beneficial in infants’ early word learning. Infants’ first words tend to be those which are produced most frequently in caregiver speech, such as “mummy,” “bottle” and “baby.”

The more often an infant hears a word, the easier it is to segment it from the speech stream. The infant develops a stronger mental representation of frequent words. Eventually she will be more likely to produce frequently heard words with fewer errors.

Furthermore, reduplicated words – that is, words which contain repetition, such as “woof woof” or “quack quack” – are typical of baby talk, and are known to have an advantage for early word learning.

Even newborn infants show stronger brain activation when they hear words that contain reduplication. This suggests that there may be a strong advantage for these words in human language processing. This is supported by evidence from slightly older infants, who have been found to learn reduplicated words more easily than non-reduplicated words.

How ‘baby talk’ helps infants

So, baby talk is not just a way of engaging with an infant on a social level – it has important implications for language learning from the very first moments of a newborn’s life. Features of baby talk present infants with information about their ambient language, and allow them to break up the speech stream into smaller chunks.

While baby talk is not essential to guiding infants’ language learning, the use of pitch modulations, repetition and slower speech all allow infants to process the patterns in their language more easily.

Speaking in such an exaggerated style might not seem conducive to language learning in the longer term, but ample research shows that this speech style actually provides an optimum input for language learning from the very first days of an infant’s life.

Catherine E. Laing, Postdoctoral Associate, Duke University

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

What causes mind blanks during exams?

By Jared Cooney Horvath, University of Melbourne and Jason M Lodge, University of Melbourne.

It’s a pattern many of us have likely experienced in the past.

You prep for an exam and all the information seems coherent and simple. Then you sit for an exam and suddenly all the information you learned is gone. You struggle to pull something up – anything – but the harder you fight, the further away the information feels. The dreaded mind blank.

So what is going on?

To understand what’s happening during a mind blank, there are three brain regions we have to become familiar with.

The first is the hypothalamus. For all intents and purposes, we can conceive of the hypothalamus as the bridge between your emotions and your physical sensations. In short, this part of the brain has strong connections to the endocrine system, which, in turn, is responsible for the type and amount of hormones flowing throughout your body.

The second is the hippocampus. A subcortical structure, the hippocampus plays an incredibly important role in both the learning and retrieval of facts and concepts. We can conceive of the hippocampus as a sort of memory door through which all information must pass in order to enter and exit the brain.

The third is the prefrontal cortex (PFC). Located behind your eyes, this is the calm, cool, rational part of your brain. All the things that suggest you, as a human being, are in control are largely mediated here: things like working memory (the ability to hold and manipulate information in your mind), impulse control (the ability to dampen unwanted behavioural responses), decision making (the ability to select a proper response between competing possibilities), etc.

How a mind blank happens

When you are preparing for an exam in a setting that is predictable and relatively low-stakes, you are able to engage in cold cognition. This is the term given to logical and rational thinking processes.

In our particular instance, when you are studying at home, seated in your comfortable bed, listening to your favourite music, the hypothalamus slows down the production and release of key stress hormones (outlined below) while the PFC and hippocampus are confidently chugging along unimpeded.

However, when you enter a somewhat unpredictable and high-stakes exam situation, you enter the realm of hot cognition. This is the term given to non-logical and emotionally driven thinking processes. Hot cognition is typically triggered in response to a clear threat or otherwise highly stressful situation.

So an exam can serve to trigger a cascade of unique thoughts – for instance,

If I fail this exam I may not get into a good university or graduate program. Then I may not get a good job. Then I may perish alone and penniless.

With this type of loaded thinking, it’s no wonder that those taking tests sometimes perceive an exam as a threat.

When a threat is detected, the hypothalamus stimulates the generation of several key stress hormones, including norepinephrine and cortisol.

High levels of norepinephrine enter the PFC and serve to dampen neuronal firing and impair effective communication. This impairment essentially clears out your working memory (whatever you were thinking about is now gone) and stops the rational, logical PFC from influencing other brain regions.

At the same time, high levels of cortisol enter the hippocampus, where they not only disrupt activation patterns but also (with prolonged exposure) kill hippocampal neurons. This impedes the ability to access old memories and skews the perception and storage of new memories.

In short, when an exam is interpreted as a threat and a stress response is triggered, working memory is wiped clean, recall mechanisms are disrupted, and emotionally laden hot cognition driven by the hypothalamus (and other subcortical regions) overrides the normally rational cold cognition driven by the PFC.

Taken together, this process leads to a mind blank, making logical cognitive activity difficult to undertake.

Is there any way to avoid this?

The good news – there are some things you can do to stave off mind blanks.

The first concerns de-stressing. Through concerted practice and application of cognitive-behavioural and/or relaxation techniques aimed at reframing any perceived threat during an exam situation, those taking tests can potentially abate the stress response and re-enter a more rational thinking process.

Another concerns preparation. The reason the armed forces train new recruits in stressful situations that simulate active combat scenarios is to ensure cold cognition during future engagements.

The more a person experiences a particular situation, the less likely he or she is to perceive such a situation as threatening.

So when preparing for an exam, try not to do so in a highly relaxed soothing environment – rather, try to push yourself in ways that will mimic the final testing scenario you are preparing for.

Jared Cooney Horvath, Postdoctoral Fellow, University of Melbourne and Jason M Lodge, Senior Lecturer, Melbourne Centre for the Study of Higher Education & ARC-SRI Science of Learning Research Centre, University of Melbourne

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

Alcoholism research: A virus could manipulate neurons to reduce the desire to drink

By Yifeng Cheng, Texas A&M University and Jun Wang, Texas A&M University.

About 17 million adults and more than 850,000 adolescents had some problems with alcohol in the United States in 2012. Long-term alcohol misuse could harm your liver, stomach, cardiovascular system and bones, as well as your brain.

Chronic heavy alcohol drinking can lead to a problem that we scientists call alcohol use disorder, which most people call alcohol abuse or alcoholism. Whatever name you use, it is a severe issue that affects millions of people and their families and causes economic burdens to our society.

Quitting alcohol, like quitting any drug, is hard to do. One reason may be that heavy drinking can actually change the brain.

Our research team at Texas A&M University Health Science Center has found that alcohol changes the way information is processed through specific types of neurons in the brain, encouraging the brain to crave more alcohol. Over time, the more you drink, the more striking the change.

In recent research we identified a way to mitigate these changes and reduce the desire to drink using a genetically engineered virus.

Alcohol changes your brain

Alcohol use disorders include alcohol abuse and alcohol dependence, and can be thought of as an addiction. Addiction is a chronic brain disease. It causes abnormalities in the connections between neurons.

Heavy alcohol use can cause changes in a region of the brain, called the striatum. This part of the brain processes all sensory information (what we see and what we hear, for instance), and sends out orders to control motivational or motor behavior.

The striatum, which is located in the forebrain, is a major target for addictive drugs and alcohol. Drug and alcohol intake can profoundly increase the level of dopamine, a neurotransmitter associated with pleasure and motivation, in the striatum.

The neurons in the striatum have higher densities of dopamine receptors as compared to neurons in other parts of the brain. As a result, striatal neurons are more susceptible to changes in dopamine levels.

There are two main types of neurons in the striatum: D1 and D2. While both receive sensory information from other parts of the brain, they have nearly opposite functions.

D1-neurons control “go” actions, which encourage behavior. D2-neurons, on the other hand, control “no-go” actions, which inhibit behavior. Think of D1-neurons like a green traffic light and D2-neurons like a red traffic light.

Dopamine affects these neurons in different ways. It promotes D1-neuron activity, turning the green light on, and suppresses D2-neuron function, turning the red light off. As a result, dopamine promotes “go” actions and inhibits “no-go” actions in reward-related behavior.

Alcohol, especially excessive amounts, can hijack this reward system because it increases dopamine levels in the striatum. As a result, your green traffic light is constantly switched on, and the red traffic light doesn’t light up to tell you to stop. This is why heavy alcohol use pushes you to drink to excess more and more.

These brain changes last a very long time. But can they be mitigated? That’s what we want to find out.

Can we mitigate these changes?

We started by presenting mice with two bottles, one containing water and the other containing 20 percent alcohol by volume, mixed with drinking water. The bottle containing alcohol was available every other day, and the mice could freely decide which to drink from. Gradually, most of the animals developed a drinking habit.

We then used a process called viral mediated gene transfer to manipulate the “go” or “no-go” neurons in mice that had developed a drinking habit.

Mice were infected with a genetically engineered virus that delivers a gene into the “go” or “no-go” neurons. That gene then drives the neurons to express a specific protein.

Once the protein was expressed, we injected the mice with a chemical that recognizes and binds to it. This binding can inhibit or promote activity in these neurons, letting us turn the green light off (by inhibiting “go” neurons) or turn the red light back on (by exciting “no-go” neurons).

Then we measured how much alcohol the mice were consuming after being “infected,” and compared it with what they were drinking before.

We found that either inhibiting the “go” neurons or turning on the “no-go” neurons successfully reduced alcohol drinking levels and preference for alcohol in the “alcoholic” mice.

In another experiment in this study, we found that directly delivering a drug that excites the “no-go” neuron into the striatum can also reduce alcohol consumption. Conversely, in a previous experiment we found that directly delivering a drug that inhibits the “go” neuron has the same effect. Both results may help the development of clinical treatment for alcoholism.

What does this mean for treatment?

Most people with an alcohol use disorder can benefit from treatment, which can include a combination of medication, counseling and support groups. Although medications that help people stop drinking, such as naltrexone, can be effective, none of them can accurately target the specific neurons or circuits that are responsible for alcohol consumption.

Employing viruses to deliver specific genes into neurons has been used for disorders such as Parkinson’s disease in humans. But while we’ve demonstrated that this process can reduce the desire to drink in mice, we’re not yet at the point of using the same method in humans.

Our finding provides insight for clinical treatment in humans in the future, but using a virus to treat alcoholism in humans is probably still a long way off.

Yifeng Cheng, Ph.D. Candidate, Texas A&M University Health Science Center, Texas A&M University and Jun Wang, Assistant Professor of Neuroscience and Experimental Therapeutics, Texas A&M Health Science Center, Texas A&M University

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

Can great apes read your mind? [Videos]

By Christopher Krupenye, Max Planck Institute.

One of the things that defines humans most is our ability to read others’ minds – that is, to make inferences about what others are thinking. To build or maintain relationships, we offer gifts and services – not arbitrarily, but with the recipient’s desires in mind. When we communicate, we do our best to take into account what our partners already know and to provide information we know will be new and comprehensible. And sometimes we deceive others by making them believe something that is not true, or we help them by correcting such false beliefs.

All these very human behaviors rely on an ability psychologists call theory of mind: We are able to think about others’ thoughts and emotions. We form ideas about what beliefs and feelings are held in the minds of others – and recognize that they can be different from our own. Theory of mind is at the heart of everything social that makes us human. Without it, we’d have a much harder time interpreting – and probably predicting – others’ behavior.

For a long time, many researchers have believed that a major reason human beings alone exhibit unique forms of communication, cooperation and culture is that we’re the only animals to have a complete theory of mind. But is this ability really unique to humans?

In a new study published in Science, my colleagues and I tried to answer this question using a novel approach. Previous work has generally suggested that people think about others’ perspectives in very different ways than other animals do. Our new findings suggest, however, that great apes may actually be a bit more similar to us than we previously thought.

Apes get some parts of what others are thinking

Decades of research with our closest relatives – chimpanzees, bonobos, gorillas and orangutans – have revealed that great apes do possess many aspects of theory of mind. For one, they can identify the goals and intentions behind others’ actions. They’re also able to recognize which features of the environment others can see or know about.

Where apes have consistently failed, though, is on tasks designed to assess their understanding of others’ false beliefs. They don’t seem to know when someone has an idea about the world that conflicts with reality.

Picture me rummaging through the couch because I falsely believe the TV remote is in there. “Duuuude,” my (human) roommate says, noticing my false belief, “the remote is on the table!” He’s able to imagine the way I’m misconstruing reality, and then set me straight with the correct information.

To investigate false belief understanding in great apes, comparative psychologist Fumihiro Kano and I turned to a technique that hadn’t been used before with apes in this context: eye-tracking. Our international team of researchers enrolled over 40 bonobos, chimpanzees and orangutans at Zoo Leipzig in Germany and Kumamoto Sanctuary in Japan in our novel, noninvasive experiment.

Researchers use juice to attract the apes to the spot where they can watch the videos.
MPI-EVA

Watching what they watched

We showed the apes videos of a human actor engaging in social conflicts with a costumed ape-like character (King Kong). Embedded within these interactions was important information about the human actor’s belief. For example, in one scene the human actor searched for a stone that he had seen King Kong hide in one of two boxes. However, while the actor was away, King Kong moved the stone to another location and then removed it completely; when the actor returned, he falsely believed the stone was still in its original location.

The big question was: Where would the apes expect the actor to search? Would they anticipate that the actor would search for the stone in the last place where he saw it, even though the apes themselves knew it was no longer there?

While the apes were watching the videos, a special camera faced them, recording their gaze patterns and mapping them onto the video. This eye-tracker let us see exactly where on the videos the apes were looking as they watched the scenarios play out.

Watch a video of what the apes were shown. The red dots show where one ape was looking as she watched the movie. Credit: MPI-EVA and Kumamoto Sanctuary, Kyoto University

Apes, like people, do what’s called anticipatory looking: They look to locations where they anticipate something is about to happen. This tendency allowed us to assess what the apes expected the actor to do when he returned to search for the stone.

Strikingly, across several different conditions and contexts, when the actor was reaching toward the two boxes, apes consistently looked to the location where the actor falsely believed the stone to be. Importantly, their gaze predicted the actor’s search even before the actor provided any directional cues about where he was going to search for the stone.

The apes were able to anticipate that the actor would behave in accordance with what we humans recognize as a false belief.

The red dots show the ape looking at the place where he anticipates the person will search – even though he himself knows the stone has been moved.
MPI-EVA and Kumamoto Sanctuary, Kyoto University, CC BY-ND

Even more alike than we thought

Our findings challenge previous research, and assumptions, about apes’ theory of mind abilities. Although we have more studies planned to determine whether great apes can really understand others’ false beliefs by imagining their perspectives, like humans do, the current results suggest they may have a richer appreciation of others’ minds than we previously thought.

Great apes didn’t just develop these skills this year, of course, but the use of novel eye-tracking techniques allowed us to probe the question in a new way. By using methods that for the first time assessed apes’ spontaneous predictions in a classic false belief scenario – with minimal demands on their other cognitive abilities – we were able to show that apes knew what was going to happen.

At the very least, in several different scenarios, these apes were able to correctly predict that an individual would search for an object where he falsely believed it to be. These findings raise the possibility that the capacity to understand others’ false beliefs may not be unique to humans after all. If apes do in fact possess this aspect of theory of mind, the implication is that most likely it was present in the last evolutionary ancestor that human beings shared with the other apes. By that metric, this core human skill – recognizing others’ false beliefs – would have evolved at least 13 to 18 million years before our own species Homo sapiens hit the scene.

Christopher Krupenye, Postdoctoral Researcher in Developmental and Comparative Psychology, Max Planck Institute

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

Why teen brains need later school start time

By Kyla Wahlstrom, University of Minnesota.

Millions of high schoolers are having to wake up early as they start another academic year. It is not uncommon to hear comments from parents such as,

“I have a battle every morning to get my teenager out of bed and off to school. It’s a hard way to start every day.”

Sleep deprivation in teenagers as a result of early school start times has been a topic of concern and debate for nearly two decades. School principals, superintendents and school boards across the country have struggled with the question of whether their local high school should start later.

So, are teenagers just lazy?

I have been researching the impact of later high school start times for 20 years. Research findings show that teens’ inability to get out of bed before 8 a.m. is a matter of human biology, not a matter of attitude.

At issue here are the sleep patterns of the teenage brain, which are different from those of younger children and adults. Due to the biology of human development, the sleep mechanism in teens does not allow the brain to naturally awaken before about 8 a.m. This often conflicts with school schedules in many communities.

History of school timing

In the earliest days of American education, all students attended a single school with a single starting time. In fact, as late as 1910, half of all children attended one-room schools. As schools and districts grew in size from the late 1890s through the 1920s, staggered starting times became the norm across the country.

In cities and large towns, high school students went first, followed by middle schoolers and then elementary students.

Here’s what research shows

Research findings during the 1980s started to cast a new light on teenagers’ sleep patterns.

Researcher Mary Carskadon and others at Brown University found that the human brain has a marked shift in its sleep/wake pattern during adolescence.

Researchers around the world corroborated those findings. At the onset of puberty, nearly all humans (and most mammals) experience a delay of sleep timing in the brain. As a result, the adolescent body does not begin to feel sleepy until about 10:45 p.m.

At the same time, medical researchers also found that sleep patterns of younger children enabled them to rise early and be ready for learning much earlier than adolescents.

In other words, the biology of the teenage brain is in conflict with early school start times, whereas sleep patterns of most younger children are in sync with schools that start early.

Biology of teenage brain

So, what exactly happens to the teenage brain during the growth years?

In teens, secretion of the sleep hormone melatonin begins at about 10:45 p.m. and continues until about 8 a.m. What this means is that teenagers are unable to fall asleep until melatonin secretion begins, and are not able to fully awaken until the melatonin secretion stops.

These changes in the sleep/wake pattern of teens are dramatic and beyond their control. Just expecting teens to go to bed earlier is not a solution.

I have interviewed hundreds of teens who all said that if they went to bed early, they were unable to sleep – they just stared at the ceiling until sleep set in around 10:45 p.m.

According to the National Sleep Foundation, teenagers need between eight and 10 hours of sleep per night. That indicates that the earliest healthy wake-up time for teens should not be before 7 a.m.

A recent research study that I led shows that it takes teens an average of 54 minutes from the time they wake up until they leave the house for school. If the earliest healthy wake-up time is 7 a.m., most teens cannot leave home before about 7:54 a.m. With nearly half of all high schools in the U.S. starting before 8:00 a.m., and over 86 percent starting before 8:30 a.m., getting to school on time is a challenge for most teens in America.
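The arithmetic behind that claim can be checked in a few lines. This is just an illustrative sketch: the 7 a.m. wake-up, the 54-minute average and the bell times are the figures reported above, and the specific calendar date is arbitrary.

```python
from datetime import datetime, timedelta

# Figures from the article; the date is arbitrary, only the times matter.
earliest_wakeup = datetime(2024, 9, 2, 7, 0)   # earliest healthy wake-up, ~7 a.m.
prep_time = timedelta(minutes=54)              # average time from waking to leaving home

earliest_departure = earliest_wakeup + prep_time
print(earliest_departure.strftime("%I:%M %p"))  # 07:54 AM

# How much travel time do common bell times leave for a teen who wakes at 7 a.m.?
for bell_time in ("08:00", "08:30"):
    hour, minute = map(int, bell_time.split(":"))
    bell = earliest_wakeup.replace(hour=hour, minute=minute)
    print(bell_time, "start leaves", bell - earliest_departure, "to get to school")
```

An 8:00 a.m. bell leaves only six minutes between the earliest realistic departure and the start of class, which is why start times are the focus of the debate.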

What happens with less sleep

Studies on sleep in general, and on sleep in teens in particular, have revealed the serious negative consequences of lack of adequate sleep. Teens who are sleep-deprived – defined as obtaining less than eight hours per night – are significantly more likely to use cigarettes, drugs and alcohol.

The incidence of depression among teens significantly rises with less than nine hours of sleep. Feelings of sadness and hopelessness increase from 19 percent up to nearly 52 percent in teens who sleep four hours or less per night.

Teen car crashes, the primary cause of death for teenagers, are found to significantly decline when teens obtain more than eight hours of sleep per night.

What changes with later start time?

Results from schools that switched to a later start time are encouraging. Not only does teens’ use of drugs, cigarettes and alcohol decline, but their academic performance also improves significantly with a later start time.

The Edina (Minnesota) School District was the first district in the country to make the change. The decision by its superintendent and school board came in response to a recommendation from the Minnesota Medical Association, back in 1996.

Research showed significant benefits for teens in that district, as well as in others with later start times.

For example, the crash rate for teens in Jackson Hole, Wyoming in 2013 dropped by 70 percent in the first year after the district adopted a later high school start.

At this point, hundreds of schools across the country in 44 states have been able to make the shift. As early as 2007, the National Sleep Foundation counted more than 250 high schools that had changed to a later start.

Furthermore, since 2014, major national health organizations have taken a policy stand to support the implementation of later starting times for high school. The American Academy of Pediatrics, the American Medical Association and the Centers for Disease Control and Prevention have all issued statements supporting high school starting times of 8:30 a.m. or later.

Challenges and benefits

However, there are many schools and districts across the U.S. that are resisting delaying the starting time of their high schools. There are many reasons.

Issues such as changing transportation routes and altering the timing for other grade levels often head the list of factors making the later start difficult. Schools are also concerned about afterschool sports and activities.

Such concerns are valid. However, there could be creative ways of finding solutions. We already know that schools that were able to make the change found solutions that show “out of the box” thinking. For example, schools adopted mixed-age busing, coordinated with public transport systems and expanded afterschool child care.

I do understand that there are other realistic concerns that need to be addressed in making the change. But, in the end, communities that value maximum development for all of their children will be willing to grapple with solutions.

After all, our children’s ability to move into healthy adult lives tomorrow depends on what we as adults are deciding for them today.

Kyla Wahlstrom, Senior Research Fellow, University of Minnesota

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

New Brain Implant Enables Monkeys to Type Sections of Hamlet Just by Thinking [Video]

New technology allowed monkeys to transcribe passages from the New York Times and Hamlet at a rate of 12 words per minute.

The method, developed by Krishna Shenoy, a professor of electrical engineering at Stanford University, and postdoctoral fellow Paul Nuyujukian, directly reads brain signals to drive a cursor moving over a keyboard.

Earlier versions of the technology have already been tested successfully in people with paralysis, but the typing was slow and imprecise. This latest work tests improvements to the speed and accuracy of the technology that interprets brain signals and drives the cursor.

“Our results demonstrate that this interface may have great promise for use in people,” says Nuyujukian, who will join Stanford faculty as an assistant professor of bioengineering in 2017. “It enables a typing rate sufficient for a meaningful conversation.”

Implants vs. eye-tracking

Other approaches for helping people with movement disorders type involve tracking eye movements or, as in the case of Stephen Hawking, tracking movements of individual muscles in the face. However, these have limitations, and they can require a degree of muscle control that is difficult for some people. For example, Stephen Hawking wasn’t able to use eye-tracking software because of drooping eyelids, and other people find eye-tracking technology tiring.

Directly reading brain signals could overcome some of these challenges and provide a way for people to communicate their thoughts and emotions.

This new technology involves a multi-electrode array implanted in the brain to directly read signals from a region that ordinarily directs hand and arm movements used to move a computer mouse.

It’s the algorithms for translating those signals and making letter selections that the team members have been improving. They had tested individual components of the updated technology in prior monkey studies but had never demonstrated the combined improvements in speed and accuracy.

“The interface we tested is exactly what a human would use,” Nuyujukian says. “What we had never quantified before was the typing rate that could be achieved.” Using these high-performing algorithms developed by Nuyujukian and his colleagues, the animals could type more than three times faster than with earlier approaches.

Typing with auto-complete

The monkeys testing the technology had been trained to type letters corresponding to what they saw on a screen. For this study, the animals transcribed passages of New York Times articles or, in one example, Hamlet. The results, which appear in the Proceedings of the IEEE, show that the technology allows a monkey to type with only its thoughts at a rate of up to 12 words per minute.
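As a rough back-of-the-envelope conversion (an illustration only, not a figure from the paper; the characters-per-word value is an assumption), the 12-words-per-minute rate works out to about one keyboard selection per second:

```python
# Toy arithmetic: convert the reported typing rate into characters per
# second, assuming an average English word of ~5 letters plus a space.

WORDS_PER_MINUTE = 12   # rate reported for the monkeys
CHARS_PER_WORD = 6      # assumed: ~5 letters + 1 space

chars_per_minute = WORDS_PER_MINUTE * CHARS_PER_WORD
chars_per_second = chars_per_minute / 60

print(f"{chars_per_minute} characters per minute")   # 72
print(f"{chars_per_second:.1f} characters per second")  # 1.2
```

Seen this way, the interface is selecting roughly one key per second purely from decoded brain activity, which helps explain why the researchers consider the rate sufficient for meaningful conversation.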

People using this system would likely type more slowly, the researchers say, while they think about what they want to communicate or how to spell words. People might also be in more distracting environments and in some cases could have additional impairments that slow the ultimate communication rate.

“What we cannot quantify is the cognitive load of figuring out what words you are trying to say,” Nuyujukian says.

Despite that, Nuyujukian says even a rate lower than the 12 words per minute achieved by monkeys would be a significant advance for people who aren’t otherwise able to communicate effectively or reliably.

“Also understand that we’re not using auto completion here like your smartphone does where it guesses your words for you,” Nuyujukian says. Eventually the technology could be paired with the kind of word-completion technology used by smartphones and tablets to improve typing speeds.

In addition to proving the technology, this study showed that the implanted sensor could be stable for several years. The animals had the implants used to test this and previous iterations of the technology for up to four years prior to this experiment, with no loss of performance or side effects in the animals.

Shenoy and Nuyujukian are part of the Brain-Machine Interface initiative of the Stanford Neurosciences Institute, which is working to develop this and other methods of interfacing technology directly with the brain. The team is running a clinical trial now, in conjunction with Jaimie Henderson, professor of neurosurgery, to test this latest interface in people.

If the group is successful, technologies for directly interpreting brain signals could create a new way for people with paralysis to move and communicate with loved ones. The research was published in the Proceedings of the IEEE.

Source: Republished from Futurity.org as a derivative work under the Attribution 4.0 International license. Original article posted to Futurity by
Featured photo credit: By RedCoat – Own work, CC BY-SA 2.5, https://commons.wikimedia.org/w/index.php?curid=3032679


Now, Check Out:

[Video] Astounding Study Shows that Dogs Understand Both What We Say and How We Say it

An amazing news release from the Family Dog Project reveals what many dog lovers have often suspected: dogs really do understand what we say. And not only that, they also understand how we say it.

The release says that the study, the first fMRI study to investigate how dog brains process speech, shows that our best friends in the animal kingdom care about both what we say and how we say it. Dogs, like people, use the left hemisphere to process words and a right hemisphere brain region to process intonation, and praise activates a dog’s reward center only when both words and intonation match, according to a study published in Science.

Some of the dogs and their owners who participated in the ground-breaking study.  Credit: Family Dog Project

Andics et al.’s findings suggest that the neural mechanisms for processing words evolved much earlier than previously thought, and that they are not unique to the human brain. If an environment is rich in speech, as is the case for family dogs, representations of word meaning can arise even in the brain of a non-primate mammal that cannot speak.

“During speech processing, there is a well-known distribution of labor in the human brain. It is mainly the left hemisphere’s job to process word meaning, and the right hemisphere’s job to process intonation. But the human brain not only separately analyzes what we say and how we say it, but also integrates the two types of information, to arrive at a unified meaning. Our findings suggest that dogs can also do all that, and they use very similar brain mechanisms,” said lead researcher Attila Andics of Department of Ethology and MTA-ELTE Comparative Ethology Research Group at Eötvös Loránd University, Budapest.

“We trained thirteen dogs to lie completely motionless in an fMRI brain scanner. fMRI provides a non-invasive, harmless way of measurement that dogs enjoy taking part in,” said Márta Gácsi, an ethologist who developed the training method and a coauthor of the study.

“We measured dogs’ brain activity as they listened to their trainer’s speech,” explains Anna Gábor, a PhD student and coauthor of the study. “Dogs heard praise words in praising intonation, praise words in neutral intonation, and also neutral conjunction words, meaningless to them, in praising and neutral intonations. We looked for brain regions that differentiated between meaningful and meaningless words, or between praising and non-praising intonations.”

The brain activation images showed that dogs prefer to use their left hemisphere to process meaningful but not meaningless words. This left bias was present for weak and strong levels of brain activations as well, and it was independent of intonation. Dogs activate a right hemisphere brain area to tell apart praising and non-praising intonation. This was the same auditory brain region that this group of researchers previously found in dogs for processing emotional non-speech sounds from both dogs and humans, suggesting that intonation processing mechanisms are not specific to speech.

Andics and colleagues also noted that praise activated dogs’ reward center – the brain region which responds to all sorts of pleasurable stimuli, like food, sex, being petted, or even nice music in humans. Importantly, the reward center was active only when dogs heard praise words in praising intonation. “It shows that for dogs, a nice praise can very well work as a reward, but it works best if both the words and the intonation are praising. So dogs not only tell apart what we say and how we say it, but they can also combine the two, for a correct interpretation of what those words really meant. Again, this is very similar to what human brains do,” Andics said.

This study is the first step to understanding how dogs interpret human speech, and these results can also help to make communication and cooperation between dogs and humans even more efficient, the researchers say.

These findings also carry important implications for humans. “Our research sheds new light on the emergence of words during language evolution. What makes words uniquely human is not a special neural capacity, but our invention of using them,” Andics explains.

Source: News release on the Family Dog Project website.

Now, Check Out:

Scientists Use Ultrasound to Jump Start Man’s Brain

UCLA reports in a news release that a 25-year-old man recovering from a coma has made remarkable progress following a treatment at UCLA to jump-start his brain using ultrasound. The technique uses sonic stimulation to excite the neurons in the thalamus, an egg-shaped structure that serves as the brain’s central hub for processing information.

“It’s almost as if we were jump-starting the neurons back into function,” said Martin Monti, the study’s lead author and a UCLA associate professor of psychology and neurosurgery. “Until now, the only way to achieve this was a risky surgical procedure known as deep brain stimulation, in which electrodes are implanted directly inside the thalamus,” he said. “Our approach directly targets the thalamus but is noninvasive.”

Monti said the researchers expected the positive result, but he cautioned that the procedure requires further study on additional patients before they determine whether it could be used consistently to help other people recovering from comas.

“It is possible that we were just very lucky and happened to have stimulated the patient just as he was spontaneously recovering,” Monti said.

A report on the treatment is published in the journal Brain Stimulation. This is the first time the approach has been used to treat severe brain injury.

The technique, called low-intensity focused ultrasound pulsation, was pioneered by Alexander Bystritsky, a UCLA professor of psychiatry and biobehavioral sciences in the Semel Institute for Neuroscience and Human Behavior and a co-author of the study. Bystritsky is also a founder of Brainsonix, a Sherman Oaks, California-based company that provided the device the researchers used in the study.

That device, about the size of a coffee cup saucer, creates a small sphere of acoustic energy that can be aimed at different regions of the brain to excite brain tissue. For the new study, researchers placed it by the side of the man’s head and activated it 10 times for 30 seconds each, in a 10-minute period.

Monti said the device is safe because it emits only a small amount of energy — less than a conventional Doppler ultrasound.

Before the procedure began, the man showed only minimal signs of being conscious and of understanding speech — for example, he could perform small, limited movements when asked. By the day after the treatment, his responses had improved measurably. Three days later, the patient had regained full consciousness and full language comprehension, and he could reliably communicate by nodding his head “yes” or shaking his head “no.” He even made a fist-bump gesture to say goodbye to one of his doctors.

“The changes were remarkable,” Monti said.

The technique targets the thalamus because, in people whose mental function is deeply impaired after a coma, thalamus performance is typically diminished. And medications that are commonly prescribed to people who are coming out of a coma target the thalamus only indirectly.

Under the direction of Paul Vespa, a UCLA professor of neurology and neurosurgery at the David Geffen School of Medicine at UCLA, the researchers plan to test the procedure on several more people beginning this fall at the Ronald Reagan UCLA Medical Center. Those tests will be conducted in partnership with the UCLA Brain Injury Research Center and funded in part by the Dana Foundation and the Tiny Blue Dot Foundation.

If the technology helps other people recovering from coma, Monti said, it could eventually be used to build a portable device — perhaps incorporated into a helmet — as a low-cost way to help “wake up” patients, perhaps even those who are in a vegetative or minimally conscious state. Currently, there is almost no effective treatment for such patients, he said.

Source: News release from UCLA

Now, Check Out:

Why emotional abuse in childhood may lead to migraines in adulthood

By Gretchen Tietjen, University of Toledo and Monita Karmakar, University of Toledo.

Child abuse and neglect are, sadly, more common than you might think. According to a 2011 study in JAMA Pediatrics, more than five million U.S. children experienced confirmed cases of maltreatment between 2004 and 2011. The effects of abuse can linger beyond childhood – and migraine headaches might be one of them.

Previous research, including our own, has found a link between experiencing migraine headaches in adulthood and experiencing emotional abuse in childhood. So how strong is the link? What is it about childhood emotional abuse that could lead to a physical problem, like migraines, in adulthood?

What is emotional abuse?

The Centers for Disease Control and Prevention defines childhood maltreatment as:

Any act or series of acts of commission or omission by a parent or other caregiver that results in harm, potential for harm, or threat of harm to a child.

Data suggest that up to 12.5 percent of U.S. children will experience maltreatment by their 18th birthday. However, studies using self-reported data suggest that as many as 25-45 percent of adults in the U.S. report experiencing emotional, physical or sexual abuse as a child.

The discrepancy may be because so many cases of childhood abuse, particularly cases of emotional or psychological abuse, are unreported. This specific type of abuse may occur within a family over the course of years without recognition or detection.

The link between emotional abuse and migraines

Migraine is a type of chronic, recurrent moderate to severe headache affecting about 12-17 percent of the people in the U.S. Headaches, including migraine, are the fifth leading cause of emergency department visits and the sixth highest cause of years lost due to disability. Headaches are about three times more common in women than men.

While all forms of childhood maltreatment have been shown to be linked to migraines, the strongest and most significant link is with emotional abuse. Two studies using nationally representative samples of older Americans (the mean ages were 50 and 56 years old, respectively) have found a link.

We have also examined the emotional abuse-migraine link in young adults. In our study, we found that those recalling emotional abuse in childhood and adolescence were over 50 percent more likely to report being diagnosed with migraine. We also found that if a person reported experiencing all three types of abuse (physical, emotional and sexual), the risk of being diagnosed with migraine doubled.

Stress can cause changes in the brain.

Why would emotional abuse in childhood lead to migraines in adulthood?

The fact that the risk goes up in response to increased exposure is what indicates that abuse may cause biological changes that can lead to migraine later in life. While the exact mechanism between migraine and childhood maltreatment is not yet established, research has deepened our understanding of what might be going on in the body and brain.

Adverse childhood experiences are known to upset the regulation of what is called the hypothalamic-pituitary-adrenal (HPA) axis, which controls the release of stress hormones. In plain English, that means experiencing an adverse event in childhood can disrupt the body’s response to stress. Stress isn’t just an emotion – it’s also a physical response that can have consequences for the body.

Prolonged elevation of these stress hormones can alter both the structure and function of the brain’s limbic system, which is the seat of emotion, behavior, motivation and memory. MRIs have found alterations in structures and connections within the limbic system both in people with a history of childhood maltreatment and people diagnosed with migraine. Stressful experiences also disrupt the immune, metabolic and autonomic nervous systems.

Both childhood abuse and migraine have been associated with elevation of c-reactive protein, a measurable substance in the blood (also known as a biomarker), which indicates the degree of inflammation. This biomarker is a well-established predictor of cardiovascular disease and stroke.

Migraine is considered to be a hereditary condition. But, except in a small minority of cases, the genes responsible have not been identified. However, stress early in life induces alterations in gene expression without altering the DNA sequence. These are called epigenetic changes, and they are long-lasting and may even be passed on to offspring. The role of epigenetics in migraine is in the early stages of investigation.

What does this mean for doctors treating migraine patients?

Childhood maltreatment probably contributes to only a small portion of the number of people with migraine. But because research indicates that there is a strong link between the two, clinicians may want to bear that in mind when evaluating patients.

Treatments such as cognitive behavioral therapy, which alter the neurophysiological response to stress, have been shown to be effective for migraine and also for the psychological effects of abuse. Therefore, CBT may be particularly well-suited to people with both.

Anti-epileptic drugs such as valproate and topiramate are FDA-approved for migraine treatment. These drugs are also both known to reverse stress-induced epigenetic changes.

Other therapies that decrease inflammation are currently under investigation for migraine.

Migraineurs with a history of childhood abuse are also at higher risk for psychiatric conditions like depression and anxiety, as well as for medical disorders like fibromyalgia and irritable bowel syndrome. This may affect the treatment strategy a clinician uses.

Within a migraine clinic population, clinicians should pay special attention to those who have been subjected to maltreatment in childhood, as they are at increased risk of being victims of domestic abuse and intimate partner violence as adults.

That’s why clinicians should screen migraine patients, and particularly women, for current abuse.

Gretchen Tietjen, Professor and Chair of Neurology, University of Toledo and Monita Karmakar, Ph.D. Candidate in Health Education, University of Toledo

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

Why Having a ‘Bird Brain’ is Actually Awesome [Video]

The macaw has a brain the size of an unshelled walnut and the macaque monkey has one about the size of a lemon. Nevertheless, the macaw has more neurons in its forebrain—the portion of the brain associated with intelligent behavior—than the macaque.

That is one of the surprising results of the first study to systematically measure the number of neurons in the brains of more than two dozen bird species, ranging in size from the tiny zebra finch to the six-foot-tall emu. The study found that birds consistently pack more neurons into their small brains than are stuffed into mammalian, or even primate, brains of the same mass.

“For a long time having a ‘bird brain’ was considered to be a bad thing: Now it turns out that it should be a compliment,” says Suzana Herculano-Houzel, a neuroscientist at Vanderbilt University.

The study provides a straightforward answer to a puzzle that comparative neuroanatomists have been wrestling with for more than a decade: how can birds with their small brains perform complicated cognitive behaviors?

The conundrum was created by a series of studies beginning in the previous decade that directly compared the cognitive abilities of parrots and crows with those of primates. The studies found that the birds could manufacture and use tools, use insight to solve problems, make inferences about cause-effect relationships, recognize themselves in a mirror, and plan for future needs, among other cognitive skills previously considered the exclusive domain of primates.

Scientists were left with a generally unsatisfactory fallback position: Avian brains must simply be wired in a completely different fashion from primate brains. Two years ago, even this hypothesis was knocked down by a detailed study of pigeon brains, which concluded that they are, in fact, organized along quite similar lines to those of primates.

The new study, published in the Proceedings of the National Academy of Sciences, provides a more plausible explanation: Birds can perform these complex behaviors because their forebrains contain a lot more neurons than anyone had previously thought—as many as in mid-sized primates.

Densely packed neurons

“We found that birds, especially songbirds and parrots, have surprisingly large numbers of neurons in their pallium: the part of the brain that corresponds to the cerebral cortex, which supports higher cognition functions such as planning for the future or finding patterns. That explains why they exhibit levels of cognition at least as complex as primates,” Herculano-Houzel says.

That’s possible because the neurons in avian brains are much smaller and more densely packed than those in mammalian brains. Parrot and songbird brains, for example, contain about twice as many neurons as primate brains of the same mass and two to four times as many neurons as equivalent rodent brains.
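The density point can be made concrete with a toy calculation. The figures below are assumptions chosen purely for illustration, not measurements from the study; only the two-to-one density ratio comes from the text:

```python
# Toy illustration: if a bird brain packs roughly twice as many neurons
# per gram as a primate brain ("about twice as many neurons as primate
# brains of the same mass"), a small bird brain can match the total
# neuron count of a primate brain twice its size.

def total_neurons(mass_g: float, neurons_per_g: float) -> float:
    """Total neuron count = brain mass x packing density."""
    return mass_g * neurons_per_g

PRIMATE_DENSITY = 1.0e8  # assumed neurons per gram (illustrative)
BIRD_DENSITY = 2.0e8     # assumed: twice the primate density

bird_total = total_neurons(10.0, BIRD_DENSITY)        # 10 g bird brain
primate_total = total_neurons(20.0, PRIMATE_DENSITY)  # 20 g primate brain

print(bird_total == primate_total)  # equal totals at half the brain mass
```

The same mass-times-density arithmetic underlies the study’s headline claim: what matters for neuron count is not brain size alone but how densely the neurons are packed.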


Not only are neurons packed into the brains of parrots and crows at a much higher density than in primate brains, but the proportion of neurons in the forebrain is also significantly higher.

“In designing brains, nature has two parameters it can play with: the size and number of neurons and the distribution of neurons across different brain centers,” Herculano-Houzel says, “and in birds we find that nature has used both of them.”

Although the relationship between intelligence and neuron count has not yet been firmly established, scientists argue that birds whose forebrains contain as many neurons as, or more neurons than, those of much larger-brained primates could command far more “cognitive power” per pound than mammals.

One of the important implications of the study, Herculano-Houzel says, is that it demonstrates that there is more than one way to build larger brains.

Previously, neuroanatomists thought that as brains grew larger neurons had to grow bigger as well because they had to connect over longer distances. “But bird brains show that there are other ways to add neurons: keep most neurons small and locally connected and only allow a small percentage to grow large enough to make the longer connections. This keeps the average size of the neurons down.

“Something I love about science is that when you answer one question, it raises a number of new questions.”

Among the questions that this study raises are whether the surprisingly large number of neurons in bird brains comes at a correspondingly large energetic cost, and whether the small neurons in bird brains are a response to selection for small body size due to flight, or possibly the ancestral way of adding neurons to the brain – from which mammals, not birds, may have diverged.

Scientists from Charles University in Prague and the University of Vienna are coauthors of the study.

Source: Republished from Futurity.org as a derivative work under the Attribution 4.0 International license. Original article posted to Futurity by .

Featured Photo Credit:  Mathias Appel/Flickr

Now, Check Out: