Researchers have identified how brain rhythms help us process music.
The study, which appears in the Proceedings of the National Academy of Sciences, points to a newfound role the brain’s cortical oscillations play in the detection of musical sequences. The findings also suggest musical training can enhance the functional role of brain rhythms.
“We’ve isolated the rhythms in the brain that match rhythms in music,” explains lead author Keith Doelling, a PhD student at New York University. “Specifically, our findings show that the presence of these rhythms enhances our perception of music and of pitch changes.”
Not surprisingly, the study found that musicians have more potent oscillatory mechanisms than do non-musicians—but this discovery’s importance goes beyond the value of musical instruction.
“What this shows is we can be trained, in effect, to make more efficient use of our auditory-detection systems,” observes study coauthor David Poeppel, a professor in the psychology department and Center for Neural Science at NYU and director of the Max Planck Institute for Empirical Aesthetics in Frankfurt.
“Musicians, through their experience, are simply better at this type of processing.”
Previous research has shown that brain rhythms synchronize very precisely with speech, enabling us to parse a continuous stream of it: to isolate syllables, words, and phrases from a signal that, as we hear it, carries no spaces or punctuation.
However, it has not been clear what role such cortical brain rhythms, or oscillations, play in processing other types of natural and complex sounds, such as music.
To address these questions, the researchers conducted three experiments using magnetoencephalography (MEG), which measures the tiny magnetic fields generated by brain activity. The study’s subjects were asked to detect short pitch distortions in 13-second clips of classical piano music (by Bach, Beethoven, and Brahms) that varied in tempo from half a note per second to eight notes per second.
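To make the task concrete, here is a minimal sketch of a stimulus in the spirit of the one described above: a sequence of notes at a fixed rate, one of which carries a brief pitch shift. The note pool, distortion size, and all other parameters are illustrative assumptions, not the study’s actual materials.

```python
# Illustrative only: a toy "clip" of n notes at a fixed note rate, with one
# note pitch-shifted. Every parameter below is an assumption for demonstration.
import numpy as np

fs = 44_100          # audio sample rate (Hz)
clip_dur = 13.0      # clip length in seconds, matching the study's clips
note_rate = 4.0      # notes per second; the study spanned 0.5 to 8

n_notes = int(clip_dur * note_rate)
note_len = int(fs / note_rate)
rng = np.random.default_rng(0)

# Draw note frequencies from a small pool and pick one note to distort.
pool = np.array([262.0, 294.0, 330.0, 349.0, 392.0])  # hypothetical pitch set
freqs = rng.choice(pool, size=n_notes)
distorted = int(rng.integers(n_notes))
freqs[distorted] *= 2 ** (0.5 / 12)  # half-semitone shift; the real size is not given here

t = np.arange(note_len) / fs
clip = np.concatenate([np.sin(2 * np.pi * f * t) for f in freqs])
print(f"{n_notes} notes at {note_rate}/s; distortion on note {distorted}")
```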
The study’s authors divided the subjects into musicians (those with at least six years of musical training and who were currently practicing music) and non-musicians (those with two or fewer years of musical training and who were no longer involved in it).
For music that is faster than one note per second, both musicians and non-musicians showed cortical oscillations that synchronized with the note rate of the clips—in other words, these oscillations were effectively employed by everyone to process the sounds they heard, although musicians’ brains synchronized more to the musical rhythms. Only musicians, however, showed oscillations that synchronized with unusually slow clips.
This difference, the researchers say, may suggest that non-musicians process the music as individual notes rather than as a continuous melody. Moreover, musicians detected the pitch distortions much more accurately, as evidenced by corresponding cortical oscillations.
Brain rhythms, they add, therefore appear to play a role in parsing and grouping sound streams into “chunks” that are then analyzed as speech or music.
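As a rough illustration of what “synchronizing with the note rate” means in measurable terms, the sketch below simulates a noisy neural signal containing a weak oscillation at the note rate and recovers that frequency from its power spectrum. This is a toy under stated assumptions (sampling rate, noise level, analysis band), not the authors’ MEG analysis pipeline.

```python
# Toy demonstration of reading out "entrainment" to a note rate as a spectral
# peak. Assumed parameters throughout; this is not the study's MEG pipeline.
import numpy as np

fs = 200.0       # assumed neural sampling rate (Hz)
dur = 13.0       # seconds, matching the clip length
note_rate = 4.0  # notes per second for this toy trial

t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(1)
# Simulated "entrained" response: weak oscillation at the note rate plus noise.
signal = 0.3 * np.sin(2 * np.pi * note_rate * t) + rng.normal(size=t.size)

freqs = np.fft.rfftfreq(t.size, d=1 / fs)
power = np.abs(np.fft.rfft(signal)) ** 2

band = (freqs > 0.2) & (freqs < 20)  # search a plausible low-frequency band
peak = freqs[band][np.argmax(power[band])]
print(f"Spectral peak at {peak:.2f} Hz (note rate was {note_rate} Hz)")
```

In these terms, stronger tracking would show up as a larger peak at the note rate, and the finding that only musicians followed unusually slow clips corresponds to such a peak surviving at frequencies below one hertz.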
The National Institutes of Health and the National Science Foundation supported the work.
Source: Reproduced from Futurity.org as a derivative work under the Attribution 4.0 International license. Original article by James Devitt, NYU.
Featured Photo Credit: danbruell/Flickr