For the first time, neuroscientists have identified a group of neurons in the human brain that light up when we hear singing.
These neurons appeared to respond to the specific combination of voice and music, but not to regular speech or instrumental music alone. Exactly what they are doing is unknown and will require more work to uncover, researchers at the Massachusetts Institute of Technology said.
“The work provides evidence for relatively fine-grained segregation of function within the auditory cortex, in a way that aligns with an intuitive distinction within music,” said Sam Norman-Haignere, a former MIT postdoctoral student who is now an assistant professor of neuroscience at the University of Rochester Medical Center.
The work builds on the research team’s 2015 study in which they used fMRI to identify a population of neurons in the brain’s auditory cortex that responds specifically to music. In the new work, the researchers used recordings of electrical activity taken at the surface of the brain, which gave them much more precise information than fMRI.
“There’s one population of neurons that responds to singing, and then very nearby is another population of neurons that responds broadly to lots of music. At the scale of fMRI, they’re so close that you can’t disentangle them, but with intracranial recordings, we get additional resolution, and that’s what we believe allowed us to pick them apart,” Norman-Haignere said.
In their 2015 study, the researchers used fMRI to scan the brains of participants as they listened to 165 sounds, including speech, music, finger tapping and dog barking.
The new study used electrocorticography, which records electrical activity through electrodes placed inside the skull. This offered a far more precise picture than fMRI.
“With most of the methods in human cognitive neuroscience, you can’t see the neural representations. Most of the kind of data we can collect can tell us that here’s a piece of brain that does something, but that’s pretty limited. We want to know what’s represented in there,” said Nancy Kanwisher, the Walter A. Rosenblith professor of cognitive neuroscience and a member of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds and Machines.
In the second part of their study, the researchers devised a mathematical method to combine the data from the intracranial recordings with the fMRI data from their 2015 study.
The song-specific hotspot they discovered is at the top of the temporal lobe, near regions that are selective for language and music. “That location suggests that the song-specific population may be responding to features such as the perceived pitch, or the interaction between words and perceived pitch, before sending information to other parts of the brain for further processing,” MIT wrote in a press release.
The full study was published in the journal Current Biology.