Imaging techniques provide insight into speech development
How we speak is shaped by what we hear, and a team of researchers from New York University used multiple imaging methods to show that the pathways connecting the brain’s listening and speaking areas play a central role in speech development.
The research, published March 4 in Nature Neuroscience, combined neurophysiological and structural imaging techniques with simple behavioral tests in more than 300 participants.
The study comprised several main segments. The first involved a series of experiments that required subjects to listen to a rhythmic sequence of syllables such as “lah,” “di,” and “fum” while simultaneously whispering the syllable “tah.” High synchronizers, as the researchers labeled them, aligned their whispering to the rhythmic sequence, while others (low synchronizers) were unaffected by the rhythm.
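The article does not spell out how synchronization was scored, but a common way to quantify this kind of alignment is a phase-locking value between the stimulus rhythm and the speaker’s output: both signals are reduced to amplitude envelopes, band-passed around the syllable rate, and compared in phase. Below is a minimal sketch of that idea, assuming a roughly 4.5 Hz syllable rate and envelopes already extracted at a common sampling rate; the function names and parameters are illustrative, not taken from the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, fs, lo, hi, order=4):
    """Zero-phase band-pass filter around the syllable rate."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def phase_locking_value(stim_env, speech_env, fs, f_lo=3.5, f_hi=5.5):
    """Phase-locking value between two amplitude envelopes.

    A value near 1 means the whispered output consistently tracks the
    stimulus rhythm (a 'high synchronizer'); a value near 0 means there
    is no stable phase relationship (a 'low synchronizer').
    """
    phi_stim = np.angle(hilbert(bandpass(stim_env, fs, f_lo, f_hi)))
    phi_out = np.angle(hilbert(bandpass(speech_env, fs, f_lo, f_hi)))
    return np.abs(np.mean(np.exp(1j * (phi_stim - phi_out))))
```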
“The combined behavioral, neurophysiological and neuroanatomical results reveal a fundamental phenomenon: whereas some individuals are compelled to spontaneously align their speech output to speech input, others remain impervious to external rhythm,” wrote Maria Florencia Assaneo, PhD, with NYU’s Department of Psychology, and colleagues.
Assaneo and colleagues believe their findings could have implications for diagnosing speech-related impediments and for evaluating “cognitive-linguistic development” in children.
The results prompted further questions. For example, did grouping individuals by their performance on the rhythmic test correspond to differences in how their brains are organized?
The researchers then examined MRI data from the participants to determine whether white matter pathways differed between the groups. They found that high synchronizers had greater white matter volume in the pathways connecting listening areas with speaking areas than low synchronizers did.
Magnetoencephalography (MEG), which tracks neural dynamics, was used to record brain activity while participants passively listened to rhythmic syllable sequences. High synchronizers showed stronger synchronization between their brain activity and the rhythm of the sounds than low synchronizers did.
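Again, the article leaves the analysis unspecified, but a standard way to measure this kind of brain-to-stimulus coupling is spectral coherence between a sensor’s time series and the stimulus envelope, evaluated at the syllable rate. A hedged sketch follows; the 4.5 Hz rate and two-second windows are assumptions for illustration.

```python
import numpy as np
from scipy.signal import coherence

def brain_stimulus_coherence(meg_sensor, stim_env, fs, syllable_rate=4.5):
    """Coherence between one MEG sensor and the stimulus envelope,
    read out at the syllable presentation rate.

    Higher values indicate neural activity more tightly coupled to
    the external speech rhythm.
    """
    f, cxy = coherence(meg_sensor, stim_env, fs=fs, nperseg=int(fs * 2))
    idx = np.argmin(np.abs(f - syllable_rate))
    return cxy[idx]
```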
"This implies that areas related to speech production are also recruited during speech perception, which likely helps us track external speech rhythms," Assaneo noted in an NYU news release.
The group also found that high synchronizers tended to learn new words more easily than their counterparts, even when they did not know the words’ meanings.
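The article does not describe the word-learning task itself, but studies of word learning via speech segmentation typically present a continuous syllable stream in which made-up “words” are defined only by statistics: syllable-to-syllable transition probabilities are high inside a word and low across word boundaries, and that contrast is the cue learners exploit. A sketch of the idea, with an invented syllable inventory that is not the study’s:

```python
import random
from collections import defaultdict

# Made-up three-syllable 'words' for illustration only.
WORDS = [("tu", "pi", "ro"), ("go", "la", "bu"), ("da", "ko", "ti")]

def make_stream(n_words, seed=0):
    """Concatenate randomly chosen words into one continuous stream."""
    rng = random.Random(seed)
    stream = []
    for _ in range(n_words):
        stream.extend(rng.choice(WORDS))
    return stream

def transitional_probabilities(syllables):
    """P(next | current) for each adjacent syllable pair: high within
    words, low across word boundaries."""
    pair_counts = defaultdict(int)
    first_counts = defaultdict(int)
    for a, b in zip(syllables, syllables[1:]):
        pair_counts[(a, b)] += 1
        first_counts[a] += 1
    return {p: n / first_counts[p[0]] for p, n in pair_counts.items()}

tp = transitional_probabilities(make_stream(300))
print(tp[("tu", "pi")])           # within-word transition: 1.0
print(tp.get(("ro", "go"), 0.0))  # across a word boundary: roughly 1/3
```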
“Excitingly, the fact that our results scale up to an ecologically relevant task, word learning in the context of speech segmentation, has theoretical and practical implications for how individual differences in cognition and learning are understood and studied,” the authors concluded.