Mother’s voice lights up child’s brain, and the connectivity predicts social skills

Children respond to their mothers’ voices differently than they respond to the voices of women they don’t know. That’s a no-brainer. But it’s something of a revelation to observe specific regions of a kid’s brain lighting up in fMRI when Mom speaks—and to find that the strength of connections between regions predicts how adept the child will become at social communication.

Stanford researchers have done just that. Their study was published online May 16 in the Proceedings of the National Academy of Sciences.

Daniel Abrams, PhD, and colleagues examined 24 children between the ages of 7 and 12 who had no developmental disorders and were being raised by their biological mothers.

Using functional MRI, the team measured brain activity while the children listened to brief recordings of nonsense words uttered by their respective biological mothers and by two female control voices.

Compared to the control voices, the mothers’ voices elicited a more dynamic response in primary auditory regions in the midbrain and cortex, which process sound, and in the amygdala, which handles emotions.

Further, the mother’s voice elicited greater activity than the control voices in brain regions that

  • respond to rewarding stimuli, such as the mesolimbic reward pathway and medial prefrontal cortex;
  • process information about oneself, others and memories, including the default mode network; and
  • sort out face recognition, including a subregion of the fusiform gyrus.

Meanwhile, the children whose brains showed the strongest connections among these regions at the sound of Mom’s voice also had the strongest social communication abilities, as assessed with the Social Responsiveness Scale.

“[W]e have identified key functional systems and circuits underlying the perception of a foundational sound source for social communication in a child: mother’s voice,” Abrams and team conclude in their discussion. “Critically, the degree of engagement of these functional systems represents a biological signature of individual differences in social communication abilities.

“Our findings provide a novel neurobiological template for the investigation of normal social development as well as clinical disorders such as autism, in which perception of biologically salient voices may be impaired.”

PNAS has posted the full study.

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.
