When two people experience a deep connection, they’re informally described as being on the same wavelength. There may be neurological truth to that.
Brain scans of a speaker and listener showed their neural activity synchronizing during storytelling. The stronger their reported connection, the closer the coupling.
The experiment was the first to use fMRI, which measures changes in blood flow in the brain, on two people as they talked. Distinct brain regions have been linked to speaking and to listening, but “the ongoing interaction between the two systems during everyday communication remains largely unknown,” wrote Princeton University neuroscientists Greg Stephens and Uri Hasson in the July 27 Proceedings of the National Academy of Sciences.
They found that speaking and listening used common rather than separate neural subsystems inside each brain. Even more striking was an overlap between the brains of speaker and listener. When post-scan interviews found that stories had resonated, scans showed a complex interplay of neural call and response, as if language were a wire between test subjects’ brains.
The findings don’t explain why any two people “click”: synchronization is a result of that connection, not its cause. And while the brain regions involved are linked to language, their precise functions are not yet clear. Still, the results support what psychologists call the theory of interactive linguistic alignment, a fancy way of saying that talking brings people closer by giving them a common conceptual ground.
“If I say, ‘Do you want a coffee?’ you say, ‘Yes please, two sugars.’ You don’t say, ‘Yes, please put two sugars in the cup of coffee that is between us,’” said Hasson. “You’re sharing the same lexical items, grammatical constructs and contextual framework. And this is happening not just abstractly, but literally in the brain.”
The researchers didn’t test brain synchronization during phone calls or video conferencing, but Hasson speculates that “coupling would be stronger face-to-face.” He also thinks dialogue will produce especially strong forms of synchronization, and plans to run scans of people engaged in deep conversation, rather than telling or listening to long stories.
“But first, we’ll look at cases where there’s a failure to communicate,” said Hasson.