Researchers at Maastricht University in the Netherlands have developed an innovative technique that uses functional MRI (fMRI) and computer software to reveal specific details of the hearing process. From the gathered fMRI data, scientists were able to identify both who a subject was listening to and what that person was saying. Although the study involved only three speakers pronouncing three simple sounds, the technique paves the way for future research into how the brain processes auditory information.
Seven study subjects listened to three different speech sounds (the vowels /a/, /i/ and /u/), spoken by three different people, while their brain activity was mapped with fMRI. Using data-mining methods, the researchers developed an algorithm that translates this brain activity into unique patterns identifying a speech sound or a voice. The acoustic characteristics of vocal cord vibrations were found to evoke distinctive patterns of brain activity, or neural fingerprints. Like real fingerprints, these neural patterns are both unique and specific: the neural fingerprint of a speech sound does not change when it is uttered by somebody else, and a speaker's fingerprint remains the same even when that person says something different.
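The decoding step described above, translating a distributed activity pattern into the identity of a speech sound, can be sketched as a simple template-matching classifier: learn an average "fingerprint" pattern per vowel from training trials, then assign each new trial to the best-matching fingerprint. The data below are synthetic and the nearest-correlation rule is an illustrative assumption, not the study's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for fMRI data: each trial is a vector of voxel
# activations. Three vowel classes, each generated from its own
# underlying "fingerprint" pattern plus measurement noise.
# (Illustrative only -- not the study's real data.)
n_voxels, n_train, n_test = 50, 20, 10
vowels = ["a", "i", "u"]
true_fingerprints = {v: rng.normal(size=n_voxels) for v in vowels}

def make_trials(n_per_vowel):
    """Generate n noisy trials per vowel, returning (patterns, labels)."""
    X, y = [], []
    for v in vowels:
        X.append(true_fingerprints[v]
                 + 0.5 * rng.normal(size=(n_per_vowel, n_voxels)))
        y += [v] * n_per_vowel
    return np.vstack(X), y

X_train, y_train = make_trials(n_train)
X_test, y_test = make_trials(n_test)

# "Learn" a neural fingerprint per vowel: the mean activation
# pattern across that vowel's training trials.
learned = {
    v: X_train[[i for i, lab in enumerate(y_train) if lab == v]].mean(axis=0)
    for v in vowels
}

def classify(pattern):
    # Assign the trial to the vowel whose fingerprint correlates best.
    return max(vowels, key=lambda v: np.corrcoef(pattern, learned[v])[0, 1])

predictions = [classify(x) for x in X_test]
accuracy = np.mean([p == t for p, t in zip(predictions, y_test)])
print(f"decoding accuracy: {accuracy:.2f}")
```

The same scheme works for decoding voice identity instead of vowel identity: only the labels change, which is one way to read the study's finding that the same activity patterns carry both kinds of information.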
Moreover, the study revealed that part of the complex sound-decoding process takes place in areas of the brain previously associated only with the early stages of sound processing. Existing neurocognitive models assume that sounds are processed by different brain regions according to a strict hierarchy: after basic processing in the auditory cortex, the more complex analysis (turning speech sounds into words) takes place in specialised regions. The findings from this study, however, imply a less hierarchical processing of speech that is spread more broadly across the brain.
Press release: Maastricht University researchers produce ‘neural fingerprint’ of speech recognition …
Abstract in Science…
Image credit…