When we are at a party talking to someone, our brain can single out that speaker’s voice and focus our hearing on it, helping us listen more closely and tune out the other voices nearby. The millions of people with hearing impairment who use hearing aids often lose this ability, and instead hear the entire party boosted up louder. This makes communicating with one individual in a crowd very challenging.
To address this problem, researchers at Columbia University have developed a new hearing aid that automatically identifies and decodes the voice the wearer wants to hear. As they began studying this problem, the researchers found that the brainwaves of the listener begin to mimic the brainwaves of the speaker they are focusing on. They developed an AI tool that can separate the many voices in a room from one another. The system then compares each speaker’s voice pattern with the listener’s brainwaves to identify which voice to amplify.
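For a sense of how that matching step might work, here is a minimal Python sketch. It assumes the two hard pieces already exist: a speech-separation network producing `separated_sources` and an EEG decoder producing `eeg_decoded_envelope` (the speech envelope reconstructed from the listener’s brainwaves). The names, frame size, and gain values are hypothetical illustrations, not taken from the paper; the sketch only shows the compare-and-amplify stage, correlating each separated voice’s envelope with the decoded one and boosting the best match.

```python
import numpy as np
from scipy.signal import hilbert

def envelope(audio, frame=160):
    """Amplitude envelope via the Hilbert transform, smoothed by frame averaging."""
    env = np.abs(hilbert(audio))
    n = len(env) // frame * frame
    return env[:n].reshape(-1, frame).mean(axis=1)

def select_attended(separated_sources, eeg_decoded_envelope, frame=160):
    """Correlate each separated voice's envelope with the envelope decoded
    from the listener's brainwaves; return the index of the best match."""
    scores = []
    for src in separated_sources:
        env = envelope(src, frame)
        m = min(len(env), len(eeg_decoded_envelope))
        scores.append(np.corrcoef(env[:m], eeg_decoded_envelope[:m])[0, 1])
    return int(np.argmax(scores)), scores

def remix(separated_sources, attended_idx, gain_db=9.0):
    """Boost the attended voice relative to the others and mix back together.
    Assumes all separated sources are the same length."""
    gain = 10 ** (gain_db / 20)
    mix = np.zeros_like(separated_sources[0])
    for i, src in enumerate(separated_sources):
        mix += src * (gain if i == attended_idx else 1.0)
    return mix / max(1.0, np.abs(mix).max())  # simple peak normalization
```

In a real-time system this comparison would presumably run over a sliding window, so the amplified voice can switch as the listener’s attention shifts from one speaker to another.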
The team’s previous work required the algorithm to be trained on individual speakers beforehand in order to separate individual voices, but in this most recent work the algorithm is able to separate new voices without any additional training.
“Our end result was a speech-separation algorithm that performed similarly to previous versions but with an important improvement,” said Dr. Nima Mesgarani, the senior author of the study published in the journal Science Advances. “It could recognize and decode a voice — any voice — right off the bat.”
Check out this impressive demo showing how the technology separates two different voices:
Here’s a short animation Columbia released about the research:
The publication in the journal Science Advances: Speaker-independent auditory attention decoding without access to clean speech sources…
Via: Columbia…