Timbre, a poorly defined characteristic of sound, is a great source of information for people with normal hearing. It allows them to identify the source of a particular sound and makes music enjoyable. Modern hearing aids, however, remain rudimentary in the richness of sound they deliver. There is considerable potential in improving how elements of timbre are conveyed to hearing-impaired listeners, sharpening their perception of where a sound comes from and what is producing it.
An international team of researchers has been studying how we process sound and has built a software algorithm that can make judgments about sound similar to those made by people with normal hearing. This technology should allow hearing aid manufacturers to improve the audio processors in their devices, greatly increasing their utility and bringing back the pleasures of music to their users.
“Our research has direct relevance to the kinds of responses you want to be able to give people with hearing impairments,” said Mounya Elhilali of Johns Hopkins University. “People with hearing aids or cochlear implants don’t really enjoy music nowadays, and part of it is that a lot of the little details are being thrown away by hearing prosthetics. By focusing on the characteristics of sound that are most informative, the results have implications for how to come up with improved sound processing strategies and design better hearing prosthetics so they don’t discard a lot of relevant information.”
The researchers set out to examine the neural underpinnings of musical timbre, aiming both to define what makes a piano sound different from a violin and to explore the processes by which the brain recognizes timbre. The basic idea was to develop a mathematical model simulating what the brain does when sound comes in: which specific features it looks for, and whether those features are enough to discern these different qualities.
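The study itself rests on a detailed cortical model, but the gist of this kind of feature analysis can be sketched in a few lines of Python. The sketch below is purely illustrative: the Gabor-style “rate” (temporal modulation) and “scale” (spectral modulation) filters, their bandwidths, and the function name `spectrotemporal_features` are assumptions for demonstration, not the authors’ implementation.

```python
# A minimal, illustrative sketch of a cortex-inspired spectrotemporal
# analysis: a spectrogram filtered by 2-D modulation filters, where "rate"
# captures temporal modulation and "scale" captures spectral modulation.
# Filter shapes and parameters are assumptions, not the published model.
import numpy as np
from scipy.signal import spectrogram, fftconvolve

def spectrotemporal_features(audio, sr, rates=(2, 4, 8, 16), scales=(0.5, 1, 2)):
    """Return the average response energy per (rate, scale) filter channel."""
    # Time-frequency representation (a stand-in for a cochlear front end).
    _, _, spec = spectrogram(audio, fs=sr, nperseg=512, noverlap=384)
    log_spec = np.log1p(spec)

    t = np.arange(-16, 17)            # filter support in time frames
    f = np.arange(-16, 17)[:, None]   # filter support in frequency bins
    features = []
    for rate in rates:                # temporal modulation (arbitrary units)
        for scale in scales:          # spectral modulation (arbitrary units)
            # Separable Gabor-like filter tuned to one (rate, scale) pair.
            kernel = (np.exp(-(f / (8.0 / scale)) ** 2)
                      * np.cos(2 * np.pi * scale * f / 32.0)
                      * np.exp(-(t / 8.0) ** 2)
                      * np.cos(2 * np.pi * rate * t / 32.0))
            response = fftconvolve(log_spec, kernel, mode="same")
            features.append(np.mean(response ** 2))
    return np.array(features)

# Toy usage with one second of noise standing in for a recorded note:
sr = 16000
note = np.random.default_rng(0).normal(size=sr)
print(spectrotemporal_features(note, sr).shape)   # 4 rates x 3 scales = (12,)
```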
Based on experiments in both animals and humans, they devised a computer model that accurately mimics how specific brain regions process sounds as they enter our ears and are transformed into the brain signals that let us recognize what we are listening to. The model correctly identified which of 13 instruments was playing with an accuracy of 98.7 percent.
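The paper does not reduce to a generic classifier, but the evaluation step, mapping the model’s features for a note to one of 13 instrument labels and scoring accuracy, can be illustrated with a standard pipeline. Everything below is a hedged sketch: the SVM choice, the parameter values, and the random stand-in data are assumptions; real model features would be needed to approach the reported 98.7 percent.

```python
# Illustrative classification step: cross-validated accuracy of a generic
# classifier on precomputed timbre features. Not the study's actual method.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def instrument_accuracy(features, labels):
    """Mean cross-validated accuracy of an SVM on timbre features.

    features: (n_notes, n_features) array, e.g. spectrotemporal energies
    labels:   (n_notes,) array of instrument labels (13 classes in the study)
    """
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
    return cross_val_score(clf, features, labels, cv=5).mean()

# Example with random stand-in data; chance level here is about 1/13.
rng = np.random.default_rng(0)
X = rng.normal(size=(260, 12))        # 20 notes per instrument, 12 features
y = np.repeat(np.arange(13), 20)      # instrument labels 0..12
print(f"cross-validated accuracy: {instrument_accuracy(X, y):.3f}")
```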
The computer model was also able to mirror how human listeners make judgment calls about timbre. These judgments were collected from 20 people who were brought individually into a sound booth and played musical notes over headphones. The researchers asked these untrained listeners to compare two sounds played by different musical instruments and rate how similar they seemed. A violin and a cello, for example, are perceived as closer to each other than a violin and a flute. The researchers also found that wind and percussion instruments tend, overall, to be the most different from each other, followed by strings and percussion, then strings and winds. The computer model reproduced these subtle judgments of timbre quality as well.
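One common way to quantify that kind of agreement, assuming the human data take the form of pairwise dissimilarity ratings as in the listening test above, is to correlate the ratings with distances between the model’s representations of each instrument. The sketch below uses a Spearman rank correlation; the distance metric, function name, and toy data are illustrative assumptions, not details taken from the paper.

```python
# Illustrative comparison of model and human timbre judgments: correlate
# model feature distances with human pairwise dissimilarity ratings.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def model_human_agreement(model_features, human_dissimilarity):
    """Spearman correlation between model distances and human ratings.

    model_features:      (n_instruments, n_features) model representations
    human_dissimilarity: condensed vector of ratings, one per instrument
                         pair, in the same ordering as scipy's pdist
    """
    model_dist = pdist(model_features, metric="euclidean")
    rho, p = spearmanr(model_dist, human_dissimilarity)
    return rho, p

# Toy usage: 13 instruments -> 13 * 12 / 2 = 78 instrument pairs.
rng = np.random.default_rng(1)
feats = rng.normal(size=(13, 12))     # one feature vector per instrument
ratings = rng.uniform(size=78)        # stand-in for averaged human ratings
rho, p = model_human_agreement(feats, ratings)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```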
Johns Hopkins: Helping the song remain the same: New insights about timbre could improve hearing prosthetics
Study in PLOS Computational Biology: Music in Our Ears: The Biological Bases of Musical Timbre Perception