Researchers at Caltech are studying how quickly the brain can adapt to use one sense in place of another, in the hope of translating that knowledge into technology that lets blind people see with their ears. Specifically, the investigators encode the visual scene captured by a camera on the user’s glasses into an audio representation that uses stereo position, loudness, and frequency to intuitively convey what things look like.
So far the technology has only been applied to simple patterns, but this work lays the foundation for a more sophisticated system.
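To make the encoding idea concrete, here is a minimal sketch of one common sensory-substitution mapping of this kind: the image is scanned left to right over time, vertical position maps to pitch, and pixel brightness maps to loudness. The function name, parameters, and frequency range are illustrative assumptions, not the researchers' actual scheme, and stereo panning is omitted for brevity.

```python
import numpy as np

def image_to_audio(image, duration=1.0, sample_rate=8000,
                   f_min=200.0, f_max=2000.0):
    """Hypothetical visual-to-audio encoder (illustrative only).

    Columns of `image` are scanned left to right over `duration`
    seconds; each row contributes a sine tone whose frequency rises
    with height in the image and whose loudness follows brightness.
    """
    rows, cols = image.shape
    n = int(duration * sample_rate)
    t = np.arange(n) / sample_rate
    # Which image column is "playing" at each audio sample.
    col_idx = np.minimum((t / duration * cols).astype(int), cols - 1)
    # Top rows get high pitch, bottom rows get low pitch.
    freqs = np.linspace(f_max, f_min, rows)
    audio = np.zeros(n)
    for r in range(rows):
        audio += image[r, col_idx] * np.sin(2 * np.pi * freqs[r] * t)
    peak = np.max(np.abs(audio))
    return audio / peak if peak > 0 else audio

# A bright diagonal stripe produces a pitch sweep over time.
img = np.eye(8)  # 8x8 pattern, brightness in [0, 1]
samples = image_to_audio(img)
```

A listener hearing this output would perceive the diagonal as a smoothly descending tone, which is the kind of intuitive correspondence between shape and sound the study is probing.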
Here’s a video describing and showing off the work in a bit more detail:
Study in Scientific Reports: Auditory Sensory Substitution is Intuitive and Automatic with Texture Stimuli…