Locked-in syndrome is one of the most terrifying neurological conditions, leaving patients aware but almost entirely unable to move. Now, a collaboration of American academic researchers has implanted a wireless brain-machine interface, developed by Neural Signals of Duluth, Georgia, into a locked-in subject who is almost completely paralyzed.
The system uses implanted electrodes to read brain signals intended for the jaw and mouth muscles. An FM radio transmits these signals across the scalp to a computer, which transforms them into recognizable sounds. Currently the system can produce only vowels, but with more electrodes and more powerful decoding algorithms it should scale up to fully vocalized words.
From the article abstract in PLoS ONE:
Brain-machine interfaces (BMIs) involving electrodes implanted into the human cerebral cortex have recently been developed in an attempt to restore function to profoundly paralyzed individuals. Current BMIs for restoring communication can provide important capabilities via a typing process, but unfortunately they are only capable of slow communication rates. In the current study we use a novel approach to speech restoration in which we decode continuous auditory parameters for a real-time speech synthesizer from neuronal activity in motor cortex during attempted speech.
Neural signals recorded by a Neurotrophic Electrode implanted in a speech-related region of the left precentral gyrus of a human volunteer suffering from locked-in syndrome, characterized by near-total paralysis with spared cognition, were transmitted wirelessly across the scalp and used to drive a speech synthesizer. A Kalman filter-based decoder translated the neural signals generated during attempted speech into continuous parameters for controlling a synthesizer that provided immediate (within 50 ms) auditory feedback of the decoded sound. Accuracy of the volunteer’s vowel productions with the synthesizer improved quickly with practice, with a 25% improvement in average hit rate (from 45% to 70%) and 46% decrease in average endpoint error from the first to the last block of a three-vowel task.
Our results support the feasibility of neural prostheses that may have the potential to provide near-conversational synthetic speech output for individuals with severely impaired speech motor control. They also provide an initial glimpse into the functional properties of neurons in speech motor cortical areas.
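To give a feel for the Kalman filter-based decoding the abstract describes, here is a minimal sketch in Python. It estimates two formant frequencies (F1 and F2, the continuous parameters a vowel synthesizer needs) from simulated neural firing rates. All of the specifics here are assumptions for illustration: the random-walk state model, the number of units, and every matrix value are made up, not taken from the paper.

```python
import numpy as np

# Hedged sketch: a linear Kalman filter tracking two formant frequencies
# (F1, F2) from noisy "firing rates" of four hypothetical neural units.
# Dynamics, observation model, and noise levels are illustrative only.

rng = np.random.default_rng(0)

# State: [F1, F2] in Hz, modeled as a slow random walk (A = identity).
A = np.eye(2)
Q = np.eye(2) * 1.0            # process noise: formants drift slowly

# Observation: each unit's rate is assumed linear in the formants.
H = rng.normal(size=(4, 2)) * 0.01
R = np.eye(4) * 1.0            # measurement noise on firing rates

def kalman_step(x, P, z):
    """One predict/update cycle: prior mean x, covariance P, observation z."""
    # Predict forward one time step
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update with the new observation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

# Simulate attempting the vowel /a/ (roughly F1 = 700 Hz, F2 = 1200 Hz).
true_state = np.array([700.0, 1200.0])
x, P = np.array([500.0, 1500.0]), np.eye(2) * 1e4   # uncertain initial guess

for _ in range(200):
    z = H @ true_state + rng.normal(scale=1.0, size=4)  # noisy rates
    x, P = kalman_step(x, P, z)

print(np.round(x))  # estimate converges toward the attempted vowel's formants
```

In the actual system the decoded formant estimate would be fed to a speech synthesizer on every cycle, which is how the paper achieves auditory feedback within 50 ms of the neural activity.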
Here’s the visual and audio feedback presented to the locked-in volunteer during testing:
Side image: (A) Left panels: Axial (top) and sagittal (bottom) slices showing brain activity along the precentral gyrus during a word generation fMRI task prior to implantation. Red lines denote pre-central sulcus; yellow lines denote central sulcus. Right panels: Corresponding images from a post-implant CT scan showing location of electrode. (B) 3D CT image showing electrode wire entering dura mater. Subcutaneous electronics are visible above the electrode wire, on top of the skull.
More at Wired: Wireless Brain-to-Computer Connection Synthesizes Speech
Article in PLoS ONE: A Wireless Brain-Machine Interface for Real-Time Speech Synthesis
Link: Neural Signals homepage…