Using touch sensors, the Palatometer from CompleteSpeech of Orem, Utah, can read how the tongue contacts the palate during speech. Developed to help people with speech impediments learn to speak properly, the device is now being used by research scientists at the University of the Witwatersrand in Johannesburg, South Africa, to develop an artificial larynx that can digitally vocalize the speech of mute people.
From Technology Review:
To use the device, a person puts the palatometer in her mouth and mouths words normally. The system tries to translate those mouth movements into words before reproducing them on a small sound synthesizer, perhaps tucked into a shirt pocket.
So far, Russell has trained the system to recognize 50 common English words by saying each word multiple times with the palatometer in her mouth. The information can be represented on a binary space-time graph and put into a database. Each time the user speaks, the contact patterns are compared against the database to identify the correct word.
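The database lookup described above can be pictured as nearest-template matching over binary space-time matrices. Below is a minimal sketch of that idea, not the researchers' actual code: each utterance is a matrix whose rows are time frames and whose columns are palate sensors, with 1 marking tongue-palate contact, and a mouthed word is identified by finding the stored template with the smallest Hamming distance after a crude length normalization. All function names and the resampling scheme are hypothetical.

```python
# Hypothetical sketch of pattern-vs-database word identification.
# Each pattern is a list of time frames; each frame is a list of 0/1
# sensor readings (1 = tongue-palate contact at that sensor).

def resample(pattern, n_frames):
    """Stretch or compress a pattern to a fixed number of time frames,
    so utterances of different durations can be compared."""
    m = len(pattern)
    return [pattern[int(i * m / n_frames)] for i in range(n_frames)]

def distance(a, b):
    """Hamming distance between two equal-shape binary matrices."""
    return sum(x != y
               for row_a, row_b in zip(a, b)
               for x, y in zip(row_a, row_b))

def identify(pattern, templates, n_frames=10):
    """Return the word whose stored template best matches `pattern`."""
    p = resample(pattern, n_frames)
    return min(templates,
               key=lambda w: distance(p, resample(templates[w], n_frames)))
```

A real palatometer has far more sensors and would need proper time alignment (the excerpt mentions aligning and averaging training instances), but the lookup step reduces to this kind of nearest-template comparison.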
Russell’s team has tested the word-identification system using a variety of techniques. One approach involves aligning and averaging the data produced while training the device for a few instances of a word to create a template for comparison. Another compares features such as the area of the data plots on the graph, and the center of mass on the X and Y axes. A voting system compares the results of selected methods to see whether there is agreement. The researchers have also tested a predictive-analysis system, which considers the last word mouthed to help determine the next.
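The feature-comparison and voting techniques above can be sketched in a few lines. This is an illustrative guess at the approach, not the team's implementation: the features (total contact area and the center of mass of the contact points along the X and Y axes) come from the excerpt, but the distance measure and the majority-vote rule are assumptions.

```python
# Hypothetical sketch of feature-based matching plus a voting system.
# A pattern is a binary matrix: rows are time frames (Y), columns are
# sensors (X), 1 = contact.
from collections import Counter

def features(pattern):
    """Area (total contacts) and center of mass on the X and Y axes."""
    pts = [(x, y) for y, row in enumerate(pattern)
                  for x, v in enumerate(row) if v]
    area = len(pts)
    cx = sum(x for x, _ in pts) / area
    cy = sum(y for _, y in pts) / area
    return area, cx, cy

def feature_match(pattern, templates):
    """Pick the word whose template features are closest (L1 distance)."""
    fa, fx, fy = features(pattern)
    def dist(word):
        a, x, y = features(templates[word])
        return abs(a - fa) + abs(x - fx) + abs(y - fy)
    return min(templates, key=dist)

def vote(pattern, templates, methods):
    """Run several identification methods and take the majority answer."""
    tally = Counter(m(pattern, templates) for m in methods)
    return tally.most_common(1)[0][0]
```

In the same spirit, the predictive-analysis step the researchers mention would re-weight the candidates by how likely each word is to follow the previously mouthed one, before or after the vote.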
More from Technology Review: A Tongue-Tracking Artificial Larynx…
Link: CompleteSpeech Palatometer…
Image: Top: CompleteSpeech’s palatometer. Bottom: The space-time graph of the tongue-palate contact pattern for the word “been.”