Think-A-Move, Ltd., a Beachwood, OH company, is developing human-device interface technologies, including remote control of devices by speech or tongue movement.
Here’s more about the tongue control:
The TAM system for tongue control has several exciting features:
- Platform-independent software that supports integration with other software or technology.
- Specialized, proprietary software, including a recordable database of tongue signals.
- User-dependent operation: each user trains a profile, much like speech-recognition software.
- Quick training curve: a database can be created in under 30 minutes.
Tongue motions inside the mouth produce traceable acoustic patterns in the ear canal. Different types of movement, differing in location and speed, produce unique patterns. To use tongue movements for device control, one must identify movements that are repeatable and comfortable and that produce consistent acoustic patterns. As an example, the figure below shows three different tongue motions and the corresponding acoustic signals.
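One minimal way to quantify "consistent acoustic patterns" is to correlate two recordings of the same movement. The sketch below is purely illustrative: the function names, synthetic signals, sample rate, and 0.8 threshold are all assumptions, not part of TAM's proprietary system.

```python
import math

def normalized_correlation(a, b):
    """Zero-lag normalized correlation between two equal-length signals.

    Returns 1.0 for identical (up to scale) signals, near 0 for unrelated ones.
    """
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a) * sum(y * y for y in b))
    return dot / norm if norm else 0.0

def is_repeatable(rec1, rec2, threshold=0.8):
    """Hypothetical check: two takes of one tongue motion should match closely."""
    return normalized_correlation(rec1, rec2) >= threshold

# Two synthetic "recordings": a 60 Hz burst and a louder copy of it,
# plus an unrelated 90 Hz burst for contrast (all values assumed).
fs = 2000                                    # assumed sample rate, Hz
take1 = [math.sin(2 * math.pi * 60 * i / fs) for i in range(200)]
take2 = [1.5 * s for s in take1]             # same motion, different loudness
other = [math.sin(2 * math.pi * 90 * i / fs) for i in range(200)]
```

Amplitude differences between takes do not matter here, since the correlation is normalized; only the shape of the signal does.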
The acoustic patterns generated by tongue motions generally reside in the 20 Hz to 100 Hz frequency range (i.e., most of their energy is in this range). The major portion of the acoustic signal, from start to finish, generally completes within 200 ms (0.2 s). These signals are easily distinguished from speech, whose energy generally lies above 100 Hz and which can last much longer than 0.2 s.
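Taking the stated numbers at face value (energy mostly in 20-100 Hz, duration under 0.2 s), a naive tongue-versus-speech discriminator could be sketched as follows. The sample rate, energy threshold, synthetic signals, and function names are assumptions for illustration only.

```python
import math

FS = 2000  # assumed sample rate in Hz

def band_energy_fraction(samples, low_hz, high_hz, fs=FS):
    """Fraction of spectral energy inside [low_hz, high_hz], via a naive DFT."""
    n = len(samples)
    total = band = 0.0
    for k in range(1, n // 2):              # positive frequencies, skipping DC
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        power = re * re + im * im
        total += power
        if low_hz <= k * fs / n <= high_hz:
            band += power
    return band / total if total else 0.0

def looks_like_tongue_event(samples, fs=FS):
    """Apply the two rules from the text: 20-100 Hz energy, under 0.2 s."""
    duration = len(samples) / fs
    return duration <= 0.2 and band_energy_fraction(samples, 20, 100, fs) > 0.5

# Synthetic examples: a 60 Hz, 0.15 s burst vs. a 300 Hz, 0.4 s "speech" tone.
tongue = [math.sin(2 * math.pi * 60 * i / FS) for i in range(int(0.15 * FS))]
speech = [math.sin(2 * math.pi * 300 * i / FS) for i in range(int(0.4 * FS))]
```

A real system would use an FFT rather than this O(n²) DFT, but the two-rule structure (band energy plus duration) is the same.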
More in these videos:
(hat tip: The Engineer)