With the help of a grant from the National Science Foundation, researchers at the University of Washington and Cornell University are working on software to transmit compressed sign language video over cell phones.
MobileASL is a video compression project at the University of Washington with the goal of making wireless cell phone communication through sign language a reality.
The current wireless telephone network has inadvertently excluded more than one million deaf and hard-of-hearing Americans.
With the advent of cell phone PDAs featuring larger screens and photo/video capture, people who communicate in American Sign Language (ASL) could take advantage of these new technologies. However, because of the low bandwidth of the wireless telephone network, even today's best video encoders likely cannot produce video of the quality needed for intelligible ASL. Instead, a new real-time video compression scheme is needed that can transmit within the existing wireless network while maintaining video quality that lets users understand the semantics of ASL with ease. To make this technology available in the immediate future, the MobileASL project is designing new ASL encoders based on the H.264/AVC compression standard, using the x264 implementation (which nearly doubles the compression ratio of MPEG-2). The result will be a video compression metric that takes into account empirically validated visual and perceptual processes that occur during conversations in ASL.
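To see why a new compression scheme is needed rather than simply sending video as-is, a back-of-the-envelope calculation helps. The sketch below is illustrative and not from the MobileASL project itself: the resolution, frame rate, and channel bandwidth are assumed values typical of phone-sized QCIF video and a 2G-era wireless data channel.

```python
# Back-of-the-envelope sketch: how much compression is needed to fit
# phone-sized ASL video into a low-bandwidth wireless channel.
# All numbers below are illustrative assumptions, not project figures.

WIDTH, HEIGHT = 176, 144   # QCIF, a common phone video resolution (assumed)
FPS = 15                   # assumed frame rate for intelligible signing
BITS_PER_PIXEL = 12        # 4:2:0 chroma subsampling with 8-bit samples
CHANNEL_KBPS = 30          # assumed usable bandwidth of a 2G-class channel

# Raw (uncompressed) video bit rate in bits per second.
raw_bps = WIDTH * HEIGHT * BITS_PER_PIXEL * FPS

# Compression ratio required to squeeze the raw stream into the channel.
ratio = raw_bps / (CHANNEL_KBPS * 1000)

print(f"Raw video rate: {raw_bps / 1e6:.2f} Mbit/s")
print(f"Required compression ratio: {ratio:.0f}:1")
```

Under these assumptions the raw stream runs to several megabits per second while the channel offers only tens of kilobits, so a compression ratio on the order of 150:1 is required, which is why the choice of encoder and what it spends its bits on (faces and hands versus background) matters so much.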
This material is based upon work supported by the National Science Foundation under Grant No. 0514353.
If you are fluent in ASL and willing to participate in user studies, you can sign up here.