Judging by all the media coverage, telemedicine is surely part of our not-too-distant future. Yet the technology that powers today’s telemedicine remains, in essence, Skype on wheels.
At the University of Texas at Dallas, researchers are working to bring new technologies together, such as haptics, body sensors, and real-time data transmission protocols, to enable capabilities more substantial than audio and video communication alone. They envision, for example, a rehabilitation system that helps therapists work with patients remotely on exercise techniques, letting them feel the motion and strength of the patients’ movements while providing real-time feedback.
Dr. Mark W. Spong, dean of the Jonsson School and holder of the Lars Magnus Ericsson Chair in Electrical Engineering and the Excellence in Education Chair, is a leading researcher in control and teleoperation, the operation of machines at a distance. He is developing techniques to eliminate the instability that arises when data from the haptic devices is communicated over the network.
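The article doesn’t detail the stabilization method, but a classic delay-robust idea from the teleoperation literature (which Spong helped pioneer) is the wave-variable, or scattering, transformation: rather than sending raw velocity and force across the network, each side exchanges wave variables whose energy stays bounded no matter how long the transmission delay is. A minimal sketch, where the wave impedance `b` is an assumed tuning parameter, not a value from the project:

```python
import math

B = 1.0  # wave impedance: an assumed tuning parameter trading speed vs. force fidelity

def encode_wave(velocity, force, b=B):
    """Combine local velocity and force into a single wave variable.
    The wave pair carries bounded energy, which is what keeps the
    delayed channel from destabilizing the haptic loop."""
    return (b * velocity + force) / math.sqrt(2.0 * b)

def decode_wave(u, force, b=B):
    """On the receiving side, recover the commanded velocity from the
    incoming wave variable and the locally measured force."""
    return (math.sqrt(2.0 * b) * u - force) / b
```

The exact encoding used in the UT Dallas system is not described here; this only illustrates the general principle that transformed, energy-bounded signals tolerate network delay better than raw velocity/force exchange.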
To minimize the amount of data that needs to be exchanged, sophisticated algorithms need to be created. That’s where Dr. Xiaohu Guo, associate professor of computer science at UT Dallas and a project co-principal investigator, comes in. He’s an expert in computer graphics, animation and modeling.
Guo is refining techniques not only to transmit the data between haptic devices over the network more efficiently, but also to create 3-D visual images of the original movements in real time.
“We do not only want the person to be moving the device, we want them to have a visual feel of what the movement is causing,” Prabhakaran said.
Guo has had success transforming large amounts of data with spectral transformation techniques, which convert 3-D images into points that represent the surface of an object. The data is then compressed into a smaller form that can be sent faster over networks.
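The article doesn’t specify Guo’s algorithm, but spectral compression of a 3-D surface typically means projecting the vertex coordinates onto the low-frequency eigenvectors of the mesh’s graph Laplacian and transmitting only the leading coefficients, which capture the overall shape in far fewer numbers than one value per vertex. A minimal sketch under that assumption (function names are illustrative):

```python
import numpy as np

def spectral_compress(vertices, adjacency, k):
    """Project vertex coordinates (n x 3) onto the k lowest-frequency
    eigenvectors of the graph Laplacian, so only k coefficients per
    coordinate axis need to cross the network instead of n values."""
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency
    # eigh sorts eigenvalues ascending: low eigenvalues = smooth shape modes
    _, basis = np.linalg.eigh(laplacian)
    basis_k = basis[:, :k]
    coeffs = basis_k.T @ vertices        # k x 3 spectral coefficients
    return coeffs, basis_k

def spectral_decompress(coeffs, basis_k):
    """Reconstruct an approximation of the vertex positions from the
    truncated spectral coefficients."""
    return basis_k @ coeffs
```

With small `k` the reconstruction is a smoothed approximation of the surface; with all `n` eigenvectors it is exact, since the Laplacian eigenbasis is orthonormal.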
People using this platform would wear body sensors similar to those built into smartphones, which can tell whether the user is holding the device in portrait or landscape orientation.
“If we put body sensors on the patients, then his or her movements can be tracked with high accuracy,” Prabhakaran said. “The advantage of the sensor is the data that is generated is only a few bytes large, so it is easily transmitted over the network.
“You need a 3-D model to provide visual perspective, but if you are dealing with a lousy network and cannot have consistent visual perspective, the body sensors could provide that information.”
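As a hypothetical illustration of why a sensor sample is “only a few bytes large”: a millisecond timestamp plus three accelerometer axes quantized to 16-bit integers fits in ten bytes. The wire format below is an assumption for illustration, not the project’s actual protocol:

```python
import struct

# Assumed wire format: uint32 timestamp (ms) + three int16 axes = 10 bytes.
SAMPLE_FMT = "<Ihhh"
SCALE = 1000  # quantize acceleration to milli-g resolution

def pack_sample(t_ms, ax, ay, az):
    """Serialize one accelerometer reading into a compact payload."""
    return struct.pack(SAMPLE_FMT, t_ms,
                       int(ax * SCALE), int(ay * SCALE), int(az * SCALE))

def unpack_sample(payload):
    """Recover the timestamp and acceleration values from the payload."""
    t_ms, ax, ay, az = struct.unpack(SAMPLE_FMT, payload)
    return t_ms, ax / SCALE, ay / SCALE, az / SCALE
```

At a hundred samples per second this is roughly a kilobyte per second, easily transmitted even over the “lousy network” Prabhakaran describes, whereas streaming a full 3-D model at the same rate would not be.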
Press release: Researchers Pushing the Boundaries of Virtual Reality