Scientists from the University of Bonn have developed novel software that is claimed to improve retinal prosthetic devices. The software, to be presented at the upcoming Hanover Fair (April 16th – 20th), is touted to enable the development of a learning visual prosthesis:
Currently, the results do not meet the high expectations. “The camera generates electrical signals, which are almost useless for the brain,” comments Rolf Eckmiller, a professor at the Department of Computer Science at Bonn University. “Our own system translates the camera signals into a language that the central visual system in the brain understands.” Unfortunately, the central visual system of each individual speaks a different dialect, which complicates the translation. For this reason, the computer scientist and neuroscientist developed the “Retina Encoder” together with his graduate students Oliver Baruth and Rolf Schatten. At the Hanover Fair he is looking for commercial partners for the next step into clinical trials.
“In principle, the Retina Encoder is a computer program that converts the camera signals and forwards them to the retinal implant,” explains Oliver Baruth. “The encoder learns in a continuous process how to change the camera output signal so that the respective patient can perceive the image.” Currently, tests of the learning dialog process are being performed with normally sighted volunteers. The camera images are translated by the Retina Encoder and subsequently forwarded to a kind of “virtual central visual system.” This simulation mimics the brain’s interpretation of the converted camera data.
Initially, the Retina Encoder does not know which language the virtual central visual system speaks. Therefore the software translates the original picture – for example a ring – into different, randomly selected “dialects”. This way, variations of the picture emerge, which are more or less similar to a ring. The volunteer sees these variations on a small screen integrated into a frame of glasses. By means of head movements, the person selects those variations that appear most similar to a ring. From these choices, the learning software draws conclusions about how to improve the translation. In the next learning cycle, several new picture variations are presented, which already look more similar to the original: during this process, the Retina Encoder adapts step by step to the language of the virtual central visual system. In the current tests it works very well; however, the scientists have not yet tested their system in patients. The scientists emphasize that in principle, the Retina Encoder could be integrated into implanted visual prostheses within a few months.
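The learning dialog described above resembles an evolutionary search: the encoder proposes several randomly varied “dialects,” the volunteer selects the one that looks most like the original, and the next cycle starts from that winner. Here is a minimal sketch of such a loop; the parameter vector, the hidden target “dialect,” and the distance-based selection are all illustrative assumptions, not the actual Bonn algorithm.

```python
import random

# Hidden "language" of the (virtual) central visual system that the
# encoder must discover. In reality this would be each patient's brain.
TARGET_DIALECT = [0.8, -0.3, 0.5, 0.1]

def perceived_similarity(params):
    # Stand-in for the volunteer judging how ring-like a translated
    # picture looks: modeled as negative squared distance to the target.
    return -sum((p - t) ** 2 for p, t in zip(params, TARGET_DIALECT))

def learning_dialog(generations=50, variants_per_cycle=6, step=0.3, seed=0):
    rng = random.Random(seed)
    best = [0.0] * len(TARGET_DIALECT)  # encoder starts with no knowledge
    for _ in range(generations):
        # Present several randomly varied "dialects" of the original picture,
        # always keeping the current best so the encoder never regresses.
        candidates = [
            [p + rng.gauss(0, step) for p in best]
            for _ in range(variants_per_cycle)
        ] + [best]
        # The volunteer selects the variation most similar to the original.
        best = max(candidates, key=perceived_similarity)
    return best

tuned = learning_dialog()
print(tuned)
print(perceived_similarity(tuned))  # approaches 0 as the dialect is learned
```

Because the selection step only needs a human (or simulated) preference judgment, not a mathematical error signal, this kind of interactive evolutionary loop fits the press release’s description of adapting the encoder to each individual’s “dialect.”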
In normally sighted humans, a kind of natural Retina Encoder is already integrated in the retina: specifically, four layers of nerve cells are positioned in front of the photoreceptor cells. “The retina is a transparent biocomputer,” Eckmiller says. “It transforms the electrical signals of rod and cone photoreceptors into a complex signal.” This signal reaches the brain via the optic nerve.
Press release: Lernende Sehprothese auf der Hannover-Messe … (English)
Flashbacks: Learning Retinal Implant System, Optoelectronic Retinal Prosthesis, Good news from MiViP, Second Sight Implant: Positive Results Reported in the Study, Optobionics’ Artificial Silicon Retina ™ microchip