Computers at Microsoft have decided to employ humans in electro-encephalograph (EEG) caps to improve their processing power.
Desney Tan, a researcher at Microsoft Research, and Pradeep Shenoy, a graduate student at the University of Washington, have devised a scheme that uses EEG caps to record the brain activity of people looking at pictures of faces and of nonfaces, such as horses, cars, and landscapes. The pair found that even when the subjects' task wasn't to distinguish faces from nonfaces, their brain activity showed that they subconsciously registered the difference. The researchers wrote software that analyzes the EEG data and classifies each image as a face or a nonface based on the subjects' responses. When a single person viewed an image once, the system identified faces with up to 72.5 percent accuracy. Results were even better using data from eight people who had each viewed a particular image twice: accuracy jumped to 98 percent.
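The article doesn't describe the classifier itself, but the jump from 72.5 percent on a single viewing to 98 percent when sixteen responses are pooled (eight people, two viewings each) is roughly what one would expect from combining independent noisy judgments. A minimal sketch, assuming simple majority voting over independent per-trial classifications; the constants and function name here are illustrative, not from the research:

```python
import random

random.seed(0)

SINGLE_TRIAL_ACC = 0.725  # reported accuracy for one person, one viewing
N_VOTES = 16              # e.g. eight people, each viewing an image twice

def majority_vote_accuracy(per_vote_acc, n_votes, n_images=100_000):
    """Estimate accuracy when n_votes independent noisy classifications
    of the same image are combined by simple majority vote.
    (Ties on an even vote count are counted as errors.)"""
    correct = 0
    for _ in range(n_images):
        # Each 'vote' is correct with probability per_vote_acc
        votes = sum(1 for _ in range(n_votes) if random.random() < per_vote_acc)
        if votes > n_votes / 2:
            correct += 1
    return correct / n_images

print(f"single trial:      {SINGLE_TRIAL_ACC:.1%}")
print(f"{N_VOTES} pooled trials: "
      f"{majority_vote_accuracy(SINGLE_TRIAL_ACC, N_VOTES):.1%}")
```

Under these assumptions the pooled accuracy lands in the mid-90s, in the same ballpark as the 98 percent the researchers report; their actual method of combining EEG responses may well be more sophisticated than a raw vote.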
“Given that the brain is constantly processing external information,” says Tan, “we can start to use the brain as a processor.” In one scenario, he explains, pictures would be placed in people’s peripheral vision, which doesn’t require focused cognitive attention, so they could go about their daily tasks.
Today it takes relatively large supercomputers many hours to recognize faces, something a human can do almost instantly. One application of this technique could be quickly sorting snapshots from surveillance video into frames with faces and frames without, although Tan says this early work is mainly a proof of concept.