New research from the University of Minnesota in Minneapolis, the University of Giessen, and the Max-Planck-Institute Tuebingen in Germany looks at how a person’s awareness of faces affects his or her ability to gauge distances and complete visually guided tasks. The research was published in the open-access online Journal of Vision. The researchers built their experiment around the well-known “hollow-face illusion”, in which the inside of a face mask is perceived as a normal, convex face. The Association for Research in Vision and Ophthalmology comments on the study:
Study participants viewed concave and convex faces and made verbal and reach estimates that indicated their perceived shape of the face. In the verbal task, participants gave a verbal estimate of the distance from their viewing position to either the nose or the cheek of the faces.
In the reaching task, participants touched either the nose or cheek of the face within a certain period of time. The time limit was imposed to ensure the response time was similar to that of the verbal task. The study’s data show that the magnitude of the hollow-face illusion is similar for reaching tasks and verbal tasks.
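The summary above speaks of the “magnitude” of the illusion being similar across tasks, but does not spell out how that magnitude is computed. The following is only a rough sketch, with made-up numbers, of how such a measure could in principle be derived from nose and cheek distance estimates like those collected in the study; the function names and values are illustrative assumptions, not the authors’ analysis.

```python
# Illustrative sketch (not the authors' analysis code): one way to express the
# hollow-face illusion from nose/cheek distance estimates. All numbers are
# made up for the example.

def nose_protrusion(cheek_dist, nose_dist):
    """Positive when the nose is judged closer to the viewer than the cheek,
    i.e. the face is perceived as convex."""
    return cheek_dist - nose_dist

# Physical geometry of a hollow mask: the nose actually recedes behind the cheek.
physical_protrusion_concave = -4.0   # cm (hollow mask)
physical_protrusion_convex = 4.0     # cm (normal face)

# Hypothetical mean distance estimates (cm from the viewing position).
verbal = {"concave": nose_protrusion(cheek_dist=52.0, nose_dist=50.5),
          "convex":  nose_protrusion(cheek_dist=52.0, nose_dist=48.5)}
reach  = {"concave": nose_protrusion(cheek_dist=51.5, nose_dist=50.0),
          "convex":  nose_protrusion(cheek_dist=51.5, nose_dist=48.0)}

def illusion_magnitude(perceived_concave):
    """How far the percept of the hollow face is pulled away from its true
    (concave) shape toward a convex interpretation; 0 = veridical."""
    return perceived_concave - physical_protrusion_concave

for task, est in [("verbal", verbal), ("reach", reach)]:
    print(task, "illusion magnitude:", illusion_magnitude(est["concave"]), "cm")
```

Under this kind of measure, the study’s finding amounts to the two printed values coming out roughly the same for verbal and reaching responses.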
Volker H. Franz, PhD, one of the study’s researchers, says, “We show that prior knowledge about the general shape of faces can interact with the information the person acquires from vision at the time of the action. Without the knowledge that they are reaching to a face and what the typical shape of a face is, participants would perform the reaching movements differently. This stresses the importance of object recognition even for very simple motor tasks and shows that humans perform these actions in a more complex way than, for example, typical applications in robotics which do not take into account prior knowledge about objects.”
The same idea is echoed in the study’s conclusions:
Using hollow faces as a target for distance estimations, we have shown that prior knowledge of object shape can dominate shape from binocular disparity information in reaching tasks, as well as in verbal tasks. The shape estimates from the two sources of information are combined, rather than one being thrown out as completely unreliable. The resulting shape estimates are similar for both verbal and reaching tasks, which is what we would expect if the same cue combination strategy is being used for the reaching and the verbal tasks.
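The “cue combination strategy” the conclusions refer to is commonly modeled as a reliability-weighted average of the individual cues, with each cue weighted by the inverse of its variance. The excerpt does not give the study’s equations or fitted weights, so the sketch below is only an illustration of that general idea under assumed values, not the authors’ model.

```python
# Illustrative reliability-weighted cue combination (the standard MLE-style
# model), not the study's fitted model. Depths and variances are assumed.

def combine_cues(mu_prior, var_prior, mu_disparity, var_disparity):
    """Combine the face-shape prior with binocular disparity, weighting each
    cue by its reliability (inverse variance)."""
    w_prior = (1.0 / var_prior) / (1.0 / var_prior + 1.0 / var_disparity)
    w_disp = 1.0 - w_prior
    mu = w_prior * mu_prior + w_disp * mu_disparity
    var = 1.0 / (1.0 / var_prior + 1.0 / var_disparity)
    return mu, var

# Hollow mask: disparity signals a concave shape (nose depth -4 cm) while the
# face prior insists on a convex one (+4 cm). A strong (low-variance) prior
# pulls the combined estimate toward convex, producing the illusion.
depth, variance = combine_cues(mu_prior=4.0, var_prior=1.0,
                               mu_disparity=-4.0, var_disparity=4.0)
print(f"combined nose depth: {depth:+.1f} cm (variance {variance:.2f})")
# -> combined nose depth: +2.4 cm, perceived as convex despite concave disparity
```

Because neither cue is discarded, the combined estimate lands between the two, which is consistent with the authors’ point that the disparity information is down-weighted rather than thrown out as completely unreliable.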
Will robots ever have an intrinsic perception of the human face, an innate ability to recognize it as an object? That is hard to believe.
The complete article is here.