Can you design a robot that can differentiate a tree limb from a human limb? Such a distinction could be important, especially in medicine, when a surgical robot must decide which blood vessel to cut, or in elder care, when a robot assistant must decide which bottle holds Crestor and which holds candy.
To encourage improvements in the way robots “see” the world, the National Institute of Standards and Technology (NIST) and Willow Garage, a Silicon Valley robotics research and design firm, are teaming up to launch an international “perception challenge”. The competition will debut at the IEEE International Conference on Robotics and Automation in Shanghai, China, on May 9-13, 2011.
The competition involves developing an algorithm that recognizes the identity and pose of 50 common household objects, 35 of which will be known beforehand.
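To make the task concrete, here is a minimal sketch in Python with OpenCV of what “recognizing identity and pose” can look like: matching features from a few known reference images against a scene, picking the best-matching object, and recovering a planar pose as a homography. This is not the contest’s evaluation code or Willow Garage’s software; the file names, object names, and thresholds are illustrative assumptions, and real entries would handle full 6-DOF pose and far harder imaging conditions.

```python
# Sketch only: identify which known object appears in a scene image and
# estimate a planar pose (homography). Paths and thresholds are hypothetical.
import cv2
import numpy as np

def load_references(paths):
    """Detect ORB features once for each known reference object image."""
    orb = cv2.ORB_create(nfeatures=1000)
    refs = {}
    for name, path in paths.items():
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        kp, des = orb.detectAndCompute(img, None)
        refs[name] = (kp, des)
    return orb, refs

def identify_and_pose(scene_path, orb, refs, min_matches=15):
    """Return (best_object_name, homography), or (None, None) if no object matches."""
    scene = cv2.imread(scene_path, cv2.IMREAD_GRAYSCALE)
    kp_s, des_s = orb.detectAndCompute(scene, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    best_name, best_matches, best_kp = None, [], None
    for name, (kp_r, des_r) in refs.items():
        if des_r is None or des_s is None:
            continue
        matches = matcher.match(des_r, des_s)
        if len(matches) > len(best_matches):
            best_name, best_matches, best_kp = name, matches, kp_r

    if len(best_matches) < min_matches:
        return None, None

    # Homography stands in for full pose here (planar-object assumption).
    src = np.float32([best_kp[m.queryIdx].pt for m in best_matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_s[m.trainIdx].pt for m in best_matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return best_name, H

if __name__ == "__main__":
    # Hypothetical reference images for two "known beforehand" objects.
    orb, refs = load_references({"crestor_bottle": "crestor.png",
                                 "candy_bottle": "candy.png"})
    name, H = identify_and_pose("scene.png", orb, refs)
    print(name, H)
```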
According to the NIST press release, many algorithms already exist to identify objects, but this competition will test their robustness and versatility in new robots:
The new competition will measure the performance of current algorithms that process and act on data gathered with cameras and other types of sensing devices, explains NIST computer scientist Tsai Hong. “There are hundreds—maybe even thousands—of algorithms that already have been devised to help robots identify objects and determine their location and orientation,” she says. “But we have no means for comparing and evaluating these perceptual tools and determining whether an existing algorithm will be useful for new types of robots.”
Press release from NIST…
Rules and background information from Willow Garage…