Cloud-Based Robot Grasping with the Google Object Recognition Engine
What if robots were not limited by onboard computation, algorithms did not need to be implemented on every class of robot, and model improvements from sensor data could be shared across many robots? What if one could use the Internet, with the millions of photos that are uploaded and made publicly available every day, as a potential source of computation and data about objects, their semantics, and how to manipulate them?
In Cloud-Based Robot Grasping with the Google Object Recognition Engine, presented at the 2013 IEEE International Conference on Robotics and Automation (http://goo.gl/VRbLqV) and recently highlighted in the list of influential Google papers from 2013 (goo.gl/heOFbW), Googlers +Sal Candido and +James Kuffner, along with UC Berkeley researchers Ben Kehoe, +Akihiro Matsukawa, and +Ken Goldberg, detailed a system architecture, an implemented prototype, and initial experimental data for a cloud-based robot grasping system.
Using a Willow Garage PR2 robot with onboard color and depth cameras, object recognition was performed in the cloud using a variant of the Google Goggles object recognition engine, the Point Cloud Library (PCL) was used for pose estimation, and Columbia University’s GraspIt! toolkit and OpenRAVE were used for grasping.
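The division of labor in that architecture — cloud-side object recognition, local pose estimation against the depth camera's point cloud, and precomputed candidate grasps for each recognized object — can be sketched roughly as below. This is an illustrative outline only: every function name and returned value here is a hypothetical placeholder, not the paper's actual code or the APIs of Goggles, PCL, GraspIt!, or OpenRAVE.

```python
# Hypothetical sketch of a cloud-based grasping pipeline; all names and
# data below are illustrative stand-ins for the real components.

def recognize_object(image):
    # Stand-in for the cloud call to a Goggles-style recognition engine,
    # which would return an object label plus a reference 3D model.
    return {"label": "mug", "model": "mug_reference_mesh"}

def estimate_pose(point_cloud, model):
    # Stand-in for PCL-style registration of the reference model against
    # the robot's onboard depth data, yielding the object's 3D pose.
    return {"translation": (0.4, 0.0, 0.8), "rotation": (0.0, 0.0, 0.0, 1.0)}

def select_grasp(label):
    # Stand-in for looking up precomputed candidate grasps for the
    # recognized object (the kind of analysis GraspIt!/OpenRAVE support)
    # and picking one to execute.
    candidate_grasps = {"mug": ["handle_grasp", "rim_grasp"]}
    return candidate_grasps[label][0]

def grasp_pipeline(image, point_cloud):
    recognition = recognize_object(image)                     # cloud
    pose = estimate_pose(point_cloud, recognition["model"])   # onboard
    grasp = select_grasp(recognition["label"])                # cloud lookup
    return {"label": recognition["label"], "pose": pose, "grasp": grasp}

if __name__ == "__main__":
    result = grasp_pipeline(image=None, point_cloud=None)
    print(result["label"], "->", result["grasp"])
```

The key design point the sketch mirrors is that the computationally heavy, data-rich steps (recognition and grasp precomputation) live off-robot, while only pose estimation and execution remain onboard.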
To learn more, read the full paper at http://goo.gl/Yuv8Ak.