Look Ahead: Point-and-click robot control
It takes knowledge and skill to get a robot to do exactly what a manufacturer wants it to—and those are hard to come by in this skills-gap era. Even the most competently programmed and automated robot will sometimes need a human operator to take control.
A new interface designed by Georgia Institute of Technology researchers is reportedly simpler and more efficient than most control interfaces and doesn't require significant training. The user simply points and clicks on an item, then chooses a grasping method. The robot does the rest of the work.
With a traditional interface, an operator independently controls six degrees of freedom with a computer, turning three virtual rings and adjusting arrows to get the robot into position to grab items or perform a specific task.
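To give a concrete sense of what those six numbers are, here is a minimal sketch (in Python, with illustrative values; this is not Georgia Tech's code) of the full 6-degree-of-freedom gripper pose a ring-and-arrow interface asks the operator to dial in by hand:

```python
# A minimal sketch of what a traditional ring-and-arrow interface
# asks the operator to specify: three rotations plus a 3D translation.
# All function names and values here are illustrative assumptions.
import numpy as np
from scipy.spatial.transform import Rotation

def gripper_pose(roll, pitch, yaw, x, y, z):
    """Build a 4x4 homogeneous transform from six operator inputs."""
    T = np.eye(4)
    T[:3, :3] = Rotation.from_euler("xyz", [roll, pitch, yaw]).as_matrix()
    T[:3, 3] = [x, y, z]
    return T

# The operator must converge on all six numbers by eye, one ring or
# arrow at a time -- the tedium the point-and-click interface removes.
pose = gripper_pose(0.0, np.pi / 2, 0.1, 0.45, -0.10, 0.22)
```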

Point-and-click control makes robot operation literally easier to grasp. Image courtesy of Georgia Institute of Technology.
With the Georgia Tech interface, "instead of a series of rotations, lowering and raising arrows, adjusting the grip and guessing the correct depth of field, we've shortened the process to just two clicks," stated Sonia Chernova, assistant professor at the university's School of Interactive Computing, who advised the research effort.
The traditional ring-and-arrow system is a split-screen method. The first screen shows the robot and the scene; the second provides a 3D, interactive view where the user adjusts the virtual gripper and tells the robot exactly where to go and what to grab.

The point-and-click format provides only the camera view, resulting in a simpler user interface. After a person clicks on an object, the robot's perception algorithm analyzes the object's 3D surface geometry to determine where the gripper should be placed, much as people place their fingers in the correct locations to grab something. The computer then suggests a few grasps. The user selects one, putting the robot to work.
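A rough sketch of that two-click flow might look like the following. The pinhole camera model, the function names and the placeholder grasp ranking are all assumptions for illustration; the article does not publish the researchers' actual perception algorithm.

```python
# A hypothetical sketch of the two-click pipeline described above,
# assuming a pinhole depth camera and a placeholder grasp planner.
# None of these function names come from the Georgia Tech system.
import numpy as np

def deproject(u, v, depth, fx, fy, cx, cy):
    """Turn a clicked pixel (u, v) plus its depth into a 3D point."""
    return np.array([(u - cx) * depth / fx, (v - cy) * depth / fy, depth])

def plan_grasps(point, cloud, k=3):
    """Placeholder: rank candidate grasp poses near the clicked point.
    A real planner would analyze the local 3D surface geometry here."""
    near = cloud[np.linalg.norm(cloud - point, axis=1) < 0.05]
    center = near.mean(axis=0)
    return [("top-down", center), ("side", center), ("pinch", center)][:k]

# Click 1: the user picks an object in the camera view.
target = deproject(u=320, v=240, depth=0.8, fx=525, fy=525, cx=319.5, cy=239.5)
# Click 2: the user picks one of the suggested grasps.
cloud = target + 0.02 * np.random.randn(5000, 3)  # synthetic object points
candidates = plan_grasps(target, cloud)
chosen = candidates[0]
```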
"The robot can analyze the geometry of shapes, including making assumptions about small regions where the camera can't see," Chernova stated. "Our brains do this on their own. We correctly predict that the back of a bottle cap is as round as what we can see in the front."
The point-and-click interface was designed to make robots easier to operate in home assistance, space exploration and search-and-rescue work. It looks like the interface could also be a time-saver for operating, or even programming, robots for repetitive manufacturing tasks.
For more information about the Georgia Institute of Technology, Atlanta, visit www.gatech.edu or call (404) 894-2000.