‘Point and click’ interface simplifies robot control
It takes knowledge and skill to get a robot to do exactly what you want it to do in a manufacturing environment--and those are hard to come by in this skills-gap era. A new interface designed by Georgia Institute of Technology researchers is simpler and more efficient than most interfaces, and doesn't require significant training time. The user simply points and clicks on an item, then chooses a grasp. The robot does the rest of the work.
With a traditional interface, the operator uses a computer screen and mouse to independently control six degrees of freedom, turning three virtual rings and adjusting arrows to get the robot into position to grab items or perform a specific task.
With the Georgia Tech interface, "instead of a series of rotations, lowering and raising arrows, adjusting the grip and guessing the correct depth of field, we've shortened the process to just two clicks," said Sonia Chernova, an assistant professor in the university's School of Interactive Computing who advised the research effort.
The traditional ring-and-arrow system is a split-screen method. The first screen shows the robot and the scene; the second is a 3-D, interactive view where the user adjusts the virtual gripper and tells the robot exactly where to go and grab. This technique makes no use of scene information, giving operators a maximum level of control and flexibility. But this freedom and the size of the workspace can become a burden and increase the number of errors.
The point-and-click format doesn't include 3-D mapping. It only provides the camera view, resulting in a simpler interface for the user. After a person clicks on a region of an item, the robot's perception algorithm analyzes the object's 3-D surface geometry to determine where the gripper should be placed. It's similar to what we do when we put our fingers in the correct locations to grab something. The computer then suggests a few grasps. The user decides, putting the robot to work.
"The robot can analyze the geometry of shapes, including making assumptions about small regions where the camera can't see, such as the back of a bottle," said Chernova. "Our brains do this on their own — we correctly predict that the back of a bottle cap is as round as what we can see in the front. In this work, we are leveraging the robot's ability to do the same thing to make it possible to simply tell the robot which object you want to be picked up."
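The article does not publish the researchers' algorithm, but the described workflow — click a pixel, back-project it into 3-D using the depth camera, analyze the local surface geometry, and suggest a few candidate grasps — can be sketched roughly as follows. All function and parameter names here are illustrative assumptions, and the normal-estimation heuristic (a least-squares plane fit via SVD) stands in for whatever perception method the Georgia Tech system actually uses:

```python
import numpy as np

def click_to_grasps(depth, click_uv, fx, fy, cx, cy, k=3):
    """Given a user's click on a depth image, back-project to a 3-D point,
    estimate the local surface normal, and propose k candidate grasp poses
    for the user to choose from. Illustrative sketch, not the paper's code."""
    u, v = click_uv
    z = depth[v, u]
    # Back-project the clicked pixel into camera coordinates (pinhole model).
    p = np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

    # Collect a small neighborhood of 3-D points around the click.
    patch = []
    for dv in range(-2, 3):
        for du in range(-2, 3):
            zz = depth[v + dv, u + du]
            patch.append([(u + du - cx) * zz / fx,
                          (v + dv - cy) * zz / fy, zz])
    patch = np.array(patch)

    # Fit a plane: the surface normal is the singular vector with the
    # smallest singular value of the mean-centered point cloud.
    centered = patch - patch.mean(axis=0)
    _, _, vt = np.linalg.svd(centered)
    normal = vt[-1]
    if normal[2] > 0:          # orient the normal toward the camera
        normal = -normal

    # Propose k grasps approaching along the normal, varying gripper roll.
    return [{"position": p, "approach": normal, "roll": i * np.pi / k}
            for i in range(k)]
```

In a real system the candidate grasps would be rendered over the camera view so the user's second click simply selects one, after which the robot plans and executes the motion.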
Because the algorithm analyzes the scene data and recommends where to place the gripper, the burden shifts from the user to the software, which reduces mistakes. In a study, college students completed a task about two minutes faster using the new method than the traditional interface. The point-and-click method also produced approximately one mistake per task, compared with nearly four for the ring-and-arrow technique.
The point-and-click interface was designed to make operation easier for users of home-assistance robots and for space exploration and search-and-rescue operations. It could also be a beneficial approach to programming robots for repetitive manufacturing tasks.
Source: Georgia Institute of Technology