Machine learning leverages sensors to give robots an effective sense of touch

Now, by mounting GelSight sensors on the grippers of robotic arms, two MIT teams have given robots greater sensitivity and dexterity. The researchers presented their work in two papers at the International Conference on Robotics and Automation last week.
In one paper, Adelson’s group uses the data from the GelSight sensor to
enable a robot to judge the hardness of surfaces it touches — a crucial ability
if household robots are to handle everyday objects.
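Although the article does not detail the method, one common way to frame this kind of hardness estimation is as regression over a short sequence of GelSight images captured as the sensor presses into a surface. The PyTorch sketch below is a loose illustration under that assumption; the architecture, class name, and hardness scale are hypothetical, not the group's actual model.

```python
# Hypothetical sketch: regress surface hardness from a short GelSight
# press sequence with a small CNN (PyTorch). Architecture and names are
# illustrative; this is not Adelson's group's actual model.
import torch
import torch.nn as nn

class HardnessRegressor(nn.Module):
    def __init__(self, num_frames=5):
        super().__init__()
        # The frames of one press are stacked along the channel axis.
        self.features = nn.Sequential(
            nn.Conv2d(3 * num_frames, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)  # scalar hardness (e.g., a Shore value)

    def forward(self, frames):  # frames: (batch, 3 * num_frames, H, W)
        return self.head(self.features(frames).flatten(1))

model = HardnessRegressor()
press = torch.randn(1, 15, 128, 128)  # one synthetic five-frame press
print(model(press).shape)             # torch.Size([1, 1])
```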
In the other, Russ Tedrake’s Robot Locomotion Group at CSAIL uses GelSight
sensors to enable a robot to manipulate smaller objects than was previously
possible.
A GelSight sensor attached to a robot’s gripper enables the robot to determine precisely where it has grasped a small screwdriver, so that it can remove the screwdriver from a slot and insert it back, even when the gripper hides the screwdriver from the robot’s camera. Photo: Robot Locomotion Group at MIT
In Izatt’s experiments, a robot with a GelSight-equipped gripper had to
grasp a small screwdriver, remove it from a holster, and return it. Of course,
the data from the GelSight sensor don’t describe the whole screwdriver, just a
small patch of it. But Izatt found that, as long as the vision system’s estimate
of the screwdriver’s initial position was accurate to within a few centimeters,
his algorithms could deduce which part of the screwdriver the GelSight sensor
was touching and thus determine the screwdriver’s position in the robot’s
hand.
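As a rough illustration of the localization step described above, the sketch below scores candidate offsets around a coarse vision prior by how well the tactile patch points land on the object's model point cloud. The function names, the translation-only grid search, and the toy L-shaped model are all assumptions for illustration; Izatt's actual method folds tactile geometry into a full SDF-based pose tracker.

```python
# Hypothetical sketch: refine a coarse vision prior by matching a small
# GelSight depth patch against the object's model point cloud. Names and
# the translation-only grid search are illustrative assumptions.
import numpy as np
from scipy.spatial import cKDTree

def localize_patch(model_points, patch_points, prior, radius=0.03, step=0.005):
    """Grid-search offsets near the prior; return the best-scoring one."""
    tree = cKDTree(model_points)
    ticks = np.arange(-radius, radius + step / 2, step)
    best, best_cost = prior, np.inf
    for dx in ticks:
        for dy in ticks:
            for dz in ticks:
                offset = prior + np.array([dx, dy, dz])
                # Mean squared distance from the shifted patch to the model.
                d, _ = tree.query(patch_points + offset)
                cost = np.mean(d ** 2)
                if cost < best_cost:
                    best, best_cost = offset, cost
    return best

# Toy usage: an L-shaped "tool" model, a tactile patch sampled near its
# corner, and a vision prior that is 2 cm off along x.
arm1 = np.column_stack([np.linspace(0, 0.1, 100), np.zeros(100), np.zeros(100)])
arm2 = np.column_stack([np.zeros(100), np.linspace(0, 0.1, 100), np.zeros(100)])
model = np.vstack([arm1, arm2])
true_offset = np.array([0.05, 0.0, 0.0])
patch = np.vstack([arm1[-5:], arm2[:5]]) - true_offset
prior = true_offset + np.array([-0.02, 0.0, 0.0])
print(localize_patch(model, patch, prior))  # recovers ≈ true_offset
```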
“I think that the GelSight technology, as well as other high-bandwidth
tactile sensors, will make a big impact in robotics,” says Sergey Levine, an
assistant professor of electrical engineering and computer science at the
University of California at Berkeley. “For humans, our sense of touch is one of
the key enabling factors for our amazing manual dexterity. Current robots lack
this type of dexterity and are limited in their ability to react to surface
features when manipulating objects. If you imagine fumbling for a light switch
in the dark, extracting an object from your pocket, or any of the other numerous
things that you can do without even thinking — these all rely on touch
sensing.”
“Software is finally catching up with the capabilities of our sensors,”
Levine adds. “Machine learning algorithms inspired by innovations in deep
learning and computer vision can process the rich sensory data from sensors such
as the GelSight to deduce object properties. In the future, we will see these
kinds of learning methods incorporated into end-to-end trained manipulation
skills, which will make our robots more dexterous and capable, and maybe help us
understand something about our own sense of touch and motor control.”
Tracking Objects with Point Clouds from Vision and Touch
Abstract—
We present an object-tracking framework that fuses point cloud information
from an RGB-D camera with tactile information from a GelSight contact sensor.
GelSight can be treated as a source of dense local geometric information, which
we incorporate directly into a conventional point-cloud-based articulated object
tracker based on signed-distance functions. Our implementation runs at 12 Hz
using an online depth reconstruction algorithm for GelSight and a modified
second-order update for the tracking algorithm. We present data from hardware
experiments demonstrating that the addition of contact-based geometric
information significantly improves the pose accuracy during contact, and
provides robustness to occlusions of small objects by the robot’s end
effector.
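Since only the abstract is reproduced here, the following is a minimal sketch of the fused objective it describes, assuming both camera points and GelSight contact points are penalized by the object's signed-distance function and the pose update is a Gauss-Newton (second-order) step. The toy sphere SDF, translation-only pose parameterization, and weighting scheme are stand-ins, not the paper's implementation.

```python
# Minimal sketch of the fused objective in the abstract: camera and
# GelSight contact points are both penalized by the object's signed-
# distance function, and the pose increment comes from a Gauss-Newton
# (second-order) step. The sphere SDF, translation-only pose, and all
# names are stand-ins, not the paper's implementation.
import numpy as np

def sphere_sdf(p, radius=0.05):
    """Toy SDF (a sphere) standing in for the tracked object's model."""
    return np.linalg.norm(p, axis=1) - radius

def sdf_gradient(p, eps=1e-5):
    """Central-difference gradient of the SDF at each point."""
    g = np.zeros_like(p)
    for i in range(3):
        dp = np.zeros(3)
        dp[i] = eps
        g[:, i] = (sphere_sdf(p + dp) - sphere_sdf(p - dp)) / (2 * eps)
    return g

def gauss_newton_translation(points, weights, iters=10):
    """Find the translation that pulls the weighted points onto the surface."""
    t = np.zeros(3)
    for _ in range(iters):
        q = points + t
        r = sphere_sdf(q)                 # residuals: signed distances
        J = sdf_gradient(q)               # Jacobian w.r.t. translation
        H = J.T @ (weights[:, None] * J)  # Gauss-Newton normal matrix
        g = J.T @ (weights * r)
        t -= np.linalg.solve(H + 1e-9 * np.eye(3), g)
    return t

# Camera points are plentiful but noisy; the few tactile contact points
# are precise, so they get a higher weight in the fused objective.
rng = np.random.default_rng(0)
dirs = rng.normal(size=(200, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
true_shift = np.array([0.01, -0.004, 0.006])  # unknown object motion
cam = 0.05 * dirs + true_shift + 0.001 * rng.normal(size=(200, 3))
tac = 0.05 * dirs[:5] + true_shift
points = np.vstack([cam, tac])
weights = np.concatenate([np.ones(200), 10.0 * np.ones(5)])
print(gauss_newton_translation(points, weights))  # ≈ -true_shift
```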