Sensor enables robots to measure hardness through touch

By mounting a new type of sensor, known as a GelSight sensor, on a robotic gripper, two MIT teams have significantly improved the arm's tactile abilities, enabling it to accurately estimate the hardness of objects on contact.

The GelSight sensor consists of a block of transparent rubber with one face coated in metallic paint. When the painted face is pressed against an object, it conforms to the object’s shape. Because the metallic paint makes the contact surface uniformly reflective, its geometry becomes much easier for computer vision algorithms to infer. Mounted on the sensor opposite the paint-coated face of the rubber block are three coloured lights and a single camera.

“[The system] has coloured lights at different angles, and then it has this reflective material, and by looking at the colours, the computer… can figure out the 3D shape of what that thing is,” said Edward Adelson, the John and Dorothy Wilson Professor of Vision Science in the Department of Brain and Cognitive Sciences.
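The idea Adelson describes is photometric stereo: with several lights of known colour and direction, the brightness of each colour channel at a pixel reveals the surface orientation there. A minimal sketch of that per-pixel solve is below; the three light directions and the Lambertian (matte-reflectance) model are illustrative assumptions, not GelSight's actual calibration.

```python
import numpy as np

# Assumed light directions (unit vectors), one per colour channel.
# These are hypothetical; a real sensor would use calibrated values.
L = np.array([
    [ 0.50,  0.000, 0.866],   # red light
    [-0.25,  0.433, 0.866],   # green light
    [-0.25, -0.433, 0.866],   # blue light
])
L_inv = np.linalg.inv(L)

def normals_from_rgb(img):
    """Recover per-pixel surface normals from a 3-channel image,
    assuming each channel is lit by one light and the surface is
    Lambertian: I_c = albedo * dot(L_c, n)."""
    h, w, _ = img.shape
    g = img.reshape(-1, 3) @ L_inv.T          # g = albedo * n, per pixel
    albedo = np.linalg.norm(g, axis=1)
    n = g / np.clip(albedo, 1e-9, None)[:, None]
    return n.reshape(h, w, 3), albedo.reshape(h, w)

# Synthetic check: a flat patch facing the camera, n = [0, 0, 1].
true_n = np.array([0.0, 0.0, 1.0])
intensity = L @ true_n                        # brightness seen per channel
img = np.tile(intensity, (4, 4, 1))
n_est, alb = normals_from_rgb(img)
```

Integrating the recovered normal field then yields the 3D shape of whatever is pressed into the gel.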

For an autonomous robot, gauging objects’ softness or hardness is essential to deciding not only where and how hard to grasp them but also how they will behave when moved, stacked, or set down on different surfaces. Tactile sensing could also help robots distinguish objects that look similar.

A key challenge of the approach is reconciling the data produced by a vision system with the data produced by a tactile sensor. But GelSight is itself camera-based, so its output is much easier to integrate with visual data than that of other tactile sensors.

Sergey Levine, an assistant professor of electrical engineering and computer science at the University of California at Berkeley, said: “Software is finally catching up with the capabilities of our sensors. Machine learning algorithms inspired by innovations in deep learning and computer vision can process the rich sensory data from sensors such as the GelSight to deduce object properties. In the future, we will see these kinds of learning methods incorporated into end-to-end trained manipulation skills, which will make our robots more dexterous and capable.”
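The MIT teams used learned models on GelSight recordings to predict hardness; as a much-simplified, hypothetical stand-in for that pipeline, the sketch below fits a linear (ridge) model to hand-picked features of a press, such as deformation depth and contact area. The feature model and the hardness scale are invented for illustration; the real systems learn features directly from tactile video with neural networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_press(hardness):
    """Hypothetical tactile features for one press: softer objects
    (lower hardness) deform more under the same contact force."""
    depth = 2.0 - 0.3 * hardness          # deformation depth
    area = 1.5 * depth                    # contact-patch area
    return np.array([depth, area]) + rng.normal(0, 0.01, size=2)

# Synthetic training set on an arbitrary hardness scale.
hardness = rng.uniform(0.2, 5.0, size=200)
X = np.stack([make_press(h) for h in hardness])
y = hardness

# Ridge regression via the normal equations:
# w = (X^T X + lam * I)^-1 X^T y, with a bias column appended.
Xb = np.hstack([X, np.ones((len(X), 1))])
lam = 1e-3
w = np.linalg.solve(Xb.T @ Xb + lam * np.eye(Xb.shape[1]), Xb.T @ y)

pred = Xb @ w                             # predicted hardness per press
```

The end-to-end methods Levine describes replace both the hand-picked features and the linear model with a single trained network, but the mapping being learned, from touch signals to a physical property, is the same.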

Justin Cunningham




© MA Business Ltd (a Mark Allen Group Company) 2022