Deep Learning Robust Grasps with Synthetic Point Clouds & Analytic Grasp Metrics (UC Berkeley)

Robot Grasping Taken to a New Level


Source: The research was the work of Jeffrey Mahler, Jacky Liang, Sherdil Niyaz, Michael Laskey, Richard Doan, Xinyu Liu, Juan Aparicio Ojea, and Ken Goldberg with support from the AUTOLAB team at UC Berkeley.

Nimble-fingered robots enabled by deep learning
Grabbing the awkwardly shaped items that humans pick up daily is not so easy for robots, which don't know where to apply their grip. To overcome this, UC Berkeley researchers have built a robot that can pick up and move unfamiliar, real-world objects with a 99 percent success rate.

Berkeley professor Ken Goldberg, postdoctoral researcher Jeff Mahler, and the Laboratory for Automation Science and Engineering (AUTOLAB) created the DexNet 2.0 robot, which achieves this high grasping success rate. The team believes the technology could soon be applied in industry, with the potential to revolutionize manufacturing and the supply chain.

Deep learning algorithms are at the heart of this dexterity, the researchers said. The DexNet 2.0 team built a vast database of 3D shapes (6.7 million data points in total) that a neural network uses to learn grasps that will pick up and move objects with irregular shapes. The neural network was then connected to a 3D sensor and a robotic arm.
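The structure of such a training database can be sketched as follows. This is a minimal illustration, assuming each example pairs a depth-image crop around a candidate grasp with a binary robustness label derived from an analytic grasp metric; the names, shapes, and random data here are assumptions for the demo, not the paper's exact format.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_example(im_size=32):
    """Build one hypothetical (depth crop, grasp depth, label) training example."""
    depth_crop = rng.uniform(0.5, 0.8, size=(im_size, im_size))  # depths in meters
    grasp_depth = float(depth_crop[im_size // 2, im_size // 2])  # depth at grasp center
    robust = int(rng.random() < 0.5)  # 1 = analytic metric judged the grasp robust
    return depth_crop, grasp_depth, robust

# The real dataset contains millions of such tuples; a network learns to
# predict the robustness label from the image and grasp depth.
dataset = [make_example() for _ in range(100)]
images = np.stack([ex[0] for ex in dataset])
labels = np.array([ex[2] for ex in dataset])
print(images.shape, labels.shape)
```

The key idea is that labels come from analytic grasp metrics computed on synthetic data, so no physical trial-and-error is needed to build the training set.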

When an object is placed in front of DexNet 2.0, it quickly studies the shape and selects a grasp that will successfully pick up and move the object 99 percent of the time. DexNet 2.0 is also three times faster than its previous version, the researchers noted.
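The runtime loop described above can be sketched as sampling candidate grasps from the sensed point cloud and executing the one the learned model scores highest. The `grasp_quality` function below is a hypothetical stand-in for the trained network, and the image-centered scoring rule is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def grasp_quality(center, angle):
    """Toy stand-in for the learned quality network: score in (0, 1].
    (Assumption for the demo: prefer grasps near the image center.)"""
    return float(np.exp(-np.linalg.norm(center - 16.0) / 16.0))

def plan_grasp(n_candidates=64, im_size=32):
    """Sample candidate grasps (center pixel + gripper angle) and rank them."""
    centers = rng.uniform(0, im_size, size=(n_candidates, 2))
    angles = rng.uniform(0, np.pi, size=n_candidates)
    scores = np.array([grasp_quality(c, a) for c, a in zip(centers, angles)])
    best = int(np.argmax(scores))
    return centers[best], angles[best], float(scores[best])

center, angle, score = plan_grasp()
print(center, angle, score)
```

Because scoring each candidate is a single forward pass through the network, ranking dozens of candidates is fast, which is what allows the system to select a grasp quickly.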

For the technical paper, click here.
For a summary, click here.
