System Bits: June 13

Deep-learning, nimble-fingered robots; gel for dexterous robotic touch; brain-like computer.


Nimble-fingered robots enabled by deep learning
Grasping the awkwardly shaped items that humans pick up daily is not so easy for robots, which don't know where to apply a grip. To overcome this, UC Berkeley researchers have built a robot that can pick up and move unfamiliar, real-world objects with a 99% success rate.

Berkeley professor Ken Goldberg, postdoctoral researcher Jeff Mahler and the Laboratory for Automation Science and Engineering (AUTOLAB) created DexNet 2.0, the robot behind this high grasping success rate. The team believes the technology could soon be applied in industry, with the potential to revolutionize manufacturing and the supply chain.

DexNet 2.0 robot (Source: UC Berkeley)

At the heart of this high accuracy are deep learning algorithms, the researchers said. The DexNet 2.0 team built a vast database of 3D shapes, 6.7 million data points in total, that a neural network uses to learn grasps that will pick up and move objects with irregular shapes. The neural network was then connected to a 3D sensor and a robotic arm.

When an object is placed in front of DexNet 2.0, it quickly studies the shape and selects a grasp that will successfully pick up and move the object 99 percent of the time. DexNet 2.0 is also three times faster than its predecessor, the researchers noted.
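To make that pipeline concrete, here is a minimal sketch of how such a grasp planner might be wired together: sample candidate grasps from a depth image, score each with a trained quality network, and execute the highest-scoring one. Everything here, including the function names, patch size and stand-in scoring model, is an illustrative assumption rather than the actual DexNet 2.0 code.

```python
# Hypothetical grasp-planning loop in the style of DexNet 2.0.
# A trained quality network would replace the toy scoring model below.
import numpy as np

def sample_candidate_grasps(depth_image, n=100, seed=0):
    """Sample candidate grasps as (row, col, angle) tuples."""
    h, w = depth_image.shape
    rng = np.random.default_rng(seed)
    return [(int(rng.integers(0, h)), int(rng.integers(0, w)),
             float(rng.uniform(0, np.pi))) for _ in range(n)]

def extract_patch(depth_image, grasp, size=32):
    """Crop a fixed-size depth patch centered on the grasp point."""
    r, c, _ = grasp
    half = size // 2
    padded = np.pad(depth_image, half, mode="edge")
    return padded[r:r + size, c:c + size]

def plan_grasp(depth_image, quality_model):
    """Return the candidate that the quality model scores highest."""
    candidates = sample_candidate_grasps(depth_image)
    scores = [quality_model(extract_patch(depth_image, g)) for g in candidates]
    return candidates[int(np.argmax(scores))]

# Toy usage: a random depth map and a stand-in "network" that prefers
# flat regions. The real system uses a CNN trained on 6.7M data points.
depth = np.random.rand(64, 64)
toy_model = lambda patch: -patch.std()
print(plan_grasp(depth, toy_model))
```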

Robots get a sense of touch with gel technology
Eight years ago, Ted Adelson's research group at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) described a new sensor technology, called GelSight, that uses physical contact with an object to provide a remarkably detailed 3-D map of its surface. Now, two MIT research teams have given robots greater sensitivity and dexterity by mounting GelSight sensors on the grippers of robotic arms.

The researchers presented their work in two papers at the International Conference on Robotics and Automation last week.

A GelSight sensor attached to a robot’s gripper enables the robot to determine precisely where it has grasped a small screwdriver, removing it from and inserting it back into a slot, even when the gripper screens the screwdriver from the robot’s camera. (Source: MIT)

In one paper, Adelson’s group uses the data from the GelSight sensor to enable a robot to judge the hardness of surfaces it touches — a crucial ability if household robots are to handle everyday objects.

In the other, Russ Tedrake’s Robot Locomotion Group at CSAIL uses GelSight sensors to enable a robot to manipulate smaller objects than was previously possible.

According to the researchers on the first team, the GelSight sensor is, in some ways, a low-tech solution to a difficult problem. It consists of a block of transparent rubber, the "gel" of its name, one face of which is coated with metallic paint. When the paint-coated face is pressed against an object, it conforms to the object's shape. The metallic paint makes the object's surface reflective, so its geometry becomes much easier for computer vision algorithms to infer. Mounted on the sensor, opposite the paint-coated face of the rubber block, are three colored lights and a single camera.

"The system has colored lights at different angles, and then it has this reflective material, and by looking at the colors, the computer can figure out the 3-D shape of what that thing is," explained Adelson, the John and Dorothy Wilson Professor of Vision Science in the Department of Brain and Cognitive Sciences.
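What Adelson describes maps closely onto classic photometric stereo. As a hedged illustration only, the sketch below assumes three known light directions and simple Lambertian shading (GelSight's actual calibration will differ) and solves a small per-pixel linear system for the surface normal:

```python
# Photometric-stereo sketch: per-pixel intensities under three known
# lights determine the surface normal. Light directions are assumed.
import numpy as np

# Assumed unit light directions for the three colored LEDs.
L = np.array([[0.5, 0.0, 0.866],
              [-0.25, 0.433, 0.866],
              [-0.25, -0.433, 0.866]])

def normals_from_channels(red, green, blue):
    """Recover unit surface normals from three single-light images."""
    I = np.stack([red, green, blue], axis=-1)   # (H, W, 3) intensities
    n = I @ np.linalg.inv(L).T                  # Lambertian model: I = L @ n
    norm = np.linalg.norm(n, axis=-1, keepdims=True)
    return n / np.clip(norm, 1e-8, None)

# Toy usage: a synthetic bump rendered under the three lights.
yy, xx = np.mgrid[-1:1:64j, -1:1:64j]
height = np.exp(-(xx**2 + yy**2) * 8)           # Gaussian bump
gy, gx = np.gradient(height)
true_n = np.dstack([-gx, -gy, np.ones_like(height)])
true_n /= np.linalg.norm(true_n, axis=-1, keepdims=True)
r, g, b = (np.clip(true_n @ l, 0, None) for l in L)   # shade per light
print(normals_from_channels(r, g, b).shape)     # (64, 64, 3)
```

Integrating the recovered normals then yields the detailed height map of the touched surface.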

In both sets of experiments, a GelSight sensor was mounted on one side of a robotic gripper, a device somewhat like the head of a pincer, but with flat gripping surfaces rather than pointed tips.

Further, for an autonomous robot, gauging objects' softness or hardness is essential to deciding not only where and how hard to grasp them but also how they will behave when moved, stacked, or laid on different surfaces. Tactile sensing could also aid robots in distinguishing objects that look similar.

Not surprisingly, just as in the work at UC Berkeley mentioned above, the MIT research was enabled by deep learning technology. In this case, the contact data was fed to a neural network, which automatically looked for correlations between changes in contact patterns and hardness measurements. The resulting system takes frames of video as inputs and produces hardness scores with very high accuracy.
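As a rough sketch of that frames-to-scores idea, the toy code below summarizes how the contact pattern evolves over a short clip and maps the trajectory to a hardness value. The hand-rolled features and linear readout are illustrative stand-ins for the team's trained network, not its actual architecture.

```python
# Toy frames-to-hardness pipeline: track how contact area and
# deformation grow over a press, then apply a learned readout.
import numpy as np

def contact_features(frames):
    """Per-frame contact statistics from a (T, H, W) video clip."""
    deltas = frames - frames[0]                       # change since first touch
    area = (np.abs(deltas) > 0.05).mean(axis=(1, 2))  # contact area growth
    depth = np.abs(deltas).max(axis=(1, 2))           # peak deformation
    return np.stack([area, depth], axis=1)            # (T, 2) trajectory

def hardness_score(frames, weights, bias):
    """Linear readout over the trajectory (stand-in for the trained CNN)."""
    x = contact_features(frames).ravel()
    return float(x @ weights + bias)

# Toy usage: a fake 10-frame press; weights would come from training.
frames = np.cumsum(np.random.rand(10, 32, 32) * 0.02, axis=0)
w = np.random.rand(20) * 0.1
print(hardness_score(frames, w, bias=0.0))
```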

The paper from the Robot Locomotion Group, meanwhile, was born of the group's experience with the Defense Advanced Research Projects Agency's Robotics Challenge (DRC), in which academic and industry teams competed to develop control systems that would guide a humanoid robot through a series of tasks related to a hypothetical emergency.

Interestingly, the researchers added that software is finally catching up with the capabilities of sensors, and that machine learning algorithms inspired by innovations in deep learning and computer vision can process the rich sensory data from sensors such as GelSight to deduce object properties. In the future, they said, we will see these kinds of learning methods incorporated into end-to-end trained manipulation skills, which will make robots more dexterous and capable, and may even help us understand something about our own sense of touch and motor control.

Computing system takes cues from human brain
According to a team of researchers at the Georgia Institute of Technology and the University of Notre Dame, some computing problems are so challenging that even the most advanced computers need weeks, not seconds, to process them. To address this, the team has created a new computing system that aims to tackle one of computing's hardest problems in a fraction of the time.

Arijit Raychowdhury, an associate professor in Georgia Tech's School of Electrical and Computer Engineering, wanted to find a way to solve such problems without using the binary representations that have been the backbone of computing for decades.

Arijit Raychowdhury, an associate professor in Georgia Tech’s School of Electrical and Computer Engineering (Source: Georgia Tech)

The system employs a network of electronic oscillators to solve graph coloring tasks – a type of problem that tends to choke modern computers.

“Applications today are demanding faster and faster computers to help solve challenges like resource allocation, machine learning and protein structure analysis – problems which at their core are closely related to graph coloring,” Raychowdhury said. “But for the most part, we’ve reached the limitations of modern digital computer processors. Some of these problems that are so computationally difficult to perform, it could take a computer several weeks to solve.”

A graph coloring problem starts with a graph – a visual representation of a set of objects connected in some way. To solve the problem, each object must be assigned a color, but two objects directly connected cannot share the same color. Typically, the goal is to color all objects in the graph using the smallest number of different colors.
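To pin the problem down, here is a minimal, self-contained example: a validity check plus a simple greedy coloring heuristic. Greedy coloring is fast but not guaranteed to use the fewest colors, which is exactly why hard instances overwhelm conventional machines.

```python
# Graph coloring basics: validity check and a greedy heuristic.
def is_valid_coloring(edges, colors):
    """No two directly connected nodes may share a color."""
    return all(colors[u] != colors[v] for u, v in edges)

def greedy_coloring(num_nodes, edges):
    """Assign each node the smallest color unused by its neighbors."""
    neighbors = {v: set() for v in range(num_nodes)}
    for u, v in edges:
        neighbors[u].add(v)
        neighbors[v].add(u)
    colors = {}
    for v in range(num_nodes):
        taken = {colors[u] for u in neighbors[v] if u in colors}
        colors[v] = next(c for c in range(num_nodes) if c not in taken)
    return colors

# A 5-cycle needs 3 colors; greedy finds a valid assignment.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
colors = greedy_coloring(5, edges)
print(colors, is_valid_coloring(edges, colors))
```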

In designing a system different from traditional transistor-based computing, the researchers took their cues from the human brain, where processing is handled collectively, as in a neural oscillatory network, rather than by a central processor.

“It’s the notion that there is tremendous power in collective computing,” said Suman Datta, Chang Family professor in Notre Dame’s College of Engineering and one of the study’s co-authors. “In natural forms of computing, dynamical systems with complex interdependencies evolve rapidly and solve complex sets of equations in a massively parallel fashion.”

The electronic oscillators, fabricated from vanadium dioxide, were found to have a natural ability that could be harnessed for graph coloring problems. When a group of oscillators was electrically connected via capacitive links, they automatically synchronized to the same frequency, oscillating at the same rate. Meanwhile, oscillators directly connected to one another would operate at different phases within that frequency, and oscillators in the same group but not directly connected would sync in both frequency and phase.

If each phase represents a different color, the system naturally mimics the solution to a graph coloring problem: directly connected oscillators settle into different phases (different colors), while unconnected oscillators are free to share one.
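A toy simulation makes the analogy tangible. The sketch below uses generic Kuramoto-style phase dynamics with repulsive coupling as an assumed stand-in for the vanadium dioxide devices' physics: oscillators that share an edge push apart in phase, and binning the settled phases reads out a coloring.

```python
# Toy phase-coloring demo with repulsively coupled phase oscillators.
# The dynamics are generic Kuramoto-style, not the actual device model.
import numpy as np

def settle_phases(edges, n, steps=4000, dt=0.01, k=1.0, seed=1):
    """Integrate repulsively coupled phase oscillators on a graph."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, n)
    A = np.zeros((n, n))
    for u, v in edges:
        A[u, v] = A[v, u] = 1.0
    for _ in range(steps):
        # Repulsive coupling: connected oscillators push apart in phase.
        coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
        theta = (theta - dt * k * coupling) % (2 * np.pi)
    return theta

def phases_to_colors(theta, num_colors):
    """Bin settled phases into discrete colors."""
    return (theta / (2 * np.pi) * num_colors).astype(int) % num_colors

# Toy usage on a triangle, which needs 3 colors.
edges = [(0, 1), (1, 2), (2, 0)]
theta = settle_phases(edges, n=3)
print(phases_to_colors(theta, num_colors=3))
```

On a triangle, the phases typically settle about 120 degrees apart, so the three mutually connected nodes land in three distinct color bins.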

The researchers created a small network of oscillators and used it to solve graph coloring problems with the same number of objects (also referred to as nodes or vertices) as oscillators. Even more significant, the new system theoretically proved that a connection exists between graph coloring and the natural dynamics of coupled oscillatory systems.

This is a critical step because the team could prove why this happens and that it covers all possible instances of graphs. Further, it opens up a new way of performing computation and constructing novel computational models. The approach is novel in that it is physics-based, and it presents tantalizing opportunities for building other customized analog systems for solving hard problems efficiently. That could be valuable to a range of companies looking for computers to help optimize their resources, such as a power utility wanting to maximize efficiency and usage of a vast electrical grid under certain constraints.


