Future-imagining robots; depth sensors; virus-destroying nanoparticles.
Robots imagine their future to learn
By having robots play with objects and then imagine how to get a task done, UC Berkeley researchers have developed a robotic learning technology that enables robots to figure out how to manipulate objects they have never encountered before.
The team expects this technology could help self-driving cars anticipate future events on the road and produce more intelligent robotic assistants in homes, but the initial prototype focuses on learning simple manual skills entirely from autonomous play.
The technology, called visual foresight, gives the robots the ability to predict what their cameras will see if they perform a particular sequence of movements. These robotic imaginations are still relatively simple for now – predictions reach only several seconds into the future – but they are enough for the robot to figure out how to move objects around on a table without disturbing obstacles.
Interestingly, the robot can learn to perform these tasks without any help from humans or prior knowledge about physics, its environment or what the objects are, the researchers pointed out. That's because the visual imagination is learned entirely from scratch through unattended, unsupervised exploration in which the robot plays with objects on a table. After this play phase, the robot builds a predictive model of the world and can use that model to manipulate new objects it has not seen before.
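To make the idea of "imagining the future" concrete, here is a toy, self-contained sketch of how a predictive model can be turned into a plan. It is not the Berkeley system: the learned video-prediction network is reduced to an invented `imagine` function that guesses where a designated object pixel ends up after a sequence of pushes, and the planner simply samples random action sequences and keeps the one whose imagined outcome lands closest to the goal.

```python
import numpy as np

def imagine(object_pixel, actions):
    """Toy stand-in for a learned prediction model: each action nudges the
    tracked object pixel by the commanded (dx, dy)."""
    return np.asarray(object_pixel, dtype=float) + actions.sum(axis=0)

def plan(object_pixel, goal_pixel, horizon=5, candidates=256, seed=0):
    """Sample candidate action sequences, score each by the imagined distance
    of the object pixel from the goal, and return the best sequence."""
    rng = np.random.default_rng(seed)
    best_actions, best_cost = None, np.inf
    for _ in range(candidates):
        actions = rng.uniform(-1.0, 1.0, size=(horizon, 2))  # candidate pushes
        cost = np.linalg.norm(imagine(object_pixel, actions) - np.asarray(goal_pixel))
        if cost < best_cost:
            best_actions, best_cost = actions, cost
    return best_actions  # in practice: execute the first action, observe, re-plan

if __name__ == "__main__":
    seq = plan(object_pixel=(10.0, 20.0), goal_pixel=(14.0, 16.0))
    print("first planned push:", seq[0])
```

In the real system the "imagination" is a deep video-prediction network and the planner is more sophisticated, but the loop has the same shape: propose actions, predict their visual consequences, pick the sequence whose predicted future best matches the goal.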
The research team demonstrated the visual foresight technology at the Neural Information Processing Systems conference in Long Beach, California last month.
At the core of this system is a deep learning technology based on convolutional recurrent video prediction, or dynamic neural advection (DNA). DNA-based models predict how pixels in an image will move from one frame to the next based on the robot’s actions. Recent improvements to this class of models, as well as greatly improved planning capabilities, have enabled robotic control based on video prediction to perform increasingly complex tasks, such as sliding toys around obstacles and repositioning multiple objects, the research team reported.
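As a rough illustration of the pixel-motion idea, the snippet below shows the core operation in miniature: each pixel of the predicted next frame is a weighted average of a small neighborhood of the previous frame, with the per-pixel weights standing in for the motion kernels that a trained DNA-style network would predict from the current image and the robot's action. The function names and array shapes are invented for the example.

```python
import numpy as np

def apply_pixel_motion(prev_frame, kernels, k=3):
    """prev_frame: (H, W); kernels: (H, W, k, k), nonnegative and summing to 1
    per pixel. Each output pixel is a weighted average of a k x k neighborhood
    of the previous frame -- i.e., 'where did this pixel come from?'."""
    H, W = prev_frame.shape
    pad = k // 2
    padded = np.pad(prev_frame, pad, mode="edge")
    next_frame = np.zeros_like(prev_frame)
    for i in range(H):
        for j in range(W):
            patch = padded[i:i + k, j:j + k]
            next_frame[i, j] = np.sum(kernels[i, j] * patch)
    return next_frame

# Example: uniform kernels just blur the frame; a trained model would instead
# output kernels that shift pixels in the direction an object is being pushed.
H, W, k = 8, 8, 3
frame = np.random.default_rng(0).random((H, W))
uniform = np.full((H, W, k, k), 1.0 / (k * k))
print(apply_pixel_motion(frame, uniform).shape)  # (8, 8)
```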
The Berkeley team said it is continuing to research control through video prediction, focusing on further improving video prediction and prediction-based control, and on developing more sophisticated methods by which robots can collect more focused video data for complex tasks such as picking and placing objects, manipulating soft and deformable objects such as cloth or rope, and assembly.
Depth sensors for self-driving cars
For the past 10 years, the Camera Culture group at MIT’s Media Lab has been developing innovative imaging systems — from a camera that can see around corners to one that can read text in closed books — by using “time of flight,” an approach that gauges distance by measuring the time it takes light projected into a scene to bounce back to a sensor. Now, members of the Camera Culture group have devised a new approach to time-of-flight imaging that increases its depth resolution 1,000-fold, which is the type of resolution that could make self-driving cars practical, they said.
They expect this approach could also enable accurate distance measurements through fog, which has proven to be a major obstacle to the development of self-driving cars.
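The basic time-of-flight arithmetic is simple: distance is the speed of light multiplied by the round-trip time, divided by two. The snippet below is just that back-of-envelope relation, not a model of the MIT system.

```python
C = 299_792_458.0  # speed of light, m/s

def distance_from_round_trip(t_seconds):
    """Distance to a target given the measured round-trip time of a light pulse."""
    return C * t_seconds / 2.0

# A target about 2 m away returns light after roughly 13.3 nanoseconds.
print(distance_from_round_trip(13.34e-9))  # ≈ 2.0 m
```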
At a range of 2 meters, existing time-of-flight systems have a depth resolution of about a centimeter. That's good enough for the assisted-parking and collision-detection systems on today's cars, but as the range increases, the resolution goes down exponentially. In a long-range scenario, the car needs to detect objects much further away so that it can decide and react in time; a mistake could lead to loss of life.
At a range of 2 meters, the MIT researchers' system, by contrast, has a depth resolution of 3 micrometers. The team also conducted tests in which they sent a light signal through 500 meters of optical fiber with regularly spaced filters along its length, to simulate the power falloff incurred over longer distances, before feeding it to the system. Those tests suggest that at a range of 500 meters, the MIT system should still achieve a depth resolution of only a centimeter.
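To get a feel for what these resolutions mean in timing terms, the snippet below converts a depth resolution into the equivalent round-trip timing precision (Δt = 2·Δd/c). These are illustrative numbers only; they say nothing about how the MIT system actually achieves its resolution.

```python
C = 299_792_458.0  # speed of light, m/s

def timing_for_depth_resolution(delta_d_m):
    """Round-trip timing precision equivalent to a given depth resolution."""
    return 2.0 * delta_d_m / C

print(timing_for_depth_resolution(0.01))  # 1 cm  -> ~6.7e-11 s (≈ 67 picoseconds)
print(timing_for_depth_resolution(3e-6))  # 3 µm  -> ~2.0e-14 s (≈ 20 femtoseconds)
```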
Gold nanoparticles destroy viruses
HIV, dengue, papillomavirus, herpes and Ebola – these are just some of the many viruses that kill millions of people every year, mostly children in developing countries. While drugs can be used against some viruses, there is currently no broad-spectrum treatment that is effective against several at the same time, in the way that broad-spectrum antibiotics fight a range of bacteria. But researchers at EPFL's Supramolecular Nano-Materials and Interfaces Laboratory – Constellium Chair (SUNMIL) have created gold nanoparticles for just this purpose, and their findings could lead to a broad-spectrum treatment. Once injected into the body, these nanoparticles imitate human cells and "trick" the viruses. When the viruses bind to them – in order to infect them – the nanoparticles use pressure produced locally by this link-up to "break" the viruses, rendering them innocuous.
Until now, research into broad-spectrum virus treatments has only produced approaches that are toxic to humans or that work effectively in vitro – i.e., in the lab – but not in vivo. The EPFL researchers found a way around these problems by creating gold nanoparticles that are harmless to humans and that imitate human cell receptors – specifically the ones viruses seek out in order to attach themselves to cells.
Viruses infect human bodies by binding to cells in order to replicate inside them. The nanoparticles exploit this: they trick the viruses into thinking they are invading a human cell. When a virus binds to a nanoparticle, the resulting pressure deforms the virus and opens it up, rendering it harmless. Unlike other treatments, this pressure-based mechanism is non-toxic.