What’s For Dinner?

Next frontier for robots—the kitchen.


Robots, as currently implemented, don’t do well in uncontrolled environments. In factories and warehouses, they are fenced off by yellow safety tape, doing highly repetitive and predictable tasks. When deployed to monitor parks and malls, they are easily thwarted by malicious humans and even unexpected landscape features.

Yet robots able to assist elderly and disabled people would be genuinely useful as the world’s population ages. In theory, a robot could help with cooking, bathing, and other activities of daily living at a lower cost, and less intrusively, than a human nurse. This next generation of robots demands a very different level of interaction between robots and humans, and getting there will require significant advances in almost every robotic subsystem.

According to Dieter Fox, senior director of robotics research at Nvidia AI, research in the field is often narrowly focused on the specific subsystems of interest to a given group: vision and image recognition, say, or manipulation and motor control. The next generation of robots, in contrast, will require integration across all of these systems. A realistic human-assistance robot will need perception systems that work with manipulation and control systems as seamlessly as human touch and vision do. It might need to be strong and stable enough to help an elderly person stand up, yet delicate enough to handle food, medicines, or utensils.

Nvidia AI’s new Seattle Robotics Research Lab aims to combine Nvidia’s expertise in physics-based modeling and photorealistic rendering with faculty expertise from the University of Washington and other leading research groups. At 13,000 square feet, it can host 50 roboticists, including Nvidia staff, visiting faculty, and interns. It focuses on basic research, and Fox expects that all results will be published and shared with the robotics community.

Among other facilities, the lab includes a standard, human-scale test kitchen. As Fox put it, “Kitchens are hard. If we can do a kitchen, we can do anything!” Using the kitchen as a testbed allows researchers to explore a wide range of increasingly complex scenarios, from identifying and moving items on shelves or in drawers to assisting with meal preparation. As a first step, the team demonstrated a kitchen manipulation system able to recognize, track, and manipulate objects. At this stage, the system depends on a pre-existing three-dimensional model of the space. An important early task will be developing robots that can learn about their environment without complicated training.


Video: Nvidia AI’s kitchen manipulation system. Courtesy of Nvidia AI.
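To make the recognize-track-manipulate flow concrete, here is a minimal, hypothetical Python sketch of such a pipeline. Every name and representation in it is an illustrative assumption, not Nvidia’s actual code or API; it only mirrors the description above: recognition restricted to objects already in a pre-existing scene model, frame-to-frame tracking of their poses, and a simple reach plan toward a tracked target.

```python
# A toy sketch of a recognize -> track -> manipulate pipeline of the kind
# described above. All names, the pose representation, and the smoothing
# tracker are illustrative assumptions, not Nvidia's system.

from dataclasses import dataclass
from typing import Dict, List, Tuple

Pose = Tuple[float, float, float]  # toy 3-D position; a real system tracks full 6-DoF poses


@dataclass
class TrackedObject:
    label: str
    pose: Pose


class KitchenPipeline:
    """Ties recognition, tracking, and reach planning together."""

    def __init__(self, scene_model: Dict[str, Pose], smoothing: float = 0.8):
        # The demonstrated system depends on a pre-existing 3-D model of
        # the space; here that model is just known labels and rest poses.
        self.scene_model = scene_model
        self.smoothing = smoothing
        self.tracked: Dict[str, TrackedObject] = {}

    def recognize(self, detections: Dict[str, Pose]) -> None:
        """Accept per-frame detections of objects the scene model knows."""
        for label, pose in detections.items():
            if label not in self.scene_model:
                continue  # ignore objects the model has never seen
            self.track(label, pose)

    def track(self, label: str, pose: Pose) -> None:
        """Exponentially smooth each object's pose estimate frame to frame."""
        prev = self.tracked.get(label)
        if prev is None:
            self.tracked[label] = TrackedObject(label, pose)
            return
        a = self.smoothing
        smoothed = tuple(a * p + (1 - a) * q for p, q in zip(prev.pose, pose))
        self.tracked[label] = TrackedObject(label, smoothed)

    def plan_reach(self, label: str, steps: int = 5) -> List[Pose]:
        """Return straight-line waypoints from the origin to the target.

        A real planner would avoid collisions with the modeled scene."""
        target = self.tracked[label].pose
        return [tuple(t * i / steps for t in target) for i in range(1, steps + 1)]


# Example: track a mug over two frames, then plan a reach toward it.
pipeline = KitchenPipeline(scene_model={"mug": (0.5, 0.2, 0.9)})
pipeline.recognize({"mug": (0.52, 0.21, 0.91)})
pipeline.recognize({"mug": (0.50, 0.20, 0.90)})
print(pipeline.plan_reach("mug"))
```

The restriction in recognize() reflects the demo’s stated limitation: the system only handles objects covered by its pre-existing model, which is exactly the dependence the researchers hope to remove.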

Because most humans are familiar with food preparation tasks, using a kitchen as a testbed also provides a concrete illustration of just how far the next generation of robots has to go. Even “simple” human tasks — cooking an egg, filling or washing a glass — require fine control of an enormous number of robotic behaviors. On the way to a capable robotic assistant, Fox said, there are many opportunities to improve robotic navigation, training, and safety around humans.



2 comments

Gil Russell says:

Katherine, I’ve become cautious about the press/analyst corps’ preoccupation with describing new market domains, especially in the AI application space. Many of these descriptions are riddled with untested assumptions that have never been properly vetted — that is, no one counter-tested the assumptions that bend the data. This is not a good thing, and it leads to false bravado in the pursuit of new markets. Working with human beings requires a very carefully controlled environment, which, from what I can tell, doesn’t seem to worry those rushing into what they consider a new and unexplored domain.

Katherine Derbyshire says:

I tend to agree with you. Based on my conversation with Dieter Fox, I would say that kind of testing is one of the goals of the Nvidia lab.
