System Bits: May 28

Crocheted robots; humanizing AI agents; training autonomous control systems.

Home robots get cozier
Cornell University’s Guy Hoffman was perplexed when he first saw social robots in stores.

“I noticed a lot of them had a very similar kind of feature – white and plasticky, designed like consumer electronic devices,” said Hoffman, assistant professor and the Mills Family Faculty Fellow in the Sibley School of Mechanical and Aerospace Engineering. “Especially when these social robots were marketed to be part of our families, I thought it would be strange to all have identical family members.”

There should be a way to customize robots, he thought, using materials other than plastic to complement the computing hardware. Hoffman learned how to crochet as one avenue to making robots cozier and cuddlier than mass-market home robots.

Then he watched a friend crochet part of the robot far faster than he could. “That made me think people who are not engineers could also participate in making a robot,” he said.

These ideas led Hoffman to create Blossom – a simple, expressive, inexpensive robot platform that could be made from a kit and creatively outfitted with handcrafted materials.

“We wanted to empower people to build their own robot, but without sacrificing how expressive it is,” said Hoffman, senior author of “Blossom: A Handcrafted Open-Source Robot,” published in March in the Association for Computing Machinery Transactions on Human-Robot Interaction. “Also, it’s nice to have every robot be a little bit different. If you knit your robot, every family would have their own robot that would be unique to them.”

Blossom’s mechanical design – developed with Michael Suguitan, a doctoral student in Hoffman’s lab and first author of the paper – is centered on a floating “head” platform using strings and cables for movement, making its gestures more flexible and organic than those of a robot composed of rigid parts.

Blossom can be controlled by moving a smartphone using an open-source puppeteering application; the robot’s movements resemble bouncing, stretching, and dancing. The cost of the parts needed to assemble a Blossom is less than $250, and researchers are currently working on a Blossom kit made entirely of cardboard, which would be even cheaper.
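
The puppeteering idea is simple: the robot mirrors the motion of the phone. As a rough illustration only (the motor layout, sensor stream, and gain below are assumptions made for this sketch, not the app’s actual API), mapping phone orientation to cable-motor targets might look something like this:

```python
# Hypothetical sketch: map a phone's orientation to cable-motor targets
# for a Blossom-style floating head. Names and gains are made up.

def orientation_to_motor_targets(roll, pitch, yaw, gain=1.0):
    """Convert phone orientation (radians) into target positions for
    four head cables plus a rotating base."""
    # Tilting the phone forward/back pulls the front/back cables;
    # tilting left/right pulls the side cables; yaw turns the base.
    return {
        "front": gain * max(0.0, -pitch),
        "back":  gain * max(0.0, pitch),
        "left":  gain * max(0.0, -roll),
        "right": gain * max(0.0, roll),
        "base":  gain * yaw,
    }

# Example: phone tilted slightly forward and to the left
print(orientation_to_motor_targets(roll=-0.2, pitch=-0.3, yaw=0.1))
```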

Partly because of its simplicity, Blossom has a variety of potential uses, Hoffman said. Human-robot interaction researchers who aren’t engineers could build their own from a kit to use in studies. The robot could also help teach children about robotics, since it is easy to interact with and offers the hands-on experience of helping to build it.

“It’s meant to be a flexible kit that is also very low cost. Especially if we can make it out of cardboard, you could make it very inexpensively,” he said. “Because of computation becoming so powerful, it could be a really open-ended way for people to do whatever they want with robotics.”

The work was partly supported by a grant from Google Creative Robotics.

AI agents can become more human-like
Researchers at the University of Waterloo set out to make artificially intelligent agents display human-like emotions while working with human beings.

“The capability of showing emotions is important for AI agents, especially if we want users to trust the agents and co-operate with them,” said Moojan Ghafurian, lead author of the study and a postdoctoral fellow in Waterloo’s David R. Cheriton School of Computer Science. “Improving the humanness of AI agents could enhance society’s perception of assistive technologies, which, going forward, will improve people’s lives.”

In undertaking the study, Ghafurian and her co-authors, Associate Professor Jesse Hoey and research assistant Neil Budnarain, all of Waterloo’s Cheriton School of Computer Science, used a classic game called “The Prisoner’s Dilemma.”

The original version of the game sees two prisoners isolated from one another and questioned by police about a crime they committed together. If one of them snitches and the other doesn’t, the non-betrayer gets three years and the snitch walks free; the same holds with the roles reversed. If both snitch, they both get two years. If neither one snitches, they each get only one year on a lesser charge.
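
Written out as a payoff table (years in prison for each player, so lower is better), the version described above can be captured in a quick Python sketch:

```python
# Payoff matrix for the Prisoner's Dilemma as described above.
# Each entry is (years for prisoner A, years for prisoner B).
PAYOFFS = {
    ("snitch", "silent"): (0, 3),  # A walks free, B gets three years
    ("silent", "snitch"): (3, 0),  # the reverse case
    ("snitch", "snitch"): (2, 2),  # both betray: two years each
    ("silent", "silent"): (1, 1),  # both stay quiet: one year, lesser charge
}

print(PAYOFFS[("snitch", "silent")])  # (0, 3)
```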

Waterloo’s study replaced one of the human ‘prisoners’ with an AI virtual human and allowed the players to interpret each other’s emotions. Instead of prison sentences, the researchers used gold, so the goal was to get the highest score possible rather than the lowest. The virtual human was developed by the Interactive Assistance Lab at the University of Colorado, Boulder, in the context of a program to help people with cognitive disabilities, particularly relating to the perception of emotional signals.


Image credit: University of Waterloo

The researchers used three different virtual agents that had the exact same strategy but reflected different emotions. One of them didn’t show any emotion, one of them showed appropriate emotions, and the other generated random emotions.

The 117 participants were then randomly paired with an agent and asked to rate it on how humanlike they perceived it to be. The researchers also observed how the participants interacted with the agents, for example, how many times they cooperated.

On average, people cooperated 20 out of 25 times (80%) with the agent that showed human-like emotions. With the agent showing random emotions, they cooperated 16 times out of 25 (64%), and with the agent showing no emotion, 17 times out of 25 (68%).

“Based on our findings it’s better to show no emotion rather than random emotions, as the latter would make that agent look less rational and immature,” said Ghafurian. “But showing proper emotions can significantly improve the perception of humanness and how much people enjoy interacting with the technology.”

The researchers want to eventually design assistive technology for people with dementia that they will feel comfortable using. This research is an important step in that direction.

Robots as acceptable companions
What makes a machine more than just a piece of hardware? Do we form relationships with our technology? How should we be teaching robots to act? And what are they teaching us?

These are just some of the questions that Kerstin Dautenhahn is exploring as the Canada 150 Research Chair in Intelligent Robotics. Dautenhahn joined the University of Waterloo’s Faculty of Engineering in 2018 to establish the new Social and Intelligent Robotics Research Laboratory.

But advancing the state of the art of social and intelligent robots is only part of her research agenda.

“Just because you can build something, doesn’t mean you should,” says Dautenhahn. “We need to understand people’s intentions and expectations towards robots and investigate possible consequences of the robots we build.”

Dautenhahn is one of the founders of the field of social robotics. Her research centers on advancing our understanding of fundamental principles of human-robot interaction and how robots contribute to real-world applications.

“Robots can make a useful contribution to society and to our well-being,” Dautenhahn explains. “We need to broaden our imagination on what a robot can look like and what tasks they can perform so that it complements the skills that people are good at and enjoy doing.”

Can a robot be an acceptable companion to a human?

This question gets to the heart of Dautenhahn’s research. She uses several companion robots for research in her lab and in Waterloo’s RoboHub, a unique facility that encourages multidisciplinary research to explore the potential of robotic technologies.

A companion robot is an autonomous machine capable of carrying out a task that is useful to a human and is performed in a socially acceptable way. This means that the robot is able to interact with humans in a conventional and helpful manner.

“I am particularly interested in applications of companion robots in therapy and education for children, and supporting people with dementia living in long-term care facilities and elderly persons living at home independently,” says Dautenhahn.

Her extensive accomplishments include breaking new ground with robot-assisted therapy for children with autism, who often find communication and social interaction overwhelming and unpredictable.

“Our aim is to make the child feel comfortable with the robot,” explains Dautenhahn. “The experience for many autistic children is to receive negative feedback. The robot makes predictable and positive responses which they can copy and learn from.”

Play sessions with robots can have long-term benefits for a child because they learn social cues and can practice behavior with the robot.

At Waterloo, Dautenhahn is developing a variety of companion robots for underserved children, such as children with autism. Future research will also explore how companion robots can help people with dementia, building on her previous work supporting independent living for older adults.

Autonomous driving using simple maps and image data
Massachusetts Institute of Technology researchers have developed an autonomous control system that uses simple maps and visual data to enable self-driving cars to navigate complex environments, much the way human drivers can find their way through places they have never been before.

Human drivers are exceptionally good at navigating roads they haven’t driven on before, using observation and simple tools. We simply match what we see around us to what we see on our GPS devices to determine where we are and where we need to go. Driverless cars, however, struggle with this basic reasoning. In every new area, the cars must first map and analyze all the new roads, which is very time-consuming. The systems also rely on complex maps — usually generated by 3-D scans — which are computationally intensive to generate and process on the fly.

In a paper presented at last week’s International Conference on Robotics and Automation, MIT researchers described an autonomous control system that “learns” the steering patterns of human drivers as they navigate roads in a small area, using only data from video camera feeds and a simple GPS-like map. Then, the trained system can control a driverless car along a planned route in a brand-new area, by imitating the human driver.
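
The paper’s actual architecture is not reproduced here, but the general setup, end-to-end imitation learning that maps a camera frame plus a coarse map patch to a steering command, can be sketched in a few lines of PyTorch. Everything below (layer sizes, input shapes) is a made-up minimal example, not MIT’s model:

```python
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    """Toy end-to-end model: camera frame + coarse map patch -> steering.
    A sketch of the general imitation-learning setup, not the published model."""
    def __init__(self):
        super().__init__()
        self.camera_enc = nn.Sequential(          # encode the camera view
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.map_enc = nn.Sequential(             # encode the GPS-like map crop
            nn.Conv2d(1, 8, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32 + 8, 1)          # regress a steering angle

    def forward(self, camera, map_patch):
        feats = torch.cat([self.camera_enc(camera),
                           self.map_enc(map_patch)], dim=1)
        return self.head(feats)

# One training step: regress toward the human driver's recorded steering.
model = SteeringNet()
camera = torch.randn(4, 3, 96, 96)      # batch of camera frames (made-up size)
map_patch = torch.randn(4, 1, 32, 32)   # coarse map crops (made-up size)
human_steering = torch.randn(4, 1)      # commands recorded from the human driver
loss = nn.MSELoss()(model(camera, map_patch), human_steering)
loss.backward()
```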

Like human drivers, the system also detects any mismatches between its map and the features of the road. This helps it determine whether its position, sensors, or map are incorrect, so it can correct the car’s course.
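
The article doesn’t detail how that mismatch check works. One plausible sketch, assuming the system can report an uncertainty estimate alongside each steering prediction, is to flag moments when that uncertainty spikes:

```python
def detect_map_mismatch(steering_variances, threshold=0.5):
    """Hypothetical check: flag timesteps where predictive variance over
    steering is high, suggesting the camera view doesn't match the map
    (bad localization, a sensor fault, or a stale map)."""
    return [t for t, var in enumerate(steering_variances) if var > threshold]

# Example: variance spikes at timesteps 2 and 3
print(detect_map_mismatch([0.1, 0.2, 0.9, 0.8, 0.15]))
```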

To train the system initially, a human operator controlled an automated Toyota Prius — equipped with several cameras and a basic GPS navigation system — to collect data from local suburban streets including various road structures and obstacles. When deployed autonomously, the system successfully navigated the car along a preplanned path in a different forested area, designated for autonomous vehicle tests.

“With our system, you don’t need to train on every road beforehand,” says first author Alexander Amini, an MIT graduate student. “You can download a new map for the car to navigate through roads it has never seen before.”

“Our objective is to achieve autonomous navigation that is robust for driving in new environments,” adds co-author Daniela Rus, director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science. “For example, if we train an autonomous vehicle to drive in an urban setting such as the streets of Cambridge, the system should also be able to drive smoothly in the woods, even if that is an environment it has never seen before.”


