
System Bits: June 10

Georgia Tech’s SlothBot; robot helper; securing the 2020 election.


SlothBot swings through the trees, slowly
A robot that rarely moves, spending days, weeks, even months in the forest canopy monitoring the local environment – that’s SlothBot, from the Georgia Institute of Technology.

The robot has two photovoltaic solar panels for its power source. It is designed to stay in the trees for months at a time. It’s gone through trials on the Georgia Tech campus and may soon find its way around the Atlanta Botanical Garden. The researchers who developed SlothBot hope to test the robot on a cacao plantation in Costa Rica, where it will encounter real sloths.

What purpose does SlothBot serve, in addition to environmental monitoring? It could also be useful in precision agriculture, infrastructure maintenance, and security.

The robot moves only when it needs to measure changes in the environment, such as shifts in chemical concentrations or weather conditions.
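That move-only-when-needed policy amounts to event-triggered sensing; a minimal sketch is below. The sensed quantity and the threshold value are illustrative assumptions, not details of the Georgia Tech design.

```python
def should_relocate(last_reading: float, new_reading: float, threshold: float) -> bool:
    """Stay put until the environment has drifted far enough to justify
    the energy cost of moving to take a new measurement."""
    return abs(new_reading - last_reading) > threshold

# e.g. humidity drifting from 55% to 63% crosses a 5-point threshold,
# so the robot would crawl to a new position; a drift to 57% would not.
move = should_relocate(55.0, 63.0, threshold=5.0)
```

The point of the sketch is the asymmetry the researchers describe: sensing is continuous and cheap, while locomotion is rare and gated behind a change worth measuring.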


Image credit: Georgia Tech

“In robotics, it seems we are always pushing for faster, more agile, and more extreme robots,” said Magnus Egerstedt, the Steve W. Chaddick School Chair of the School of Electrical and Computer Engineering at the Georgia Institute of Technology and principal investigator for SlothBot. “But there are many applications where there is no need to be fast. You just have to be out there persistently over long periods of time, observing what’s going on.”

Based on what Egerstedt called the “theory of slowness,” Graduate Research Assistant Gennaro Notomista designed SlothBot together with his colleague, Yousef Emam, using 3D-printed parts for the gearing and wire-switching mechanisms needed to crawl through a network of wires in the trees. The greatest challenge for a wire-crawling robot is switching from one cable to another without falling, Notomista said.

“The challenge is smoothly holding onto one wire while grabbing another,” he said. “It’s a tricky maneuver and you have to do it right to provide a fail-safe transition. Making sure the switches work well over long periods of time is really the biggest challenge.”

Mechanically, SlothBot consists of two bodies connected by an actuated hinge. Each body houses a driving motor connected to a rim on which a tire is mounted. The use of wheels for locomotion is simple, energy efficient, and safer than other types of wire-based locomotion, the researchers say.

The name SlothBot is not a coincidence. Real-life sloths are small mammals that live in jungle canopies of South and Central America. Making their living by eating tree leaves, the animals can survive on the daily caloric equivalent of a small potato. With their slow metabolism, sloths rest as much as 22 hours a day and seldom descend from the trees where they can spend their entire lives.

“The life of a sloth is pretty slow-moving and there’s not a lot of excitement on a day-to-day level,” said Jonathan Pauli, an associate professor in the Department of Forest & Wildlife Ecology at the University of Wisconsin-Madison, who has consulted with the Georgia Tech team on the project. “The nice thing about a very slow life history is that you don’t really need a lot of energy input. You can have a long duration and persistence in a limited area with very little energy inputs over a long period of time.”

That’s exactly what the researchers expect from SlothBot, whose development has been funded by the U.S. Office of Naval Research.

“There is a lot we don’t know about what actually happens under dense tree-covered areas,” Egerstedt said. “Most of the time SlothBot will be just hanging out there, and every now and then it will move into a sunny spot to recharge the battery.”

A robot will be helping you, after checking you out
Here’s another robot, one developed by researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). The RoboRaise system incorporates electromyography (EMG) sensors on a user’s biceps and triceps. The robot’s algorithms can continuously detect changes in a person’s arm level, gauging whether the person is struggling to lift an object. It can then help lift what the person is trying to raise.

The team used the system for a series of tasks involving picking up and assembling mock airplane components. In experiments, users worked on these tasks with the robot and were able to control it to within a few inches of the desired heights by lifting and then tensing their arm. Control was more accurate when gestures were used, and the robot responded correctly to roughly 70 percent of all gestures.

Graduate student Joseph DelPreto says he could imagine people using RoboRaise to help in manufacturing and construction settings, or even as an assistant around the house.

“Our approach to lifting objects with a robot aims to be intuitive and similar to how you might lift something with another person — roughly copying each other’s motions while inferring helpful adjustments,” says DelPreto, lead author on a new paper about the project with MIT Professor and CSAIL Director Daniela Rus. “The key insight is to use nonverbal cues that encode instructions for how to coordinate, for example to lift a little higher or lower. Using muscle signals to communicate almost makes the robot an extension of yourself that you can fluidly control.”

The project builds on the team’s existing system that allows users to instantly correct robot mistakes with brainwaves and hand gestures, now enabling continuous motion in a more collaborative way. “We aim to develop human-robot interaction where the robot adapts to the human, rather than the other way around. This way the robot becomes an intelligent tool for physical work,” says Rus.

EMG signals can be tricky to work with: They’re often very noisy, and it can be difficult to predict exactly how a limb is moving based on muscle activity. Even if you can estimate how a person is moving, how you want the robot itself to respond may be unclear.

RoboRaise gets around this by putting the human in control. The team’s system uses noninvasive, on-body sensors that detect the firing of neurons as you tense or relax muscles.

Using wearables also gets around problems of occlusions or ambient noise, which can complicate tasks involving vision or speech.

RoboRaise’s algorithm then processes biceps activity to estimate how the person’s arm is moving so the robot can roughly mimic it, and the person can slightly tense or relax their arm to move the robot up or down. If a user needs the robot to move farther away from their own position or hold a pose for a while, they can just gesture up or down for finer control; a neural network detects these gestures at any time based on biceps and triceps activity.
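The pipeline described above — smooth the noisy EMG into a slowly varying effort level, then map that level onto a robot height — can be sketched roughly as follows. This is a hedged illustration, not CSAIL’s implementation: the one-pole envelope filter, the calibration levels, and the 40 cm travel range are all assumptions.

```python
import math

def emg_envelope(samples, alpha=0.05):
    """Rectify the raw EMG signal, then smooth it with a one-pole low-pass
    filter, yielding a slowly varying 'effort' estimate from a noisy input."""
    env = 0.0
    out = []
    for s in samples:
        env = (1 - alpha) * env + alpha * abs(s)
        out.append(env)
    return out

def height_setpoint(effort, rest_level, max_level, travel_cm=40.0):
    """Map normalized muscle effort onto an end-effector height:
    relaxed arm -> bottom of travel, fully tensed -> top."""
    norm = (effort - rest_level) / (max_level - rest_level)
    norm = min(1.0, max(0.0, norm))  # clamp to [0, 1]
    return norm * travel_cm

# Simulated raw EMG: a small oscillation at rest, then a larger burst
# as the user tenses; the envelope rises and the setpoint follows.
raw = [0.02 * math.sin(3 * t) for t in range(100)] + \
      [0.5 * math.sin(5 * t) for t in range(100)]
effort = emg_envelope(raw)[-1]
target_cm = height_setpoint(effort, rest_level=0.01, max_level=0.35)
```

Clamping the normalized effort is what makes the mapping forgiving of the noise the researchers mention: spikes beyond the calibrated range saturate instead of flinging the robot past its limits.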

A new user can start using the system very quickly, with minimal calibration. After putting on the sensors, they just need to tense and relax their arm a few times, then lift a light weight to a few heights. The neural network that detects gestures is trained only on data from previous users.
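That brief calibration can be thought of as recording two per-user reference levels, rest and full effort, against which later readings are normalized. The sketch below is hypothetical — the procedure and names are assumptions, not the team’s protocol:

```python
def calibrate(rest_readings, tense_readings):
    """Average a few seconds of relaxed and fully tensed EMG envelope
    samples into per-user rest and max-effort reference levels."""
    rest_level = sum(rest_readings) / len(rest_readings)
    max_level = sum(tense_readings) / len(tense_readings)
    if max_level <= rest_level:
        raise ValueError("tense readings must exceed rest readings")
    return rest_level, max_level

# A few envelope samples captured while relaxed, then while tensing.
rest, peak = calibrate([0.010, 0.012, 0.011], [0.30, 0.34, 0.32])
```

Because only these two scalars are user-specific, the gesture-detection network itself can stay fixed and be trained entirely on previous users’ data, which is what makes the quick start possible.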

The team tested the system with 10 users through a series of three lifting experiments: one where the robot didn’t move at all, another where the robot moved in response to their muscles but didn’t help lift the object, and a third where the robot and person lifted an object together.

When the person had feedback from the robot — when they could see it moving or when they were lifting something together — the achieved height was significantly more accurate compared with having no feedback.

The team also tested RoboRaise on assembly tasks, such as lifting a rubber sheet onto a base structure. It was able to successfully lift both rigid and flexible objects onto the bases. RoboRaise was implemented on the team’s Baxter humanoid robot, but the team says it could be adapted for any robotic platform.

In the future, the team hopes that adding more muscles or different types of sensors to the system will increase the degrees of freedom, with the ultimate goal of doing even more complex tasks. Cues like exertion or fatigue from muscle activity could also help robots provide more intuitive assistance. The team tested one version of the system that uses biceps and triceps levels to tell the robot how stiffly the person is holding their end of the object; together, the human and machine could fluidly drag an object around or rigidly pull it taut.
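Estimating stiffness from biceps and triceps levels is typically a co-contraction cue: the arm is rigid only when both antagonist muscles fire at once. A minimal sketch under that assumption (the heuristic is illustrative, not the team’s method):

```python
def grip_stiffness(biceps_env: float, triceps_env: float) -> float:
    """Co-contraction heuristic: stiffness tracks the *smaller* of the two
    antagonist activations, since rigidity requires both to be firing."""
    return min(biceps_env, triceps_env)

# Relaxed arm: both activations low -> compliant, good for dragging.
# Pulling taut: both high -> stiff, good for holding an object rigid.
compliant = grip_stiffness(0.05, 0.04)
stiff = grip_stiffness(0.40, 0.35)
```

Taking the minimum rather than the sum distinguishes co-contraction from a one-sided lift: a strong biceps with a slack triceps means motion, not rigidity.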

Keeping the 2020 presidential election secure from interference

Stanford University scholars have developed a comprehensive strategy for protecting the integrity and independence of U.S. elections, especially next year’s presidential election.

Their new report offers more than 45 recommendations for lawmakers and leaders in technology to implement to deter potential threats from foreign and domestic actors trying to disrupt the American electoral process.

As the Mueller inquiry made clear, the scale and scope of Russia’s efforts to interfere with the 2016 election were unprecedented, said Michael McFaul, editor and co-author of the Securing American Elections report. But Mueller’s mandate did not include recommending ways to deter meddling in future elections – which is where the Securing American Elections report comes in, said McFaul, who served as U.S. ambassador to Russia from 2012 to 2014 and is now director of the Freeman Spogli Institute for International Studies (FSI) at Stanford.

“We know more than ever before about what happened in the 2016 election. Now we need to pivot to what needs to be done to prevent it in the future – from concrete legislative acts as well as steps that online platforms can take even without legislation,” said McFaul, who is also the Ken Olivier and Angela Nomellini Professor in International Studies in the department of political science and a senior fellow at FSI and the Hoover Institution.

Authors of the Securing American Elections report include scholars affiliated with the Stanford Cyber Policy Center, a newly launched hub at FSI to bring researchers from across disciplines together to address the threats cyber technologies pose to security and governance worldwide.

The center will be co-directed by Dan Boneh, the Rajeev Motwani Professor in the School of Engineering and head of the Applied Cryptography Group, and Nathaniel Persily, the James B. McClatchy Professor of Law at Stanford Law School. Stanford scholars also include Eileen Donahoe, former Ambassador to the UN Human Rights Council; Andrew Grotto, a former senior director for cybersecurity policy at the White House in both the Obama and Trump administrations; and Alex Stamos, a former chief security officer at Facebook.


