Ready For Social Robots?

Companies rev up new technologies, but it’s still unclear how successful they will be.


After years of steady growth, innovation and occasional disappointment, the robotics market is heating up on several fronts amid new breakthroughs.

Both the industrial and service robotics markets are hot. In addition, the consumer market is seeing a new level of interest as the next wave of so-called personal assistant robots, or social robots, heads for the home.

Asus, Blue Frog Robotics, InGen, Jibo, NEC, Samsung and others are developing various personal assistant robots for consumers. A number of companies from China are also developing them.

Personal assistant robots might be the next big thing. Or they could flop. “If successful, Jibo (and its competitors) could usher in a new consumer electronics market—social robots for the home,” said Dan Kara, an analyst at ABI Research.

“Personal robots, often called social robots, are technically advanced robots that interact directly with people and are designed to assist in the home, or to act as a companion,” Kara said. “Personal robots, which are Web-enabled and sometimes humanoid in form, are designed to be the center of control for consumer devices and household appliances, and can monitor the home and respond as requirements dictate. They are loaded with a variety of sensors, with some able to recognize faces and speech and respond to verbal commands.”

Basically, personal assistant robots are advanced versions of smart speakers or home automation systems. Social robots offer many of the same features, but they differ from the so-called smart assistants or smart speakers on the market, such as Amazon’s Echo and Google’s Home. Generally, smart speakers aren’t robots per se. They are fixed, voice-controlled units that use far-field voice recognition technology to perform many tasks, such as answering questions and playing music.

But to be sure, the new social robots are far from being the long-awaited, humanoid-like intelligent robot. These advanced AI-based robots are still several years away from reaching the mainstream.

Having a personal robot for the home isn’t a new idea, of course. Over the years, a number of companies have rolled out these types of systems in the market. “They have not been a success,” Kara said. “They were expensive, and functionally and socially limited.”

The new class of social robots appears to be more promising. “They will have connectivity to the Internet, including access to AI services for context-sensitive processing, natural language processing, facial recognition, object recognition, quality cameras and more, along with connectivity to smart home products,” he said.

Still, these systems are limited to one degree or another, prompting many to ask a simple question: When can consumers buy a useful humanoid-like, AI-based intelligent robot for the home?

“We’ve seen a lot of advancements in motion autonomy, but the question is when am I going to have an intelligent personal assistant to help me with my chores,” said James Kuffner, chief technology officer at Toyota Research Institute (TRI), an R&D organization that is developing robotics technology and autonomous driving for cars. Before joining TRI, Kuffner was the robotics director at Google from 2009 to January 2016.

“In the next 5 to 10 years, we are going to see some very capable consumer-level robots,” Kuffner said. “It’s already starting to happen.”

Still, the robotics industry faces a number of technical and cost challenges for both current and future robots. And the industry needs some major breakthroughs in the field of artificial intelligence. “We are still far away from artificial super intelligence,” said Modar Alaoui, chief executive of Eyeris, a developer of emotion recognition and deep learning technology. “That’s basically where cognition comes into place. We are still a little bit away from that.”

The industry is making progress, though, thanks to some major innovations for the various technologies used in robotics, such as machine learning, semiconductors and sensors.

The fragmented market
By definition, a robot is a system capable of carrying out actions automatically. The robotics market can be traced back to 1961, when George Devol developed Unimate, the world’s first industrial robot. Unimate was a programmable robotic arm.

Then, in 1996, Honda devised P2, the world’s first humanoid-like robot. From there, the field of robotics has exploded and morphed into a number of segments. In total, the global robotics industry is expected to jump from $34.1 billion in 2016 to $226.2 billion by 2021, a compound annual growth rate of 46%, according to Tractica, a market research firm.

According to analysts, the broad and fragmented robotics market includes the following systems: autonomous vehicles, consumer robots, drones, enterprise robots and industrial robots.

Each of these markets can have different robot types. For example, the consumer market includes toy and educational robots, personal robots, robotic lawn mowers and vacuum cleaner robots.

ABI Research divides the robotics market into three main categories—public, private and consumer. The public sector is sub-divided into two areas—government and research/education.

The U.S. Department of Defense and the EPA are two examples of agencies in the government sector that could use drones and other robot types.

The private sector, meanwhile, is also sub-divided into two categories—industrial and commercial services. For years, industrial robots have been used for factory automation. Generally, industrial robots use large, rotating arms to perform an assortment of assembly tasks that are considered too dangerous or demanding for humans.

The industrial robotics market is booming amid a major buying spree in China. China, which uses these robots to automate its various industries, is projected to represent 40% of the world’s industrial robotics market by 2019, up from 32% in 2016, according to the International Federation of Robotics (IFR), a trade group.

In total, industrial robot makers are expected to ship 290,000 units in 2016, up 14% over 2015, according to IFR. Some 70% of industrial robots are used in the automotive, electrical/electronics, and the metal/machinery segments.

Fig. 1. A high-speed metal-plate sorting industrial robot. Source: Fanuc.

Another big robotics market is the commercial service sector, which involves health care, transportation, utilities and warehouses. In total, 41,060 professional service robots were shipped in 2015, up 25% from 2014, according to IFR.

For years, robots have been used in the warehouse. These systems, sometimes called automated guided vehicles (AGVs), handle pick-and-place and related tasks.

In health care, robots are used for surgical assistance and rehabilitation. Health care is expected to become a sizable market for good reason. The World Health Organization predicts that 22% of the world’s population will be over 60 years old by 2050. This will require an increasing number of health-care workers.

Fig. 2. Toyota’s Robina partner robot, which can provide medical and nursing care or perform housework. Source: Toyota.

To address the problem, Toyota last year introduced a new version of its Human Support Robot (HSR) for use in elderly care. Featuring a maneuverable body and a folding arm, HSR can pick up and retrieve objects.

HSR doesn’t use AI, however. AI, according to Toyota, is not a substitute for human attentiveness. Instead, HSR is controlled on-site or remotely by an operator, whose face and voice are relayed in real time.

There are countless examples of other service robots. For example, SoftBank recently rolled out Pepper, a humanoid robot that can read emotions. Meanwhile, Savioke recently introduced Relay, a mobile service robot designed for the hospitality industry.

What is a robot?
Basically, a robot is a mechanical device consisting of actuators, controllers, effectors, sensors and software. The controller is the heart of the robot. Actuators are the components that enable the robot to move. Effectors include arms, legs and wheels. And sensors provide the system with information about its surroundings.

“(A robot is a) superset of a computer,” TRI’s Kuffner said. “It is also an electro-mechanical system. It has real-time constraints. You have to deal with physics, uncertainty, actuation and sensing.”

In the late 1990s, Kuffner and Steven LaValle made a major breakthrough in robotics by devising the rapidly exploring random tree (RRT), an algorithm that paved the way for motion or footstep planning for robots.
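The idea behind RRT can be sketched in a few lines of Python. This is a minimal 2D illustration, not Kuffner and LaValle’s original implementation; the workspace bounds, step size and goal bias are arbitrary choices:

```python
import math
import random

def rrt(start, goal, is_free, step=0.5, max_iters=2000, goal_tol=0.5):
    """Grow a rapidly exploring random tree from start toward goal.

    start, goal: (x, y) tuples. is_free: callable that returns True
    if a point is collision-free. Returns a start-to-goal path as a
    list of points, or None if no path was found.
    """
    nodes = [start]
    parent = {start: None}
    for _ in range(max_iters):
        # Sample a random point, occasionally biasing toward the goal.
        sample = goal if random.random() < 0.05 else (
            random.uniform(0.0, 10.0), random.uniform(0.0, 10.0))
        # Extend the nearest existing node one step toward the sample.
        nearest = min(nodes, key=lambda n: math.dist(n, sample))
        d = math.dist(nearest, sample)
        if d == 0.0:
            continue
        new = (nearest[0] + step * (sample[0] - nearest[0]) / d,
               nearest[1] + step * (sample[1] - nearest[1]) / d)
        if not is_free(new):
            continue
        nodes.append(new)
        parent[new] = nearest
        if math.dist(new, goal) < goal_tol:
            # Walk back up the tree to recover the path.
            path = [new]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
    return None

random.seed(0)
# In an empty 10x10 workspace, the tree reaches the far corner.
path = rrt((0.5, 0.5), (9.5, 9.5), lambda p: True)
print(path is not None)  # → True
```

The key property of RRT is that random sampling pulls the tree into unexplored regions, which is what made motion planning tractable in high-dimensional spaces.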

While the industry has made numerous advancements over the years, it still faces several major roadblocks. Three main issues have held back the advent of practical robots for consumers and other markets, according to Kuffner: capabilities, cost and safety/reliability.

“We still need to work on better capabilities,” he said. “Making a robot do a task well is challenging. Right now, (a robot is) very expensive. And testing a million lines of code and ensuring it’s safe and reliable is a big challenge.”

Today, indoor navigation is not the big problem for robots. For this, robots make use of various technologies, such as LIDAR and sonar.

The big issue is manipulation. Industrial robots are fixed, and they can manipulate objects. Mobile robots, in contrast, can navigate, but the challenge for these systems is to manipulate objects reliably.

“Robots are great at doing structured tasks,” said Sachin Chitta, chief executive at Kinema Systems, a startup that is developing a self-training solution for industrial robots. “The problem that we’ve been looking at for quite a while is how hard it is to program robots to do simple things. Literally, you are walking the robot through everything it has to do. That works great when the environment stays the same and everything is in the right spot. But there are lots of applications where that doesn’t happen.”

Still, there are a number of new technologies that could help solve these and other problems in robotics, such as 3D vision, chips, machine learning and open-source software. Among them:

3D vision guided robotics (VGR). Traditionally, robots have used 2D machine vision, which captures images on a flat plane. More recently, OEMs have moved toward 3D VGR technology, which helps robots locate and handle randomly oriented parts in three dimensions.
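One common way vision systems recover that third dimension is stereo triangulation: depth follows from the disparity between matched pixels in two calibrated cameras. A minimal sketch, with made-up focal length and baseline values:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a point from a rectified stereo pair: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A part whose matched pixels sit 40 px apart, seen by two cameras
# with a 700 px focal length mounted 0.12 m apart:
print(depth_from_disparity(700, 0.12, 40))  # → 2.1 (meters)
```

Note how nearby parts produce large disparities and distant parts small ones, which is why depth precision degrades with range.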

New chips. Robots tend to use trailing-edge chips. Today, though, OEMs are evaluating or using leading-edge devices. “The nice thing about vision and motion planning is that it can be done on GPUs,” Chitta said. “There are also FPGAs.”

Machine learning. Advanced AI is still in its infancy, but machine learning is ready. A subset of this is called deep learning, which involves neural networks. In a neural network, the system crunches the data and identifies patterns. Over time, it learns which attributes are important. Machine learning is exploding in computing, but the robotics industry hasn’t fully leveraged the technology’s capabilities. “In the future, we think deep learning will have a bigger effect on robotics,” Chitta said. “What’s missing is finding the right places to apply it.” Consider, for example, a robot performing an unstructured task. “That’s where it gets hard. The problem is that those data sets are not available.”
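The pattern-finding idea can be illustrated with a single artificial neuron, the basic building block of a neural network. In this toy sketch the data is invented: the label depends only on the first input attribute, and training reveals that by giving it the larger weight:

```python
import math
import random

def train_neuron(samples, epochs=200, lr=0.5):
    """Train a single sigmoid neuron with gradient descent.

    samples: list of ((x1, x2), label) pairs, labels 0 or 1.
    Returns the learned weights and bias.
    """
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in samples:
            z = w[0] * x1 + w[1] * x2 + b
            p = 1.0 / (1.0 + math.exp(-z))  # sigmoid activation
            err = p - y                     # gradient of the log loss w.r.t. z
            w[0] -= lr * err * x1
            w[1] -= lr * err * x2
            b -= lr * err
    return w, b

# Invented data: the label depends only on the first attribute;
# the second attribute is pure noise.
random.seed(1)
data = [((x, random.random()), 1 if x > 0.5 else 0)
        for x in [random.random() for _ in range(200)]]
w, b = train_neuron(data)
print(abs(w[0]) > abs(w[1]))  # → True: the informative attribute dominates
```

A real deep network stacks many such units in layers, but the principle is the same: training adjusts the weights until the inputs that matter carry the signal.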

Open-source software. Open-source software is widely available in robotics. For example, the Robot Operating System (ROS) is an open-source collection of software libraries and tools for building robot applications.
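ROS is organized around nodes that exchange messages on named topics. That publish/subscribe pattern can be sketched in plain Python; this toy bus only illustrates the idea, and the topic name and message are invented, not the ROS API:

```python
from collections import defaultdict

class Bus:
    """A toy message bus illustrating ROS-style topic pub/sub."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Deliver the message to every callback registered on the topic.
        for callback in self.subscribers[topic]:
            callback(message)

bus = Bus()
readings = []
# A "node" subscribes to laser scans, as a navigation stack might.
bus.subscribe("/scan", readings.append)
# A sensor "node" publishes one reading.
bus.publish("/scan", {"ranges": [1.2, 0.9, 2.4]})
print(readings)  # → [{'ranges': [1.2, 0.9, 2.4]}]
```

Because publishers and subscribers only share a topic name, sensor drivers, planners and drivers for different robots can be mixed and matched, which is much of ROS’s appeal.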

Are personal robots ready?
In 2015, 3.7 million household robots were sold, up 11% over 2014, according to IFR. To date, though, the biggest sellers include toy robots and robotic vacuum cleaners.

For years, the industry has rolled out personal robots for the home, but previous-generation systems were unsuccessful. They lacked any practical skills.

The robotics industry continues to work on the technology and is taking several approaches. For example, TRI and others are exploring cloud-based robotics, in which the system is connected to the cloud.

“A human as a proxy is controlling the operations of a robot,” TRI’s Kuffner said. “The motion (of the robot) is being sent and transmitted over a network, so the robot can then perform its actions.”

Cloud-based robots don’t require all of the traditional hardware. “We offload the heavy compute tasks to the cloud,” he said. “If you don’t have to carry around a bunch of big iron on the robot, you can make it lighter. It means it doesn’t need as much power, and therefore you can get software updates just by patching software in the cloud.”
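In such a split, the cloud-side planner sends motion commands over the network for the robot to execute locally. The sketch below shows what such a message might look like; the format is hypothetical, purely for illustration, and not an actual TRI protocol:

```python
import json

# Hypothetical motion-command message, invented for illustration.
command = {
    "robot_id": "hsr-01",
    "action": "move_arm",
    "joint_angles_deg": [0.0, 45.0, -30.0, 90.0],
    "duration_s": 2.5,
}

wire = json.dumps(command)   # the cloud-side planner serializes...
received = json.loads(wire)  # ...and the robot deserializes locally
print(received["action"])  # → move_arm
```

The compute-heavy planning stays in the cloud; only a compact command crosses the network, which is what lets the robot itself carry less hardware.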

Still, cloud-based robots raise issues involving power outages, security and cost, and they might be better suited to the service industry than the home.

Nevertheless, in the near term the market will see the invasion of new personal assistant or social robots. “The field of social robotics combines three disciplines—sociology, psychology and robotics—to create a new social dynamic between a person and a mechanical other,” said Lynda Smith, head of developer strategy and marketing at Jibo.

Jibo itself is readying a social robot of the same name, based on a quad-core ARM processor, DDR3L DRAM and eMMC NAND flash for storage.

The system is a fixed unit that can twist and turn when interacting with a human. “Jibo is a social robot, a companion in the home,” Smith said. “What separates Jibo from the Amazon Echo and other digital assistants is that Jibo initiates conversations when he sees you using a ‘Local Perceptual System.’”

This perceptual technology involves cameras, microphones and touch sensors. The system interacts with humans using “character AI”, a form of AI that involves deep learning. “Jibo individualizes his communications through a face and a speaker ID, and over time learns about you and stores what he knows about you in a ‘Jibo Knowledge Graph,’” Smith said.
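A knowledge graph of the sort Smith describes can be pictured as a store of subject-predicate-object facts. This toy triple store illustrates the general idea only; the schema and facts are invented, not Jibo’s actual design:

```python
# A toy triple store; facts are (subject, predicate, object) tuples.
graph = set()

def learn(subject, predicate, obj):
    graph.add((subject, predicate, obj))

def recall(subject, predicate):
    return [o for s, p, o in graph if s == subject and p == predicate]

learn("alice", "likes", "jazz")
learn("alice", "wake_time", "7:00")
learn("bob", "likes", "podcasts")
print(recall("alice", "likes"))  # → ['jazz']
```

Keyed to a face and voice ID, such a store lets the robot answer per-person queries and accumulate preferences over time.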

Jibo and others are going after the same basic applications, such as entertainment, education, home automation and security.

Fig. 3. Jibo’s social robot. Source: Jibo.

But will consumers buy them? And what are the killer apps?

Even the OEMs are grappling with these questions. “We are still figuring out what is the application,” said Arshad Hisham, chief executive of InGen, a personal robot developer. InGen recently rolled out Aido, a personal robot based on two quad-core 1.6-GHz ARM-based processors. The core operating system is a version of Ubuntu, while Aido’s front-end runs Android Lollipop. Among its features, Aido uses a speech engine based on a neural network architecture for continuous speech recognition, which allows it to listen to commands and parse them.

Meanwhile, to help solve this issue, social robot suppliers are taking a page from the smartphone market, where consumers buy a phone and then download apps onto it. Like smartphone makers, robot makers hope to develop an open, easy-to-use platform that lets consumers choose their own apps.

But it’s unclear if consumers want another smart device in the home. Generally, smartphones, home automation units and even smart speakers provide many of the same functions as social robots, perhaps at a lower price.

“If consumers prefer personal robotic systems to ambient systems, or impersonal devices such as Amazon’s Echo and Google Home, then (social robots) should find success sooner rather than later,” ABI’s Kara said.
