Weird Incidents Reveal L5 Challenges

Overcoming glitches and consumer reluctance will require disruptive technology.

A series of surprising, counterintuitive, and sometimes bizarre incidents reveals the challenges of achieving full Level 5 autonomy in self-driving vehicles, which are an increasingly common sight in major cities. While it’s easy to dismiss such anecdotes as humorous glitches compared with the sobering accounts of autonomous tech-related injuries and fatalities, industry executives say these occurrences are worth taking seriously because they highlight the gap between today’s technology and the truly disruptive advancements that will lead to L5.

Bridging this gap will require technical progress as well as a change in mindset for the automotive industry. “Some companies think that what the customer wants is unrealistic, so they don’t even try to create a real solution,” said David Fritz, vice president of hybrid-physical and virtual systems for automotive and mil/aero at Siemens Digital Industries Software. “But if you try to just incrementally improve what you’ve got, you end up nowhere near where you expected, and risk losing the market.”

Most forecasts predict that truly self-driving vehicles need at least another decade of development, and will require low-power AI inferencing, car-to-car communication systems, reliable 5G and 6G connectivity, and smart city infrastructure, to name just a few necessary technologies. Automotive OEMs also must convince the general public that self-driving vehicles are safe and worth purchasing.

Another consideration is that much of the technology that allowed the industry to reach Level 3 will not scale in all the necessary dimensions — performance, memory usage, interconnect, chip area, and power consumption, according to a recent Expedera white paper. Still, many researchers are optimistic that better algorithms and denser, more efficient processing will keep things moving in the right direction.

However, it is widely agreed that facing those obstacles is worth it, given the potential to save millions of human lives and develop world-altering technologies in the process.

These strange anecdotes generally fall into one of two categories — unusual behavior from vehicles as they attempt to “think” like a human, and unusual behavior from humans as they grapple with advanced autonomous technology becoming mainstream.

Vehicle challenge: inference
High-tech cars are flummoxed by kangaroos and swans, as well as by natural phenomena. In one incident, a car mistook the moon for a yellow traffic light and repeatedly tried to slow down.

Siemens’ Fritz described an incident in which a car assumed a multi-level parking garage was in fact layers of vehicles piled on top of one another, and re-routed its path around what it assumed was a horrific accident. He described another situation in which a self-driving AI undergoing testing did not know what to make of an open drawbridge, so it drove straight off the bridge and plummeted into the water below.

In some cases, a car that does not understand what it is sensing will simply stop. “It’s like a toddler that encounters a situation it has never seen before,” said Fritz. “What does the toddler do? It doesn’t make a choice. It just says, ‘No.’”

Frank Schirrmeister, vice president of solutions and business development at Arteris IP, said AI-related identification issues also highlight the importance of the operational design domain — the conditions under which a vehicle is designed to function. When those conditions are not met, the vehicle is not designed to operate. According to SAE, those conditions may include “environmental, geographical, and time-of-day restrictions, and/or the requisite presence or absence of certain traffic or roadway characteristics.” An L5 vehicle will have an operational design domain that allows it to drive anywhere a human can drive.

Schirrmeister noted that a vehicle’s operational design domain should be defined such that if the car suddenly finds itself in circumstances that defy the domain, such as encountering objects it neither can identify nor broadly categorize the way a human might in the same situation, the car safely and securely retires itself. “There is something that needs to decide, ‘I’m not hitting the kangaroo.’ Taking a cautious, graceful degradation approach, it could turn on its blinkers and turn right and park the car.”
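
Schirrmeister’s graceful-degradation idea can be pictured as a small supervisory state machine sitting above perception and planning. The following Python sketch is purely illustrative: the state names, ODD checks, and transition policy are hypothetical, not any vendor’s implementation.

```python
from dataclasses import dataclass
from enum import Enum, auto


class DrivingState(Enum):
    NOMINAL = auto()
    DEGRADED = auto()        # blinkers on, reduced speed, seeking a safe stop
    MINIMAL_RISK = auto()    # pulled over and parked


@dataclass
class OddStatus:
    """Hypothetical snapshot of operational design domain checks."""
    weather_ok: bool
    geography_ok: bool
    time_of_day_ok: bool
    all_objects_classified: bool  # False if perception sees something it can't categorize


def next_state(current: DrivingState, odd: OddStatus) -> DrivingState:
    """Degrade gracefully when the ODD is violated, rather than freezing in the lane."""
    within_odd = odd.weather_ok and odd.geography_ok and odd.time_of_day_ok
    if within_odd and odd.all_objects_classified:
        return DrivingState.NOMINAL
    if current is DrivingState.NOMINAL:
        return DrivingState.DEGRADED       # first response: signal and slow down
    return DrivingState.MINIMAL_RISK       # already degraded: park somewhere safe


# Example: an unclassifiable object (the "kangaroo" case) forces degradation.
status = OddStatus(weather_ok=True, geography_ok=True,
                   time_of_day_ok=True, all_objects_classified=False)
print(next_state(DrivingState.NOMINAL, status))  # DrivingState.DEGRADED
```

The key design choice here is that leaving the ODD first triggers a degraded mode (blinkers on, slowing down) rather than an immediate stop in the lane, matching the cautious retirement Schirrmeister describes.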

The cause of a mysterious mass-stopping event that took place this summer has yet to be publicly identified. In what one online commenter said looked like the beginning of a robot uprising, a group of driverless cars lined up in the middle of a San Francisco street and blocked traffic for several hours.

In many cases where a self-driving car makes a counterintuitive maneuver, the vehicle’s AI is encountering an edge case and is unsure how to proceed. Experts say an L5 car will need to learn as it is driving, not only sensing its environment and making decisions based on its training, but also deriving inference and meaning from that data. The complete process requires a sensor suite, high-performance heterogenous processing systems, complex software algorithms, and low-power in-car learning. Some of these elements have yet to be invented.
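
One building block of that process is knowing when not to trust a classification. The minimal Python sketch below shows a softmax-confidence gate; the threshold, labels, and fallback behavior are illustrative assumptions, and production perception stacks use far more sophisticated uncertainty estimates.

```python
import numpy as np

UNKNOWN = "unknown"
CONFIDENCE_FLOOR = 0.7  # illustrative threshold, not a production value


def classify_with_fallback(logits: np.ndarray, labels: list[str]) -> str:
    """Return a label only when the model is confident; otherwise admit 'unknown'.

    A low-confidence detection is the software analogue of Fritz's toddler:
    rather than guessing, the system flags the edge case so a supervisory
    layer can slow down or hand control to a fallback behavior.
    """
    exp = np.exp(logits - logits.max())   # numerically stable softmax
    probs = exp / exp.sum()
    best = int(probs.argmax())
    return labels[best] if probs[best] >= CONFIDENCE_FLOOR else UNKNOWN


labels = ["vehicle", "traffic_light", "moon"]
print(classify_with_fallback(np.array([4.0, 0.5, 0.2]), labels))   # "vehicle"
print(classify_with_fallback(np.array([0.6, 0.5, 0.4]), labels))   # "unknown"
```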

Vehicle challenge: low-power AI
Could bug brains offer a path forward for self-driving vehicles? At least one company believes so. U.K.-based startup Opteran recently raised $12 million in funding for low-power autonomous AI technology inspired by insect neurobiology.

“We don’t do statistical pattern matching,” said Opteran CEO David Rajan. “There’s no machine learning or deep learning here.” Instead, the technology filters information at the sensor level using proprietary algorithms. The company’s initial focus is visual navigation, which eventually will expand to machine decision-making, hyperspectral vision, image recognition, and dynamic learning-based environmental inputs.

James Marshall, Opteran’s chief scientific officer, said the insect model means the technology uses relatively low-definition cameras, a standard FPGA, and doesn’t incorporate spiking neural networks (SNNs). “The theory is that’s how human brains do it, so that must be the key to scaling up large AI networks. But in our approach we don’t need to use that kind of specialist hardware because it’s not the spikes that matter, per se. In fact, some of the neurons don’t spike at all in insects. Some of the behavior in human neurons may have more to do with keeping the cell itself alive than anything related to computation.”

Insects also don’t have particularly high-resolution vision, Marshall noted, which means information can be processed quickly. “Small amounts of information are sampled, and missing information is filled. Also, if you extract the right information from the sensor level, you don’t need such a heavyweight computation. Avoiding bumping into things, for example, can be done with an efficient computation, which we can deploy on an FPGA at up to 10,000 frames per second because we throw away a lot of the information. That’s inspired by the honeybee visual system.”

Marshall explained this is essentially optic flow estimation for collision avoidance. The process allows for relatively low power usage. The company’s robotic development kit, for example, draws less than a watt of power.
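
As a rough illustration of the general idea (not Opteran’s proprietary method), time-to-contact can be estimated from the expansion, or divergence, of an optic flow field, since an approaching surface makes the image loom outward. The sketch below leans on OpenCV’s dense Farneback flow for clarity; the frame rate and whole-frame averaging are simplifying assumptions.

```python
import cv2
import numpy as np


def time_to_contact(prev_gray: np.ndarray, curr_gray: np.ndarray,
                    fps: float = 30.0) -> float:
    """Estimate seconds-to-contact from the divergence of dense optic flow.

    For a looming (approaching) surface, the flow field expands outward and
    its divergence is roughly 2 / TTC in per-frame units. Averaging over the
    whole frame is a simplification; a real system would segment the scene.
    Inputs are consecutive 8-bit grayscale frames.
    """
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    du_dx = np.gradient(flow[..., 0], axis=1)   # horizontal expansion rate
    dv_dy = np.gradient(flow[..., 1], axis=0)   # vertical expansion rate
    divergence = float(np.mean(du_dx + dv_dy))  # per-frame expansion
    if divergence <= 1e-6:
        return float("inf")                     # no looming detected
    return (2.0 / divergence) / fps             # frames to contact -> seconds
```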

The real test of Opteran’s technology will be whether or not it is embraced by automotive OEMs. That remains to be seen.

Human challenge: trust and adoption
Certain L5 challenges aren’t about technology alone, but rather about the human response to technology. For example, aggressive driver behavior is called road rage, but is it robot rage if the aggressive behavior is directed toward a driverless vehicle?

Regardless of the terminology, reports of such incidents are on the rise as semi-autonomous cars become an increasingly common sight on city streets. On one occasion, a man waved a .22-caliber revolver at a self-driving van in Chandler, Ariz. In a separate instance, a taxi driver exited his vehicle and slapped the front window of a self-driving car in San Francisco, scratching the glass. Other individuals reportedly attacked driverless vehicles with rocks and sharp objects, or tried to run the cars off the road.

It is not clear what is motivating these attacks, though some experts say it is an expression of fear, anger, and lack of control within a rapidly changing society. And these are the same emotions automotive OEMs are facing from potential buyers of increasingly autonomous vehicles.

Fritz noted that 5G/6G-enabled customization and communication could be the keys to overcoming consumer reluctance, and described a car that can be customized to stream multimedia during a morning commute. Equally alluring is a car that would explain to passengers its decision-making process. “Imagine yourself sitting in the backseat, no driver in the front, and you say, ‘Car, why are you not turning right?’ It responds, ‘I can’t turn right because there’s an ambulance coming.’ It’s okay, and it relieves the human anxiety, which is a real impediment to the adoption of these vehicles.”
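
Such an explanation layer could start as something as simple as a lookup that maps the planner’s current decision code to a canned, passenger-facing sentence. The decision codes and wording in this Python sketch are invented for illustration.

```python
# Hypothetical decision codes and wording, invented for illustration.
EXPLANATIONS = {
    "YIELD_EMERGENCY_VEHICLE": "I can't turn right because there's an ambulance coming.",
    "WAIT_PEDESTRIAN": "I'm waiting for a pedestrian to finish crossing.",
    "ODD_EXIT_PULL_OVER": "Conditions are outside what I'm designed for, so I'm pulling over.",
}


def explain(decision_code: str) -> str:
    """Map the planner's current decision code to a passenger-facing sentence."""
    return EXPLANATIONS.get(decision_code, "I'm assessing the situation.")


print(explain("YIELD_EMERGENCY_VEHICLE"))
```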

Methods of vehicle-to-pedestrian communication also are under consideration. Researchers in Japan conducted multiple studies on the use of “animated googly eyes” on self-driving cars to show pedestrians what a vehicle is and is not sensing in its environment. In 2017, the researchers evaluated this idea in a VR environment, and found that pedestrians made safe street-crossing decisions more quickly and felt safer when the eyes were in place. A follow-up study presented in September of this year used robotic eyes and reported similar results.

If robotic eyes seem somewhat strange, how about a man disguised as a car seat? That’s what a local news outlet discovered after tracking down what appeared to be a new self-driving car on the streets of Virginia. It was, in fact, a Virginia Tech Transportation Institute worker collecting data for a study on driverless cars. According to the institute, the goals of the study included investigating the potential need for additional exterior signals on automated vehicles, and ensuring pedestrians, cyclists, and other drivers are accommodated.

Humans often act unpredictably or counterintuitively, which adds another layer of difficulty for vehicle engineers, Arteris IP’s Schirrmeister noted. “Determining whether or not a pedestrian sees oncoming traffic or is planning to cross the intersection is a far different process for a human than an AI. What happens when the foot position of the person at the intersection looks as though they aren’t even trying to cross the street? Those are weaknesses that can confuse the AI. A similar challenge is teaching an AI to anticipate and appropriately respond to aggressive driving behavior from other vehicles.” 

Human challenge: legal issues
“Ain’t nobody in it. This is crazy.” Those were the words of a San Francisco police officer earlier this year when he pulled over a car for driving without its headlights on at night, only to find there were no humans in the vehicle. In 2015, law enforcement pulled over a self-driving car in Mountain View, Calif., for driving too slowly. In both cases, police released the car without issuing a citation. A test driver wasn’t so lucky in 2018, when police issued a ticket for allegedly driving too close to a pedestrian while the car was in self-driving mode.

From a vehicle technology standpoint, traffic stops are a fairly straightforward scenario. A 2016 Google patent describes multiple ways for a car to sense flashing lights and interpret them as belonging to a police car or other emergency vehicle. In one method, cameras and lasers perceive light being emitted near the car within a three-dimensional “bounding box.” Then, GPS data is used to determine the location of traffic lights or other objects that also could emit light, and the system filters false positives. The car determines whether the lights are flashing by checking for an on-off pattern, and uses geographical data to determine the spacing and color of lights for emergency vehicles in the area. If there’s a match between the observed flashing lights and these templates, the system maneuvers the car to a safe parking location.

Fig. 1: A self-driving car could detect the approach of a police vehicle by identifying flashing lights that match a particular template, as is depicted in this illustration from a 2016 Google patent. Source: U.S. Patent Office

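The on-off check can be illustrated with a simple frequency-domain test on a per-frame brightness trace taken from a detected light’s bounding box. The Python sketch below shows the general technique rather than the patent’s exact method; the flash-frequency band is an assumption, since real strobe rates vary by jurisdiction.

```python
import numpy as np

# Assumed strobe band: emergency flashers typically cycle faster than turn
# signals, but exact rates vary by jurisdiction -- these bounds are illustrative.
EMERGENCY_HZ_RANGE = (2.0, 6.0)


def looks_like_emergency_flash(brightness: np.ndarray, fps: float) -> bool:
    """Test a per-frame brightness trace (one light's bounding box) for a
    periodic on-off pattern whose dominant frequency falls in the strobe band."""
    trace = brightness - brightness.mean()        # drop the DC component
    spectrum = np.abs(np.fft.rfft(trace))
    freqs = np.fft.rfftfreq(len(trace), d=1.0 / fps)
    dominant = freqs[int(spectrum.argmax())]
    lo, hi = EMERGENCY_HZ_RANGE
    return lo <= dominant <= hi


# A 4 Hz on-off signal sampled at 30 fps should land inside the band.
t = np.arange(90) / 30.0
square = (np.sin(2 * np.pi * 4.0 * t) > 0).astype(float)
print(looks_like_emergency_flash(square, fps=30.0))  # True
```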

Instead, the complexity of this scenario arises from the fact that police departments generally have no experience with driverless cars and are unsure how to interact with this type of vehicle. There are few laws or other legal precedents to guide the way. It is possible law enforcement eventually will follow the example of the insurance industry, where liability for infractions and accidents is slowly shifting from the driver to automotive OEMs. That appears to be the case in the U.K., where the government unveiled a plan in August reflecting this liability shift for insurers, law enforcement, and other stakeholders. Recommendation 44 of the plan reads: “While a relevant ADS feature is engaged, the user-in-charge should not be liable for any criminal offense or civil penalty which arises from dynamic driving.”

The topic will become even thornier as driverless cars become more pervasive. Traffic citations will become less common as human error is removed from the driving process, which will in turn greatly reduce a major source of revenue for police and other government services. U.S. police pull over approximately 20 million drivers a year.

At the same time, law enforcement is likely to adopt its own driverless vehicles for policing purposes. Based on the capabilities of self-driving cars, police agencies will be able to upload predictive policing data into each officer’s patrol car at the beginning of their shifts, one law enforcement magazine speculated, such that the patrol car would know when and where to drive in an effort to provide a visible presence to deter crime. This will create yet another set of ethical and technical issues for society to untangle.

Conclusion
The ethical, legal, and technical work required for fully self-driving vehicles continues to be complex. If the past few years are any indication of the future, the maturation of the self-driving vehicle industry will be both heart-wrenching and sometimes strange. Fritz believes automotive ecosystem executives must stay focused on the broader mission of helping humanity. Citing the example of a young adult with seizures for whom a self-driving vehicle could be life-saving, he said, “There’s a very human side to this that we often forget.”


