Designing Crash-Proof Autonomous Vehicles

A lack of supervision and regulation is allowing unnecessary accidents with AVs. More rigorous development and testing processes are needed.


Autonomous vehicles keep crashing into things, even though advanced driver-assistance system (ADAS) technology promises to make driving safer because machines can think and react faster than human drivers.

Humans rely on sight and hearing to assess driving conditions. When a driver detects an object in front of the vehicle, the automatic reaction is to slam on the brakes or swerve to avoid it. Quite often drivers cannot react quickly enough, and reactions to other drivers’ reckless or wrong-way driving also can come too late to avoid an accident.

According to a 2014 consumer report, human error caused 90% of traffic crashes. The National Highway Traffic Safety Administration (NHTSA) reported 42,795 traffic fatalities in 2022, with human error a main factor. NHTSA proposed using driver assistance technologies to help reduce the number of accidents. The potential benefits of autonomous vehicles include increasing safety by reducing traffic jams and accidents, easing congestion to decrease air pollution, lowering driver fatigue, and bringing mobility to those unable to drive conventional cars.

As V2X (vehicle-to-everything) communication matures, vehicles will be able to communicate with each other and slow down automatically, without any human intervention, when there is an accident or road hazard ahead.
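
To make that behavior concrete, here is a minimal, hypothetical sketch of a vehicle reacting to a V2X-style hazard broadcast. The HazardMessage fields, distance thresholds, and slowdown factors are illustrative assumptions, not any V2X standard.

```python
# Hypothetical sketch: slow down automatically when a broadcast hazard
# lies ahead on the vehicle's own road. All fields/values are illustrative.
from dataclasses import dataclass


@dataclass
class HazardMessage:
    road_id: str
    position_m: float   # distance along the road where the hazard sits
    kind: str           # e.g. "accident", "debris", "ice"


def on_hazard(msg: HazardMessage, my_road: str, my_position_m: float,
              current_speed: float) -> float:
    """Return the new target speed after receiving a hazard broadcast."""
    ahead = msg.road_id == my_road and msg.position_m > my_position_m
    if not ahead:
        return current_speed
    distance = msg.position_m - my_position_m
    # Close hazards demand a hard slowdown; distant ones a gentle one.
    return current_speed * (0.3 if distance < 200 else 0.7)


# An accident 150 m ahead cuts the target speed from 25 m/s to 7.5 m/s.
print(on_hazard(HazardMessage("A1", 1150.0, "accident"), "A1", 1000.0, 25.0))
```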

However, a report by National Public Radio revealed that in a span of less than a year across 2021 and 2022, nearly 400 crashes involving vehicles equipped with automated driver-assist technologies occurred, 273 of them involving Teslas.

“NHTSA publishes periodic reports about ADAS-related crashes for Autonomous Driving Level 2 and Level 3 to 5 vehicles,” noted Thierry Kouthon, technical product manager for security IP at Rambus. “The reports give crash statistics, which manufacturers were involved, and the consequences. It is still difficult to point out the actual reason for the crashes because of the expertise and data required for a thorough investigation and determination. This industry is still in its infancy.”

The data came from the accident reporting that NHTSA’s Standing General Order first required of OEMs in 2021 and updated in April 2023. OEMs must report to NHTSA, within 24 hours, any accident involving a vehicle equipped with Level 2 to Level 5 autonomous driving capability. In June 2022, NHTSA released its first summary reports of those crashes, covering Level 2 and Level 3 to 5 vehicles. Yet the initial data showed minimal correlation among the various accidents reported by OEMs and was inconclusive.

Today we still know very little about why and how autonomous vehicles crash.

Fig. 1: OEMs were required by NHTSA to report accidents involving vehicles equipped with Level 2 automated driving systems starting in 2021. (Level 3 to 5 accidents were published in a different report.) Source: NHTSA report released in June 2022

Why do autonomous vehicles keep crashing?
Summoning a driverless taxi (robotaxi) with a mobile app promised a new era of transportation convenience, but the reality has been very different. Case in point: In June 2022, a robotaxi operated by Cruise collided with a Toyota Prius. According to a San Francisco Police Department spokesperson, passengers in the robotaxi needed medical attention as a result of the accident.

Then, in March 2023, a driverless Cruise Chevy Bolt rear-ended a San Francisco Muni bus. The autonomous vehicle (AV), with no passengers inside, suffered minor damage, while the Muni bus was left almost unscathed. As a result of this accident, Cruise recalled 300 robotaxis and issued a software update.

In a public statement, Cruise CEO Kyle Vogt said the accident was caused by the software’s mistaken prediction of the bus’s behavior, which led to the brakes being applied too late. Vogt added that Cruise autonomous vehicles had driven more than 1 million miles with no collisions related to this particular issue.

The Cruise incidents are far from isolated. Multiple collisions have occurred involving Teslas in “autonomous mode” and other robotaxis.

“Concerning the causes of crashes, it is typically a combination of various aspects,” said Frank Schirrmeister, vice president of solutions and business development at Arteris IP. “As many experts have pointed out, it is critical to install layers of checks to allow some level of graceful degradation. If and when the AI confidence level of object detection falls below a specific level, due to worse visual conditions or a specific situation it has not been trained for, the set of controls needs to be updated. The industry must find a way to capture and model the driver experience and use common sense to determine safety control aspects.”

One thing seems obvious: crashes occur due to misjudgments, wrong predictions, and inaccurate AI calculations. In Cruise’s 2023 San Francisco collision, the Muni bus was moving at only 10 mph. The AV should have detected the real object directly in front of it. Had it done so, there would have been plenty of time to activate the emergency braking system (EBS).

Others question whether a different type of sensor, such as radar, would have detected a real object up ahead and allowed the car to stop in time, or whether the AI is to blame.

Challenges in designing crash-proof autonomous vehicles
ADAS is supposed to fix these issues and help humans react faster in emergency situations. It relies on sensors to collect data, and on AI algorithms and electronic control units (ECUs) to process that data and activate the EBS when necessary. If everything works as planned, ADAS should do a much better job than human drivers and avoid accidents.

One common factor in all of these AV crashes is that the AV was traveling too fast relative to its braking distance. Because of the high speed, the gap between the AV and the object in front of it was too short, and the vehicle did not have enough time to stop. In almost all cases, the AI software assumed everything was “normal” when it was not.

Two things need to happen to prevent autonomous vehicles from crashing. The first is detecting hard objects and determining that they are on a collision course with the AV. The second is activating the EBS in real time.

The AI algorithm encompasses a number of variables, including the distance and speed of the moving vehicles or objects in front of the AV. If an object is moving, the algorithm must determine how much time will elapse before the AV collides with it if the AV does not slow down. Weather and road conditions also need to be taken into consideration. It is much easier to handle a vehicle on a sunny day than on a snowy day with black ice.
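
To make those two quantities concrete, here is a minimal sketch of time-to-collision and stopping distance under different road conditions. The friction coefficients, reaction latency, and simple braking model are illustrative assumptions, not any OEM’s actual algorithm.

```python
# A minimal sketch of the two quantities described above: time-to-collision
# with a lead object, and the stopping distance required under the current
# road conditions. All coefficients are illustrative assumptions.

FRICTION = {"dry": 0.7, "wet": 0.4, "snow": 0.2, "black_ice": 0.05}
G = 9.81  # gravitational acceleration, m/s^2


def time_to_collision(gap_m: float, ego_speed: float, lead_speed: float) -> float:
    """Seconds until impact if neither vehicle changes speed.

    Returns infinity when the ego vehicle is not closing the gap.
    """
    closing_speed = ego_speed - lead_speed  # m/s
    if closing_speed <= 0:
        return float("inf")
    return gap_m / closing_speed


def stopping_distance(ego_speed: float, road: str, reaction_s: float = 0.5) -> float:
    """Distance covered during system latency plus braking to a full stop.

    Uses the classic v^2 / (2 * mu * g) model; real controllers are far
    more sophisticated.
    """
    mu = FRICTION[road]
    return ego_speed * reaction_s + ego_speed**2 / (2 * mu * G)


# Example: a bus at 10 mph (~4.5 m/s) ahead of an AV doing 25 mph (~11.2 m/s).
gap = 20.0  # meters
print(time_to_collision(gap, 11.2, 4.5))     # ~3 s to react
print(stopping_distance(11.2, "dry"))        # ~15 m on dry asphalt
print(stopping_distance(11.2, "black_ice"))  # >130 m on black ice
```

The black-ice case illustrates the point about road conditions: the same speed that leaves a comfortable margin on dry asphalt leaves none at all on ice.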

Even more challenging is that the algorithm must consider whether the objects in front are moving vehicles, pedestrians, animals, or other random moving objects. In poor weather conditions, especially with black ice, are AVs smart enough to determine what distance is required to stop the vehicle from colliding with objects in front?

Even more challenging to an AV’s training is whether the algorithm would know what to do when traffic signals are down and police officers or others must direct traffic. What happens if sensors provide conflicting data? If one sensor says the traffic light is green, but a police officer signals stop, what will the vehicle do next?
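
One way to frame that conflict is as an arbitration policy over perception inputs. The sketch below is a hypothetical illustration, assuming a simple priority ordering (a human traffic director overrides a signal) and a graceful-degradation rule echoing Schirrmeister’s point about layered checks; the class names, confidence threshold, and rules are not any production planner’s design.

```python
# Hypothetical arbitration over conflicting perception inputs.
from dataclasses import dataclass
from enum import Enum


class Command(Enum):
    PROCEED = "proceed"
    STOP = "stop"
    DEGRADE = "pull over / hand off safely"


@dataclass
class Observation:
    source: str        # e.g. "camera_signal", "human_director"
    says_go: bool
    confidence: float  # 0.0 - 1.0 from the perception stack


def arbitrate(observations: list[Observation]) -> Command:
    # Rule 1: a detected human directing traffic always wins.
    directors = [o for o in observations if o.source == "human_director"]
    if directors:
        return Command.PROCEED if all(o.says_go for o in directors) else Command.STOP

    # Rule 2: low confidence anywhere triggers graceful degradation.
    if any(o.confidence < 0.6 for o in observations):
        return Command.DEGRADE

    # Rule 3: remaining sources must agree; any disagreement means stop.
    votes = {o.says_go for o in observations}
    return Command.PROCEED if votes == {True} else Command.STOP


# Green light per camera, but an officer signaling stop: the officer wins.
print(arbitrate([
    Observation("camera_signal", says_go=True, confidence=0.95),
    Observation("human_director", says_go=False, confidence=0.80),
]))  # Command.STOP
```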

New OEM challenges
Autonomous driving is very complex, and AI will only do what it has learned. Many components must work together, including AI, ADAS, ECUs, chips, and high-speed communications. Which of these components are responsible for vehicle safety, including crash-proof operation? And because all of these components are interconnected, even if each individual component operates flawlessly, how can OEMs ensure that the networking and interfacing between components work properly, without delays or errors? OEMs will need to work out all of these issues.

“The interaction between the various components is critical to safety and security,” Schirrmeister said. “Discussions about scalability and hierarchical FMEDA are looming from an IP vendor’s perspective. We see them as we support specific network-on-chip (NoC) capabilities like socket parity/ECC support for control and data and many others to meet ISO 26262 requirements. The problem is similar to constrained random testing in functional verification of chip design, which is now applied similarly to creating specific scenarios for traffic situations that need validation. Just like in the area of processor IP, aspects like ‘system ready certification,’ as pioneered by Arm originally for server designs, may become something to consider at the higher level of complexity of sub-systems and chips for automotive integration.”

How, then, can hard objects be detected in front of the AV in time to stop the vehicle? Sensors such as radar can be installed to continually monitor what is in front of the AV. As soon as an object is detected and determined to be on a collision course, the emergency braking system can be applied to stop the vehicle. This capability is essential in a properly designed AI for AVs.
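
A toy version of that detect-then-brake loop might look like the following, where the Radar and Brake classes are hypothetical stand-ins for a real sensor and actuator stack, and the 2-second time-to-collision trigger is purely illustrative.

```python
# Simplified monitor loop: read the radar, estimate time-to-collision,
# and trigger emergency braking when it falls below a threshold.
# Interfaces and thresholds are illustrative assumptions.
import random


class Radar:
    """Fake radar returning (range_m, closing_speed_mps) for the nearest object."""
    def read(self) -> tuple[float, float]:
        return random.uniform(5, 80), random.uniform(0.1, 15)


class Brake:
    def emergency_stop(self) -> None:
        print("EBS activated: maximum braking")


TTC_THRESHOLD_S = 2.0  # illustrative; real systems tune this per speed and road


def monitor_step(radar: Radar, brake: Brake) -> None:
    range_m, closing = radar.read()
    ttc = range_m / closing if closing > 0 else float("inf")
    if ttc < TTC_THRESHOLD_S:
        brake.emergency_stop()


# In a real stack this would run at a fixed, hard real-time rate.
for _ in range(5):
    monitor_step(Radar(), Brake())
```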

As Paula Jones, director of vehicle automation and chassis at Infineon Technologies explained, “Radar technology is essential for autonomous driving design. It can penetrate through heavy rain and dense fog. When there is a solid object in front, radar will certainly detect that, and the imaging capabilities for object detection of radar continue to improve over time. Additionally, with the growth of SAE L1/L2/L2+ requirements, the number of radar systems is expected to grow 24% annually.”

Fig. 2: High-resolution radar improves angle resolution and object classification, which helps AVs with distance and speed detection. Source: Infineon Technologies

AI is a key component in AVs, particularly in terms of programming AI to learn faster, become smarter, and make fewer mistakes.

“Training sets for autonomous driving functions are very important,” said David Fritz, vice president of hybrid and virtual systems at Siemens Digital Industries Software. “Taking a 16-year-old young driver as an example, he or she first needs to get a learner’s permit, then take driving lessons, then pass the driving exam before a legal driver’s license is issued. Why should we treat autonomous vehicles any differently? Instead, car makers are trying to rush to be the first to provide an affordable Level 5 autonomous platform. As a result, we are seeing a lack of supervision and regulation, resulting in unnecessary accidents. What we need is for autonomous vehicles to go through a similar phased process, or milestones, before being allowed to operate on the road in a busy city. We’ve had conversations with the regulatory bodies to emphasize the importance and possibility of putting autonomous vehicles through a series of virtual driving tests, using technologies such as digital twin, to ‘earn their learner’s permit.’ The AI model of the vehicle, complete with sensing and actuation, should be deployed in some controlled scenarios to show proof of correct operation under the laws of the appropriate region.”

Then, correlating the behavior of the digital twin with the physical vehicle is a critical next step in the process, adding confidence that the physical vehicle will perform as the digital twin does in situations that are too difficult or too dangerous for physical vehicles.

“Should an accident occur, by including simple digital twin instrumentation in the vehicle, the data collected can be replayed in the digital twin to diagnose accidents or poor decisions made by the physical vehicle,” Fritz said. “High-fidelity digital twins allow you to look inside the digital twin and the AI engine to see everything happening and determine why certain decisions have been made. We call that post-production digital fit. And you can do a level of diagnosis not possible in most physical vehicles. This is common practice for mission-critical defense and military projects.”
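
A bare-bones illustration of that record-and-replay workflow is sketched below. The JSON-lines log format and the decision-function hook are assumptions for illustration, not Siemens’ actual tooling; the point is simply that logged frames can be re-fed to a twin’s policy to flag where its decisions diverge from what the physical vehicle did.

```python
# Hypothetical sketch of "record in the vehicle, replay in the digital twin."
import json
from pathlib import Path

LOG = Path("drive_log.jsonl")


def record_frame(t: float, sensors: dict, decision: str) -> None:
    """In-vehicle: append one timestamped perception/decision snapshot."""
    with LOG.open("a") as f:
        f.write(json.dumps({"t": t, "sensors": sensors, "decision": decision}) + "\n")


def replay(decide) -> None:
    """Offline: feed logged sensor frames to the twin's decision function
    and flag any divergence from what the physical vehicle actually did."""
    for line in LOG.read_text().splitlines():
        frame = json.loads(line)
        twin_decision = decide(frame["sensors"])
        if twin_decision != frame["decision"]:
            print(f"t={frame['t']}: vehicle chose {frame['decision']!r}, "
                  f"twin chose {twin_decision!r} -- investigate")


# Example: log one frame, then replay it against a trivial stand-in policy.
record_frame(12.4, {"gap_m": 8.0, "closing_mps": 6.5}, "coast")
replay(lambda s: "brake" if s["gap_m"] / s["closing_mps"] < 2.0 else "coast")
```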

Increasing safety with AI-based cameras
Deploying AI used to be expensive and power-hungry. That’s not necessarily the case for inference chips, particularly once the algorithms are trained.

“Automotive applications, such as processing images of high-resolution cameras, have become more practical due to new innovations, enabling cost and power reduction,” said Geoff Tate, CEO of Flex Logix. “An example would be the InferX AI IP, which can run multiple AI models simultaneously, including the Yolov5L6, 1280 x 1280 pixels at 30 frames/second. So 100% reconfigurable GPU performance is now possible.”

Still, by all accounts, ADAS will continue to evolve for the better as vehicle architectures improve. At the same time, the inter-networking and inter-operation among all the components within autonomous vehicles — ECUs, the various sensors monitoring the inside and outside of the vehicle, infotainment systems, telematics supporting 5G and V2X, over-the-air (OTA) updates, and security — will make automotive designs more complex.

New, highly computerized automotive architectures such as software-defined vehicles (SDVs) have severed the relationship between hardware and software in ECUs.

“In other words, ECUs are no longer compact, physical electronic components directly connected to sensors and actuators,” Rambus’ Kouthon said. “They are applications running on a server platform that is directly connected to the sensor and the actuators. This separation between the sensors/actuators and the ECU software/application creates the opportunity to ‘download’ new ECUs in the vehicle as opposed to installing them physically. More importantly, it provides the ability to simulate the vehicle operations using digital twins — software replicas of the vehicle that typically run on a remote server in a cloud environment. The replica runs exactly the same ADAS software as the real vehicle and can be used to carry out any number of simulations, including crashes. Accuracy of digital twins in simulating the behavior of the actual vehicle will be a necessary requirement for OEMs and manufacturers alike.”

Another challenge lies with the OEMs themselves, as most have different divisions that build different models. Each division has profit-and-loss responsibility for its own model/product lines. It would be unusual for one division to share its knowledge with another, so unless corporate leadership takes control of knowledge sharing, it may be difficult for all models to have the same ADAS safety features.

“One of the challenges OEMs face today in regard to security and safety is the lack of a unified test environment for autonomous driving supporting all the different model vehicles,” observed Chris Clark, senior manager in the automotive group at Synopsys. “One model should not be more or less secure than another model from the same OEM. A unified data set used in ADAS needs to be available to all model vehicles. In other words, you cannot have one ADAS design with better inferencing capability than others. Autonomous driving provides great features and capabilities. While it is very safe, the driver is still the last point of control for the vehicle.”

Conclusion
Although AVs and ADAS may have benefits, many AV crashes have occurred during the test phase. How to prevent AVs from causing damage to people, other vehicles, and property remains a challenge for OEMs. Instead of relying on AI prediction and inferencing alone, real-time hard-object detection, emergency braking system activation, and digital twin technologies need to be employed.


