More electronics everywhere and more connectivity create issues involving power, performance, cost, and security.
The role of AI/ML in automobiles is widening as chipmakers build more intelligence into automotive chips, setting the stage for safer vehicles and fewer accidents, but also much more complex electronic systems.
While full autonomy is still on the distant horizon, the short-term focus involves making sure drivers are aware of what’s going on around them — pedestrians, objects, or other cars that might cause accidents — and ensuring they are paying attention. According to the World Health Organization (WHO), about 1.3 million people are killed each year in road accidents, including more than 38,000 people in the United States alone during 2020. Even though fewer people were driving during the beginning of the pandemic, traffic fatalities increased by 7.2% over the previous year, according to the National Highway Traffic Safety Administration (NHTSA).
Safety concerns have increased the demand for smart technology. “We are seeing more and more intelligence being integrated into chips,” said Ron DiGiuseppe, automotive IP and subsystem segment manager for Synopsys’ Automotive Group. “SoCs are doing so much these days — including ADAS, speech recognition, internal and external monitoring — and soon they will also perform battery management and preventive maintenance for EVs. The purpose of in-cabin monitoring is to reduce traffic accidents. One of the causes of traffic accidents is driver distraction or drowsiness. By monitoring and detecting driver behaviors, the AI SoC will provide feedback and alerts. When sensors detect the nodding of the head or the closing of the eyelids, the algorithm will know the driver is not in a position to continue driving and will suggest finding a safe place to take a break. Additionally, when the steering wheel sensors do not sense enough pressure, the algorithm will alert the driver to put their hands back on the wheel.”
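The kind of alerting DiGiuseppe describes can be sketched in a few lines. The Python below is illustrative only, with hypothetical field names and thresholds; production DMS implementations run trained neural networks over the sensor streams rather than fixed cutoffs.

```python
from dataclasses import dataclass

@dataclass
class FrameReading:
    """One sample from the in-cabin sensors (hypothetical fields)."""
    eyelid_closure: float    # 0.0 = fully open, 1.0 = fully closed
    head_pitch_deg: float    # forward nod angle, in degrees
    steering_grip_n: float   # steering-wheel pressure, in newtons

def assess_driver(frames: list[FrameReading],
                  closure_thresh: float = 0.8,
                  nod_thresh_deg: float = 25.0,
                  grip_thresh_n: float = 2.0) -> list[str]:
    """Return alerts if recent frames suggest drowsiness or hands off the wheel."""
    alerts = []
    # PERCLOS-style check: fraction of recent frames with eyes mostly closed or head nodding.
    closed = sum(f.eyelid_closure > closure_thresh for f in frames) / len(frames)
    nodding = sum(f.head_pitch_deg > nod_thresh_deg for f in frames) / len(frames)
    if closed > 0.3 or nodding > 0.3:
        alerts.append("Drowsiness detected: suggest finding a safe place to take a break.")
    # Steering-pressure check: no meaningful grip in any recent frame.
    if all(f.steering_grip_n < grip_thresh_n for f in frames):
        alerts.append("Hands off wheel: alert driver to retake the wheel.")
    return alerts
```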
Smart technology also pushes automotive design to the leading-edge nodes, where there is less industry learning. “A lot of the classic automotive folks had their own fabs, so they were pretty confident in their technology,” said Lee Harrison, automotive IC test solutions manager at Siemens EDA. “What we’ve seen over the last three to five years is that’s changed immensely. Automotive customers typically were at the 120nm process node, whereas the mobile folks were pushing 14nm and 7nm. Now, the automotive folks are pushing the envelope, and that’s being driven by all the capabilities that are being asked for in all these automotive devices. The latest data systems are at 7nm and 5nm. But they don’t have the data for those. In the past, they’d have five years of data before doing anything.”
Luxury car brands have added some of these features over the past couple of years, but the technology is becoming more ubiquitous and the chips being used are more complex. AI systems frequently are developed at the most advanced process nodes because they need more compute elements to process data in real time. That, in turn, is raising some familiar challenges, as well as some that are unique to automotive.
“The potential for AI in the vehicle is huge, and we’re only beginning to tap into what’s possible,” said Dennis Laudick, vice president of automotive go-to-market in Arm’s Automotive and IoT Line of Business. “But in achieving this, the appetite of AI for compute is insatiable. One of the biggest challenges for the automotive industry is how to achieve the huge benefits of AI, but in a hardware cost and power profile that is suitable for consumer automotive.”
This is increasingly true for intelligent sensing devices, and for the automotive AI SoCs that have made intelligent in-cabin monitoring systems (ICMS) possible.
“Advancements in dependable automotive technology and sensors, including external radar and camera and sensor fusion, have helped move ADAS, vehicle connectivity, and mobility services forward,” said Bill Stewart, senior director of vehicle automation and chassis at Infineon Technologies. “These include exterior assistance systems that provide warnings to prevent collisions. In recent years, in-cabin monitoring systems have been added to further increase driving safety.”
Fig. 1: ICMS technologies including vision cameras, radar, and NIR are gaining momentum. Source: Infineon
An ICMS uses surveillance, monitoring, and alarms to increase automotive security and safety. It consists of a driver monitoring system (DMS), an occupant monitoring system (OMS), and other object detection technologies to determine the presence of a child, animals, or items such as a cell phone and keys that are left behind inside the cabin. While DMS has been on the market for some time, OMS is relatively new.
“Compared to DMS, where the sensor is looking only at the driver, in OMS one or more sensors are monitoring all occupants in the vehicle,” said Amol Borkar, director of product management and marketing, Tensilica Vision and AI DSPs at Cadence. “This opens up a whole array of use cases and applications. With a combination of camera, microphone, temperature, and other sensors, one could do things like smart airbag deployment. Depending on the point of collision in an accident, and by using AI networks to analyze the occupant’s seating position, airbags could be deployed to be most effective and reduce the chances of injury. Another safety measure could be using AI networks to detect the presence of children or pets that are accidentally left in a locked vehicle during an everyday grocery run. If detected, the driver could be warned, and an alarm sounded. Coupled together, DMS and OMS could monitor the activity level inside the vehicle cabin and indicate the potential for driver distraction.”
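As a rough illustration of the occupancy-driven decisions Borkar describes, the sketch below assumes a hypothetical classifier has already labeled each seat; real systems fuse camera, radar, microphone, and weight-sensor data through trained networks and follow OEM-specific deployment rules.

```python
from dataclasses import dataclass
from enum import Enum

class Occupant(Enum):
    EMPTY = 0
    ADULT = 1
    CHILD = 2
    PET = 3

@dataclass
class SeatState:
    occupant: Occupant       # output of a (hypothetical) occupant classifier
    out_of_position: bool    # e.g. leaning far forward at the moment of impact

def airbag_commands(seats: dict[str, SeatState]) -> dict[str, str]:
    """Decide per-seat airbag behavior from classified occupancy (illustrative only)."""
    commands = {}
    for seat, state in seats.items():
        if state.occupant in (Occupant.EMPTY, Occupant.PET):
            commands[seat] = "suppress"
        elif state.occupant == Occupant.CHILD or state.out_of_position:
            commands[seat] = "low-power deploy"
        else:
            commands[seat] = "full deploy"
    return commands

def child_left_behind(seats: dict[str, SeatState], vehicle_locked: bool) -> bool:
    """Trigger an alert when a child or pet remains in a locked cabin."""
    return vehicle_locked and any(
        s.occupant in (Occupant.CHILD, Occupant.PET) for s in seats.values())
```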
The sophistication of driver monitoring systems varies. To monitor driver distraction, fatigue, and emotional state, for example, an algorithm may track one or more parameters such as breathing activity, facial expressions, pupil dilation, eye blinking patterns, and even heart rate. Some OEMs have started to integrate DMS and health monitoring, where vital signs are tracked to determine if the driver is in good health. If the driver has a heart attack, for example, the ADAS can take over and safely maneuver the car to the side of the road. This is still quite a few years away. For the most part, DMS today primarily detects a driver’s drowsiness, distraction, or sudden sickness such as fainting while driving. OMS is used to monitor the vehicle occupants to detect whether they are wearing seatbelts and, in some cases, to allow them to interact with the system.
What’s happening inside the vehicle
Technologies used for in-cabin sensing include cameras, near-infrared (NIR) sensors, radar, and ultrasound. NIR is by far the leading technology used by many OEMs, including Audi, BMW, GM, Ford, Mercedes-Benz, Nissan, Hyundai, and Mazda. Tesla primarily uses radar, while Toyota uses a combination of radar and camera. These technologies can perform eye/gaze tracking, detection of hand motion/position, facial recognition, occupant/child presence detection, and steering hand pressure sensing.
NIR. Near-infrared sensing uses LEDs with source wavelengths ranging from 850nm to 1,050nm. The automotive industry tends to use around 940nm, which is beyond the visible spectrum and poses no risk to the human eye. Additionally, sunlight does not interfere with NIR. The light emitted by the diodes bounces back from the target object(s) to a receiver, enabling a time-of-flight (ToF) calculation to determine the object’s distance. After thousands of reflections are received, a final 3D image in pixel format is constructed. These 3D images are representations of the people inside the cabin. If the driver suddenly passes out, for example, the NIR sensors can detect that the driver is not sitting upright, and the algorithm will interpret that as an emergency.
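The time-of-flight arithmetic itself is straightforward. The sketch below shows the basic pulse-timing version; continuous-wave ToF sensors typically recover the round-trip time per pixel from a phase shift rather than timing a single pulse.

```python
SPEED_OF_LIGHT_M_S = 299_792_458.0

def tof_distance_m(round_trip_time_s: float) -> float:
    """Distance to a target from the round-trip time of a 940nm NIR pulse."""
    # The light travels to the target and back, so divide the path length by two.
    return SPEED_OF_LIGHT_M_S * round_trip_time_s / 2.0

# Example: a reflection arriving about 6.67 nanoseconds after emission
# corresponds to a target roughly 1 meter away.
print(tof_distance_m(6.67e-9))  # ~1.0 m
```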
Developers are constantly adding new features to NIR technology. For example, Radiant Vision Systems’ NIR cameras provide two simultaneous outputs for better accuracy. The Radiant Vision Systems NIR intensity lens system operates at 850 or 940nm with 0.05 degrees per image sensor pixel. To ensure eye safety, NIR equipment must comply with the IEC 62471 and IEC 60825-1 standards for light sources.
There also are new developments on the chip side. For example, ASICs are now available for simultaneous DMS and OMS applications. OMNIVISION’s single package AI-based ASIC integrates RGB‑IR image signal processing (ISP) with two AI neural processing units (NPUs) and embedded DDR3 memory (2 Gb). Combined with Smart Eye’s AI algorithms, OMNIVISION’s GS sensor, which supports 940nm with a small 2.2‑micron pixel, is compliant with the General Safety Regulations (GSR) and Euro New Car Assessment Programme (NCAP).
A single-chip continuous wave time-of-flight (cwToF) sensor from Melexis now supports both 850nm and 940nm wavelengths with the MIPI CSI-2 serial interface to the host ECU.
Radar. In ADAS applications, radar has the advantage of providing high resolution without being impacted by weather conditions, such as rain. Vision cameras and lidar, which rely on light to detect target objects, may be blocked by non-target objects, such as hail. Similarly for ICMS, automotive 60GHz radar sensors (4GHz bandwidth) are effective in providing high-resolution short-range sensing. Some of the applications include detecting the presence of a child or pet. Besides monitoring for seat belt use, it also can track the vehicle occupants’ biometrics and vital signs.
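The value of that 4GHz of bandwidth follows from standard FMCW radar theory, where range resolution is the speed of light divided by twice the sweep bandwidth. A quick back-of-the-envelope check:

```python
SPEED_OF_LIGHT_M_S = 299_792_458.0

def range_resolution_m(bandwidth_hz: float) -> float:
    """Theoretical range resolution of an FMCW radar: c / (2 * bandwidth)."""
    return SPEED_OF_LIGHT_M_S / (2.0 * bandwidth_hz)

# A 60GHz in-cabin sensor with 4GHz of sweep bandwidth can separate
# reflections roughly 3.7cm apart, fine enough for short-range occupant sensing.
print(range_resolution_m(4e9))  # ~0.0375 m
```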
Multiple companies are working in this space, including Infineon, Vayyar, Gentex, ams OSRAM, and Veoneer.
Radar also can be used in exterior sensors. If vehicles are equipped with self-parking features, those sensors can detect the presence of pedestrians, animals, or nearby objects, and apply emergency braking to avoid collisions.
Security concerns
While more electronics can greatly improve vehicle safety, they also open the door to more cyberattacks. The rate of attacks is increasing significantly. There are reports of vehicle controls being taken over by hackers, and of ambulances being routed to the wrong locations.
“Security and privacy are critical and fundamental not only to AI and vehicle implementations, but also to any modern devices employing any compute system,” said Arm’s Laudick. “It’s an area we spend a lot of effort focusing on, but also one that is constantly evolving. It will continue to be a focus for us going forward.”
The centralization of automotive intelligence and the replacement of electronic control units with high-performance SoCs also provide a single target for attackers seeking to control the whole vehicle. As 5G, V2X, and smart infrastructure connections grow, the attack surface will widen, and the possibility of gaining access to the central SoCs will increase. Design efficiency and low latency are vital elements for safeguarding data security as vehicle connections multiply.
“AI inference tasks need another level of protection,” said Bart Stevens, senior director of product marketing at Rambus. “AI edge processing must securely manage its AI models, as they represent high value. Cars are also getting smarter in detecting the safety of passengers (and the driver) by monitoring the state of the driver. The data gathered during this monitoring could lead to privacy issues, thus the need to protect the privacy of these data points. When vehicle-to-vehicle, or vehicle-to-infrastructure (i.e., vehicle-to-everything) becomes mainstream, efficient and standardized low-latency security protocols need to handle the V2X communications sessions. At any given time, a vehicle may need to securely set up and tear down hundreds of such sessions to surrounding vehicles and to surrounding infrastructure such as in-road detectors, light posts, etc. This calls for systems capable of handling high-performance low-latency security protocols. These systems may even need to be post-quantum crypto-proof.”
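A toy model of the session churn Stevens describes might look like the sketch below. The class and method names are made up for illustration; real V2X stacks follow standards such as IEEE 1609.2 and ETSI ITS security, and the key generation here is a placeholder for an authenticated (and eventually post-quantum) handshake.

```python
import secrets
import time

class V2XSessionManager:
    """Toy model of a vehicle continually establishing and tearing down
    short-lived secure sessions with nearby vehicles and infrastructure.
    Structure and names are illustrative only."""

    def __init__(self, max_sessions: int = 200):
        self.max_sessions = max_sessions
        self.sessions = {}  # peer_id -> (session_key, established_at)

    def establish(self, peer_id: str) -> bytes:
        # Stand-in for a real authenticated key exchange (e.g. certificate-based
        # ECDH today, or a post-quantum KEM in future stacks).
        if len(self.sessions) >= self.max_sessions:
            self._evict_oldest()
        key = secrets.token_bytes(32)
        self.sessions[peer_id] = (key, time.monotonic())
        return key

    def tear_down(self, peer_id: str) -> None:
        self.sessions.pop(peer_id, None)

    def _evict_oldest(self) -> None:
        # Drop the longest-lived session to make room for a new peer.
        oldest = min(self.sessions, key=lambda p: self.sessions[p][1])
        del self.sessions[oldest]
```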
They also need to be traceable. “The development and integration of AI into automotive SoCs is a sophisticated process,” said Paul Graykowski, senior technical marketing manager at Arteris IP. “In addition to achieving power, performance, and area (PPA) goals, design teams must balance design tradeoffs between architectural specifications and physical constraints. Beyond meeting these requirements, automotive designs have the additional demand to be functionally safe and intrinsically secure. To ensure compliance to ISO 26262, one must maintain requirements traceability of design artifacts, along with proven verification and validation of said requirements.”
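Requirements traceability of the kind Graykowski mentions is usually maintained in dedicated lifecycle-management tools, but conceptually it reduces to linking each safety requirement to the design and verification artifacts that demonstrate it. A minimal, hypothetical sketch:

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    """Illustrative traceability record for an ISO 26262 safety requirement."""
    req_id: str                      # e.g. "SR-042" (hypothetical ID scheme)
    text: str
    asil: str                        # ASIL A through D
    design_artifacts: list[str] = field(default_factory=list)
    verification_results: list[str] = field(default_factory=list)

def untraced(requirements: list[Requirement]) -> list[str]:
    """Flag requirements lacking either a design artifact or a verification result."""
    return [r.req_id for r in requirements
            if not r.design_artifacts or not r.verification_results]
```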
And to prevent such attacks, automotive electronic designs must start with security from the ground up. ECUs need to have built-in security, and SoCs should incorporate security building blocks or security IP. Furthermore, extensive design simulation, verification, and testing need to be in place.
In September 2022, the NHTSA updated and released its Cybersecurity Best Practices for the Safety of Modern Vehicles. The new release is a product of coordinated agency research, voluntary industry standards, and lessons collected from motor vehicle cybersecurity research over the years. Though non-binding, this document provides a collection of industry learnings and best practices.
Additional challenges
Selecting the right technologies for ICMS, and effectively balancing performance, power efficiency, and cost, remain key concerns in automotive design.
Infineon’s Stewart pointed to other design considerations. “An ICMS architecture may include standalone systems with a centralized processing unit or multiple MCUs and independent sensors,” he said. “In designing 2D and 3D cameras for DMS to meet the Euro NCAP 2025 requirements, OEMs will need to consider the functions of convenience and security. Some of these features may include driver distraction prevention, ensuring safety once seated position changes are allowed, or unblocking the car in an emergency situation. Additionally, for OMS design, 60GHz radar sensors may be used for turning on seat heating, seat belt alarm detection, or smart airbag deployment. More importantly, potential lifesaving functions such as child presence detection, rear occupant alert for a forgotten pet or object, and occupant status monitoring to monitor occupants’ health conditions would be very useful.”
Conclusion
Intelligent in-cabin monitoring technologies help vehicles avoid traffic accidents, along with the injuries and damage they cause. Currently, NIR is the leading technology used in ICMS, followed by radar and vision cameras. Increasingly, regulations and assessment programs such as the EU General Safety Regulation (GSR) and the European New Car Assessment Programme (Euro NCAP) are pushing automakers to increase safety using technologies including ICMS. Future legislation has been proposed that would require automakers to include in-cabin child presence detection (CPD), rear occupant alert (ROA), and occupant status monitoring (OSM). These will further drive the demand for ICMS in automotive design, and improve the safety of vehicles.
But all of this comes at a price, and it will be an ongoing challenge to rein in costs, reduce the amount of power used by these systems, and to keep everything secure as more systems are interconnected and centrally managed.
Related Reading
Big Changes Ahead For Inside Auto Cabins
New electronics to monitor driver awareness, reduce road noise, ensure no babies or pets are left in hot cars.
Privacy Protection A Must For Driver Monitoring
Why driver data collected by in-cabin monitoring systems must be included as part of the overall security system.
Challenges Mount In New Autos
Technology differentiation is both a selling point and a potential liability.