Gearing Up For Level 4 Vehicles

Autonomy will likely come in different stages of L3+. What’s missing, and which technology and business challenges still need to be solved?


More autonomous features are being added to high-end vehicles, but getting to full autonomy will likely take years more effort, a slew of new technologies — some not in use today, and some involving infrastructure outside the vehicle — and sufficient volume to bring the cost of these combined capabilities down to an affordable price point.

In the meantime, many of the features that will be included in autonomous vehicles are likely to be rolled out individually. Mercedes-Benz recently introduced Level 3 autonomous driving, and other automotive OEMs will soon follow. Moving from Level 3 to Level 4 is much more complicated. But L4 features will offer major autonomous driving benefits never seen before.

Fig. 1: The six levels of autonomous driving, as defined by SAE J3016. Source: SAE

SAE International and the International Organization for Standardization (ISO) define six levels of autonomous driving, from 0 to 5. The latest release is the SAE J3016 Recommended Practice: Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles. This document is not a specification or a requirement. It simply provides a full definition of the six levels of autonomous driving.

SAE Level 4, “high driving automation,” is defined as “the sustained and ODD-specific performance by an [automated driving system] of the entire DDT (dynamic driving task) and DDT fallback without any expectation that a user will need to intervene.” Operational design domain (ODD) refers to the operating conditions in which the driving automation system will function as specified. These conditions include environmental, geographical, and time-of-day restrictions, and/or the presence or absence of certain traffic or roadway characteristics. In other words, the ADS can only operate within the predefined conditions, such as within a city with the proper infrastructure.
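The ODD concept can be made concrete as a simple gate: before engaging, the system verifies that every current condition falls within its approved envelope, and leaving that envelope triggers the DDT fallback rather than a handover to a human. A minimal sketch, where all condition names and limits are illustrative rather than taken from SAE J3016:

```python
from dataclasses import dataclass

@dataclass
class Conditions:
    """Snapshot of the current operating environment (illustrative fields)."""
    weather: str          # e.g. "clear", "rain", "snow"
    is_daytime: bool
    in_geofence: bool     # inside the mapped service area
    speed_limit_kph: int

def within_odd(c: Conditions) -> bool:
    """Return True only if every condition is inside the (hypothetical) ODD.
    A real ADS would evaluate far more conditions, continuously."""
    return (
        c.weather in {"clear", "overcast"}  # no adverse weather
        and c.is_daytime                    # daytime-only domain
        and c.in_geofence                   # mapped urban area only
        and c.speed_limit_kph <= 60         # low-speed roads only
    )

# The ADS engages only when within_odd(...) is True.
ok = within_odd(Conditions("clear", True, True, 50))
bad = within_odd(Conditions("snow", True, True, 50))
```

A real implementation would evaluate these conditions continuously while driving, not only at engagement.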

Fig. 2: Operational design domain (ODD) refers to the operating conditions in which the driving automation system will function as specified. Source: SAE

The automated driving system (ADS), commonly conflated with the advanced driver assistance system (ADAS), includes the hardware and software required to perform the entire dynamic driving task on a sustained basis. The term applies to SAE L3 to L5 driving.

Dynamic driving task (DDT) refers to all the real-time operational and functional activities required to operate a vehicle on the road such as stepping on the accelerator, signaling before making a turn, decelerating when the vehicle in front slows or stops, braking, lane changing, etc. SAE is very specific in defining such tasks, because for autonomous vehicles each action and decision is by design.

The key difference between L3 and L4 is that L3 is conditional driving automation. While it provides many autonomous functions, the driver is expected to take over when necessary. At L4, automation provides driving capabilities with no expectation of human intervention, so moving from L3 to L4 is a major jump. And while it is tempting to modify or add to existing L3 designs, designing L4 vehicles from the ground up is the better approach. OEMs are looking at convenience and comfort with L4, but they also want to achieve safe autonomous driving.

“There are several aspects of Level 4 that really take a quantum leap forward in terms of the amount of compute and the sophistication of the compute,” said David Fritz, vice president of hybrid and virtual systems at Siemens Digital Industries Software. “It would require a complete redesign. You’re not going to get there incrementally. If you look at the ADAS right now, they are very nominal. An ADAS can do automated cruise control, lane departure warning, emergency braking, and things like that. But these are a small fraction of what it takes for Level 4. Not only that, you have to make a quantum leap forward in your compute capabilities, including AI. You need to figure out how to do those things and get your ASIL-D certification. You pile those things on top of each other, and it’s a big task if you’re trying to incrementally get there.”

The most significant change from L3 to L4 is that there is no driver fallback, so the vehicle is in control of steering, braking, and all maneuvering, said Paula Finnegan Jones, senior segment director for vehicle automation and chassis at Infineon Technologies. “The early applications of L4 will likely be localized driverless taxis or commercial trucks with a more attractive operational business model. But in any case, safety and reliability will be at the forefront of all technical requirements. An architecture that is designed from scratch for L4 or L5 will be in a better position for optimized design than those being adapted from L3, since there are significant requirement differences.”

At the most basic level, L4 vehicles will not be required to have pedals or a steering wheel installed. “Redundancy and significant functional safety requirements will be at the forefront of any L4 system for sensor fusion and compute,” Finnegan Jones said. “Meeting those requirements for L4 enablement will take a portfolio of ISO 26262 qualified power, memory, SoC, and external MCU safety companions.”

Major OEMs pursuing L4
The race to autonomy is on, and the rewards will be significant. L4 ADAS revenue may reach $230 billion by 2035, according to McKinsey & Co., and all of the world’s major automotive OEMs are working on L4.

Fig. 3: ADAS revenue forecast to 2035. Source: McKinsey & Co.

Some of the features required for autonomous driving have been rolling out individually. For example, some Mercedes-Benz models available in Germany, such as the S-Class EQS and EQE, have pre-installed Intelligent Park Pilot, which is capable of driverless parking under SAE Level 4.

The German Federal Motor Transport Authority (KBA) issued the world’s first certification for the operation of series-production vehicles in parking garage P6 at Stuttgart Airport at the end of 2022. Those vehicles use Intelligent Park Pilot to self-park in a reserved parking space without the driver. Guided by a redundant sensor system that includes lidar, radar, and cameras in an appropriately equipped parking garage, the vehicle will automatically stop when encountering obstacles such as pedestrians or other vehicles.

BMW said it will introduce its new 7 Series with Level 3 capability by early 2024. BMW also said it is looking to add Level 4 parking solutions (Automated Valet Parking Type 1 and Type 2) in the Neue Klasse models in 2025.

Toyota, meanwhile, is partnering with Aurora and May Mobility to develop Level 3 to Level 5 capabilities. Other OEMs will soon follow.

In the area of autonomous trucking, which operates mostly on the freeway, Torc and Daimler Truck are jointly developing an integrated autonomous truck based on Daimler’s autonomous-ready Freightliner Cascadia. Torc said it plans to commercialize the L4 trucking technology by 2027.

Fig. 4: Autonomous trucking, which operates mostly on the freeway, would benefit from the L4 technology. Source: Torc

L4 challenges
Initially, OEMs and technology companies were very enthusiastic about Level 5 fully autonomous vehicles being rolled out in the 2023 timeframe (and in some cases, even earlier). Multiple hurdles remain, and OEMs are still finding only limited success with Level 5. Crashes happen more frequently than expected, resulting in injuries and, in some cases, fatalities. The complexity of designing fully autonomous vehicles has far exceeded the expectations of OEMs and technology companies. Achieving L4 is more feasible because it is confined to a defined ODD, and a human can still take over if autonomous driving fails, even though the system is not designed to expect it.

Fully autonomous vehicles are so difficult to get right because there are too many unknowns. It is difficult to predict what will happen on the road, but it is even more difficult to respond safely in real time to those events.

“Most of the road vulnerabilities could be tackled by L3 cars,” said Amit Kumar, director of product management and marketing for Tensilica Vision, radar and lidar DSPs at Cadence. “However, to reach L4, one has to be extremely cognizant of the corner cases, which are few and far between, but they do arise and could be hazardous. A recent example, where a kid was hit by a Tesla running on FSD mode, may seem like a worst case, but the Tesla autonomy suite also helped save the kid’s life. However, the difference between L4 and lesser autonomous vehicles is that such events could be prevented if the vehicles have higher and deeper levels of perception with additional sensors. In such a case, the moment the kid started running toward the road, the vehicle would have had some extra milliseconds to perceive a threat and avoid the collision. To go from current level (L3) to this level (L4) needs a lot more data and understanding of corner cases. In short, the technology is there. However, use cases are still evolving and being documented.”

For this to work as planned, the infrastructure also needs to be updated, allowing communication between current structures and vehicles. That can alert a vehicle that an object is not in the field of view of a sensor, for example. “A classic example is the V2X infrastructure setup, which will help L4 and accelerate L5 adoption safely,” Kumar said.
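Kumar’s point about infrastructure warning a vehicle of objects outside its sensors’ field of view can be sketched as a simple fusion of a V2X alert with onboard state. The message fields and thresholds below are invented for illustration; real deployments use standardized message sets such as SAE J2735:

```python
from dataclasses import dataclass

@dataclass
class V2XAlert:
    """Simplified infrastructure-to-vehicle message (illustrative fields)."""
    object_type: str      # e.g. "pedestrian"
    distance_m: float     # distance along the vehicle's path
    in_sensor_fov: bool   # whether onboard sensors can also see it

def plan_response(alert: V2XAlert, speed_mps: float) -> str:
    """Decide a response to an object reported by the infrastructure.
    Time thresholds are placeholders, not production values."""
    time_to_object_s = alert.distance_m / max(speed_mps, 0.1)
    if not alert.in_sensor_fov and time_to_object_s < 3.0:
        return "brake"    # occluded object, closing fast
    if time_to_object_s < 6.0:
        return "slow"
    return "monitor"

# Pedestrian 25 m ahead, hidden from onboard sensors, at ~50 km/h:
action = plan_response(V2XAlert("pedestrian", 25.0, False), speed_mps=14.0)
```

The value of the infrastructure message here is exactly the case the article describes: the object is outside the sensor field of view, so only the V2X path triggers an early response.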

Key challenges
Many factors impact day-to-day driving, but there are four main AV design challenges:

  • Driving conditions. Weather, road, and traffic conditions directly impact autonomous driving. Making the right decisions to respond requires a faultless integration of sensors, ADAS, and AI software.
  • Limitations of available autonomous technologies. Even though AV technologies have made great improvements over the years, they still contain flaws and vulnerabilities. The safety factor plays a very important role here, and OEMs are facing constant challenges to achieve higher reliability and safety. Additionally, AV technologies will need enormous compute power.
  • Regulatory and legal implications. Human drivers employ common sense. For example, when a human driver sees a police officer at the center of an intersection waving their hands to direct traffic, the automatic reaction is to slow down and follow instructions. Likewise, when driving on a narrow road under construction or repair, humans reduce the speed of the vehicle when observing a “slow” sign held by a worker. Will AVs be able to make a distinction between a police officer’s hand signals and a pedestrian crossing the road while waving to a friend?
  • Lack of smart infrastructure, including V2X. Perhaps the most difficult aspect of L4 driving is the lack of smart infrastructure. For L4 vehicles to operate safely and reliably requires fully functional L4 design working harmoniously with the external environment. It is not enough to design a very smart L4 vehicle that functions faultlessly. To avoid crashes, L4 AVs need the support of smart infrastructure and V2X.

Further, almost all AV technologies rely on some form of AI, and some AI implementations function more effectively than others. A well-designed, AI-equipped ADAS may make better predictions and decisions during driving, while an ADAS with a flawed AI design may be prone to accidents. Despite these variations, there is no regulation to ensure that all AI performs equally well, especially throughout its expected lifetime.

There are other challenges, as well. What decisions should AVs make when facing unforeseen circumstances such as tornadoes and wildfires? AI is intelligent, but it doesn’t know everything.

“To be truly Level 4, not only do you need to do more on board, but you also have to start thinking about the vehicle-to-vehicle infrastructure communication,” Siemens’ Fritz said. “It’s really easy for a Level 3 vehicle to get confused, and then the auxiliary driver is going to have to fix the situation. Comprehending the situation and formulating a plan to get back on track can add immense compute requirements, which will consume more power. Remember, Level 4 is driving unaided in an urban environment. Think about driving through downtown Manhattan, Chicago, or San Francisco. The number of unanticipated situations is myriad. We’ve seen lots of cases where the L3+ vehicles moving toward Level 4 get into situations where they constantly need to be bailed out. L4 means that the vehicle needs to figure that out for itself or remote connections need to tell the vehicle what to do. That’s where your vehicle infrastructure could come in.”

L4 design approach
Moving from L3 to L4 to L5 is a dynamic process. It is clearly defined that L3 requires a driver and L5 does not. Given that, why not skip L4 and go directly from L3 to L5? And it can seem contradictory that an L4 vehicle may have a human driver present, yet there is no expectation of human intervention.

Perhaps the best answer comes from the idea of “sustained and reliable technology performance.” As of today, AV technology is not 100% dependable. The accident records of today’s robo-taxis are unacceptable, so many believe L4 provides the best stage at which to work out the AV bugs. The presence of a human driver provides confidence that things remain manageable in case of a problem.

Other challenges and considerations in designing L4 include software, simulation, new technologies, and tests.

Software considerations
Software remains a critical factor in L4 and L5. But as it grows to many millions of lines of code, it also requires robust processes for software development, test, verification, and updates.

“There are two aspects here — the software development process and the assessment of performance and functionality,” said Frank Schirrmeister, vice president of solutions and business development at Arteris. “For software development, virtual prototypes have long been a development vehicle to allow ‘shift-left’ for software development. They are augmented with FPGA-based prototyping and emulation, often in hybrid use modes, to enable the consideration of more levels of fidelity. Virtual prototypes solve the issue of early software development on hardware representations of various fidelities.”

Initiatives like the Scalable Open Architecture for Embedded Edge (SOAFEE) have emerged for cloud-based software development process automation, working on the vision to “bring cloud-native development paradigm and its ubiquitous ecosystem to the highly diverse, heterogeneous compute platforms that will power the next generation of automotive and safety-critical systems.”

“In contrast to the virtual platforms’ objective to enable early hardware/software development and check hardware/software interaction, SOAFEE focuses on helping software development across different development platforms,” Schirrmeister said.

That’s a starting point, but more is required. “Moving from L3 to L4 requires much new software to be written to enable the vehicle to accurately understand all of its surroundings, predict future behavior of moving objects within its purview, and then accurately plan and execute the autonomous driving function,” said Robert Day, director of automotive partnerships for Arm’s Automotive Line of Business. “The software also has to make decisions in cases where it cannot accurately predict and plan its route, as there is no driver to hand back control to. This has implications on testing and safety cases so that the vehicle can safely operate under conditions that it cannot accurately understand. This might ultimately require the vehicle to ‘phone home’ for assistance, but it still needs to get to a safe place to do so.”
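Day’s scenario — no driver to hand back to, so the vehicle must get itself to a safe place before “phoning home” — is often described as achieving a minimal risk condition. A toy state machine illustrating the idea; the states, confidence threshold, and triggers are all illustrative, not from any production stack:

```python
# Toy fallback logic: with no human to hand back to, an L4 vehicle that
# can no longer plan confidently must degrade to a minimal risk condition
# (e.g., pull over and stop) and only then request remote assistance.
def next_state(state: str, plan_confidence: float, pulled_over: bool) -> str:
    if state == "NOMINAL" and plan_confidence < 0.5:
        return "DEGRADED"        # planner unsure: reduce speed, replan
    if state == "DEGRADED":
        if plan_confidence >= 0.5:
            return "NOMINAL"     # recovered, resume normal driving
        return "MINIMAL_RISK"    # still unsure: head for a safe stop
    if state == "MINIMAL_RISK" and pulled_over:
        return "STOPPED"         # safe; now "phone home" for assistance
    return state

# A planner that never regains confidence walks down to a safe stop:
s = next_state("NOMINAL", 0.3, False)   # NOMINAL -> DEGRADED
s = next_state(s, 0.3, False)           # DEGRADED -> MINIMAL_RISK
s = next_state(s, 0.3, True)            # MINIMAL_RISK -> STOPPED
```

The design point matches the quote: there is no transition to "hand control to driver" anywhere in the machine, which is precisely what separates L4 fallback from L3.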

Simulation and digital twins
Simulation will continue to be an important area because it is not practical to road test all the different versions of L4 designs. For the most part, simulation also relies on software, although test models including AI, digital twins, and others still need to be developed.

Renesas’ concept of digital twin with DevOps includes cloud-based development environments to evaluate the performance of its devices, with plans to develop applications for ECUs and vehicles equipped with its devices. “We need to collect live data from the edge in order to understand how our devices, software, and development environments are used at the application development sites and for actual vehicles on the edge,” said Hirofumi Kawaguchi, vice president of Renesas’ HPC Software Solution Division. “In the virtual environment on the cloud with the system-level simulators, the live data will be used to reproduce the usage of our products on the edge. This is an example of digital twin. By analyzing the real usage reproduced in the virtual environment on the cloud, we can define the requirements for the next products and develop our products with the latest values.”
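The loop Kawaguchi describes — collect live usage data at the edge, reproduce it in a cloud-side simulator, and feed the findings into the next product — can be sketched as a replay pipeline. Everything below is illustrative; none of these names are Renesas APIs:

```python
# Hedged sketch of a digital-twin feedback loop: recorded edge telemetry is
# replayed through a cloud-side simulator model of the device, so real usage
# patterns can shape requirements for the next generation.
def simulate_device(load_pct: float) -> dict:
    """Stand-in for a system-level simulator of the ECU/device."""
    return {"utilization": load_pct, "headroom": 100.0 - load_pct}

def replay(telemetry: list) -> dict:
    """Replay recorded edge load samples and summarize what the twin saw."""
    results = [simulate_device(sample) for sample in telemetry]
    peak = max(r["utilization"] for r in results)
    return {
        "peak_utilization": peak,
        # A finding like this would feed next-product requirements.
        "needs_more_compute": peak > 90.0,
    }

# Load samples (percent) collected from vehicles in the field:
summary = replay([40.0, 72.5, 95.0, 60.0])
```

The value of the twin is in the summary step: the same simulator used for pre-silicon development is driven by field data, closing the DevOps loop the quote describes.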

New technologies and tests needed
New technologies, including ADAS, will continue to develop and improve, and new sensors and vehicle architectures will evolve. More testing is needed to ultimately achieve AV safety and reliability, including ASIL-D and ISO 26262 certification.

While it is tempting to add technologies and improvements to an existing L3 design as a pathway to L4, ultimately a completely new architecture is required to achieve true L4. The ADAS needs to be many times more powerful than an L3 system. Additionally, new sensors need to be installed to increase the vehicle’s understanding of its environment. It takes more than detecting what is in front and applying emergency braking when necessary. An L4 vehicle also needs to know what to do if there is a roadblock, an accident, a detour, and the like.

“L4 vehicles will have enhanced sensor suites, with an increased number of sensors, as well as new types and higher-performing sensors,” noted Infineon’s Finnegan Jones. “An L3 vehicle may have 5 to 8 cameras, while an L4 will have 12 or more. The cameras also will have higher resolution. Radar requirements for angular resolution also will rise, the number of lidars will increase for improved coverage and incoming data, and an enhanced sensor suite will require more processing for sensor fusion and decision-making, which will drive power consumption and require higher compute performance. Going from L3 to L4 we will see TOPS increase by a factor of 10 due to greater availability of raw data, and many more Ethernet channels for data throughput will be introduced.”

Because the requirements for each level are determined by operational capabilities, there is no textbook definition or specification on the number of required sensors.

“One can easily expect a monumental increase in the number of sensors going from L3 to L4 because each task now requires much more automation with higher safety requirements,” said Amol Borkar, director of product management, marketing and business development for Tensilica Vision and AI DSPs at Cadence. “For example, Tesla’s L3 comprises 20 sensors (ultrasonics + cameras), while estimates suggest operational L4 platforms will need nearly 30 cameras, 20 radars, and 5 to 7 lidar sensors. You can see that each type of sensor has nearly doubled, not to mention the new event cameras, which can detect movement at the equivalent of 100,000 fps or higher. These could be critical to avoid accidents, as they could help the prediction flow much sooner than image-based cameras. This requires a lot of compute to handle both the data bandwidth and decision-making steps. So the architecture is likely to evolve to handle such massive requirements from all aspects — physical size, form factor, thermals, and last but not least, BOM cost if L4 is to go into mass production.”
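The sensor counts Borkar cites imply a steep jump in raw data rates, which is why the earlier point about many more Ethernet channels follows directly. A back-of-envelope calculation; the per-sensor rates are rough ballpark figures for illustration, not vendor specifications:

```python
# Rough per-sensor raw data rates in Mb/s (illustrative ballpark figures;
# real rates vary widely with resolution, frame rate, and operating mode).
RATE_MBPS = {"camera": 3500, "radar": 100, "lidar": 70}

def suite_bandwidth_gbps(counts: dict) -> float:
    """Total raw sensor bandwidth in Gb/s for a given sensor suite."""
    total_mbps = sum(RATE_MBPS[sensor] * n for sensor, n in counts.items())
    return total_mbps / 1000.0

# Hypothetical L3 vs. L4 suites, loosely following the counts in the quote:
l3 = suite_bandwidth_gbps({"camera": 8, "radar": 5, "lidar": 1})
l4 = suite_bandwidth_gbps({"camera": 30, "radar": 20, "lidar": 6})
growth = l4 / l3   # why L4 needs far more network and compute capacity
```

Even with crude numbers, the suite-level bandwidth grows several-fold, before any increase in per-camera resolution is counted.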

Additionally, advanced testing will be required to ensure L4 is functional and safe.

Sven Kopacz, autonomous vehicle section manager at Keysight Technologies, noted that simulation and testing tools cover the components or models in the ADAS system, functional testing, and simulation of the entire vehicle or sub-systems in the context of the whole car. This can be done using hardware-in-the-loop techniques, and even simulated environments using dynamometers as part of a vehicle-in-the-loop test system.

“Vehicle and traffic environment simulators can be part of this testing as it moves from sub-systems to complete vehicle tests,” Kopacz said. “An important link in the digital chain is when the simulation meets the hardware implementation of system parts — for example, when the real radar sensor or the road environment is introduced to the system. These components are ‘emulated’ to the ADAS system, allowing cost-efficient bench-based testing of the overall function ahead of track testing. A comprehensive and repeatable testing process encompassing real-world scenarios and corner cases will be an absolute necessity for attaining the level 4 autonomy standard for vehicles. In fact, the importance of ensuring both operational functionality and important safety measures are reliably validated will gain more significance at level 4, as human override within the geofenced operational domain is only optional rather than required.”
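The bench-based flow Kopacz describes — replaying repeatable scenarios, including corner cases, against the system under test with emulated sensor inputs — can be sketched as a simple scenario loop. The scenario set, stand-in ADAS logic, and pass criteria below are invented for illustration:

```python
# Minimal sketch of a hardware-in-the-loop (HIL) style test harness: each
# scenario feeds emulated sensor stimuli to the unit under test and checks
# the commanded response. The "ADAS" here is a software stand-in; a real
# HIL rig would drive the actual ECU over its physical interfaces.
def adas_under_test(obstacle_distance_m: float, speed_mps: float) -> str:
    """Stand-in for the real system; brakes if time-to-obstacle < 2 s."""
    if obstacle_distance_m / max(speed_mps, 0.1) < 2.0:
        return "emergency_brake"
    return "cruise"

SCENARIOS = [
    # (name, obstacle distance in m, ego speed in m/s, expected command)
    ("pedestrian_close", 15.0, 14.0, "emergency_brake"),
    ("clear_road", 200.0, 25.0, "cruise"),
]

def run_suite() -> dict:
    """Run every scenario; map name -> pass/fail."""
    return {name: adas_under_test(d, v) == expected
            for name, d, v, expected in SCENARIOS}

report = run_suite()
all_passed = all(report.values())
```

The repeatability Kopacz emphasizes comes from this structure: the same scenario list, including rare corner cases, can be rerun against every software or hardware revision before any track testing.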

L4 AVs offer many benefits to drivers, but getting there will take some time. Most likely, that will happen in stages of L3+. With more automation, more advanced ADAS, sensors, software, simulation, and tests will be needed. But even though it will be a continual work in progress, OEMs will benefit from it. New L3+ and L4 features will provide additional convenience to customers and help OEMs differentiate their products with new value-added offerings.

Related Reading
Designing Crash-Proof Autonomous Vehicles
A lack of supervision and regulation is allowing unnecessary accidents with AVs. More strenuous processes are needed.
How Many Sensors For Autonomous Driving?
Sensor technologies are still evolving, and capabilities are being debated.
