Couple computing hardware models to sensor models for better simulation of automated driving functions.
By Ron Martin and Christoph Sohrmann
Over the past few years, research and development activity related to driver assistance systems and highly automated, connected driving has expanded markedly. However, this has yet to translate into a higher degree of automation in the average production vehicle – especially at SAE Level 3 and above. The next step in automated driving calls for a more complete perception of the vehicle's environment, which currently poses a serious challenge. The answer lies in a broad array of sensor systems involving cameras, radar, lidar, ultrasound and other technologies. Integrating these systems requires not only a complex setup but also suitable algorithms for sensor fusion, segmentation, object recognition and classification, and path planning and control – all managed within a suitable E/E architecture.
To reduce complexity, automated driving functions are generally divided into three main phases: sensing, planning and acting. Each phase is handled by different hardware components: sensors, a range of ECUs and actuators. Figure 1 shows the basic hardware components involved in a vehicle’s automated driving functions.
Fig. 1: The basic hardware components involved in a vehicle’s automated driving functions.
In a dynamic environment, the vehicle assesses its surroundings and processes the information in the three main phases. During the sensing phase, the data captured by the smart sensors is collected with the help of application-specific integrated circuits (ASICs) and pre-processing software, and forwarded to a central onboard computer for further processing and analysis. This is complemented by wireless connections to elements in the vehicle’s surroundings such as traffic lights or other vehicles (V2X communication) and to the cloud. In the planning phase, all this information is then used to decide on the correct course of action and to plan a new path. Finally, in the acting phase, the information is sent via zone controllers to actuator-specific ECUs, which manage the actuators responsible for steering, accelerating and braking. Different manufacturers and vehicles can implement the processing chain on different hardware architectures. The figure shows a sample hardware chain embedded in a central E/E architecture.
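To make the division of labor in this chain more concrete, the following Python sketch walks one cycle through the sensing, planning and acting phases. All class names, interfaces and values are illustrative assumptions for this article, not the API of any particular vehicle platform.

```python
# Illustrative sketch of the sense-plan-act chain described above.
# All names and values are assumptions, not a specific vendor API.
from dataclasses import dataclass


@dataclass
class Detection:
    """A pre-processed object hypothesis delivered by a smart sensor."""
    sensor: str
    position: tuple   # (x, y) in the vehicle frame, metres
    confidence: float


class SmartSensor:
    """Sensing phase: raw data is reduced on the sensor's own ASIC."""
    def __init__(self, name):
        self.name = name

    def sense(self, environment):
        # Stand-in for ASIC pre-processing (filtering, clustering, ...).
        return [Detection(self.name, obj, 0.9) for obj in environment]


class CentralComputer:
    """Planning phase: fuse detections (and V2X input) and plan a path."""
    def plan(self, detections, v2x_messages=()):
        obstacles = [d.position for d in detections] + list(v2x_messages)
        # Trivial stand-in for sensor fusion and path planning.
        return {"trajectory": "keep_lane", "obstacles": obstacles}


class ZoneController:
    """Acting phase: forward set-points to the actuator-specific ECUs."""
    def act(self, plan):
        braking = 0.5 if plan["obstacles"] else 0.0
        return {"steering": 0.0, "throttle": 0.0 if braking else 0.1, "brake": braking}


# One cycle through the chain of Figure 1.
environment = [(12.0, 0.5)]                       # one object ahead of the vehicle
sensors = [SmartSensor("camera"), SmartSensor("radar")]
detections = [d for s in sensors for d in s.sense(environment)]
commands = ZoneController().act(CentralComputer().plan(detections))
print(commands)
```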
To make the development process more efficient, virtual hardware models can be employed at different stages. During early development, for instance, virtual models of the components can help optimize the costs, power consumption and reliability of both software and hardware. At the verification and validation stage, it is essential to ensure the safety and functionality of the automated functions. That involves testing the system’s performance in common scenarios as well as in rare scenarios and environmental conditions (known as edge cases), which adds up to several million test kilometers. Testing on this scale cannot be performed in a real environment. With the current state of technology, only the sensor models are coupled to a dynamic environment model during early development, while the computing hardware (ASICs, ECUs, etc.) is neglected because of the modeling and coupling complexity involved. Finding a way to couple computing hardware models to sensor models would therefore be a major step forward in the development of sensor systems and suitable E/E architectures.
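A minimal sketch of what such a coupling could look like is shown below: a sensor model produces measurements from a simulated environment, and a simple computing-hardware model adds the processing latency and limited update rate that an ASIC or ECU would impose before the planner sees the data. The interfaces, the latency model and all parameter values are assumptions for illustration, not an existing co-simulation framework.

```python
# Hedged sketch: coupling a sensor model to a computing-hardware model
# in a fixed-step co-simulation loop. All parameters are assumed values.
import random


def sensor_model(t, ground_truth):
    """Idealized sensor model: ground-truth distance plus measurement noise."""
    return ground_truth(t) + random.gauss(0.0, 0.05)


class HardwareModel:
    """Abstract computing-hardware model (ASIC/ECU): adds processing latency
    and a limited update rate to the otherwise ideal sensor output."""
    def __init__(self, cycle_time=0.04, latency=0.02):
        self.cycle_time = cycle_time   # how often a new sensor frame is processed (s)
        self.latency = latency         # processing delay before output is available (s)
        self._pending = []             # list of (ready_time, value)

    def feed(self, t, measurement):
        self._pending.append((t + self.latency, measurement))

    def output(self, t):
        ready = [value for ready_t, value in self._pending if ready_t <= t]
        return ready[-1] if ready else None


def ground_truth(t):
    return 20.0 - 5.0 * t                      # object closing in at 5 m/s


# Fixed-step co-simulation: environment -> sensor model -> hardware model.
hw = HardwareModel()
dt = 0.001                                     # 1 kHz simulation step
steps_per_frame = int(hw.cycle_time / dt)
perceived = None
for step in range(200):                        # simulate 0.2 s
    t = step * dt
    if step % steps_per_frame == 0:            # a new sensor frame is captured
        hw.feed(t, sensor_model(t, ground_truth))
    perceived = hw.output(t)                   # what the planner would see at time t
print("distance last seen by the planner:", perceived)
```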
Despite the significant advances made in automated driving over the past few years, several questions remain unanswered. With regard to the development of environmental sensors in particular, three core aspects will have to be tackled over the coming years:
These three aspects are hierarchically connected. To close the knowledge gap between OEMs and providers of sensor systems, it will be essential to pool their expertise and formalize it by creating standards. Developing and supporting standardized interfaces between virtual models will enhance integration and interoperability, and pave the way for more precise and efficient simulations of automated driving functions. Since test tracks and public roads are ill-suited for performing the full battery of safety-relevant functionality tests required for the type approval of the many different future vehicle models, virtual models will be needed for these tests as well. And given that it will also be impossible for technical inspectors to validate the models used for these tests, systematic classification in the form of a credibility assessment must be part of the package. Fraunhofer IIS/EAS is working with research and automotive partners in multiple projects to develop a method for estimating model accuracy. This will soon make it possible to use virtual models to test and verify automated driving functions.
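As a rough illustration of one ingredient of such a credibility assessment, the sketch below compares a virtual sensor model's output against reference measurements using a simple error metric and maps the result to a coarse credibility class. The metric, thresholds and data are illustrative assumptions, not the method being developed in the projects mentioned above.

```python
# Illustrative ingredient of a credibility assessment: compare a virtual
# sensor model's output with reference measurements. Metric and thresholds
# are assumptions for this sketch.
import math


def rmse(model_output, reference):
    """Root-mean-square error between model output and reference data."""
    return math.sqrt(sum((m - r) ** 2 for m, r in zip(model_output, reference))
                     / len(reference))


def credibility_level(error, tolerances=(0.1, 0.5)):
    """Map an error value to a coarse credibility class (assumed thresholds)."""
    if error <= tolerances[0]:
        return "high"
    if error <= tolerances[1]:
        return "medium"
    return "low"


reference = [10.0, 9.5, 9.1, 8.6, 8.2]   # e.g. measured distances (m)
simulated = [10.1, 9.4, 9.0, 8.8, 8.1]   # virtual sensor model output (m)
error = rmse(simulated, reference)
print(f"RMSE = {error:.3f} m -> credibility: {credibility_level(error)}")
```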
Christoph Sohrmann is the group manager for virtual system development in Fraunhofer IIS’ Engineering of Adaptive Systems Division. He received a B.Sc. in Computational Science from Chemnitz University of Technology and a PhD in Theoretical Solid State Physics from the University of Warwick (UK).