
Functional Safety Across Analog And Digital Domains

Planning and analyzing fault campaigns holistically is key in safety design.


Vehicle autonomy has been all the rage recently. There are different levels of autonomous driving, with Level 5 “Full Automation” being the target the industry is working towards, and Level 2 “Partial Automation” and Level 3 “Conditional Automation” being the levels at which the automotive sector currently delivers most of its technology. The amount of electronics in cars has been increasing at breathtaking speed. As a result, functional safety and safety design are arguably among the most critical aspects of developing automotive electronics. It’s a fast-moving environment, as the flurry of recent announcements shows.

First, what are we protecting with safety? Put very simply: human life. As an example, according to Euro NCAP, Europe alone saw 51 road deaths per million inhabitants in 2019. Collectively, Europe has reduced the number of road deaths by 6% over the last five years. The intent is for new technology in the realm of ADAS to steepen that curve. Even with full automation, most issues are still caused by humans, either drivers of other vehicles or misbehaving pedestrians. A recent study from IDTech found that poor performance of the autonomous system itself caused only a small fraction of autonomous vehicle accidents: 2 out of 83 cases.


Source: Cadence, NCAP Video

Safety is critical, and it is an ethical, logistical, technical, and legislative challenge, as argued by Sascha Spillner recently.

So, what can go wrong technically?

A lot can go wrong in electronics. Things can go wrong in the digital and analog domains of semiconductor design, as well as in software and at the hardware/software interface. Design teams also need to consider the reliability of components throughout their lifecycle. Given the cross-domain dependencies, another significant issue is communication: engineers from different domains are still unfamiliar with each other’s domain knowledge, or worse, each other’s domain lingo. Integrating safety planning and safety analysis throughout a project is a crucial aspect that is not bound to any one domain.

For digital safety, designers use fault simulation to assess the impact of stuck-at faults and related fault models. Safety verification runs in parallel to functional verification. Formal verification optimizes the fault list and avoids dynamic simulation for a subset of faults. Analog components are too often treated as “black boxes” today and handled outside the safety verification process. Once all the effects are understood, design teams implement safety mechanisms like safety islands, triple modular redundancy, and logic isolation. In an ideal world, automation drives these implementation aspects top-down. Analog monitoring functions like voltage or clock monitors can be enhanced by analog safety mechanisms and integrated on a single chip.
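To make the digital side more concrete, the sketch below injects a single stuck-at-0 fault into one copy of a triple modular redundancy (TMR) structure and checks that the majority voter masks it across a set of stimuli. This is a minimal Python illustration rather than an RTL fault-simulation flow; the function names and the toy combinational logic are invented for this example.

```python
# Minimal sketch of digital fault injection on a TMR structure.
# All names and the 8-bit combinational function are illustrative,
# not taken from any specific design or tool.

def majority(a: int, b: int, c: int) -> int:
    """Bitwise 2-of-3 majority vote across three redundant copies."""
    return (a & b) | (a & c) | (b & c)

def channel(x: int) -> int:
    """Placeholder for one redundant copy of the protected logic."""
    return (x ^ (x >> 1)) & 0xFF  # arbitrary 8-bit combinational function

def run(x: int, stuck_at_zero_channel: int | None = None) -> int:
    """Evaluate the TMR'd logic, optionally forcing one copy to 0 (stuck-at-0)."""
    outputs = [channel(x) for _ in range(3)]
    if stuck_at_zero_channel is not None:
        outputs[stuck_at_zero_channel] = 0x00  # injected stuck-at-0 fault
    return majority(*outputs)

if __name__ == "__main__":
    for stimulus in range(256):
        golden = run(stimulus)                           # fault-free reference
        faulty = run(stimulus, stuck_at_zero_channel=1)  # fault-campaign run
        assert golden == faulty, "TMR failed to mask the single fault"
    print("single stuck-at-0 fault masked for all 256 stimuli")
```

A real fault campaign works the same way in principle, just at far larger scale: compare faulty runs against a golden reference and record whether each injected fault is masked or detected by a safety mechanism.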

Bottom line: to enable design for safety, design teams want an integrated environment in which they can plan, launch, and analyze fault campaigns holistically across both the digital and analog domains. Such integration allows proper tracking of progress and helps avoid communication mistakes. In addition, the verification of analog and digital needs to guide implementation via automation.
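As a rough illustration of what such holistic planning and tracking could look like, the following sketch keeps digital and analog faults in a single campaign list and reports the fraction of faults detected by safety mechanisms per domain and overall. The data model and fault names are hypothetical and not tied to any vendor’s environment.

```python
# Illustrative sketch of tracking one holistic fault campaign across
# digital and analog fault lists; not any particular tool's API.

from dataclasses import dataclass

@dataclass
class Fault:
    name: str
    domain: str      # "digital" or "analog"
    detected: bool   # detected by a safety mechanism during the campaign

def diagnostic_coverage(faults: list[Fault]) -> float:
    """Fraction of injected faults detected by safety mechanisms."""
    return sum(f.detected for f in faults) / len(faults)

# Hypothetical campaign results spanning both domains.
campaign = [
    Fault("cpu_alu_stuck_at_0", "digital", detected=True),
    Fault("can_tx_stuck_at_1",  "digital", detected=False),
    Fault("vreg_output_drift",  "analog",  detected=True),
    Fault("clk_monitor_open",   "analog",  detected=True),
]

for domain in ("digital", "analog"):
    subset = [f for f in campaign if f.domain == domain]
    print(f"{domain}: {diagnostic_coverage(subset):.0%} of faults detected")
print(f"overall: {diagnostic_coverage(campaign):.0%} of faults detected")
```

Keeping both domains in one view is what lets a team track progress toward a coverage target and spot gaps at the analog/digital boundary instead of discovering them late in integration.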

Personally, I am getting used to safety features like lane assist, audio proximity warnings, and even automatically following the car in front of me on the highway. However, where is this automation leading us? Just a couple of months in, I trust the audio signal for the blind-spot warning just as much as turning my own head. But what about a teenager who is just learning to drive? Should there be a mode that switches off all these capabilities while a beginner is learning?

It is comforting to see some manufacturers require a minimum safety rating for manual driving before users can access the latest self-driving beta. Has anybody else heard the excuse that the nanny tanked the driving rating while using the car, jeopardizing the qualification to use the beta software?

Interesting times, and hopefully safer, with all the recent enhancements the industry is making in the world of design tools.



1 comment

Tom Kunich says:

I believe that, due to legal ramifications, you should strongly avoid using terms like “self driving”. A 2% failure rate is very high where human deaths are concerned, and this sort of software should always be considered a work in progress. A driver should always be alert, with his hands on the wheel, ready to assume control. I do not see stop signs or stop lights and the like as a problem, but rather maneuvering, which is a problem even for live humans.
