A Combined Design And Verification Flow For Safety-Critical Designs

Formal verification is at the heart of finding both introduced and random errors.

By Tom Anderson and Srikanth Rengarajan

I welcome my co-author for today’s post, Srikanth Rengarajan, vice president of product and business development at Austemper Design Systems.

We would like to focus on safety-critical designs, a topic very much in the news these days because of the public’s fascination with autonomous vehicles. This consumer category now joins medical electronics, aeronautics, aerospace, and military applications in having high demands for safety. The consequences of semiconductor component failure in any of these domains are severe, involving serious injury or even death.

Design and verification of any complex chip is hard enough, but additional challenges arise for safety-critical designs. Many of these challenges are reflected in the requirements imposed by a set of related standards from various international organizations. The goal of these standards is to define a rigorous development (design and verification) process for safety-critical hardware projects and to set rules for the robustness of the resulting designs. The safety standard for automobiles, ISO 26262, is widely discussed as it is being extended to cover other vehicles.

These standards impose stringent requirements for functional safety that span the entire life cycle of a system, from concept through development and production, and even into decommissioning. They define two categories of component faults that can compromise safety. Systematic faults are design errors introduced during development, either through human error or tool malfunction. Random faults occur during operation of the component due to external effects; the classic example is an alpha particle strike flipping a bit in a memory.

Although the safety standards have specific rules about process and traceability, eliminating systematic faults requires the same design discipline and thorough verification that any complex chip project should employ. Because of the certainty it provides, formal verification is at the heart of this process. Designers specify assertions about how their designs should, and should not, operate, and they can participate in formal verification without having to learn complicated testbench languages and methodologies.
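To make the idea concrete, here is a minimal behavioral sketch in Python; the counter, its limit, and the property are all hypothetical. In an actual RTL flow the same intent would be captured as SystemVerilog assertions and proven exhaustively by a formal tool, rather than merely sampled as it is here:

```python
import random

# Behavioral sketch of a design assertion: a 4-bit saturating counter that
# must never leave the range [0, MAX]. A formal tool would *prove* this
# property for all possible input sequences; this script only checks it on
# sampled cycles.

MAX = 15  # hypothetical counter limit

class Counter:
    def __init__(self):
        self.value = 0

    def step(self, up: bool) -> None:
        """One clock cycle: count up or down, saturating at the bounds."""
        if up and self.value < MAX:
            self.value += 1
        elif not up and self.value > 0:
            self.value -= 1
        # The safety property: the counter stays in range after every cycle.
        assert 0 <= self.value <= MAX, "safety property violated"

c = Counter()
for _ in range(1000):
    c.step(random.choice([True, False]))
print("property held over 1000 sampled cycles")
```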

Formal tools automatically scrub a design of many types of common errors, verify that the design matches the assertions, and compare two versions of the design to ensure that they are equivalent. These techniques have been well established for many years in teams designing and verifying ASICs and full-custom chips. Today’s FPGAs can replace ASICs for many applications, including those for which functional safety is a priority. Accordingly, more FPGA teams are using formal tools, including sequential equivalence checkers tuned for FPGA-specific optimizations.
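As an illustration of the equivalence-checking idea, the following Python sketch compares a reference and an “optimized” implementation of a one-bit full adder over every input combination. This is only a toy: production equivalence checkers reason symbolically (with SAT or BDD engines) rather than by enumeration, and sequential checkers also account for optimizations such as retiming. All function names here are invented for illustration:

```python
from itertools import product

# Toy combinational equivalence check: confirm two implementations of a
# 1-bit full adder agree on every input. Exhaustive enumeration is only
# viable for tiny designs; real tools prove equivalence symbolically.

def adder_reference(a: int, b: int, cin: int) -> tuple:
    """Specification: arithmetic definition of sum and carry."""
    total = a + b + cin
    return total & 1, total >> 1

def adder_optimized(a: int, b: int, cin: int) -> tuple:
    """'Optimized' gate-level version: XOR sum and majority carry."""
    s = a ^ b ^ cin
    cout = (a & b) | (b & cin) | (a & cin)
    return s, cout

for a, b, cin in product((0, 1), repeat=3):
    assert adder_reference(a, b, cin) == adder_optimized(a, b, cin), \
        f"mismatch at inputs {(a, b, cin)}"
print("designs are equivalent on all 8 input combinations")
```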

Random faults pose unique design and verification challenges for safety-critical applications. By their nature, random events cannot be predicted and must be handled “on the fly” when they occur. The basic rule is easy to state but hard to meet: a high percentage of possible random faults in safety-critical logic must be detected, and each detected fault must be either corrected or used to raise an alarm. Many random faults are transient and can be cleared by a system reset, but such a major disruption must be handled carefully, for example by having an autonomous vehicle pull over and park first.


Figure: Random faults in safety-critical applications challenge the design and verification flow. After a hardware safety mechanism is inserted into the design, formal verification can ensure that faults are handled properly when they occur.

The hardware safety logic that detects and handles both transient (“soft”) and permanent (“hard”) random faults can be designed manually, but doing so takes significant project time and requires specialized expertise. Ideally, a dedicated synthesis tool creates the circuitry to detect faults, correct errors, and raise alarms as required. Elements of this circuitry might include parity bits, error-correcting code (ECC) logic, double-redundant logic for crosschecking, or triple-redundant logic that corrects faults by voting.
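The following Python sketch models the intent of two of the mechanisms named above: a parity bit for detection and a triple-modular-redundancy (TMR) majority voter for correction. In silicon these are synthesized gates; the code, function names, and data values here are purely illustrative:

```python
# Behavioral sketches of two safety mechanisms; in hardware these are
# synthesized logic, so the Python below only models the intent.

def parity_bit(word: int, width: int = 8) -> int:
    """Even parity over `width` bits: detects any single bit flip."""
    return bin(word & ((1 << width) - 1)).count("1") & 1

def parity_ok(word: int, stored_parity: int) -> bool:
    """False means the recomputed parity disagrees: raise an alarm."""
    return parity_bit(word) == stored_parity

def tmr_vote(a: int, b: int, c: int) -> int:
    """Bitwise majority vote over three redundant copies."""
    return (a & b) | (b & c) | (a & c)

# A transient fault flips one bit in one of the three redundant copies...
golden = 0b10101100
faulty = golden ^ 0b00000100          # single-event upset in copy b
assert tmr_vote(golden, faulty, golden) == golden  # ...voting corrects it
assert not parity_ok(faulty, parity_bit(golden))   # ...parity detects it
print("fault detected by parity and corrected by TMR voting")
```

Note that parity alone can only raise an alarm, while TMR masks the fault entirely; the choice between them is a cost/robustness trade-off that the safety-synthesis tool must make per block.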

Once the hardware safety logic has been inserted into the design, equivalence checking can verify that the normal or “mission” functionality is unaffected when no faults occur. When faults do happen, the safety standards require a clear understanding of their effects and of how they are handled. Quantifying this understanding starts with a list of possible faults, and analysis of that list can be automated with formal tools (a sketch of the flow follows this list):

  • Fault injection efficiently adds faults from the fault list into the design
  • Fault propagation analysis trims away faults that never propagate, since a fault that cannot compromise mission functionality does not matter, and formally determines which propagating faults can be detected
  • Fault detection analysis verifies that safety mechanisms perform properly in the event of random faults
  • Failure mode contribution analysis determines coverage for the safety logic
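
As promised above, here is a toy Python sketch of that flow under simplifying assumptions: the “design” is a three-input function with one logically redundant net, the fault list holds every single stuck-at fault on the primary copy, and the safety mechanism is a duplicate-and-compare checker. Faults that never change the mission output are trimmed as safe, and propagating faults are classified as detected or undetected, yielding a simplified stand-in for a diagnostic-coverage metric (the actual ISO 26262 metrics are defined far more precisely):

```python
from itertools import product

# Toy fault-injection campaign. Mission logic: y = (a AND b) OR (a AND c)
# OR c, where the (a AND c) term is logically redundant, so stuck-at-0
# faults on that net never propagate. The safety mechanism is a duplicate
# copy plus a comparator that raises an alarm on any mismatch.

NETS = ("n_and", "n_red", "y")  # primary-copy nets in the fault list

def mission(a, b, c, stuck=None):
    """Evaluate the primary copy, optionally forcing one net to 0 or 1."""
    net, val = stuck if stuck else (None, None)
    n_and = val if net == "n_and" else (a & b)
    n_red = val if net == "n_red" else (a & c)  # redundant term
    return val if net == "y" else (n_and | n_red | c)

def system(a, b, c, stuck=None):
    """Possibly-faulty primary copy, fault-free duplicate, comparator."""
    y = mission(a, b, c, stuck)
    alarm = int(y != mission(a, b, c))  # duplicate-and-compare
    return y, alarm

safe = detected = undetected = 0
for fault in product(NETS, (0, 1)):              # the fault list
    propagates, always_flagged = False, True
    for a, b, c in product((0, 1), repeat=3):    # all input stimuli
        y_fault, alarm = system(a, b, c, stuck=fault)
        if y_fault != mission(a, b, c):          # fault reaches the output
            propagates = True
            always_flagged &= bool(alarm)
    if not propagates:
        safe += 1                                # trimmed from the list
    elif always_flagged:
        detected += 1
    else:
        undetected += 1

print(f"safe: {safe}, detected: {detected}, undetected: {undetected}")
print(f"coverage of propagating faults: "
      f"{detected / (detected + undetected):.0%}")
```

Because the duplicated copy is assumed fault-free, the comparator flags every propagating fault and the sketch reports 100% coverage. Real safety mechanisms are cheaper and only partial, which is precisely why the analyses in the list above matter.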

As announced recently, Austemper and OneSpin formed a partnership to foster adoption of functional safety practices for automotive electronics and other mission-critical applications, validating a combined design and verification flow that matches the description above.

More details on that partnership will be presented at a joint session in the OneSpin booth at the upcoming Design Automation Conference (DAC) in San Francisco.

Srikanth Rengarajan is the vice president of product and business development at Austemper Design Systems.


