Addressing a common misconception about what the safety standard requires.
ISO 26262, the automotive functional safety standard, requires the assessment of software tool confidence levels (TCLs) as TCL1, TCL2, or TCL3. Clause 11.4.5 of ISO 26262-8:2011 provides a methodology with guidance for software tool classification and qualification. It applies to software tools used for the development of safety-critical designs, where it is essential that each tool’s function operates correctly.
There is a common misconception that the goal of ISO 26262 part 8, clause 11 (confidence in the use of software tools) is to assess all tools in an SoC development flow as TCL1. This is incorrect. The actual goal is to construct and describe a development flow in which you have high confidence that the software tools you’re using do not introduce, or fail to detect, errors. It is not always possible to assess a tool as TCL1, in which case you must perform a qualification of that software tool.
In fact, it is acceptable to use software tools that are assessed at TCL2 or TCL3 as long as you perform additional qualification. The acceptable qualification methods are listed in ISO 26262-8:2011, table 4 for tools classified as TCL3 and table 5 for tools classified as TCL2. The methods are the same for either a TCL2 or a TCL3 assessment, but different qualification methods are preferred for different ASIL classifications. Figure 1 consolidates the two tables with the preferred methods highlighted; descriptions of the methods follow.
Figure 1. Methods for Qualifying Software Tools
1a Increased confidence from use
Companies with experience developing automotive SoCs may have extensively documented histories of the EDA tools used for similar design types. In this case, it may be possible to qualify a tool as having increased confidence from use. Method 1a requires evidence that the tool has been used for the same purpose with comparable use cases, a comparable operating environment and configuration, and that the specification of the tool is unchanged. It also requires that malfunctions and the corresponding erroneous outputs of the software tool observed during previous development periods or releases be documented in a systematic way. If you are using an EDA tool for the first time or on an entirely new design type, or if the tool has substantially changed since you last used it, this qualification option may not be appropriate.
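For teams that keep this kind of usage history, a lightweight, structured record per tool release makes the “documented in a systematic way” requirement much easier to satisfy. Below is a minimal sketch in Python of what such a record might contain; the field names and example values are purely illustrative and are not prescribed by ISO 26262 or by any particular EDA vendor.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ToolMalfunctionRecord:
    """One systematically documented tool malfunction, in support of method 1a."""
    tool_name: str          # illustrative: a synthesis, ATPG, or verification tool
    tool_version: str
    use_case: str           # the flow step / mode in which the tool was used
    design_context: str     # design type, operating environment, configuration
    malfunction: str        # what went wrong inside the tool
    erroneous_output: str   # the incorrect output it produced (or failed to flag)
    detection_measure: str  # how the error was (or could have been) detected downstream
    date_observed: date = field(default_factory=date.today)

# Example entry (hypothetical values only):
record = ToolMalfunctionRecord(
    tool_name="example_atpg_tool",
    tool_version="2020.03-SP2",
    use_case="stuck-at pattern generation for a scan-inserted netlist",
    design_context="automotive SoC, same flow configuration as prior release",
    malfunction="incorrect fault collapsing under a specific constraint file",
    erroneous_output="coverage report overstated stuck-at coverage",
    detection_measure="independent fault simulation of the generated patterns",
)
```

In practice this could just as easily be a spreadsheet or an entry in an issue tracker; what matters for method 1a is that use cases, malfunctions, erroneous outputs, and detection measures are captured consistently across releases.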
1b Evaluation of the tool development process
Method 1b requires access to the tool supplier’s internal development process. The evaluation of the tool development process is usually performed as an assessment against an appropriate national or international standard, carried out by a certification organization. Synopsys has worked with SGS TÜV Saar, a leading inspection, verification, testing and certification company, to independently assess many of our tool development processes. We provide our customers with the results of these assessments to help simplify their tool qualification efforts.
1c Validation of the software tool
For most EDA tools, the recommended qualification method is 1c, validation of the software tool. In this method, your design or CAD team validates that the tool complies with your company’s specified requirements in the context of its safety requirements and related use cases. You should analyze potential tool malfunctions and corresponding erroneous outputs, as well as the mitigation measures to prevent or detect them. In addition, you must consider the reaction of the tool to anomalous operating conditions (e.g., incomplete input data, incomplete installation, use of configurations or settings that are not considered ‘safe,’ or settings that are undocumented). At most medium-to-large SoC design companies, CAD teams evaluate incoming new tools or tool releases using company-specific test cases and, for consistency, often run these as regression tests in which known-good outputs are compared with the outputs of the new tool version.
It is important that your test cases include the use cases specified for the safety requirements of your target design. If they do not, you will need to develop and validate additional test cases with the tool before deploying it. In addition, complete design test cases may be used to exercise an entire EDA tool flow and check for malfunctions related to tool interactions. Other sources of data for this type of qualification are the tool’s user guide or release notes, and the supplier’s online support center.
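To make the regression-test idea above concrete, here is a minimal sketch in Python of a known-good (golden) output comparison. The directory layout, file names, and the assumption of deterministic text outputs are ours for illustration only; they do not come from the standard or from any specific EDA flow.

```python
"""Minimal regression-comparison sketch for method 1c tool validation.

Assumes each test case produces deterministic output files, and that the
known-good outputs from a previously qualified tool version are stored
under 'golden/' while the new tool version writes to 'candidate/'.
"""
import filecmp
from pathlib import Path

GOLDEN_DIR = Path("golden")        # known-good outputs (qualified tool version)
CANDIDATE_DIR = Path("candidate")  # outputs produced by the new tool version

def compare_outputs(golden_dir: Path, candidate_dir: Path) -> list:
    """Return the test-case outputs that differ from the known-good results."""
    mismatches = []
    for golden_file in sorted(golden_dir.rglob("*")):
        if not golden_file.is_file():
            continue
        candidate_file = candidate_dir / golden_file.relative_to(golden_dir)
        # A missing output or a byte-level difference is treated as a potential
        # tool malfunction to be analyzed before the new version is deployed.
        if not candidate_file.exists() or not filecmp.cmp(
            golden_file, candidate_file, shallow=False
        ):
            mismatches.append(str(golden_file.relative_to(golden_dir)))
    return mismatches

if __name__ == "__main__":
    diffs = compare_outputs(GOLDEN_DIR, CANDIDATE_DIR)
    if diffs:
        print("Outputs differing from known-good results (investigate before deployment):")
        for name in diffs:
            print("  ", name)
    else:
        print("All regression outputs match the known-good results.")
```

In practice, CAD teams typically use report-specific comparators that tolerate benign differences such as timestamps or runtime statistics rather than a strict byte-level comparison, and each mismatch is analyzed as a potential malfunction with its possible consequences and detection measures recorded.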
You may use software tools that have been assessed at TCL2 or TCL3, along with a 1c validation, for any ASIL level as specified in ISO 26262-8, tables 4 and 5.
1d Development in accordance with a safety standard
The 1d method is less commonly used for EDA tools because they are usually developed and used for a wide variety of industries and applications. No safety standard is fully applicable to the development of EDA software tools for safety-critical automotive designs. Instead, you can select a subset of requirements from an existing safety standard that is relevant to safety-critical automotive designs.
As a leading provider of EDA software tools for automotive system-on-chips (SoCs), we are frequently asked by designers for insight on assessing TCLs in a way that is consistent with the ISO 26262 requirements documented in clause 11 of part 8, “Confidence in the Use of Software Tools.” To help designers understand the intent of this clause and provide guidance on how to effectively meet the standard’s requirements, we offer a white paper on this topic, entitled “How to Assess TCLs for ISO 26262.”
I have a couple of questions in this regard:
1. Per the standard, 1a, 1b, 1c and 1d are alternate entries, and as per the standard itself, “an appropriate combination of methods shall be applied in accordance with the ASIL indicated.” From that, I would assume that for an ASIL-D development using a TCL3 tool (e.g., an ATPG tool) I need to use a combination of 1c and 1d. Is that understanding correct?
2. The validation of the tool should meet the criteria below. Given that, isn’t it better if the EDA companies perform these activities rather than pushing them to the design houses?
11.4.9.2 The validation of the software tool shall meet the following criteria:
a) the validation measures shall provide evidence that the software tool complies with specified requirements to its purpose as specified in the classification;
NOTE 1 The validation shall provide evidence that the assessed tool errors either do not occur or will be detected.
NOTE 2 The validation can be performed either by using a customized test suite developed by the user or by the tool vendor (if the test suite of the vendor includes the tool use cases of the user).
EXAMPLE The standard for a programming language helps to define the requirements for validating the associated compiler.
b) the malfunctions and their corresponding erroneous outputs of the software tool occurring during validation shall be analysed together with information on their possible consequences and with measures to avoid or detect them; and
c) the reaction of the software tool to anomalous operating conditions shall be examined.
Hi Prasanth. Yes, I agree. For 1b (evaluation of the tool development process) and 1c (validation of the software tool), it does make a lot of sense for the EDA companies to take care of these, as they may be very expensive activities for a design house and hardly justifiable otherwise.
Here is a link with more on this topic:
https://semiengineering.com/demystifying-eda-support-for-iso-26262-tool-qualification/