Verification Signoff Beyond Coverage

Ensuring implementation and verification match the customer’s requirements.


A common view of verification signoff starts with a comprehensive verification plan covering every requirement defined in the specifications and use cases, the architectural definition, and any other relevant documents. Tests are then developed to cover every feature of the verification plan. Those tests are run and debugged, and identified issues are addressed in the design. This process iterates until the agreed level of coverage is met. Functional coverage is the metric by which this process is gauged, and it works well within its scope. The major electronic design automation (EDA) vendors have tools to run simulations, accumulate coverage statistics, and help push those metrics further. But this is not the whole signoff story. Coverage measures compliance with the verification plan, which is several steps removed from the customer’s requirements. How do designers know critical information was not dropped or added along the way?

What else matters in signoff?

Everything upstream of the internally developed functional specifications/requirements matters. It is not possible to fully close the loop between customer requirements and the implementation/verification unless these inputs are included in the analysis. Value chains are compressing, and systems-on-chip (SoCs) are becoming more application-specific. Customers expect designs tuned to their exact needs, so the requirements they defined must be matched in implementation/verification. It will not be well received if the customer discovers they are expected to patch around mismatches.

The challenge is that the definition may be a rather mixed bag of inputs: Word, PDF, or DITA-based documents, spreadsheets, Simulink, SysML, or virtual model prototypes, and software loads that should run on the final hardware (perhaps with allowance for some changes). There may also be documented requirements in DOORS, Jama Connect, or a similar tool. How does the verification team, design team, or architect sign off that the implementation and verification match the requirements? They will do their absolute best, of course, but where is the detailed and auditable process to ensure that every requirement maps to an implementation realization and that verification of that implementation is adequately covered?
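As a sketch of what such an audit could look like, the fragment below cross-references a requirements export against implementation and verification link records and flags any requirement left unrealized or unverified. All names and data shapes here (TraceLink, the link kinds, the file names) are illustrative assumptions, not any particular tool’s API.

```python
# Minimal sketch of an auditable requirements-to-verification check.
# The record shapes are assumptions: requirements and links are taken
# to be exported as plain records from the requirements platform in use.
from dataclasses import dataclass

@dataclass
class TraceLink:
    req_id: str    # requirement being traced
    kind: str      # "implements" or "verifies"
    artifact: str  # RTL module, testbench, or test name

requirements = {
    "REQ-001": "DMA supports 256-byte bursts",
    "REQ-002": "Interrupt latency under 1 us",
    "REQ-003": "Customer-specific low-power retention mode",
}

links = [
    TraceLink("REQ-001", "implements", "rtl/dma_engine.sv"),
    TraceLink("REQ-001", "verifies",   "tests/dma_burst_test"),
    TraceLink("REQ-002", "implements", "rtl/irq_ctrl.sv"),
    # REQ-002 has no verification link; REQ-003 has no links at all.
]

def audit(requirements, links):
    """Report requirements missing an implementation or verification link."""
    for req_id in requirements:
        kinds = {l.kind for l in links if l.req_id == req_id}
        for needed in ("implements", "verifies"):
            if needed not in kinds:
                print(f"{req_id}: no '{needed}' link -- signoff gap")

audit(requirements, links)
```

The point is not the code but the discipline: every requirement either resolves to concrete design and verification artifacts, or the gap is reported automatically.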

To make this more concrete, suppose there is a feature that one important customer wants, but no one else requires. Perhaps through oversight or misunderstanding, this feature does not make it into the functional specification/requirement. It happens all too often. Even the ideal of 100% coverage will not catch this problem, because coverage is only as good as the verification plan. There is a big problem if the verification plan does not accurately reflect the requirements.
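A toy calculation (with hypothetical numbers) makes the blind spot explicit: if the dropped requirement never enters the verification plan, coverage against the plan can read 100% while coverage against the customer’s requirements does not.

```python
# Toy illustration (hypothetical numbers): coverage is measured against
# the verification plan, so a requirement dropped before the plan was
# written is invisible to the metric.
requirements = {"REQ-001", "REQ-002", "REQ-003"}  # REQ-003 is the key customer feature
plan_items   = {"REQ-001", "REQ-002"}             # REQ-003 never made it into the plan
covered      = {"REQ-001", "REQ-002"}             # every plan item is covered

plan_coverage = len(covered & plan_items) / len(plan_items)
req_coverage  = len(covered & requirements) / len(requirements)

print(f"plan coverage: {plan_coverage:.0%}")         # 100% -- looks ready for signoff
print(f"requirement coverage: {req_coverage:.0%}")   # 67% -- the real picture
```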

Or suppose that during design, a team decides it cannot implement exactly what the specification asks for, so it implements an alternative in the belief that it will be just as good or even better. The team is unaware that this change will impact performance in some rare but important use case. Maybe this will be caught in simulation? Perhaps, but it is very difficult to be comprehensive in system-level testing. There is a real risk that this problematic change will survive all the way to silicon.

Requirements tracing complements coverage for verification signoff

Running equivalence checking between Word documents, virtual models, and register-transfer level (RTL) is not likely to be possible in our lifetimes, but it does not have to be. System builders and software teams already actively use requirements traceability as a robust method to track correspondence between top-level and implementation-level requirements, down to realization, verification, and test. That traceability is exchanged through the Requirements Interchange Format (ReqIF) and managed in platforms like DOORS and Jama Connect.
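ReqIF is an XML-based OMG standard, so requirement objects exported from these platforms can be read with ordinary XML tooling. The fragment below is a minimal sketch that pulls requirement identifiers and string attribute values out of a simplified ReqIF export; real files vary (requirement text is often carried in XHTML attributes, for example), so treat the element paths as assumptions.

```python
# Minimal sketch: extract requirement objects from a (simplified) ReqIF
# export. Real ReqIF files vary -- text often lives in
# ATTRIBUTE-VALUE-XHTML rather than ATTRIBUTE-VALUE-STRING -- so the
# paths below are assumptions about a simple export, not a full parser.
import xml.etree.ElementTree as ET

NS = "http://www.omg.org/spec/ReqIF/20110401/reqif.xsd"

def load_spec_objects(path):
    """Return {identifier: [string attribute values]} for each SPEC-OBJECT."""
    root = ET.parse(path).getroot()
    objects = {}
    for obj in root.iter(f"{{{NS}}}SPEC-OBJECT"):
        ident = obj.get("IDENTIFIER")
        values = [v.get("THE-VALUE")
                  for v in obj.iter(f"{{{NS}}}ATTRIBUTE-VALUE-STRING")]
        objects[ident] = values
    return objects

# Usage (assumes a 'requirements.reqif' export exists):
# for ident, values in load_spec_objects("requirements.reqif").items():
#     print(ident, values)
```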

Although those tools are designed for easy adoption in the software world, they do not understand hardware semantics. They do support “foreign object” linkages to connect to design and verification data, but the burden of making those linkages correctly falls on design and verification engineers. This might not be so bad if there were only a few hundred objects to track. But think of the memory map, the interrupt map, the IO muxing map; these can run to tens of thousands of objects or more. Manually updating all these objects through design changes and repartitioning becomes extremely difficult, if not impossible.
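At that scale, linkages have to be generated, not typed in. As a sketch of the idea, the fragment below derives trace links mechanically from a register map export; the CSV layout (register name, address, requirement tag) is a hypothetical format, but the principle is that a repartitioning that moves thousands of registers means re-running a script, not re-editing thousands of links.

```python
# Sketch: derive "foreign object" trace links mechanically from a register
# map export instead of maintaining them by hand. The CSV layout
# (name, address, req_id) is a hypothetical export format.
import csv

def links_from_register_map(path):
    """Yield (requirement_id, artifact) pairs, one per tagged register."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):  # columns: name, address, req_id
            if row["req_id"]:
                yield row["req_id"], f"reg:{row['name']}@{row['address']}"

# Usage (hypothetical file): regenerate all links after every map change.
# links = list(links_from_register_map("memory_map.csv"))
# print(f"{len(links)} links regenerated")  # tens of thousands, no manual edits
```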

A better approach is traceability management, which can connect through ReqIF to the requirements platforms on the customer side and directly to the artifacts tracked by the design and verification teams. Understanding design semantics makes it possible to infer connections between requirements and implementation, and to keep those connections tracking correctly as the project evolves. This method ensures auditable linkages from customer specifications, down through design requirements, and ultimately to the realization of the SoC.
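One way to picture “inferring connections” is a matcher that understands the design artifacts well enough to propose links and then re-checks them on every revision. The sketch below uses a deliberately crude heuristic (token overlap between requirement text and register names) purely to illustrate the loop; a production traceability manager would work from real design semantics such as memory maps and interfaces, not string matching.

```python
# Illustrative sketch only: infer candidate requirement-to-register links
# by token overlap, then re-run on each design revision so stale links
# surface automatically. A real traceability manager would use design
# semantics, not this string heuristic.
import re

def tokens(text):
    """Lowercase word tokens from a register name or requirement sentence."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def infer_links(requirements, registers, min_overlap=2):
    """Propose (req_id, reg_name) links where enough tokens overlap."""
    proposed = []
    for req_id, text in requirements.items():
        for reg in registers:
            if len(tokens(reg) & tokens(text)) >= min_overlap:
                proposed.append((req_id, reg))
    return proposed

reqs = {"REQ-010": "DMA burst length is programmable"}
rev1 = ["dma_burst_len", "irq_mask"]
rev2 = ["dma_xfer_len", "irq_mask"]  # register renamed in a later revision

print(infer_links(reqs, rev1))  # [('REQ-010', 'dma_burst_len')]
print(infer_links(reqs, rev2))  # [] -- the stale link surfaces for review
```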

This is the kind of traceability that can complete the verification signoff objective with low impact on SoC design teams, and it is the kind of traceability support you will find in Arteris Harmony Trace.


