
Verification Planning And Management With Formal

Getting a unified look at what formal, simulation, and acceleration/emulation contribute to verification.


Over the last twenty years, formal verification has grown from a niche technology practiced only by specialists to an essential part of mainstream chip development. Along the way, several advances were needed to make wider adoption of formal feasible. These included the standardization of assertion languages, enhanced formal engine performance and capacity, better debug capabilities, and pushbutton “apps” to solve specific problems. In addition, formal tools and their results were integrated with the results from traditional simulation tests so that team members and leaders could have a unified view of verification progress.

This may sound obvious, but it was not the case for many years. Even for chip teams using formal tools, it was common to hear management complaining that they didn’t know how to gauge what value “that group of PhDs in the corner” was adding to the verification process. For those considering adoption, one of the usual questions was “If I use formal, what do I not have to test in simulation?” It was hard to answer these concerns without a unified look at what formal, simulation, and acceleration/emulation contributed to verification.

At OneSpin’s recent Osmosis users’ group meeting in Munich, one of our expert users gave an excellent talk to address exactly this need. Antti Rautakoura is an SoC/ASIC Verification Specialist currently on leave from Nokia to pursue his PhD at Finland’s Tampere University. He began by discussing the incredible complexity of today’s large chips, including the increase in parallelism to compensate for the fact that single-thread performance is falling behind Moore’s Law. This puts even more pressure on verification teams, whose work typically consumes about two-thirds of the total project effort.

Antti noted that there are many verification tools available, and that it isn’t always clear which will provide the most benefit on a given project. He said that the key is starting with a solid verification plan and then choosing the right tools with the right metrics. High-quality metrics provide concrete evidence of the value added by a given tool or, alternatively, expose redundancy in the process. We agreed strongly with his statement “Every documented, communicated, and measured flow has value.” What matters is a verification plan that covers all tools, followed by a set of metrics that can accurately measure progress.

Verification plans used to be simple lists of testcases to be written and run in simulation. Antti noted that this is no longer the case. Verification plans should be based on design features, with information on which tool or method will be used to verify each feature and metrics to determine when verification is complete. For example, a feature might list coverage points that must be reached in simulation, formal, or both in order to consider the feature verified. Antti noted that such an approach “is plan driven and result centric by nature” and “can give insight to true completeness of the verification.”
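To make the idea concrete, here is a minimal sketch of what one feature-based plan entry might look like if captured as data. The feature names, field names, and status values below are illustrative assumptions on our part, not the schema of any particular planning tool.

verification_plan = [
    {
        "feature": "FIFO overflow handling",                  # a design feature, not a testcase
        "methods": ["simulation", "formal"],                   # who is responsible for closing it
        "coverage_points": ["fifo_full_push", "fifo_overflow_assert"],
    },
    {
        "feature": "AXI write response ordering",
        "methods": ["formal"],
        "coverage_points": ["axi_wr_resp_order"],
    },
]

def feature_verified(entry, results):
    """A feature counts as verified only when every listed coverage
    point has been closed by one of the planned methods."""
    return all(results.get(point) == "covered" for point in entry["coverage_points"])

results = {"axi_wr_resp_order": "covered"}
print(feature_verified(verification_plan[1], results))         # True: the formal-only feature is closed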

While there are some vendor-specific solutions for integrating formal with planning and metrics from simulation, these work only within their closed tool flows. Antti stressed the importance of support for third-party verification planning tools, and described how he uses the OneSpin PortableCoverage solution to meet this need. He employs the OneSpin Verification Planning Integration App to import the results from formal analysis into any of the leading planning tools. He stressed that users must understand the mappings of results between simulation and formal, and examine bounded (incomplete) proofs carefully to see if more verification is needed.
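As a rough illustration of what that mapping involves, the short sketch below shows one hypothetical way formal results could be folded into plan status, with bounded proofs flagged for review rather than silently counted as complete. The result labels and the function are our own simplification, not OneSpin’s API or any planning tool’s import format.

def map_formal_result(assertion, result, bound=None):
    """Translate a formal engine verdict into a plan/coverage status."""
    if result == "proven":
        return (assertion, "covered")                          # full proof closes the point
    if result == "bounded":
        # Incomplete proof: valid only up to 'bound' cycles, so flag it
        # for review instead of treating it as closed coverage.
        return (assertion, f"needs_review (bounded to {bound} cycles)")
    if result == "failed":
        return (assertion, "failing")                          # counterexample found
    return (assertion, "unknown")

print(map_formal_result("fifo_overflow_assert", "bounded", bound=64))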

Antti then discussed the role of coverage in a combined flow. He sees high value in the OneSpin Coverage Closure Accelerator App, which identifies portions of the design that cannot be reached by any simulation test or formal analysis. Such “dead code” should either be removed or documented if it is due to reuse or debug structures. He pointed out that dead code determination may be influenced by constraints on the design and stressed that the verification team must review the constraints carefully to ensure accurate results.

If dead code is not removed from the design, it must be excluded from coverage results so that its lack of reachability does not artificially reduce the coverage metrics. This also means that the verification team does not waste time and resources running simulations that try to hit unreachable points in the design. For reachable portions of the design, coverage metrics from formal analysis should be integrated with those from simulation. Antti uses the OneSpin Verification Coverage Integration App to bring metrics from the OneSpin Quantify App into the coverage viewer of any major third-party simulator. Quantify provides model-based mutation coverage, the industry’s most accurate metric for judging how well a design is covered by assertions.
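The arithmetic behind that exclusion is simple, and the sketch below shows the principle with made-up point names: unreachable points drop out of the denominator, and a point counts as hit if either simulation or formal reaches it. This is only an illustration of the idea, assuming a merged point-based metric; it is not how any specific coverage viewer computes its numbers.

def merged_coverage(points, sim_hits, formal_hits, unreachable):
    """Coverage over reachable points only, counting hits from either engine."""
    scored = [p for p in points if p not in unreachable]       # dead code leaves the denominator
    hit = [p for p in scored if p in sim_hits or p in formal_hits]
    return len(hit) / len(scored) if scored else 1.0

points      = {"a", "b", "c", "d"}
sim_hits    = {"a"}                                            # reached only in simulation
formal_hits = {"b"}                                            # reached only by formal
unreachable = {"d"}                                            # dead code, excluded
print(f"coverage = {merged_coverage(points, sim_hits, formal_hits, unreachable):.0%}")   # 67%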

Antti said that our PortableCoverage flow enables shared responsibility between the design and verification teams, and praised our efficient Apps and support for third-party tools. He also gave a very nice summary of the value of verification planning and management:

  • Early planning helps to catch unclear requirements, documents, or even architectural flaws before any lines of verification code are written
  • The entire process and its artifacts are documented
  • Verification progress is visualized
  • Robust verification planning is an essential part of requirement traceability

We are grateful to Antti for presenting his talk and sharing his expertise with our users. A video of his full presentation is available at https://www.onespin.com/resources/videos/osmosis-2019/antti-rautakoura-nokia. We hope that you enjoy it as much as we did.


