Automatic coverage model generation technology continues to advance.
Verification is all about mitigating risk, and one of the growing issues, alongside increasing complexity and new architectures, is coverage.
The whole notion of coverage is about making sure a chip will work as designed. That requires determining how effective the simulation tests that stimulate the design are at activating its structures and its functional behaviors.
“Historically there were two classes of coverage—structural and functional,” said Harry Foster, chief verification scientist at Mentor, a Siemens business. “Structural coverage includes things like code coverage and line coverage, whereas functional coverage is behaviors of a design, such as looking at all the appropriate packets coming into an interface or all the state machines. Both have existed for some time.”
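To make that distinction concrete, here is a minimal, tool-agnostic sketch in Python. It is not how any simulator actually implements coverage; it simply shows structural coverage as a record of which statements of a design model executed, and functional coverage as a record of whether predefined behavior bins, such as packet type and payload length combinations at an interface, were ever observed. All names in it are invented for illustration.

```python
# Minimal sketch contrasting structural and functional coverage.
# All names are illustrative; real tools derive these models from RTL.

# Structural coverage: which "lines" (statements) of the design executed.
executed_lines = set()

def run_design(packet_type, payload_len):
    executed_lines.add(1)                 # line 1 always runs
    if packet_type == "data":
        executed_lines.add(2)             # data-handling branch
    else:
        executed_lines.add(3)             # control-handling branch
    if payload_len > 64:
        executed_lines.add(4)             # long-payload branch
    # Return the functional behavior we want to observe.
    return (packet_type, "long" if payload_len > 64 else "short")

# Functional coverage: bins of behavior we care about, defined up front.
functional_bins = {("data", "short"): 0, ("data", "long"): 0,
                   ("ctrl", "short"): 0, ("ctrl", "long"): 0}

for pkt, length in [("data", 32), ("data", 128), ("ctrl", 16)]:
    functional_bins[run_design(pkt, length)] += 1

total_lines = 4
print(f"structural (line) coverage: {len(executed_lines)}/{total_lines}")
hit = sum(1 for count in functional_bins.values() if count > 0)
print(f"functional coverage: {hit}/{len(functional_bins)} bins hit")
```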
SoCs added new types of coverage, mostly related to system-level aspects of the design such as performance, power, security and safety. “At the same time, system-level analysis has grown in importance, doesn’t fit into the traditional notion of coverage, and happens to be very statistical in nature,” Foster said. “Nonetheless, we’re still trying to go back to see whether the testbench generating the stimulus allows us to test performance, power, safety, and security.”
Automatic generation of these coverage models has proven helpful for some time.
Structural coverage in the software world goes back decades, while the hardware world only introduced structural coverage, such as line coverage, around 1990. That type of coverage model is generated automatically, Foster said. “This pulls out the appropriate structures, builds a coverage model, and has been practiced for years. When it comes to functional coverage, this has been an area of pretty active research in academia for about the past 15 years, with mixed results.”
More recently, commercial tools help identify missing coverage properties using data mining techniques, he said. “The idea is that I run the simulation and then this data mining tool essentially goes through the waveforms that are generated and determines if it can identify some interesting properties. Then the user can analyze and determine where the coverage holes are.”
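The commercial tools themselves are proprietary, but the underlying idea can be sketched roughly: scan recorded waveform data for candidate properties, such as “every req is followed by ack within N cycles,” and keep only the candidates that hold across the entire trace. The trace data and the property template below are invented for illustration and do not represent any particular product.

```python
# Hedged sketch: mine a recorded trace for candidate "req -> ack within N" properties.
# The trace and the property template are invented for illustration only.

trace = {  # one list of 0/1 samples per signal, one entry per clock cycle
    "req": [0, 1, 0, 0, 1, 0, 0, 0],
    "ack": [0, 0, 1, 0, 0, 0, 1, 0],
}

def holds(req, ack, max_latency):
    """Check that every req pulse is followed by an ack within max_latency cycles."""
    for t, r in enumerate(req):
        if r == 1:
            window = ack[t + 1 : t + 1 + max_latency]
            if 1 not in window:
                return False
    return True

# Propose the tightest latency bound that the trace supports, if any.
candidates = [n for n in range(1, 5) if holds(trace["req"], trace["ack"], n)]
if candidates:
    print(f"candidate property: req |-> ##[1:{candidates[0]}] ack")
else:
    print("no req/ack latency property holds on this trace")
```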
The introduction of the Portable Stimulus standard, which describes test intent, provides a natural type of coverage model, as well. “The reason for this is that we compile the Portable Stimulus into a graph, and stimulus is generated by traversing the graph,” said Foster. “You can tell two things from this. First, you can tell what percentage of the input space has been covered. Second, you can see what use cases have been covered, because each traversal through the graph is basically a use case.”
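A simplified way to picture this, not taken from any actual PSS tool, is a graph in which every node offers a choice of actions, every complete path through it is a use case, and coverage is the fraction of paths the generated tests have traversed. The action graph below is invented for illustration.

```python
# Sketch of graph-based scenario coverage, loosely in the spirit of Portable Stimulus.
# The action graph is invented for illustration; real PSS models are far richer.
from itertools import product

# Each node offers a choice of actions; a use case is one action per node, in order.
graph = {
    "config":   ["dma_mode", "pio_mode"],
    "transfer": ["read", "write"],
    "check":    ["crc", "raw_compare"],
}

all_use_cases = set(product(*graph.values()))     # every path through the graph

def generate_test(path):
    """Stand-in for stimulus generation: just record the traversed path."""
    return path

# Suppose the generator has produced these three tests so far.
generated = {generate_test(p) for p in [
    ("dma_mode", "read", "crc"),
    ("dma_mode", "write", "crc"),
    ("pio_mode", "read", "raw_compare"),
]}

covered = generated & all_use_cases
print(f"use cases covered: {len(covered)}/{len(all_use_cases)}")
for path in sorted(all_use_cases - covered):
    print("  hole:", " -> ".join(path))
```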
New advances include the ability to generate a test and, as part of that, see at generation time what coverage the test will deliver when it executes, according to Larry Melling, product management director in Cadence’s System and Verification Group. “It is now possible to generate and rank the tests before even executing a test. That means much more efficient regressions, where the testing can be ranked and ordered according to the kind of coverage it’s going to deliver. Tests of low value, along with tests that have a low return from a coverage perspective, can be weeded out of the regression plan or the test plan without spending any CPU cycles on actually running those tests.”
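One way to imagine that ranking step is a greedy selection over predicted coverage: before anything runs, each candidate test is scored by how many not-yet-covered bins it is expected to hit, and tests that add nothing new are weeded out. The predicted-coverage sets below are made up for illustration; a real flow would obtain them from the generator.

```python
# Hedged sketch: rank generated tests by predicted coverage before running them.
# The predicted-coverage sets are invented; a real flow would get them from the generator.

predicted = {             # test name -> coverage bins the generator predicts it will hit
    "test_a": {"b1", "b2", "b3"},
    "test_b": {"b2", "b3"},
    "test_c": {"b4"},
    "test_d": {"b1"},     # adds nothing new once test_a is selected
}

def rank_tests(predicted_cov):
    """Greedy ordering: at each step pick the test adding the most uncovered bins."""
    remaining = dict(predicted_cov)
    covered, order = set(), []
    while remaining:
        name, bins = max(remaining.items(), key=lambda kv: len(kv[1] - covered))
        gain = len(bins - covered)
        if gain == 0:          # everything left is redundant; weed it out
            break
        order.append((name, gain))
        covered |= bins
        del remaining[name]
    return order, sorted(remaining)

order, dropped = rank_tests(predicted)
print("run order:", order)       # e.g. [('test_a', 3), ('test_c', 1)]
print("weeded out:", dropped)    # redundant tests never consume CPU cycles
```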
Correlated with this are post-execution coverage analysis capabilities. Portable Stimulus models and tool engines can capture information about the execution so that coverage queries can be written against it afterward.
“You can write, ‘I’m wondering if we have tested the situation where these things happened in parallel, and this occurred while that was going on,’ etc.,” said Melling. “After the fact, you actually can go through, build up, and understand more about what your testing covered, and give yourself a more complete picture of what has been exercised in the design through post-process PSS coverage interrogation models.”
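A post-process query of that kind could, in principle, look like the following sketch: start and end times of scenario actions are captured during the run, and afterward a query asks whether two particular activities ever overlapped. The record format and the query are hypothetical and are not the PSS coverage API.

```python
# Hedged sketch of a post-execution coverage query over captured run data.
# The record format and the query itself are hypothetical.

# Captured during execution: (action name, start time, end time) per scenario action.
execution_log = [
    ("dma_write",   10, 40),
    ("cache_flush", 25, 35),
    ("irq_service", 50, 55),
]

def overlapped(log, action_a, action_b):
    """Did any instance of action_a run concurrently with any instance of action_b?"""
    spans_a = [(s, e) for name, s, e in log if name == action_a]
    spans_b = [(s, e) for name, s, e in log if name == action_b]
    return any(s1 < e2 and s2 < e1 for s1, e1 in spans_a for s2, e2 in spans_b)

# "Did a DMA write ever happen in parallel with a cache flush?"
print(overlapped(execution_log, "dma_write", "cache_flush"))   # True
print(overlapped(execution_log, "dma_write", "irq_service"))   # False
```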
Increasing activity around the ISO 26262 automotive safety standard, which requires the most rigorous coverage analysis, is driving additional efforts. “The well-known V-model used in ISO 26262, and adopted in other applications, suggests a closed-loop verification process where requirements are mapped to the verification process, which generates coverage data that is fed back directly to the requirements. Portable Stimulus provides an effective method to map requirements to scenarios, drive verification testbenches, and then feed coverage data back. This is a powerful way to match coverage at the system level directly to requirements and the original design specification. We are marching toward a point where the verification of an executable intent specification can be directly measured,” noted Dave Kelf, chief marketing officer at Breker Verification Systems.
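The closed loop Kelf describes can be pictured as a simple traceability map: each requirement points at the scenarios intended to verify it, per-scenario coverage flows back from the verification runs, and requirement status is derived from that. The requirement IDs, scenario names, and numbers below are invented for illustration.

```python
# Hedged sketch of closed-loop requirement traceability; IDs and names are invented.

requirements = {                     # requirement -> scenarios intended to verify it
    "REQ-001 boot within 100 ms": ["boot_cold", "boot_warm"],
    "REQ-002 recover from ECC error": ["ecc_single", "ecc_double"],
}

scenario_coverage = {                # fed back from the verification runs (percent)
    "boot_cold": 100, "boot_warm": 100, "ecc_single": 80, "ecc_double": 0,
}

for req, scenarios in requirements.items():
    scores = [scenario_coverage.get(s, 0) for s in scenarios]
    status = "closed" if all(s == 100 for s in scores) else "open"
    print(f"{req}: {status} (scenario coverage: {scores})")
```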
All manner of data inspires change
With the uptick in activity around everything related to data analytics and big data, machine learning technologies are giving engineering teams reason to pause and rethink their approaches, because so much more can be done with data than ever before.
“It’s going to change the landscape in verification, especially because you are doing so much more than you could even have described in your original coverage model thinking at the IP level as you’re assembling this stuff,” Melling said. “The testing that you’re doing there really becomes how to enable and visualize all the activities and all the things that are going on, and then to put that into something well understood like a coverage model or a metric driven approach to give engineering teams the view of, ‘Oh, you have actually tested what I want, I can see those results, I can visualize the completeness of it, and feel more confident that I’m going to get what I want at the end of the day.’”
He suggested this is fertile ground for EDA providers to be taking advantage of because it’s an area where automation can make a huge difference. “A lot of these things just won’t happen and haven’t happened because they weren’t automated. They weren’t packaged in the right way. The data wasn’t accessible. All these ‘weren’ts and won’ts and couldn’ts’ were basically preventing us from taking advantage of a lot of the work that had been done, and information that was available from that work. This automatic generation of coverage models is one of the first indicators that this is not only possible, but it changes the game in terms of not being two distinct efforts. If somebody’s having to figure out what they are testing, what they are covering, what they are learning from this, along with figuring out how to generate all of the stimulus to be able to do it, that takes a lot of time.”
Drawbacks to automatic generation
While reactions to the automatic generation of coverage models are generally positive, one concern is that these new capabilities could create a false sense of security. Coverage models still need to be checked carefully.
On the bright side, formal and assertion-based verification is moving up the ranks as a way to alleviate that concern. Engineering teams still will be writing those assertions and coming up with checks, which gives them confidence they are not just blindly accepting auto-generated coverage models.
“Formal verification is a technology moving in leaps and bounds,” Melling said. “A lot of that is also based upon a data-driven kind of approach, where we’re able to improve efficiency of the formal algorithms because we’re learning as we go and leveraging those kind of machine learning technologies to make it more efficient and more applicable in a broader space.”
This automated approach is gaining ground throughout the EDA world.
“In addition to the many other benefits of assertions and/or cover properties, these can be used to generate certain types of coverage models automatically,” said Nicolae Tusinschi, product specialist for design verification at OneSpin Solutions. “In simulation, assertions and cover properties that are triggered define a form of coverage that is already integrated with other metrics, such as code coverage and functional coverage. In formal verification, assertions and cover properties are also triggered, and these results should, if not must, be integrated into the coverage database and coverage viewer along with the simulation metrics.”
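Integrating those results might be imagined as a straightforward merge of per-property outcomes from the two engines into a single view, for example treating a cover point that simulation never hit but formal proved unreachable as a waived hole rather than a real one. The property names and status labels below are illustrative only and do not reflect any particular tool's database format.

```python
# Hedged sketch: merge per-property coverage results from simulation and formal
# into one view. Property names and statuses are illustrative only.

simulation = {"cover_fifo_full": "hit", "cover_retry": "not_hit", "assert_no_overflow": "pass"}
formal     = {"cover_fifo_full": "reachable", "cover_retry": "unreachable", "assert_no_overflow": "proven"}

merged = {}
for prop in sorted(set(simulation) | set(formal)):
    sim = simulation.get(prop, "n/a")
    fv = formal.get(prop, "n/a")
    if sim == "not_hit" and fv == "unreachable":
        status = "excluded (formally unreachable)"   # not a real coverage hole
    elif fv == "proven":
        status = "covered (proven by formal)"
    elif sim in ("hit", "pass"):
        status = "covered (simulation)"
    else:
        status = "open"
    merged[prop] = (sim, fv, status)

for prop, (sim, fv, status) in merged.items():
    print(f"{prop:20s} sim={sim:8s} formal={fv:12s} -> {status}")
```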
He noted that formal verification also can provide model-based mutation coverage, which is highly valuable because it measures how well the design is covered by the assertions and cover properties. “Most coverage metrics only report whether a part of the design has been stimulated. Mutation coverage measures whether or not bugs in the design would actually be detected by the assertions. This form of coverage also should be integrated with the other results from simulation and formal verification.”
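Mutation coverage can be pictured as deliberately injecting small bugs into a model of the design and checking whether the existing checks notice; a low score means the assertions would let real bugs through. The tiny “design,” its mutants, and the checker below are invented for illustration and are not how any formal tool computes the metric.

```python
# Hedged sketch of mutation coverage: inject small bugs and see whether the
# checkers notice. The "design" and its checker are invented for illustration.

def design(a, b):
    """Reference behavior: saturating 4-bit adder."""
    return min(a + b, 15)

mutants = [                     # each mutation is a small, deliberate bug
    ("drop saturation", lambda a, b: a + b),
    ("off-by-one limit", lambda a, b: min(a + b, 14)),
    ("swap to subtract", lambda a, b: min(a - b, 15)),
]

def checker(a, b, result):
    """The assertion we already have: result never exceeds 15."""
    return result <= 15

stimulus = [(3, 4), (9, 9), (15, 15)]

detected = 0
for name, mutant in mutants:
    caught = any(not checker(a, b, mutant(a, b)) for a, b in stimulus)
    print(f"{name:18s} {'detected' if caught else 'MISSED'}")
    detected += caught

# A weak checker misses most mutants, exposing how little the assertions really verify.
print(f"mutation coverage: {detected}/{len(mutants)}")
```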
Putting this all in perspective, Foster said that given the amount of time verification engineers spend in various areas of verification, automatic coverage is by far the most important topic.
Case in point: his research tracks how verification engineers divide their time across the various areas of verification.
Still, going forward, the biggest challenge will be managing and leveraging all of the data effectively, Melling asserted.
“[Big verification data] is going to set new requirements in that the data is coming from multiple sources, and you really want to be able to combine data to create more complex data sets to plug into these neural network type of algorithms that can be learned from. So the big challenges will be in creating the infrastructure to be able to automate this,” he said.