Can Big Data Help Coverage Closure?

When does a large amount of data become Big Data, and could system-level verification benefit from it?


Semiconductor designs are a combination of very large numbers and very small numbers. They contain a very large number of transistors at very small feature sizes, and the databases that describe them are often huge.

The chip industry has been looking at machine learning to effectively manage some of this data, but so far datasets have not been properly tagged across the industry and there is a reluctance to share that data.

“Today we collect data, structure it, use it to learn and interpret on the edge, but it’s a limited action field,” said Aart de Geus, chairman and co-CEO of Synopsys, during his keynote speech at the Synopsys Users Group. “In the future we will predict things by virtue of the intersection of hardware, software and AI/machine learning and Big Data.”

The amount of data generated by sensors and the digitization of market segments will grow exponentially in the future. “This is really big and really small data,” said de Geus, referring to the volume of data as well as how data will be used at the atomic and molecular level for designing and manufacturing chips. “If you change one parameter, what is the impact on a block? If you simulate it again and again, then you generate a lot of data. The question is whether we will be able to apply that to machine learning so we can design chips in a much shorter amount of time.”

These are big unknowns, and there are gaps to fill between where the industry is today and where it will need to be to achieve those goals. For one thing, there are aspects of the design and verification flow that are not precise, relying instead on best guesses and guidance from company experts. One such example is coverage and verification closure. The state space of a leading-edge design is so massive that there is no hope of exhaustive verification. As a result, statistical techniques may be considered essential.

The development of the Portable Stimulus concept potentially transforms coverage from being an unstructured data problem into a structured data problem. So does that mean coverage can now be solved using Big Data techniques? Or will it simply remain a task associated with a large dataset?

What is system coverage?
The first problem is defining what system coverage means. Mark Glasser, a principal engineer at Nvidia, defines it as whether “all of the behaviors that you are interested in have occurred or not.”

System-level coverage is different from block-level coverage. “System-level coverage is all about use cases,” says Vigyan Singhal, CEO of Oski Technology. “I assume that everything has been verified exhaustively at the block level and that sub-systems are correct. What is left when you assemble the system? Have you verified all the use cases? This includes performance, safety, and security.”

Care has to be taken in defining the context of the system. “Coverage means different things depending upon where you are in the supply chain,” points out Mike Bartley, CEO of Test and Verification Solutions. “Companies delivering autonomous vehicles have a very different definition compared to what semiconductor companies would define it to be.”

Adnan Hamid, CEO of Breker, put some figures to it. “When considering the entire verification space using graph-based techniques, you quickly realize that you are dealing with very big numbers. A graph for a unit could have 10³⁰ paths and a graph for a system could have 10³⁰⁰ paths. How many simulations can you run? Perhaps 10⁵ or 10⁶. How many in emulation? Perhaps 10⁸. How many in silicon? Perhaps 10¹². So we have 10¹² samples and a space of 10³⁰⁰.”
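To put Hamid’s figures in perspective, the fraction of that path space any engine can touch is easy to compute. The short Python sketch below is purely illustrative (it is not from Breker or any tool mentioned here); it simply takes the quoted orders of magnitude and prints how thinly each engine samples the space.

from math import log10

# Orders of magnitude quoted above (illustrative, not measured).
paths_per_system = 10**300                      # paths through a system-level graph
runs = {"simulation": 10**6, "emulation": 10**8, "silicon": 10**12}

for engine, count in runs.items():
    # Fraction of the space exercised, expressed as a power of ten
    # to avoid floating-point underflow.
    exponent = log10(count) - log10(paths_per_system)
    print(f"{engine:>10}: samples roughly 10^{exponent:.0f} of the path space")

Even silicon, with its 10¹² samples, exercises about one part in 10²⁸⁸ of a 10³⁰⁰-path space, which is why brute-force sampling alone can never be the whole answer.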

At the block level, the industry has relied upon functional coverage to define verification progress. This came about because of the constrained random methodology, which generated testcases without any knowledge about what those tests would accomplish.

Tests generated by Portable Stimulus (PS) have well-defined coverage, which is not how system testing has traditionally been done. “System testing is very much done using directed testing, and directed testing has never made use of coverage, never mind functional or any other kind of coverage,” says Ashish Darbari, CEO of Axiomise.

As companies adopt a Portable Stimulus strategy for system-level verification, it becomes possible to generate many more cases than possible in the past, and it becomes reasonable to ask when enough testcases have been generated. That, in turn, requires a definition for system-level coverage.

“As systems have become more complex, we have to do a lot more than just coverage of functionality,” explains Bartley. “We also have to consider power, hardware/software co-verification, multiple clock domains, performance etc. The definition of system coverage will change.”

Continuing with functional coverage
Should functional coverage continue to be used at the system level? Not according to Hamid. “We have trained engineers for the past twenty years to use implementation coverage as a proxy for our intent coverage.”

Darbari agrees. “We have been talking about implementation coverage for far too long, and tracking intent is a great step forward,” he says.

But not everyone is willing to walk away from functional coverage. Constrained random methodology sometimes generates scenarios that were never anticipated, and these can be tracked by functional coverage.

“It is very easy to produce a coverage model,” says Bartley. “It is very difficult to produce a useful coverage model. You can do crosses and create huge models that are impossible to hit and aren’t very useful.”

So while new coverage models are being considered, the old ones also are being preserved. “When we talk about PS coverage, nobody is saying that you have to give up functional coverage or code coverage,” says Singhal. “They are there, and after a few years we will have experience with system-level coverage. But we don’t have to give up anything today.”

Is it really Big Data?
Just because a problem is large and produces a lot of data does not mean that it is a Big Data problem. “Autonomous vehicles have massive data sets and would swamp anything that we produce for an SoC,” Bartley points out. “Big Data techniques have two main uses—are we trying to train something or are we trying to make a decision?”

It helps to look at the end user problem. “We have storage farms with terabytes of data collected from simulation, including coverage data,” says Glasser. “If you look at that as a giant dataset, we need to find ways to analyze that and come up with more interesting ways to declare if the design is ready to go or not.”

To Glasser, it is the techniques that are important. He wants to see statistical analysis applied to the data, such as clustering and curve fitting, which would enable him to extract information and make decisions. “It does not matter if you have 5 data points or 5 billion data points. Viewing coverage as a big data problem opens up new possibilities for these kinds of analysis.”
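As a rough illustration of the kind of clustering Glasser has in mind, the sketch below groups tests by their coverage signatures. The coverage matrix is synthetic and the use of scikit-learn’s KMeans is just one plausible choice; a real flow would read per-test hit data from its coverage database.

import numpy as np
from sklearn.cluster import KMeans

# Synthetic coverage matrix: one row per test, one column per cover point,
# 1 if the test hit that point. Stands in for data pulled from a coverage DB.
rng = np.random.default_rng(0)
num_tests, num_points = 500, 200
coverage = (rng.random((num_tests, num_points)) < 0.1).astype(int)

# Group tests with similar coverage signatures. Tests in the same cluster
# are candidates for overlapping work.
labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(coverage)

for cluster in range(8):
    members = np.flatnonzero(labels == cluster)
    points_hit = coverage[members].any(axis=0).sum()
    print(f"cluster {cluster}: {members.size} tests, {points_hit} cover points hit")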

Bartley agrees that the size of the data may not be the most important factor, focusing instead on how the data is treated and the decisions that can be made from it. He points to one successful attempt within the industry. “Arm has presented very structured ways of collecting data and analyzing it, and this allows them to make decisions about readiness to ship. They are trying to structure the dataset to try and pull out information on which they can make decisions. That is a Big Data technique.”

The problem is that the term Big Data has been defined by the industry and means something very specific to its practitioners. “I don’t think it is a big data problem because we don’t have enough data,” asserts Darbari. “The problem with machine learning is that we would need a lot of training sets before we would have models that make sense on new inputs. I am not sure the traditional techniques of machine learning would apply, but statistical techniques such as covariance and correlation would make sense.”
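The kind of simpler statistics Darbari alludes to is easy to picture. The fragment below is a toy example, with made-up stimulus and coverage data, of a correlation check: does a stimulus knob correlate with a coverage bit being hit?

import numpy as np

rng = np.random.default_rng(1)
packet_len = rng.integers(64, 1500, size=1000).astype(float)   # stimulus knob, per test
noise = rng.random(1000) < 0.05                                 # occasional surprises
hit_overflow = ((packet_len > 1200) ^ noise).astype(float)      # coverage bit, per test

# A strong correlation suggests which knob to sweep to close that coverage hole.
corr = np.corrcoef(packet_len, hit_overflow)[0, 1]
print(f"correlation between packet length and overflow coverage: {corr:.2f}")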

Are we done yet?
For every project, there will be that point in time when the boss asks, ‘Are we done yet?’ Any answer requires data to support a conclusion.

“Verification is a risk-analysis problem,” says Hamid. “We have to decide how much verification we can afford versus the risk of having something go wrong. Maybe for a device that will go into a toy, it doesn’t matter if it has problems. But if something going into my car is safety-critical, it really matters. The beauty of graph-based models is that we can reason about the entire space. Instead of guessing, you can reason and decide. We know that some scenarios are more important than others.”

Bartley believes more process is needed. “We are in the game of risk management, and we have been for a long time. When it is purely a commercial decision, you can make that call. But when you get into safety and security, standards are involved and it is no longer a commercial decision. You have to demonstrate that you have met the standard. That makes it harder.”

It also requires the right view of completeness. “We are at an inflection point where we have incredibly complex systems, and we are asking how we think about them and how we reason about them, so that we can properly verify them,” explains Glasser. “We need to introduce the notion of abstraction. When we talk about graph modeling and thinking about processes and resources, what we are doing is raising the abstraction. We are thinking about use cases and system operation, not about registers and datapaths. That is what PS does. It allows us to think in terms of graphs, and it provides a tool for reasoning about the system and how we want to attack the verification problem.”

Including all the data
Part of the problem stems from the evolution of verification from a single tool into a flow, and concepts such as coverage have not yet managed to stretch across all the tools in that flow: simulation, emulation, FPGA prototyping and formal. “We have formal verification that matches gate level to RTL, but you almost always find a problem, be it a timing constraint issue or something you forgot,” says Singhal. “I don’t think one solution provides the complete answer.”

“A lot of verification is done using formal and I don’t see that in PS today,” points out Darbari. “I would love to join the dots so that the whole idea of intent-based declarative programming and discovering scenarios and gaps could be leveraged at the system level. Until I see a complete story that joins all the verification paradigms I remain skeptical.”

As more data can be brought together, better decisions can be made. “I want coverage heat maps or clustering so that I am able to figure out smarter ways to look at the test suite and figure out which tests are doing something useful and which are not,” says Glasser. “Which are redundant? There is analysis we can do on the coverage data that would help with that. It could make the loop shorter and use resources more effectively.”
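One way to act on Glasser’s “which are redundant?” question is a simple greedy reduction over merged coverage data: keep the test that adds the most new cover points, repeat until nothing new is added, and flag the rest. The sketch below uses hypothetical test names and cover points; it is one possible analysis, not a description of any particular tool.

def minimize_suite(coverage_by_test):
    """coverage_by_test maps a test name to the set of cover points it hit."""
    remaining = dict(coverage_by_test)
    covered, keep = set(), []
    while remaining:
        # Pick the test that contributes the most not-yet-covered points.
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        gain = remaining.pop(best) - covered
        if not gain:
            break                     # everything left adds no new coverage
        keep.append(best)
        covered |= gain
    return keep, sorted(set(coverage_by_test) - set(keep))

suite = {
    "boot_smoke": {"rst", "pll_lock", "fetch"},
    "dma_stress": {"dma_burst", "dma_abort", "fetch"},
    "dma_basic":  {"dma_burst"},                      # subset of dma_stress
    "low_power":  {"clk_gate", "retention"},
}
kept, redundant = minimize_suite(suite)
print("keep:", kept)
print("redundant:", redundant)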

At the end of the day, data from different coverage models and different platforms can be merged, but it is still a binary decision. “It is hard, and I am not sure that Big Data helps with that,” says Bartley. “Arm has structured all of the data from different sources in a way that allows them to make that decision.”

For most, it remains a vision. “Big Data by itself is not an answer. It is just data, and we want to extract information from it,” concludes Hamid. “We need all coverage information, not just from functional coverage, but from formal and from testcases you are running post-silicon. We collect data and we have to turn that into information.”

Related Stories
System Coverage Undefined
What does it mean to have verified a system and how can risk be measured? The industry is still scratching its head.
Verification Of Functional Safety
Part 2 of 2: How should companies go about the verification of functional safety and what tools can be used?
Which Verification Engine?
Experts at the table, part 3: The value of multiple verification engines, and what’s driving demand for verification in the cloud.


