Data-Driven Verification Begins

Experts at the Table: What determines good vs. bad data, why the EDA industry is so slow to catch on, and what verification engineers are really looking for.


Semiconductor Engineering sat down to discuss data-driven verification with Yoshi Watanabe, senior software architect at Cadence; Hanan Moller, systems architect at UltraSoC; Mark Conklin, principal verification engineer at Arm; and Hao Chen, senior design engineer at Intel. What follows are excerpts of that conversation, which was conducted in front of a live audience at DVCon.


(L-R) Yoshi Watanabe; Hanan Moller; Hao Chen; Mark Conklin. Photo by Brian Bailey/Semiconductor Engineering

SE: Why are we seeing so much interest in data now, and how will this affect verification?

Watanabe: Data-driven verification has been happening for many years on a small scale and individually. Customers have spent time analyzing verification data and collecting more data, and then thinking about how to improve, which is where we see metric-driven verification in the context of UVM and of a sign-off flow. Metric-driven verification sets the stage: those key ingredients are available for analysis. More recently, with systematic analysis of the data, data science and machine learning are coming into the picture. We see the opportunity to utilize this data in a more systematic and automatic way, allowing engineers to continue doing what they were already doing, but in a more transparent manner. If you look at where this is going, it’s going to create a closed-loop verification flow. Collecting the data systematically, analyzing it, and then providing a feedback loop to the inputs will allow you to achieve your goals. You can eliminate redundancy and focus on the specific gaps you need to fill. Such a closed loop will help. This is the next phase of verification. In 10 years, young verification engineers will laugh at how senior engineers had to deal with verification closure without having a feedback loop.
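The closed loop Watanabe describes can be reduced to a small scheduling step. The sketch below is purely illustrative, not any vendor's flow: it assumes a hypothetical coverage export that maps each test to the coverage bins it hit, greedily drops redundant tests, and reports the bins the next regression still has to target.

```python
# Illustrative sketch of one closed-loop regression step (hypothetical data
# model, not a tool's API): keep a non-redundant test set, report coverage gaps.

def plan_next_regression(coverage_by_test, all_bins):
    """coverage_by_test: dict test_name -> set of coverage bins hit.
    all_bins: set of every bin in the coverage model."""
    remaining = set(all_bins)
    selected = []
    # Greedy set cover: keep only tests that add new coverage (redundancy removal).
    for test, bins in sorted(coverage_by_test.items(),
                             key=lambda kv: len(kv[1]), reverse=True):
        gained = bins & remaining
        if gained:
            selected.append(test)
            remaining -= gained
    # Whatever is left is the gap the next round of stimulus must target.
    return selected, remaining

tests = {
    "rand_smoke":   {"fifo.full", "fifo.empty"},
    "rand_stress":  {"fifo.full", "fifo.overflow"},
    "directed_err": {"fifo.overflow"},          # fully redundant with rand_stress
}
keep, gaps = plan_next_regression(tests, {"fifo.full", "fifo.empty",
                                          "fifo.overflow", "fifo.underflow"})
print("run:", keep)              # ['rand_smoke', 'rand_stress']
print("still uncovered:", gaps)  # {'fifo.underflow'}
```

In a real closed loop the uncovered set would feed back into constraint weights or directed stimulus rather than simply being printed.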

Moller: The need for this data is huge. As verification engineers, we collect data and utilize whatever we can for closing the loop. Verification never really finishes. You tape out because you have a certain level of confidence that your design is meeting requirements. But until the design is deployed and an application is running on it, you don’t really know what’s going to happen. You want to collect and mine this data and then let somebody sift through it and find all of the gold nuggets.

Chen: What motivated us to pay more attention to data is that we want to make our design and verification more efficient—especially the execution part. One example is that we run formal regressions for days. Imagine if there’s a project milestone and we get a late design change. As a verification engineer, that will cause a lot of stress over whether I can close my regression before the milestone. That really motivated us to pay more attention to analyzing the data. If there is any data-driven approach that can automate a process and help us to improve efficiency, that will make our lives better. With the advancement of recent data-mining techniques, a lot of enhancements already have happened in the tools. For example, we can take the formal engine proof information from our regressions and use that data to guide the tool to pick the best engine to solve each formal property in the next regression. That way the regression can be done a lot faster. The data was there for a long time, and the idea of using that data also existed for a long time. But it’s only recently that we have come up with the techniques to fully automate the process to make it efficient.
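The engine-selection idea Chen mentions can be sketched roughly as follows. Every name and record field here is an assumption made for illustration; real formal tools expose proof data through their own databases and options, not this format.

```python
# Rough sketch of data-driven engine selection (hypothetical log format): for
# each property, reuse the engine that solved it fastest in the previous
# regression; fall back to a default portfolio otherwise.

DEFAULT_ENGINES = ["bmc", "pdr", "k_induction"]   # assumed names, for illustration

def pick_engines(history):
    """history: list of dicts like
    {"property": "p_no_overflow", "engine": "pdr",
     "status": "proven", "seconds": 512.0}"""
    best = {}
    for run in history:
        if run["status"] not in ("proven", "falsified"):
            continue                    # inconclusive runs teach us nothing here
        prop, secs = run["property"], run["seconds"]
        if prop not in best or secs < best[prop][1]:
            best[prop] = (run["engine"], secs)
    return {prop: [engine] for prop, (engine, _) in best.items()}

def engines_for(prop, plan):
    # Unseen or previously inconclusive properties still get the full portfolio.
    return plan.get(prop, DEFAULT_ENGINES)

history = [
    {"property": "p_no_overflow", "engine": "pdr", "status": "proven", "seconds": 512.0},
    {"property": "p_no_overflow", "engine": "bmc", "status": "timeout", "seconds": 86400.0},
]
plan = pick_engines(history)
print(engines_for("p_no_overflow", plan))   # ['pdr']
print(engines_for("p_new_property", plan))  # full default portfolio
```

The point is only that the previous run's proof data, which already exists, decides how the next run is ordered.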

Conklin: We’ve barely scratched the surface of the data we can collect for driving engineering efficiency. We’ve all done coverage, but if you really think about it from that perspective, coverage ends up being a management directive rather than what engineers are able to influence. There’s always the question, ‘Should I really be focusing on these last few things?’ The data we’ve had from doing verification for the last 20 years isn’t interesting anymore. You do a few things for coverage and you’re done. But we have all of this big data that everyone else is using. We’re the industry that’s enabling them to do that, but we’re just getting to the point where we’re thinking about it ourselves. The complexity of verification has changed. Twenty years ago you probably could design a processor and do a testbench by yourself. That’s not possible now. There’s been expansion into formal and other areas, where there’s an army of verification people for one design, and you never really know when you’re done. That’s one of the problems with verification. But we really have the opportunity to begin collecting and normalizing some data and to gain efficiencies that can impact the final product—and to save the engineers’ time, which is the most important.

SE: Is the data you’re getting these days good? And what delineates good versus bad data?

Moller: If you find a bug, that’s good data. But even if you don’t, you learn something from that.

Conklin: That’s a tough problem. Getting data scientists into our industry is another new thing. We do have a lot of good data. But even if you’re going to use that data for something like machine learning, there’s a lot of work behind that to determine whether the data is good. Still, you can use the tools we have now to train a model against an outcome, and that model can tell you whether your data is good or not, rather than just guessing.

Chen: If the data is consumed by engineers, it has to be user-friendly or user-centric to be good. Otherwise it will waste engineers’ time. A long time ago I was involved in a chip-level CDC (clock domain crossing) task. At that time there was no smart hierarchical CDC flow. I remember how frustrated I was when I ran a CDC tool for two days and came back with more than 30,000 lines of errors in my CDC report. But I still needed to drill down into that data. It took us more than a month to collect the data and analyze it, with a lot of back-and-forth discussion between the chip-level team and the IP owners just to figure out whether each issue was at the chip level or the IP level. We’re getting a lot of data, and engineers are handling a lot of data every day. But if the data is not user-friendly or presented in a well-organized way, it’s hard to be efficient.
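One way to make a flat report like that more consumable is to attribute each violation to an owner up front. This is a minimal sketch under an assumed line format, not the output of any particular CDC tool:

```python
# Illustrative triage of a flat CDC report (assumed line format:
# "<severity> <source_path> -> <dest_path> : <rule>"), grouping violations by
# the IP instance they fall in so chip-level and IP-level issues are split early.

from collections import defaultdict

def triage(report_lines, ip_instances):
    by_owner = defaultdict(list)
    for line in report_lines:
        parts = line.split()
        src, dst = parts[1], parts[3]
        src_ip = next((ip for ip in ip_instances if src.startswith(ip + ".")), None)
        dst_ip = next((ip for ip in ip_instances if dst.startswith(ip + ".")), None)
        # A crossing entirely inside one IP belongs to that IP's owner;
        # anything else is a chip-level (integration) issue.
        owner = src_ip if src_ip and src_ip == dst_ip else "chip_top"
        by_owner[owner].append(line)
    return by_owner

report = [
    "ERROR u_ip_a.sync.ff -> u_ip_a.core.reg : missing_sync",
    "ERROR u_ip_a.bus.out -> u_ip_b.bus.in : unsynced_crossing",
]
print(dict(triage(report, ["u_ip_a", "u_ip_b"])))
# first violation goes to 'u_ip_a', second to 'chip_top'
```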

SE: Is it the data that is good or bad, or is it the application of that data that’s good or bad? And is there a distinction made there today?

Watanabe: There is a distinction. One dimension is the type of data; the other is the content of that data. In terms of type, the issue is what data you need to collect in the first place, and that is certainly application-specific. In my experience, even finding out what types of data are needed to solve a given problem is not easy. You often do not know whether you already have all of the necessary types of data available, and if not, what you have to collect in addition to what you already have. That’s one problem. The second problem is the content of the data. Suppose you have all the types of data you think are necessary, and you look at what you’ve collected of a particular type to see whether it is good enough to do something meaningful. For example, if you try to optimize your verification efficiency and want to run some sort of sensitivity analysis, certain parameters may seem very sensitive and effective to tune for a particular design condition, such as maximizing a buffer. But if you don’t see any variation in the data across lots and lots of verification runs, you cannot figure out anything. Constrained-random simulation tends to do much the same thing every time, so you don’t see much variance and you cannot learn anything from it. That’s another dimension of data quality. Even though you know exactly what you have to collect, the contents of the data are not necessarily good enough to take any meaningful action.
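A minimal sketch of the variance check Watanabe is describing, with hypothetical field names: before fitting or tuning anything, confirm that the parameters you collected actually vary across the regression.

```python
# Minimal data-quality check before any sensitivity analysis (field names are
# hypothetical): a parameter that never varies carries nothing to learn from.

from statistics import pvariance

def usable_parameters(runs, min_variance=1e-6):
    """runs: list of dicts mapping parameter name -> observed numeric value,
    one dict per simulation."""
    usable, flat = [], []
    for name in runs[0]:
        values = [run[name] for run in runs]
        (usable if pvariance(values) > min_variance else flat).append(name)
    return usable, flat

runs = [
    {"buffer_occupancy": 12, "burst_length": 4},
    {"buffer_occupancy": 63, "burst_length": 4},
    {"buffer_occupancy": 40, "burst_length": 4},
]
print(usable_parameters(runs))
# (['buffer_occupancy'], ['burst_length']) -> burst_length never varied, so no
# sensitivity analysis is possible on it without new stimulus.
```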

Moller: If you don’t know what data to collect, you can’t blame the data for being wrong. So you’ve collected a bunch of stuff and you’ve got this pile of data sitting there, but you don’t know how to look at it. There has to be some intelligence guiding the collection and analysis of the data.


