Experts at the table, part 3: The panelists discuss use cases and how they differ from today’s verification strategies, but what will solutions look like?
Semiconductor Engineering sat down to discuss the state of the industry for functional verification. The inability of RTL simulation to keep up with verification needs is causing rapid change in the industry. Taking part in the discussion are Harry Foster, chief scientist at Mentor Graphics; Janick Bergeron, fellow at Synopsys; Ken Knowlson, principal engineer at Intel; Bernard Murphy, chief technology officer at Atrenta; and Mike Stellfox, fellow at Cadence. In Part One, the panelists discussed the industry challenges and the emerging verification platforms. Part Two discussed some of the changing needs for verification and that a paradigm shift is required. What follows are excerpts of that conversation.
SE: At the end of part two, the panelists had agreed that a paradigm shift was necessary in verification. Where is that going to happen?
Murphy: The paradigm shift is not happening in the EDA companies: semiconductor companies are getting chips out, so they are the ones doing the innovation.
Foster: It is made more complicated by adding issues such as security. Before this it was just a functional verification problem. Then we layer on top of that a power problem, then we layer on top of that a security problem. We have software on top of that. It has become a different problem. We are verifying something radically different from what we did years ago.
Stellfox: It hasn’t caught up in a lot of companies yet. An example is an SoC company that brought up all of these concerns, but they still only verify connections at the SoC level. Even though software and hardware teams have gotten closer together, coherency, security and power, which are split in the middle of the two groups and are SoC-specific, are still largely ignored.
SE: If these things are important, what is the industry doing to address them? How do we define coverage for security?
Murphy: People intimately involved with this say that there is no answer to this today. There are discussions happening about what it means to quantify security, but today it is not defined.
Knowlson: Security is front and center. There is a lot of demand for use-case-driven verification. When you have a massive system with an infinite number of ways in which these things can be used, if I attack this using the traditional mechanisms, I just don’t have enough time in the day. If I could do the analysis and figure out what my key scenarios are, then I can dig into some of the flows, do the functional level using the virtual prototype and then go into emulation for some of the cycle-accurate information – but it is a lot of work to go from a use case to what it really means in the hardware and have all of the pieces in place.
Foster: One of the problems is with the definition of use cases. This is a fundamental piece of SoC integration validation.
Stellfox: This is definitely an area for innovation. There is a lot that can be done and the way to attack this problem is through use cases. Today, some poor engineer has to take some description of a use case and write a bunch of C directed tests to mimic how that case should exercise software and hardware. When you are talking about a dozen or more cores in an SoC, writing a C test is a daunting task.
Knowlson: And there is no guarantee that the test will do what the software does.
Bergeron: Validation is a more accurate term here because we are checking something that we know and expect. Why did constrained random verification come up with such amazing results? Because it was able to do the things we didn’t think of. If you know what the case is going to do, then it isn’t random.
Foster: It depends on how the tests are abstracted. We can use analysis on those to come to a notion of system-level coverage, which is radically different from what we have had before. Statistical coverage is a new way of thinking about coverage at the next level. We can’t use the mechanism being used to describe functional coverage at this level.
Bergeron: But doesn’t a use-case approach make things such as security a lot more difficult because of the holes in the cases we didn’t think of?
Murphy: That is exactly the problem.
SE: There appears to be an assumption that a use case is a directed test. Why is it not a definition of an objective which can be shown to be met in any number of ways that could be randomized, and why cannot that be crossed with another use case to create just as many random scenarios as we have today?
Foster: Part of the reason is that humans are not good at reasoning about concurrency. In our minds we can describe something we want to see but to describe a use case where this and this happens is not an easy thing to do.
Stellfox: That means it is a perfect opportunity for automation.
Knowlson: And that can take us to the next level of multiple simultaneous use cases.
Stellfox: You need to be able to abstractly define a use case, such as a video comes in through the modem, is stored into memory and is then sent to a display. That is something that a human can reason about, and it is a use case. Then you can define a whole set of use cases, and while as a human you may not want to think about how to exercise the device in all of those different ways, you want to use technology and automation to produce the legal stimulus that can exercise and check those cases.
Foster: Certainly, coming up with an optimal set…
Bergeron: Given the size of the state space, we don’t have a choice, we have to limit it. We have experimented with tools and they have never fulfilled the promise. They speed up the closure and fill in a coverage point quickly. That is not the point. The coverage model is not the point. The journey is as important as the destination. You want random to take you places you didn’t think of.
Foster: It is the cycles of learning that occur…
Bergeron: You can’t create use cases for those interesting things. It is the surprises that you want.