Software-Driven Verification (Part 3)

Experts at the table, part 3: New verification problems require a rethinking of the entire methodology.


Verification has been powered by tools that require hardware to look like the kinds of systems that were being designed two decades ago. Those limitations are putting chips at risk, and a new approach to the problem is long overdue. Semiconductor Engineering sat down with Frank Schirrmeister, group director, product marketing for the System Development Suite at Cadence; Maruthy Vedam, senior director of system validation engineering at Intel; Tom Anderson, vice president of marketing at Breker; Patil, founder and chief executive officer of Vayavya Labs; and John Goodenough, vice president of design technology at ARM, to talk about software-driven verification. Part One provided the panelists' views about software-driven verification and how it relates to constrained random generation and UVM. Part Two discussed use cases and noted that choosing them is more than picking the obvious scenarios; it requires deep analytics. What follows are excerpts from that conversation.

SE: Do we need new forms of coverage to track system-level concepts?

Goodenough: The term we use is statistical coverage. At the system level, people focus on questions such as, "Have I tested this working at the same time as that?" That is not enough. You have to know where your critical problems are, and this can be different from system to system. Event-ordering problems are the kinds of things that even architects may not have considered.

Vedam: As we go up in integration we get into a very interesting situation. The number of complex states increases, mainly because of concurrency. Integration layers the functionality of each IP on top of the others, and that determines what the system can or cannot do as a whole. This creates an almost exponential growth. It is a case of understanding what we need to test and making sure we get that from our notion of coverage. Coverage can generate a lot of data, and the data can get tricky to understand, so understanding what we need to test, and making sure that coverage can represent it appropriately, is important so that we can make better decisions.
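To make the concurrency point concrete, here is a minimal C sketch of the kind of coverage being described: it records which pairs of IP blocks have ever been observed active at the same time. The IP names, the sample_concurrency() helper, and the sample data are invented for illustration; even this pairwise view grows quickly, and a full cross of each IP's internal states would grow exponentially.

```c
/* Hypothetical sketch: pairwise "active at the same time" coverage.
 * Names and data are invented; not from any specific tool or flow. */
#include <stdio.h>
#include <stdbool.h>

#define NUM_IPS 4
static const char *ip_name[NUM_IPS] = { "CPU", "DMA", "GPU", "USB" };

/* covered[i][j] becomes true once IP i and IP j have been seen busy together */
static bool covered[NUM_IPS][NUM_IPS];

/* Called at every sample point with a snapshot of which IPs are busy. */
static void sample_concurrency(const bool active[NUM_IPS])
{
    for (int i = 0; i < NUM_IPS; i++)
        for (int j = i + 1; j < NUM_IPS; j++)
            if (active[i] && active[j])
                covered[i][j] = true;
}

int main(void)
{
    /* Two made-up samples standing in for real monitor data. */
    bool s1[NUM_IPS] = { true, true,  false, false };  /* CPU + DMA */
    bool s2[NUM_IPS] = { true, false, true,  false };  /* CPU + GPU */
    sample_concurrency(s1);
    sample_concurrency(s2);

    int hit = 0, total = 0;
    for (int i = 0; i < NUM_IPS; i++)
        for (int j = i + 1; j < NUM_IPS; j++) {
            total++;
            if (covered[i][j])
                hit++;
            else
                printf("never seen concurrently: %s + %s\n",
                       ip_name[i], ip_name[j]);
        }
    printf("pairwise concurrency coverage: %d/%d\n", hit, total);
    return 0;
}
```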

Goodenough: Right – I am not short of ways to generate tests. I am short of understanding all of those cycles I ran in simulation, or on silicon, and I am short of understanding whether I am done.

Schirrmeister: And you need to exclude the things that you know are not relevant for this particular design in this particular use model. From an academic coverage perspective, you may not have covered everything but the important thing is knowing what you don’t need to cover.

Goodenough: The conversation with the customer is always challenging because you know they will say, "You have a bug, and why didn't you find it?" You have to have the data to be able to respond. Sometimes you can see that a particular event was never triggered by all of the cycles of verification that we ran, and this leads to continuous improvement.

Schirrmeister: The challenge is that you have multiple engines, such as simulation, emulation and prototyping. Each of these has different characteristics from an accuracy perspective, so there are cases where something runs in one environment but not in another. One way this can happen is when you forget to initialize memory. The hardware would work sometimes, depending on the state of the memory. This means that there are different things that you can find at different levels of accuracy.
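A minimal C sketch of the class of bug Schirrmeister describes, with heap memory standing in for device-visible RAM; the buffer size and the test itself are invented for illustration. The test only "passes" depending on whatever residual contents the memory happens to hold, which is exactly why results can differ between simulation, emulation, prototyping and silicon.

```c
/* Illustrative only: a test whose result depends on uninitialized memory. */
#include <stdio.h>
#include <stdlib.h>

#define BUF_WORDS 16

int main(void)
{
    /* malloc(), like many RTL memories and emulator RAMs, hands back whatever
     * happens to be there; calloc() or an explicit memset() would make the
     * starting state deterministic across engines. */
    unsigned *buf = malloc(BUF_WORDS * sizeof *buf);
    if (!buf)
        return 1;

    /* BUG: the test only writes the first half of the buffer ... */
    for (int i = 0; i < BUF_WORDS / 2; i++)
        buf[i] = (unsigned)i;

    /* ... but checks all of it. Reading the unwritten half is itself
     * undefined behavior in C, mirroring the nondeterminism in hardware. */
    int errors = 0;
    for (int i = 0; i < BUF_WORDS; i++)
        if (buf[i] != (unsigned)i)
            errors++;

    printf("%s (%d mismatches)\n", errors ? "FAIL" : "PASS", errors);
    free(buf);
    return errors != 0;
}
```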

Goodenough: One of the challenges with UVM, when moving from block-level simulation environments to the sub-system level, is that you have to have congruency between the environments. Otherwise you can run the same payload and get different results. Different behavior can sometimes help you find bugs, but that is the exception.

Patil: Are we trying to define the quality of the validation? There is a difference between metrics-driven verification and completeness, and most of the time you are not bothered about completeness; you are concerned about quality. Can you capture the entire set of scenarios, or at least the interesting ones?

Vedam: Agreed. I could argue we are never done with verification. The key is to know exactly where we are, i.e. clearly understanding what has been verified and what is pending. So in a sense “completeness” is a quality metric! Customers do not expect a single drop that is completely verified. Particularly early on in the product development life cycle, they prefer a partially verified drop sooner, as long as they clearly understand which features are guaranteed to work and which ones are “at risk”.

SE: How many problems can be covered up with software?

Schirrmeister: First-time silicon success means that you can work around all of the problems with software. There will be errata sheets or lists about what you should not do with this chip. The software delivered with the chip is paired with the hardware and will mask some of the issues.

Vedam: If you take what you are delivering as a system, and you have software masking a problem, then it is not a problem.

Goodenough: Unless it degrades performance and then it may be a performance problem.

Anderson: If the errata are such that you can't divide this range of numbers by this range of numbers, then the results can be really messy. You can describe what you can do with the chip, or do everything in software, but that kills performance. That does not work.
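The sketch below shows the kind of software workaround and its performance cost being debated here. The erratum (a hardware divider assumed to misbehave for small divisors), the hw_divide() stub and the threshold are all hypothetical; the point is that the driver masks the issue from callers, at the price of a much slower software path.

```c
/* Hypothetical errata workaround: route risky operands to a slow software path. */
#include <stdint.h>
#include <stdio.h>

/* Stand-in for an accelerated hardware divide; here it is just C code. */
static uint32_t hw_divide(uint32_t n, uint32_t d)
{
    return n / d;
}

/* Pure-software fallback used only when the (assumed) erratum could be hit. */
static uint32_t sw_divide(uint32_t n, uint32_t d)
{
    uint32_t q = 0;
    while (n >= d) {            /* repeated subtraction: safe but slow */
        n -= d;
        q++;
    }
    return q;
}

/* Driver-level wrapper that hides the erratum from application code. */
static uint32_t safe_divide(uint32_t n, uint32_t d)
{
    if (d == 0)
        return 0;               /* keep the sketch well defined */
    if (d < 16)                 /* assumed erratum range: small divisors */
        return sw_divide(n, d);
    return hw_divide(n, d);
}

int main(void)
{
    printf("%u\n", safe_divide(1000, 7));   /* takes the slow fallback path */
    printf("%u\n", safe_divide(1000, 100)); /* uses the "hardware" path */
    return 0;
}
```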

SE: Are performance and security other areas that we need to start verifying?

Anderson: Performance, power, security are all part of functionality. If you fail on any of these fronts you do not have the product you intended.

Goodenough: UVM provides a way of constructing a validation environment. To validate a block you need multiple validation environments, and UVM cannot do it all. It is not a panacea; it builds one class of validation environment. For security you need another validation environment, and some people are using data coloring and applying formal techniques. Power often uses software-driven dynamic simulation. They won't try to use UVM for that.

Schirrmeister: We need to be careful about defining the object that is being verified. If you are verifying at the block level, you may build a UVM verification environment. But there are other verification scopes, such as sub-system and system. Each of these has different engines, and you are trying to do different things. At the block level we use simulation because we need the debug insight, and we use formal on the side, but at the higher levels we need to run a lot more cycles, so you need hardware-assisted techniques, and some items you can only do post-silicon.

SE: This is not a new problem, and companies have been solving it in various ways, but it has quickly become a public issue. Where will we be in 12 to 18 months?

Goodenough: We are actively driving our EDA partners to provide us with some of the pieces we need for this puzzle. I want to know how to automate my verification flow so that I can make my engineers more effective and be able to measure what they are doing.

Schirrmeister: The IP provider and the sub-system provider have different needs. Whenever standardization kicks in, and we are now talking about making stimulus portable across engines, it is a sure sign that people want to make sure they are not locked into a proprietary solution. They want to ensure that their internal challenges are heard, and they want to know that it is scalable. At the recent Accellera meeting there were some really cool presentations. The problem is on the table, we are willing to respond, and we are active with customers.

Goodenough: The problem we have is a scalability problem. Our challenge is to get through the volume of verification in the most efficient way possible. Software-driven verification is one part of the sandbox.

Vedam: It is an efficiency thing. I am looking to the EDA vendors to help us take a systems view of this, to bring a standardized view of what "system" means for everyone along the product life cycle, and to let everyone interact with it so that reuse is maximized.

Goodenough: At the moment our problem is scalability, but the other side of this is that as I get more efficient at running simulation cycles, I push the problem somewhere else. Now I have to get more efficient at debugging.

Schirrmeister: 18 months from now, I would venture that we will have come to grips with big-data tools and companies like Splunk, companies that EDA has never heard of, and we will be looking into the log files of simulations, trying to make sense of all of the data that is generated.

Goodenough: We have invested a lot over the past two years building a big data backend for our verification infrastructure.

Anderson: There is a scalable solution out there that works today for automating the generation of C tests. There is evidence that adoption is accelerating, and more vendors are offering solutions in this space. We don't see the lack of a standard as a hindrance, but a year or two from now, software-driven verification is likely to become a fairly standard methodology.
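As a rough illustration of what a generated, self-checking C test might look like, here is a minimal sketch. It is not any vendor's actual output: the DMA-style scenario, register names and values are invented, and plain variables plus a small behavioral model stand in for memory-mapped hardware so the sketch runs on a host.

```c
/* Hedged sketch of a self-checking, software-driven test scenario. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Stand-ins for device registers and memory (real code would use volatile MMIO). */
static uint32_t DMA_SRC, DMA_DST, DMA_LEN, DMA_CTRL, DMA_STATUS;
static uint8_t  mem[256];

/* Tiny behavioral model of the "hardware" so the sketch is self-contained. */
static void dma_model_run(void)
{
    if (DMA_CTRL & 1u) {
        memmove(&mem[DMA_DST], &mem[DMA_SRC], DMA_LEN);
        DMA_STATUS = 1u;                /* transfer done */
    }
}

int main(void)
{
    /* Step 1: prepare a known pattern at the source address. */
    for (uint32_t i = 0; i < 64; i++)
        mem[i] = (uint8_t)i;

    /* Step 2: program and kick the (modelled) DMA engine. */
    DMA_SRC = 0; DMA_DST = 128; DMA_LEN = 64; DMA_CTRL = 1u;
    dma_model_run();

    /* Step 3: self-check the result, as a generated test would. */
    int errors = (DMA_STATUS != 1u);
    for (uint32_t i = 0; i < 64; i++)
        if (mem[128 + i] != (uint8_t)i)
            errors++;

    printf("%s\n", errors ? "TEST FAILED" : "TEST PASSED");
    return errors != 0;
}
```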

Patil: If I look at this as a tool provider, standardization is important, because if we go to multiple customer accounts and create customized solutions for each of them, it complicates things. It is important that it fits into as many flows as possible. People have to believe in it; they have to know the real benefit. What does it yield that has not been possible in the past? This has to be articulated for it to become mainstream.


