Are Simulation’s Days Numbered?

Experts at the table, part 1: The verification of blocks and IP is undergoing significant change, and the experts have some diverse opinions on the way forward, including the role of simulation.


In the latest EDAC report, the value of IP surpassed the value of CAE tools for the first time. Verification tools are an important part of establishing confidence in IP blocks, and simulation has been the mainstay of that IP verification strategy. But simulation is under increasing pressure, particularly for full-chip and SoC verification, because it has failed to scale. While it remains applicable to block-level verification, block sizes are increasing. Will the last bastion for simulation fade?

Semiconductor Engineering sat down with Michael McNamara, CEO of Adapt IP; Pete Hardee, product management director at Cadence; Dave Kelf, vice president of marketing at OneSpin Solutions; Lauro Rizzatti, an emulation expert; and Arturo Salz, scientist in the verification group at Synopsys. What follows are excerpts of that conversation.


SE: Do we have everything we need for block-level verification?

McNamara: IP and IP subsystems have changed the game. It used to be that you did block-level verification of the various components, then you stitched them together, did some system-level verification to make sure that the edges worked, and you taped it out. Then you found out whether you had done a good enough job. IP has changed that, because those components are no longer being developed by people in the building. This adds an extra dimension of commercial interaction. The verification flow has to change. The flows in place are good, but this adds a new challenge.

Hardee: We are seeing an evolution in block-based verification, especially for IP vendors. I liken it to the evolution of quality practices over the years. It used to be quality control, where you were finding everything late. Then it moved forward with quality assurance. And now we are trying to build quality in from the start. You see the same thing happening with verification methods. You could just do constrained random simulation and bombard the design with everything you can at the end, and some IP vendors have pretty big teams doing just that. Formal verification is moving that forward, and you now have smaller teams starting a lot earlier. They are using formal to do a couple of things: property-based verification to confirm that the IP is behaving correctly, and bug hunting that can penetrate much deeper than in the past. Formal does a much better job at finding the corner-case bugs than constrained random simulation. In the IP market you can make recommendations about how to use the IP, but you are not fully in charge and have limited control over what a customer may do. It is not just about exercising the IP in the domain of how it should be used. Finding bugs outside of the intended usage can also be critical. With formal there is really no such thing as a corner case, because you are exercising all combinations of inputs.

Kelf: Recently, Renesas had a problem. They were trying to figure out how, as an IP vendor, to distribute their own IP within the company, and to get the people consuming the IP not to misuse it. They did the obvious things such as adding assertions around the interface, but they went further and created assertions within the IP that reached into the main design using some interesting abstractions in the assertions. This way they could check things such as the way in which interrupts were handled. They had tried doing this kind of thing in the past with simulation, but they found a number of advantages associated with doing it this way and it proved to be a lot more efficient. They have now built a whole methodology around it.
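
The kind of interface check described here is normally written as a SystemVerilog assertion around the IP's ports. Purely as an illustration of the intent, the sketch below expresses a comparable check in Python as a cycle-based monitor; the req/grant signal names and the four-cycle response window are hypothetical, not details from the Renesas methodology.

```python
# Illustrative sketch only: an interface "assertion" as a cycle-based monitor.
# Signal names (req, grant) and the 4-cycle window are hypothetical examples.

class HandshakeMonitor:
    """Check that every request is granted within `max_latency` cycles."""

    def __init__(self, max_latency=4):
        self.max_latency = max_latency
        self.pending = None  # cycle number of an outstanding, ungranted request

    def on_cycle(self, cycle, req, grant):
        if self.pending is not None and grant:
            self.pending = None                      # request was serviced
        if self.pending is not None and cycle - self.pending > self.max_latency:
            raise AssertionError(
                f"request at cycle {self.pending} not granted within "
                f"{self.max_latency} cycles")
        if req and self.pending is None:
            self.pending = cycle                     # record a new outstanding request


# Usage: feed the monitor one sample of the interface per clock cycle.
mon = HandshakeMonitor(max_latency=4)
trace = [(0, 1, 0), (1, 0, 0), (2, 0, 1), (3, 0, 0)]  # (cycle, req, grant)
for cycle, req, grant in trace:
    mon.on_cycle(cycle, req, grant)
print("trace satisfied the handshake check")
```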

McNamara: With formal you are proving a property, showing that the IP will generate the correct answer in all situations. You are not exercising it; you are establishing it as a fact.

Hardee: Not all the time. It is important to do that a lot of the time and prove the properties to confirm the behavior, but for most bug hunting you are not proving the property. It does not matter if a property completes. What you are doing is penetrating deeply into the state space and finding counterexamples. Every counterexample is a bug that has been driven out of the IP. The way in which formal scales today is less of an issue than it used to be. Part of that is because of the change of mindset, in that it is not always about completing properties.

Rizzatti: Emulation is taking over for the largest IP blocks. Simulation just doesn't scale any more. The algorithms used by simulation have not managed to make use of parallel architectures.

McNamara: Emulation is the fully parallel simulator. We have tried to parallelize simulation on multi-processor machines, and you run into the problem that the number of active threads is actually fairly low; for most of the logic, nothing is happening.

Salz: We have done parallel simulation in the past and had some success, but you need a lot of cores. Even with 64 cores, maybe you get close to 100X. Emulation gives you another couple of orders of magnitude improvement. You can’t compete with that. But let’s not put simulation down.

McNamara: Simulation gives you extraordinary visibility and better debuggability. You can prove a property, or you can find a lack of deviation from what you expect. Bug finding is part of showing that it does what I want, and if it doesn't, it provides you with the test case.

Hardee: It is driven by properties that will try and send the search ever deeper into the design.

McNamara: And formal can help with things such as reachability. You can simulate for years and attempt to get more coverage, but you will never know if you never actually tried the special case. Formal can tell you that there is no way to get there. I do like the concept of the assertion that can track the interrupts. I, as a user, don't know this kind of thing. There is possibly a spec somewhere which didn't get read. But to have the tool reach out and tell you what the IP expects is valuable.
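
The reachability point can be seen in a toy example. The sketch below, with a made-up saturating counter and a made-up cover target, does an exhaustive breadth-first search of the state space; if the target is reachable it returns a trace (which is also how a counterexample to a property is produced), and if the search exhausts the space it has proven the state unreachable, something no amount of simulation can establish. Real formal tools do this symbolically on the RTL rather than by explicit enumeration.

```python
# Illustrative sketch: exhaustive reachability analysis on a toy state machine.
from collections import deque

def next_states(state):
    """Toy counter that saturates at 10: it can never reach 15."""
    return {min(state + 1, 10), 0}          # increment (saturating) or reset

def reachable(initial, target):
    """Return a trace to `target`, or None if it is provably unreachable."""
    frontier = deque([(initial, [initial])])
    seen = {initial}
    while frontier:
        state, trace = frontier.popleft()
        if state == target:
            return trace                     # a witness / counterexample trace
        for nxt in next_states(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, trace + [nxt]))
    return None                              # state space exhausted: unreachable

print(reachable(0, 10))    # a trace exists: [0, 1, 2, ..., 10]
print(reachable(0, 15))    # None: no stimulus, however long, can cover this
```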

Kelf: There is a contract, and the issue is really one of how do you prove the contract. From an IP vendor point of view, you need to provide the interrupt here and at this time before this other thing. You really need an executable contract. Then you can look at the verification technology that lets you prove that contract.

McNamara: Emulation, simulation, formal: you should use all of them, and they all have special places where they are the best solution. The circuit may actually work if you violate a property, and may run for many years without an issue, but you are in a space where the IP vendor has said it can neither warrant nor define the behavior.

Kelf: Yes, you have to work out which technique works for each paradigm.

Salz: In verification, I don't think we have ever dropped a technique. From directed test, to simulation, to constrained random, to mutation… it keeps moving, and it moves much faster than design technologies. Design is still stuck at RTL. Verification has created so many new techniques. Now, when talking about what is design and what is IP, the industry is moving toward an IP model. Even large integrated companies are using IP. In some cases the IP is developed in house. The problem moves to the integration, where there is a huge amount of firmware and software. Formal has a problem proving software. I agree that formal is very important and becoming mainstream because it's easier to use, but the big disruptions are software and low-power design. Low power comes into play post-IP, at system integration time. The IP vendor probably never thought about shutting down pieces of the CPU, and if you are reaching into the IP and doing something that was not proven by the IP vendor, that makes problems more severe. The existing methodologies are beginning to break down. Constrained random, formal, they are all having problems.

Rizzatti: Today, emulators can handle power and the verification of low power. They can provide an activity database for power estimation, and they can also handle software, which formal and simulation cannot do. Simulation just does not have the power.

Hardee: Yes, to check the integration, in the presence of software, emulation will continue to have a role.

Salz: Or virtual models.

Hardee: Where emulation is a little weaker on power is when you are talking about power intent and bringing a power domain back up. An emulator can suffer from being two-valued logic. It does not have the X value, so checking correct design initialization and X propagation is problematic. Formal and simulation are the technologies that can handle this.

Salz: It is not a fundamental problem. Three-valued logic can be built into emulation. It costs additional hardware resources.

Rizzatti: Quite a bit. It will cost you 4X. Most people think it is 2X but it is actually 4X.

Salz: You have doubled the number of wires, for sure. For processor-based emulators, it may depend upon how clever you are.
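
One simplified way to see where the extra resources go, assuming the common two-bits-per-signal encoding of unknown values (the encoding and gate construction below are illustrative, not a description of any particular emulator): the wire count doubles immediately, and each plain two-state gate becomes a small cluster of operations on the encoded pairs.

```python
# Simplified sketch of why multi-valued logic costs extra emulator hardware.
# Each 4-state value {0, 1, X, Z} is encoded in two bits: (value_bit, known_bit).

ZERO, ONE, X, Z = (0, 1), (1, 1), (0, 0), (1, 0)

def and4(a, b):
    """4-state AND: a hard 0 on either input forces 0; otherwise any unknown
    input makes the result unknown. A 2-state AND is a single operation;
    this takes several, which is where the extra hardware goes."""
    av, ak = a
    bv, bk = b
    if (ak and av == 0) or (bk and bv == 0):
        return ZERO                # 0 AND anything = 0, even 0 AND X
    if ak and bk:
        return ONE                 # both inputs are known 1
    return X                       # any remaining unknown makes the result X

assert and4(ZERO, X) == ZERO       # the 0 masks the unknown
assert and4(ONE, X) == X           # the unknown propagates
assert and4(ONE, ONE) == ONE
```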

Kelf: But this is where formal is so useful. You can track these kinds of problems.

Salz: Formal doesn’t handle four-state logic very well. It has the same problems as gate level.

Kelf: But you can handle unknown states.

Hardee: That depends upon whose formal you are talking about. X-propagation modeling can be fairly silicon-accurate, and it gets around the problem you have in simulation of X optimism at RTL and X pessimism at the gate level. IP customers often use X-propagation to thoroughly check the correct initialization of the design and to ensure that the design cannot get into a bad state, because you are not in charge of how your customer resets the design. So exhaustively checking the correct bring-up of the block is important.
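
The RTL-optimism point is easy to see in a small, hypothetical fragment such as `if (sel) q = a; else q = b;` when `sel` is unknown after reset: plain two-valued RTL semantics must pick a branch and silently hide the X, whereas an X-accurate evaluation only reports a known result when both branches agree. A sketch of the two behaviors:

```python
# Illustrative sketch of RTL X-optimism vs. X-accurate evaluation for
#   if (sel) q = a; else q = b;
# when `sel` is unknown after reset. The fragment is hypothetical.

X = "x"

def rtl_two_valued(sel, a, b):
    """Plain RTL semantics: an unknown condition simply takes the else branch,
    so the X on `sel` is silently swallowed (optimism)."""
    return a if sel == 1 else b

def x_accurate(sel, a, b):
    """X-accurate semantics: with `sel` unknown, the result is only known
    when both branches would produce the same value."""
    if sel == X:
        return a if a == b else X
    return a if sel == 1 else b

print(rtl_two_valued(X, a=1, b=0))   # 0 -- looks clean, the X is hidden
print(x_accurate(X, a=1, b=0))       # x -- the uninitialized select is exposed
print(x_accurate(X, a=1, b=1))       # 1 -- branches agree, result really is known
```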

Salz: What is the correct state? That is where formal gets tough and the customer has to tell you. It is usually not that easy and that is what makes it hard to do.

Kelf: There is another dimension to this, and that is security. You can take the same technique used for X-propagation and apply it to the security issue. When checking IP you may have to check for backdoors and confirm that it is secure as well. Formal and automated apps that focus on this specific problem can be applied, and you don't have to figure out the stimulus, which can be quite complex.

Hardee: For full security verification you do need a combination of techniques, because what can be achieved with formal is to check that there are no hardware backdoors, and that there is no leakage or possibility to exploit the hardware. But the thing that is monitoring and controlling that is software. Security software is only as good as the hardware platform. So while we can check the hardware platform, you still have to check that the software is behaving correctly, and for that we are back to emulation.

Kelf: The contract is layered. There is a basic set of checks that formal and simulation can tackle to make sure the whole thing is operating, then emulation to layer the software on top, and then you can layer on issues such as security.

Salz: An emulator helps you verify the software in parallel with the hardware development.
