Are Simulation’s Days Numbered?

Experts at the table, part 3: Panelists discuss integration, visibility and an increasing number of issues that can only be addressed at the system level.


Semiconductor Engineering sat down to discuss the limitations of simulation in more complex designs with Michael McNamara, CEO of Adapt-IP; Pete Hardee, product management director at Cadence; David Kelf, vice president of marketing for OneSpin Solutions; Lauro Rizzatti, an emulation expert; and Arturo Salz, scientist within the verification group of Synopsys. In part one the panelists discussed the increasingly important role that formal and emulation are playing. In part two they discussed some new challenges that block-level verification is addressing. What follows are excerpts of the next part of that conversation.


SE: What was a system today is a sub-system tomorrow. IP has already transitioned from being simple peripherals to being large sub-systems. How much bigger will they become?

Salz: I don’t expect to see blocks becoming much bigger. The CPUs and GPUs will be the biggest blocks, but systems will keep getting bigger and contain more memory. The bigger the memory, the harder it is for formal verification tools to break the problem down. This is an ongoing problem. You will always outstrip capacity, be it in simulation or formal. We build systems that are bigger than any of them can manage.

McNamara: Engineers always run at the red line. If we make the red line higher, they will just run up to it by building bigger things.

Salz: At the same time, design schedules are shrinking. What used to be five years for a reasonable block is now six months, and there are calls for three months. That means you get three months for the hardware and the rest has to be done in firmware or software. Patching becomes a lot harder.

Kelf: The block size does not appear to be expanding much. There are just more blocks. The algorithms are not changing much. They are getting faster, but most of the fundamentals do not change. (See also Bridging the IP Divide) This means we can keep using the same tools. Size is not the issue, but there are an increasing number of issues, such as security.

SE: When do system-level integrators reach the limit for the number of blocks?

Salz: The first problem I hear is that the integrator takes too long to integrate. The time until the first test executes is getting longer, and this is a matter of methodology. How do you package IP so that it can be used at the next level? We are not investing enough in this area.

McNamara: Virtual models were mentioned earlier. That is the new technique that will help. There is no reason for me to design the processor and the bus. Let me just buy that whole thing as a working system, and when I do full-chip simulation, do I need to emulate all of that or can I just use a virtual model for it? It will run software and send commands over to the GPU or to my frame buffer, and that is all I need.
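As a rough illustration of what such a virtual model buys, here is a minimal Python sketch (the VirtualCpu and FrameBuffer classes are hypothetical, not any particular vendor's model): the processor and bus are reduced to a functional stand-in that simply replays the bus transactions the software would issue, so nothing has to be emulated cycle by cycle.

```python
# Hypothetical transaction-level sketch: the CPU and bus are replaced by a
# simple functional model that forwards memory-mapped writes to the frame
# buffer, instead of simulating the processor cycle by cycle.

class FrameBuffer:
    """Behavioral model of the frame buffer IP under test."""
    def __init__(self, width, height):
        self.pixels = [[0] * width for _ in range(height)]

    def write(self, addr, data):
        y, x = divmod(addr, len(self.pixels[0]))
        self.pixels[y][x] = data


class VirtualCpu:
    """Stand-in for the purchased CPU + bus subsystem: it only needs to
    issue the same bus transactions the real software would produce."""
    def __init__(self, bus_map):
        self.bus_map = bus_map               # base address -> device model

    def run(self, program):
        for base, offset, data in program:   # each "instruction" is a store
            self.bus_map[base].write(offset, data)


if __name__ == "__main__":
    fb = FrameBuffer(width=4, height=4)
    cpu = VirtualCpu({0x8000_0000: fb})
    # Software is reduced to the commands it sends over the bus.
    cpu.run([(0x8000_0000, i, 0xFF) for i in range(4)])
    print(fb.pixels[0])   # first scan line written by the "software"
```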

Salz: That is how we operate. First we hit the bottleneck of integration. This can introduce a lot of bugs. We need to fix those. Then we bring in low power and that is a can of worms. Then you just have to tape out because you have run out of time. Moving to an IP model is a good thing for the industry but the systems are getting bigger and bigger.

Kelf: The problem doesn’t appear to be interesting enough to gather much attention.

McNamara: The visualization problem is there, and it is being made worse by IP sub-systems because the old visualizations no longer work. There needs to be more effort there. The system is also getting bigger in different ways. Consider the Internet of Things, which is a system of systems distributed across a factory floor or a house or whatever. How do you verify that? If the sensor node works perfectly by itself but the central node doesn’t do something correctly, then the whole system fails. People will want the whole system to be verified. Emulation is being used for automotive, which is basically 70 CPUs moving in close formation.
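A toy Python sketch of that failure mode, using hypothetical SensorNode and CentralNode models: each sensor node passes its unit test, yet a bug in the central node's aggregation only shows up when the whole system is exercised together.

```python
# Toy system-of-systems check: every sensor node is individually correct,
# but the central node drops readings, so only the system-level test fails.

class SensorNode:
    def __init__(self, node_id):
        self.node_id = node_id

    def sample(self, value):
        return {"node": self.node_id, "value": value}   # unit-verified


class CentralNode:
    def __init__(self):
        self.readings = {}

    def collect(self, packet):
        # Bug: readings from even-numbered nodes are silently discarded.
        if packet["node"] % 2 == 0:
            return
        self.readings[packet["node"]] = packet["value"]


def test_sensor_unit():
    assert SensorNode(2).sample(7)["value"] == 7        # passes in isolation


def test_system():
    hub = CentralNode()
    for node_id in range(4):
        hub.collect(SensorNode(node_id).sample(node_id * 10))
    assert len(hub.readings) == 4, "central node lost sensor data"


if __name__ == "__main__":
    test_sensor_unit()
    try:
        test_system()
    except AssertionError as err:
        print("system-level failure:", err)
```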

Salz: Automotive also has a huge amount of analog and most of our techniques fall flat there.

SE: UPF 3.0 (IEEE 1801) allows us to formalize an internal aspect of the IP, namely power intent, so that it can be used externally. That is one axis of visibility into the block. What others are needed?

Salz: If you look at UPF, it is very low level and enables you to define power intent outside of the design. I expect to see IP suppliers provide a few things externally, such as saying you can have a low-power mode, standby, medium and high performance. Those are the only options. You cannot make things up yourself and create bugs that I cannot fix.

McNamara: And if you want too many power modes I can’t verify them. With four variables I stand a chance, but more than that…
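A minimal Python sketch of the kind of contract Salz and McNamara describe (the class and mode names are illustrative, not from any standard): the IP exposes only an enumerated set of power modes, rejects anything the integrator invents, and the combinational growth that has to be verified is easy to see.

```python
from enum import Enum
from itertools import product

# Hypothetical wrapper: the IP supplier enumerates the legal power modes and
# refuses anything the integrator makes up.
class PowerMode(Enum):
    OFF = 0
    STANDBY = 1
    MEDIUM = 2
    HIGH_PERFORMANCE = 3


class IpBlock:
    def __init__(self, name):
        self.name = name
        self.mode = PowerMode.OFF

    def set_mode(self, mode):
        if not isinstance(mode, PowerMode):
            raise ValueError(f"{self.name}: unsupported power mode {mode!r}")
        self.mode = mode


if __name__ == "__main__":
    blocks = [IpBlock(f"ip{i}") for i in range(4)]
    # Four blocks, four legal modes each: 4**4 = 256 system power states
    # to cover. Add more blocks or modes and the space explodes.
    print(sum(1 for _ in product(PowerMode, repeat=len(blocks))))
```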

Salz: Virtual platforms have that problem, as well. You can build that knowledge into the model. I expect that to evolve and become more mainstream. IP suppliers will start thinking about how this can be used. IP-XACT is the only thing we have to connect the systems together.

McNamara: It is too low-level.

Salz: But we need the methodology where IP is wrapped with IP-XACT otherwise it cannot be used.
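For flavor, here is a rough Python sketch that emits a heavily stripped-down IP-XACT-style component description using the standard xml library; the element names follow the IEEE 1685-2014 flavor of the schema, but a real packager would also carry bus interfaces, register maps, parameters and file sets.

```python
import xml.etree.ElementTree as ET

# Illustrative only: a stripped-down IP-XACT-style component wrapper.
NS = "http://www.accellera.org/XMLSchema/IPXACT/1685-2014"
ET.register_namespace("ipxact", NS)

def wrap_component(vendor, library, name, version, ports):
    comp = ET.Element(f"{{{NS}}}component")
    for tag, text in [("vendor", vendor), ("library", library),
                      ("name", name), ("version", version)]:
        ET.SubElement(comp, f"{{{NS}}}{tag}").text = text
    ports_el = ET.SubElement(ET.SubElement(comp, f"{{{NS}}}model"),
                             f"{{{NS}}}ports")
    for port_name, direction in ports:
        port = ET.SubElement(ports_el, f"{{{NS}}}port")
        ET.SubElement(port, f"{{{NS}}}name").text = port_name
        wire = ET.SubElement(port, f"{{{NS}}}wire")
        ET.SubElement(wire, f"{{{NS}}}direction").text = direction
    return comp

if __name__ == "__main__":
    comp = wrap_component("example.com", "ip", "frame_buffer", "1.0",
                          [("clk", "in"), ("rst_n", "in"), ("irq", "out")])
    print(ET.tostring(comp, encoding="unicode"))
```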

Kelf: Other areas are emerging, such as safety. ISO 26262 says that all of these blocks have to meet certain safety criteria, and if you have a fault inside the block, the fault has to be corrected. More than 90% of the block has to be covered in this manner. This gets back to fault simulation, so now you have to inject faults as you test it, and the person doing the system has to be able to certify it. He has to do the safety simulation; it cannot be done at the block level.

Salz: And the standard calls for gate-level faults. They haven’t even moved up to the RT level.
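A small Python sketch of such a fault campaign on a toy gate-level netlist (the circuit and tests are invented for illustration): inject one stuck-at fault at a time, rerun the tests, and report the fraction of faults that were detected.

```python
from itertools import product

# Tiny "netlist": gates evaluated in order, out = (a & b) | en.
GATES = [("and_ab", "AND", ("a", "b")),
         ("out",    "OR",  ("and_ab", "en"))]
INPUTS = ("a", "b", "en")
FAULT_SITES = ["a", "b", "en", "and_ab", "out"]   # every net, stuck-at-0/1

def evaluate(stimulus, fault=None):
    """fault = (net, value) forces one net, mimicking a stuck-at fault."""
    nets = dict(stimulus)
    if fault and fault[0] in nets:
        nets[fault[0]] = fault[1]
    for name, kind, ins in GATES:
        vals = [nets[i] for i in ins]
        nets[name] = (vals[0] & vals[1]) if kind == "AND" else (vals[0] | vals[1])
        if fault and fault[0] == name:
            nets[name] = fault[1]
    return nets["out"]

def run_campaign(tests):
    faults = [(net, v) for net in FAULT_SITES for v in (0, 1)]
    detected = sum(
        any(evaluate(t, f) != evaluate(t) for t in tests) for f in faults)
    return detected, len(faults)

if __name__ == "__main__":
    tests = [dict(zip(INPUTS, bits)) for bits in product((0, 1), repeat=3)]
    detected, total = run_campaign(tests)   # exhaustive stimulus here
    print(f"fault coverage: {detected}/{total} = {detected/total:.0%}")
```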

McNamara: There is a problem delivering IP that will give a meaningful error message to the end user. But now we have a federal body saying we must do this for safety. It is like a Russian doll: each IP has to do it for the level above it, and the next level up, until you reach the entire system.

Kelf: You can imagine, if the IP were semi-opaque, that you could inject the faults without knowing too much about where they are going, so long as you get reasonable coverage, and then watch the alarm signals that come out. It will be a big change.

Salz: Licensing will come into this. It is possible that people would pay more if you give them more visibility or higher quality or the ability to probe within the block. We are at the boundary of that and have just started to explore proxies and other licensing schemes. That is an evolution of IP packaging and delivery.

Hardee: What is clear is that the system integrator is pushing all of those requirements to the IP provider. You cannot do all of this from the top. It has to be designed in from the start and pushed down to all of the suppliers.

SE: What is the biggest change that you would like to see and what is the biggest pain point today?

Kelf: Debugging.

McNamara: As a tool vendor, you get paid early when your tool supports a sub-set of the total market needs. People will buy that and use it. With IP, they want to evaluate it before they make a decision and everything has to work. They put you through the wringer in 2016 for an IP that they may buy in 2017, so the money comes late and the work is early.

Salz: This is true for large IP vendors as well. IP today comes with a large amount of firmware and you don’t get paid until everything has been delivered.

McNamara: This is a limiter to IP growth. All of the revenue is in the future and tentative, all the costs are real and now.

Hardee: We have figured out many of the problems associated with the integration of IP into the system. The piece we haven’t figured out is the integration of the unit verification environment into the system verification environment. That is a huge problem with the existing constrained-random testbench. We will see people regressing back to simulation in isolation when they discover a problem. Moving to an assertion-based verification environment has the advantage that the IP verification environment is reused within the system verification environment, and it provides a lot more visibility into what is happening inside the IP block. What I would like to see is more of the IP vendors who adopt formal for their own verification supplying their formal testbenches to the customer.

Salz: It is an old idea. At the block level you have assumptions, and they become your constraints at the higher level, for formal or for constrained-random generation. That information is never lost. The issue is: can the formal tools be used at all of the levels?
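A Python analogy of that reuse (in practice this would be SVA assume/assert properties rather than Python, and the interface here is invented): the same block-level assumption acts as a constraint when generating stimulus at the system level and as a checker on the integrated design.

```python
import random

# Block-level assumption on the IP's input interface: "start" is only
# asserted when the block is idle. At block level this would be an assume;
# one level up it becomes a constraint on stimulus and a checker.
def legal_input(txn, state):
    return not (txn["start"] and state["busy"])

def constrained_random_txn(state, rng):
    """System-level stimulus generation: keep drawing until the
    block-level assumption (now a constraint) is satisfied."""
    while True:
        txn = {"start": rng.random() < 0.5, "data": rng.randrange(256)}
        if legal_input(txn, state):
            return txn

def check_driver(txn, state):
    """The same predicate reused as a checker on the integrated design:
    if the surrounding system violates it, the IP's guarantees are void."""
    assert legal_input(txn, state), "block-level assumption violated"

if __name__ == "__main__":
    rng = random.Random(0)
    state = {"busy": True}
    txn = constrained_random_txn(state, rng)
    check_driver(txn, state)
    print("generated legal transaction:", txn)
```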

Hardee: Capacity issues are dealt with to some degree by having a mix of engines. Assertion-based verification is not just formal. At the block level it could be formal, and in many areas formal can extend beyond the block level. In some areas it struggles, and that is when you export the assertions to simulation and emulation.

Salz: Debug is important and we have been investing in that quite a lot. We have built some common debug for formal and simulation. Automatic root cause analysis is also important.

McNamara: Yes, visualization is sorely lacking in some tools.


