System-Level Verification Tackles New Role

Experts at the table, part two: Panelists discuss mixed requirements for different types of systems, model discontinuity, and the need for common stimulus and debug.

Semiconductor Engineering sat down to discuss advances in system-level verification with Larry Melling, product management director for the system verification group at Cadence; Larry Lapides, VP of sales at Imperas Software; and Jean-Marie Brunet, director of marketing for the emulation division of Mentor Graphics. Tom Anderson, VP of marketing for Breker Verification Systems, provided additional content after the event. In part one, panelists discussed the differences between block-level and system-level verification and the different thinking each requires. What follows are excerpts of that conversation.

SE: We have been talking about problems in the most advanced SoCs, but what about other types of devices, such as those for the IoT?

Lapides: Now you have the cloud and the data aggregation level, and below that tens, hundreds, or even thousands of microcontroller nodes. You need to make sure they all work together, and this is not a single chip. Have people ever tried doing this with an emulator?

Brunet: Yes. What is interesting about the IoT is that with any system you have interfaces, and within an IoT system, somewhere there will be something that is not standard. If everything is standard, that IoT company has no value add. That component has no model, no spec, no standard, and they probably don’t want to share it with you. Because that interface is non-standard, it requires a physical ICE (in-circuit emulation) target, an external I/O, and you have to be able to deal with its non-deterministic aspects, the randomness of the target. This is common to all IoT customers.

Lapides: While I have seen that, I would argue that for the IoT, the software is often the differentiator. In mobile or ADAS, the hardware is differentiating, but less so for the IoT. There are some significant system verification problems that we are only beginning to address, mainly because IoT is still a buzzword rather than a commonly understood thing. There are cars and factory networks, and we are starting to see these things emerge.

Brunet: It is becoming a big thing in Europe. If you look at CES, American companies were first, French companies second, and German third. Overall, it was European-centric for the IoT, and Europe still has strong carmakers, so we see a lot of interest from them.

Lapides: We are seeing lots of activity in Europe, lots in Asia, but not so much in the United States. The system-level problems are being attacked more consistently in those places. Embedded World was a great example of that. It looked under the skin at how to attack the problems, such as security in the context of the IoT, and at the fact that different levels of security are required at different levels of the network, because you cannot have heavy security at the sensor or node level. It is too expensive.

SE: Abstraction is one area where we have had problems. We have a model for synthesis, a model for performance verification, and a model for the virtual prototype, and they are all different models. Will we ever be able to have a single model or a model flow?

Lapides: Looking at the problem from the outside, we have a much better idea of the model flow, but executing to that flow is very difficult. Getting an IP provider to provide RTL and a functional model that is probably just a golden reference model for verification purposes is about as much as can be expected. So, we understand the models and when they are needed, but I don’t think there is consistent execution for the development of the models.

Melling: There is no consistent execution because it is expensive. A model is successful only if it provides a return on investment. For a number of the abstract models, especially when looking at SystemC-TLM, people haven’t been able to justify the level of investment it takes to get all of the models required. The approach seems to suggest that a mixed-language world is inevitable. When you have models, you need to be able to take advantage of them and get some return from the testcases that fit them well. Customers often don’t run homogeneous verification. Many of them have constellations of different simulations, each targeted at various categories of problems. Few of them talk about a unified modeling language that would go from the abstract all the way down. VHDL tried to anticipate that by allowing different implementations of the same entity, but it was never truly realized. For stimulus we are entering a new era with Portable Stimulus, and we are saying that we need stimulus for all of these things and asking whether it can be the same.
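
As an illustration of the kind of abstract model being discussed, the sketch below shows a minimal SystemC TLM-2.0 loosely-timed memory, assuming the Accellera SystemC/TLM libraries. The module name, size, and access time are invented for the example, and the initiator side and sc_main are omitted for brevity; it is a sketch of the modeling style, not any vendor's flow.

```cpp
// Minimal SystemC TLM-2.0 loosely-timed memory model -- the abstract,
// fast-to-run end of the spectrum that coexists with RTL in a mixed flow.
#include <array>
#include <cstdint>
#include <cstring>
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_target_socket.h>

struct SimpleMemory : sc_core::sc_module {
    tlm_utils::simple_target_socket<SimpleMemory> socket;
    std::array<uint8_t, 4096> mem{};   // invented size for the example

    SC_CTOR(SimpleMemory) : socket("socket") {
        // No clock, no pins: just a blocking transport callback, which is
        // why a model like this runs orders of magnitude faster than RTL.
        socket.register_b_transport(this, &SimpleMemory::b_transport);
    }

    void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
        const uint64_t addr = trans.get_address();
        const unsigned len  = trans.get_data_length();
        if (addr + len > mem.size()) {
            trans.set_response_status(tlm::TLM_ADDRESS_ERROR_RESPONSE);
            return;
        }
        if (trans.is_read())
            std::memcpy(trans.get_data_ptr(), &mem[addr], len);
        else if (trans.is_write())
            std::memcpy(&mem[addr], trans.get_data_ptr(), len);
        delay += sc_core::sc_time(10, sc_core::SC_NS);  // nominal access time
        trans.set_response_status(tlm::TLM_OK_RESPONSE);
    }
};
```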

Anderson: The portable stimulus approach holds the promise of a single model to define verification intent, from which you can generate test cases for various platforms. IP vendors can provide models that you can plug together for full-chip verification. On the design side, high-level synthesis holds the promise of a single design model that will work in virtual platforms for architectural work and then be synthesized into RTL and beyond. But unfortunately, neither promise is fully realized right now.
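
As a toy illustration of that single-model idea (this is not the Accellera PSS language; every name here is invented), the sketch below captures one piece of verification intent, randomizes the legal orderings once, and renders the same scenario either as simulation sequence calls or as a bare-metal C test for an emulator or prototype.

```cpp
// Toy sketch of portable-stimulus-style generation: one scenario model,
// two output targets. NOT the Accellera PSS language -- just an
// illustration of the single-model/multiple-platform idea.
#include <algorithm>
#include <cstdio>
#include <random>
#include <string>
#include <vector>

enum class Target { Simulation, BareMetalC };

struct Action { std::string name; };

// One piece of verification intent: the DMA channels may run in any order,
// but memory must be initialized first and checked last.
std::vector<Action> build_scenario(unsigned seed) {
    std::vector<Action> dma = {{"dma_ch0_xfer"}, {"dma_ch1_xfer"}};
    std::shuffle(dma.begin(), dma.end(), std::mt19937(seed));
    std::vector<Action> s = {{"init_mem"}};
    s.insert(s.end(), dma.begin(), dma.end());
    s.push_back({"check_mem"});
    return s;
}

// Render the same abstract scenario for a given execution platform.
void emit(const std::vector<Action>& s, Target t) {
    for (const auto& a : s) {
        if (t == Target::Simulation)
            std::printf("sim: start_sequence(%s)\n", a.name.c_str());
        else
            std::printf("    %s();  /* bare-metal C test */\n", a.name.c_str());
    }
}

int main() {
    auto scenario = build_scenario(42);
    emit(scenario, Target::Simulation);  // same intent as a simulation test
    emit(scenario, Target::BareMetalC);  // same intent as an embedded C test
}
```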

Brunet: We will have more convergence with Portable Stimulus. There is no issue with methodology and models; we have flows to do the things that people need to do. The challenge comes when there is a configuration or adaptation of the specification that is unknown. The customer wants to keep it secret, and that creates a verification challenge, which means it is a model challenge. They have to make that decision.

Lapides: Security figures into that, as well. Customers that are actively involved in designs that have a security requirement are limiting access to the design even within their own company. In one example, there is a company where only four people have access to some information.

Brunet: How do you model something like that?

Lapides: It comes down to how easy it is for customers to build their own models and whether they are willing to invest the time to do it. It goes back to the ROI. With methodologies still evolving, there is no clear consensus about the system level and what the ROI looks like.

Brunet: We often see that customers do not trust the fidelity of the model, so they go with a physical target, and the problem then is one of non-determinism. You run a sequence and you may find a bug, but you cannot easily reproduce it because there is something in there that is random.
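
One common mitigation, sketched below, is to record the traffic crossing the non-deterministic boundary during the live run and then replay the log deterministically. The Transaction fields and hook points are assumptions for illustration, not any particular emulator's API.

```cpp
// Hedged sketch: record-and-replay across a non-deterministic ICE boundary,
// turning a random physical target into a repeatable stimulus source.
#include <cstdint>
#include <fstream>
#include <string>
#include <vector>

struct Transaction {          // hypothetical bus-level record
    uint64_t timestamp_ns;
    uint32_t addr;
    uint32_t data;
    bool     is_write;
};

class RecordReplayShim {
    std::vector<Transaction> log_;
public:
    // Live run: capture every transaction observed at the target interface.
    void record(const Transaction& t) { log_.push_back(t); }

    // Persist the capture so a failing run can be archived with the bug.
    void save(const std::string& path) const {
        std::ofstream f(path, std::ios::binary);
        for (const auto& t : log_)
            f.write(reinterpret_cast<const char*>(&t), sizeof t);
    }

    // Re-run: feed the logged sequence back in order, deterministically.
    template <typename Driver>
    void replay(Driver&& drive) const {
        for (const auto& t : log_) drive(t);
    }
};
```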

Melling: At the end of the day, the customer is going to run the full test suite on an RTL model, using emulation or an FPGA prototype, to get the confidence they need.

SE: Debug may be one thing that brings many of these together. It is a common point that spans all of the tools.

Brunet: The designer should not have to care about where the design is running, be it in simulation or emulation. That is the engine. Debug is a big challenge.

Anderson: You want a common debug environment that works with all platforms, from simulation to silicon, and that handles both hardware and software. Portable debug really has to be part of any portable stimulus solution.

Lapides: Being able to do good hardware/software co-verification takes a lot of effort, and being able to do that across the platforms adds to the challenge. Debug is not just about setting breakpoints. It requires being aware of the operating system, being able to set breakpoints on an OS event or on a context switch, for example. There are a lot of things that have to be debugged at the system level.
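
To give a flavor of what OS-aware debug means, here is a heavily hedged sketch: a toy CPU model whose breakpoint fires only when the (pretend) kernel scheduler switches to a named task. The addresses, offsets, and the tiny CPU model itself are all invented for the example; real virtual platforms and real kernels differ.

```cpp
// Hedged sketch of an OS-aware breakpoint: break on a kernel *event*
// (a context switch to a named task), not on a raw code address.
#include <cstdint>
#include <functional>
#include <iostream>
#include <map>
#include <string>

struct TinyCpu {                                     // stand-in for a core
    std::map<uint64_t, std::string> guest_strings;   // fake guest memory
    uint64_t next_task = 0;                          // fake "next task" ptr
    std::function<void(uint64_t)> on_exec;           // per-PC callback
    void step(uint64_t pc) { if (on_exec) on_exec(pc); }
};

constexpr uint64_t SWITCH_TO_PC = 0xFFFF000000012340;  // assumed symbol addr
constexpr uint64_t COMM_OFFSET  = 0x550;               // assumed comm[] offset

// Fire only when the scheduler is about to run a task named `name`.
void set_task_breakpoint(TinyCpu& cpu, const std::string& name) {
    cpu.on_exec = [&cpu, name](uint64_t pc) {
        if (pc != SWITCH_TO_PC) return;                // OS-event filter
        auto it = cpu.guest_strings.find(cpu.next_task + COMM_OFFSET);
        if (it != cpu.guest_strings.end() && it->second == name)
            std::cout << "break: context switch to \"" << name << "\"\n";
    };
}

int main() {
    TinyCpu cpu;
    cpu.next_task = 0x8000;
    cpu.guest_strings[0x8000 + COMM_OFFSET] = "my_app";
    set_task_breakpoint(cpu, "my_app");
    cpu.step(0x1000);          // ordinary instruction: no break
    cpu.step(SWITCH_TO_PC);    // scheduler entry: OS-aware break fires
}
```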

Melling: That is the pain point that is emerging, and one that people will have to throw money at, because the one thing we know about debug is that you can’t schedule it. You can’t predict it. It messes up more schedules than you can imagine. In the past, a second problem was being able to run a big enough workload. It is these workloads that make for the really complex debug problems at the system level. When you could only run short runs, the debug problems were smaller.

Anderson: Debug used to be about waveforms. But now, with embedded processors and hardware-software co-verification, it’s a much bigger problem. We in EDA have had to learn from our embedded colleagues about thread debug, layers of software, and other complicated topics.

Melling: I know companies that have spent huge amounts of money on memory corruption issues.

Lapides: Think about the extensions of SystemC and the types of things that people are looking at. First they wanted to be able to configure their SystemC models in the same way and to parametrize them. That is a fairly easy design problem. The Configuration, Control and Inspection (CCI) standard allows you to add things into SystemC so that it can integrate with analysis tools and provide callbacks.
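
A minimal sketch of the configuration side of that, loosely following the Accellera CCI proof-of-concept library (cci::cci_param and get_value() come from that library, but headers and signatures vary by version, so treat this as an assumption-laden illustration rather than reference usage):

```cpp
// A FIFO whose depth is exposed as a CCI parameter, so a tool or a
// configuration file can override it by hierarchical name
// ("top.fifo.depth") without touching the source. Sketch only.
#include <iostream>
#include <systemc>
#include <cci_configuration>

struct Fifo : sc_core::sc_module {
    cci::cci_param<int> depth;  // name, default value, description

    SC_CTOR(Fifo) : depth("depth", 16, "FIFO depth in entries") {
        SC_THREAD(run);
    }

    void run() {
        // Reads the possibly tool-overridden value at simulation time.
        std::cout << name() << ": configured depth = "
                  << depth.get_value() << std::endl;
    }
};
```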

Brunet: In the past there were different users. You had the simulator guys looking at waveforms, and you had the FPGA prototype guys, who were often software guys and had nothing to do with testbenches or simulation. Emulation takes a little bit of both of those, and those users have their own tools. A common debug platform is very difficult, but we are getting to the stage where we have one that is shared among the tools.


