Big Shift In SoC Verification

Experts at the Table, part 2: Debating the usefulness of graph-based verification and whether there needs to be a new breed of verification engineer to handle issues that cross boundaries between hardware and software. What works, what doesn’t, and why.

Semiconductor Engineering sat down to discuss software-driven verification with Ken Knowlson, principal engineer at Intel; Mark Olen, product manager for the Design Verification Technology Division of Mentor Graphics; Steve Chappell, senior manager for CAE technology and verification at Synopsys; Frank Schirrmeister, group director for product marketing of the System Development Suite at Cadence; Tom Anderson, vice president of marketing at Breker Verification Systems; and Sandeep Pendharkar, vice president and head of product engineering at Vayavya Labs. What follows are excerpts of this discussion, which was held in front of a live audience at DVCon.

SE: What has to happen in verification now that hasn’t happened in the past?

Schirrmeister: First of all, you need to have a processor in the system. So we have customers doing two things. In most cases they use the existing processors, and sometimes they add processors for specific engineering cases. I call it the next version of BIST, because it's some level of testing. But in this case it's software testing. The other change is an educational one. Jim Hogan said at a luncheon presentation that he believes this conference will double in attendance next year. One of the aspects here is that you need to be able to understand software. The verification engineer needs to know how to generate software tests himself using the tool's automation, but he also needs to know what's going on from the software side in order to do that.

Knowlson: What we’ve seen, to some extent, is that validation software will exercise the hardware differently than the production software. That’s a big concern. It’s hard to get everyone on the same page to talk about this, so we started using a variation of use cases. These are system-level use cases, in that they touch multiple components, even if we’re focusing on a specific IP. But they’re simple enough and short enough that we can identify which IPs are in the flow, and then we can work with the validation teams and the design teams. And then the software teams will write their software to support these, because these are basically requirements. It’s a way of doing more directed testing, based on what the platform and the software are going to do. You have to know what your platform is going to do.

Anderson: You want the validation software to stress the chip differently than the production software, but within the bounds of legal behavior. A lot of us have experience with different representations of the system level that describe how the IP blocks are connected and how things are put together in real user scenarios. Graph-based verification is something we have experience with. For those who haven’t worked with graphs, do you think it’s a good idea? We found it gives a common language between the architect, the validation teams, the embedded guys and the hardware guys. But it’s also a way to capture what you just described.

Olen: That’s one of the other areas where things have really changed. The hockey stick is the ability to traverse different engines: to move from the simulator to architectural exploration to an emulator to a prototype and into first silicon, and even to field returns and manufacturing diagnostics. The graph-based description is at a higher level of abstraction, which allows you to use a common description and retarget it across the different engines, depending on what your needs are at each level.

Schirrmeister: Graph-based techniques are a good start, especially for debug. There are issues of scale, potentially, because at the end of the day no one person understands all the aspects well enough to even develop the tests. And developing the tests by hand is becoming a really complex problem. Graph-based techniques help. But ultimately you want to do more model-based techniques, where you self-generate based on the constraints in the system. Graph-based is good for debug, but there is a lot more to be done.

Pendharkar: I agree. We have been looking at graph-based techniques. It’s good to get started, but I don’t know that I can really use it for my complete SoC. There are so many IP blocks, so many ways transactions can occur and interactions can happen between these IPs, that I’m not sure how scalable it is. We also had a lot of talk about verification and software guys being in different silos and people not being able to relate to each other. The software guy is not on the same track as the hardware guy, and the hardware guy often doesn’t care about the software: we provide the data sheet and they’ll figure out how to write the software. But then you talk to the software guys and they talk about all the things missing in it. They have to chase people on e-mail, chase the spreadsheets and ask for significant inputs. That’s what I hear when two people in the same organization talk to each other. We’re trying to get the verification and the software guys talking together.

Olen: If you’re having trouble scaling graph-based descriptions across multiple engines and across the system, you may be using the wrong graph-based description.

Schirrmeister: It’s not just about scale across these engines. It’s scale across the problem. If the designer builds a graph, it’s all manual. You build it out, and at the end of the day someone needs to decide which graph is relevant. If you have a couple hundred IP blocks being integrated, the complexity of building the relevant test scenarios to exercise all the different ways memory paths can be activated is akin to a coverage problem. You want to formulate the constraints of a piece and define how to get to that piece. It’s something we’ve heard from customers as well. It’s not our assertion.

Anderson: But people are doing it every day. There are a lot of people out there using graph-based verification today.

Schirrmeister: It’s a good start. I’m not disagreeing.

Anderson: But it is scaling. It’s working in large projects today.

Schirrmeister: I respectfully disagree. In the spirit of debate, if you take a very large SoC design with hundreds of IP blocks, I have not seen a designer or design team being able to manually define a set of test cases that cover all the aspects.

Anderson: Graphs aren’t defining test cases. They’re defining the data flow in the chip—how those IP blocks are interconnected, what the parallelism is. It’s describing the architecture. The test cases come from the architecture. You don’t define the test cases. But the other point that came up is that this level of abstraction is important. No one I know has done a graph for a complete SoC. They don’t go down to the details of every one of those hundred IP blocks and include it in the graph. What you do is a top-level graph, in most cases, that just shows the top-level connections to the IPs, the top-level flows, the use cases—nobody attempts to repeat all the verification you did of all those individual IP blocks.
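
To make the idea concrete, here is a minimal sketch of the kind of top-level description Anderson is talking about: a graph whose nodes are IP blocks and whose edges are legal data flows, from which scenarios are derived by walking paths rather than by writing test cases by hand. The block names, graph structure and random-walk strategy below are illustrative assumptions, not any vendor's tool or format.

```python
import random

# Nodes are IP blocks; edges are legal top-level data-flow transitions.
# This hypothetical graph describes the architecture, not individual test cases.
SOC_FLOW_GRAPH = {
    "camera_in":     ["isp"],
    "isp":           ["dram", "video_encoder"],
    "video_encoder": ["dram"],
    "dram":          ["display_out", "usb_out"],
    "display_out":   [],
    "usb_out":       [],
}

ENTRY_POINTS = ["camera_in"]


def derive_scenario(graph, entry_points, rng=random):
    """Walk one legal path through the top-level graph.

    Each path is a candidate system-level scenario (e.g. camera -> ISP ->
    DRAM -> display); the individual IP blocks are assumed to have been
    verified separately, as the panel notes.
    """
    node = rng.choice(entry_points)
    path = [node]
    while graph[node]:              # stop when we reach a sink block
        node = rng.choice(graph[node])
        path.append(node)
    return path


if __name__ == "__main__":
    for _ in range(3):
        print(" -> ".join(derive_scenario(SOC_FLOW_GRAPH, ENTRY_POINTS)))
```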

Schirrmeister: It’s a necessary start, but it needs more.

SE: Now that we’ve raised everything up a level of abstraction, let’s look at this in context. Designs aren’t just digital—they’re mixed signal, with lots of IP, and everything is getting far more complex at 16/14nm. What does software-driven verification do for that?

Schirrmeister: We have mixed-signal discussions going on with customers now. If you take an applications processor talking to an RF interface, with MIPI to the outside and CSI/DSI (camera/display serial interfaces), plus the air interface, and I have a thunderstorm, does the customer really need to simulate all of that? Going back to software verification, people are using software to abstract exactly that interface. In the analog/mixed-signal/RF world they’ll cut at the IQ signal and use software in the test bench to represent how the signal would come into the chip. The reason they do it in software is that you can do it across engines. You can do it in simulation, in hardware, or even on the chip if you leave it in the silicon.

Knowlson: Here you’re talking about software becoming the model?

Schirrmeister: Yes. It’s software as a model of the RF (air interface) and the signal as it comes in. The software now becomes the model representing the system environment.

Anderson: Can you capture that as a graph, or with whatever other mechanism you’re using to automate the process?

Chappell: This is more of a modeling discussion, and it’s somewhat orthogonal to how I’m generating the tests or what the tests are doing. What level of abstraction am I modeling at in my virtual prototype? Do I take a block out of there that was SystemC and plug in a SPICE model instead? It’s how I choose to architect the use cases. Where do I need to put my verification effort, and how do I architect it so that I can get my job done in the most efficient way? Should I target this use case for the virtual platforms? Should I target it for the emulation world? Or should I target it for a simulation where I’m plugging in some of the mixed-signal models?

Schirrmeister: It’s a different type of software verification. You wouldn’t automate the test, though. You would basically take the system environment and embed it into the test bench. That’s the best I can come up with regarding AMS, though. The ideal would be to simulate it all in one environment, but even if you could get enough speed and resolve all the synchronization issues, it’s unclear to me, and to customers, what kinds of bugs they would find with that versus cutting at some digital interface to divide and conquer the problem.
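
As a rough illustration of what "cutting at the IQ signal" can look like, the sketch below shows a small piece of test bench software that stands in for the RF front end by generating the IQ sample stream the digital baseband would see. The sample rate, tone frequency, noise level and the drive_iq_sample hook are all assumed, hypothetical details for illustration, not a real tool interface. Because the model is ordinary software, the same generator could in principle drive a simulator transactor, an emulator, or registers on silicon, which is the portability argument being made here.

```python
import math
import random

SAMPLE_RATE_HZ = 1.0e6    # assumed IQ sample rate
TONE_HZ        = 50.0e3   # assumed test tone arriving over the air interface
NOISE_AMPL     = 0.05     # crude stand-in for channel impairments


def iq_samples(num_samples):
    """Yield (I, Q) pairs modelling a noisy tone at the IQ cut point."""
    for n in range(num_samples):
        t = n / SAMPLE_RATE_HZ
        phase = 2.0 * math.pi * TONE_HZ * t
        i = math.cos(phase) + random.uniform(-NOISE_AMPL, NOISE_AMPL)
        q = math.sin(phase) + random.uniform(-NOISE_AMPL, NOISE_AMPL)
        yield i, q


def drive_iq_sample(i, q):
    """Placeholder for the engine-specific hook: a simulator transactor,
    an emulator interface, or a register write on silicon."""
    print(f"I={i:+.3f}  Q={q:+.3f}")


if __name__ == "__main__":
    for i, q in iq_samples(8):
        drive_iq_sample(i, q)
```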

SE: Is it working?

Knowlson: It’s nascent. There really is significant tension between the production and the validation teams. There is tons of debate about which tools to use. ‘And oh, by the way, you have too many tools, so get it down to one.’ From the software perspective, at the very least we want to start with a VP (virtual prototype) because it allows us to start early. If it’s truly a derivative and there are some higher-level software stack changes, I can use a previous-generation processor or an embedded processor for that. If we want software to do validation, I have to have some framework to develop the software on before I have the more RTL-based solution. I’d like to be able to run the same content and go to either emulation or FPGA. Embedded software developers prefer the fast FPGA-type environment. They want to do 30 or 40 iterations on their software a day. And then when the RTL is healthy enough, they go to the slower, embedded environment. We really need that. We have a long way to go. The software teams have to move to do more from an RTL perspective, but the validation guys need to have the platform healthy enough so that when it gets there, at least some validation has been done.

Schirrmeister: Is it really the software team that has to move? Whenever I see a meeting that both the software and the hardware guys attend, the first thing they do is introduce themselves to each other with business cards. Both of them are so busy, and both are facing increasing complexity. Do we need a new type of engineer, someone who is deep enough to do both? I don’t see the software guys or the hardware guys moving, because they are each so busy with their individual problems. Do we need a new species of designer?

Knowlson: I don’t know if we need a new species, but this gets back to my notion of a common language that everyone can understand. Perhaps this is where the graph-based approach will work. I look at a single use case (and, by the way, I have thousands of these for a platform, but I don’t try to do them all pre-silicon). I’ve got a couple hundred, so perhaps I use a graph-based approach on each use case to figure out how it should be executed. Then a validation team exercises that flow, and then we can run software on it.

Olen: There is a proposal to Accellera for a portable, graph-based description format that supports multiple engines, so we’re hoping to promote a working group there.

To view part one of this discussion, click here.


