Big Shift In SoC Verification

Experts at the Table, part 1: Using internal processors for verification, not just test benches, is becoming essential to getting complex SoCs out the door. But what does Elvis have to do with this?


Semiconductor Engineering sat down to discuss software-driven verification with Ken Knowlson, principal engineer at Intel; Mark Olen, product manager for the Design Verification Technology Division of Mentor Graphics; Steve Chappell, senior manager for CAE technology and verification at Synopsys; Frank Schirrmeister, group director for product marketing of the System Development Suite at Cadence; Tom Anderson, vice president of marketing at Breker Verification Systems; and Sandeep Pendharkar, vice president and head of product engineering at Vayavya Labs. What follows are excerpts of this discussion, which was held in front of a live audience yesterday at DVCon.

SE: What is software-driven verification, and is there a difference between software-driven verification and system-level software-driven verification?

Anderson: There is a difference. Software-driven verification, in the way it’s being used by the industry today, refers to using the CPUs in an SoC as the primary mechanism for doing verification, in contrast to the test bench. The shift we’ve seen in the past two years is that people are no longer relying exclusively on a test bench, UVM or otherwise, at the full-chip level if they have an SoC with processors involved. They’re either writing tests manually, generating them internally, or using a commercial product to generate tests that run on those processors and do a lot of the verification. If you look at what an SoC does in real life, the processors are in charge. They control the device. It’s unreasonable to expect you can verify that device without involving the processors. That’s why this shift has happened. To me, production hardware-software co-verification is more about verifying the production software with the production hardware. That’s distinct from using specialized software as part of the hardware verification effort. There is some overlap, but not a lot.

Knowlson: What do you do in pre-silicon when you don’t have a processor?

Schirrmeister: There are two types of software we are seeing in designs today. One is the Elvis software, the software that leaves the building and goes into the product. The other is the non-Elvis software, which stays in the building and increasingly is used for verification of the hardware-software interactions and also for the hardware itself. The second thing is that if you don’t have processors in the system, there are cases where people add a simple processor, either open source or one of the smaller ARM cores, to build an embedded test bench where the software is easier to handle than a traditional SystemVerilog test bench, especially in cases where you want to represent part of the system environment. The key reason it’s happening, and why now, is that there are so many engines you want to use today, from TLM techniques to RTL techniques starting with simulation, then emulation, acceleration, FPGA-based prototyping and finally the chip at the end, that you really need to re-use verification across those engines. The key advantage of using software for verification is that, regardless of whether the processor is part of the system that goes to the customer or was added for purposes of verification, you can re-use that software across all the engines. You can start developing a test scenario in a virtual world. You can do it in RTL, which is slow but accurate, with great debug. But you also can do it in the hardware-accelerated engines, and you can even do part of the post-silicon debug.
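To make the idea of re-usable, non-Elvis test software concrete, here is a minimal bare-metal C sketch of a software-driven test. The DMA block, register addresses and bit fields are hypothetical placeholders, not taken from any panelist's product; the point is that the same routine can run unchanged on an embedded processor in simulation, emulation, an FPGA prototype, or the final silicon.

```c
/* Minimal sketch of a software-driven hardware test, assuming a
 * hypothetical memory-mapped DMA block on a 32-bit SoC. Addresses and
 * bit fields are placeholders; a real design would take them from its
 * register map. */
#include <stdint.h>

#define DMA_BASE        0x40001000u            /* hypothetical base address */
#define DMA_SRC         (*(volatile uint32_t *)(DMA_BASE + 0x00))
#define DMA_DST         (*(volatile uint32_t *)(DMA_BASE + 0x04))
#define DMA_LEN         (*(volatile uint32_t *)(DMA_BASE + 0x08))
#define DMA_CTRL        (*(volatile uint32_t *)(DMA_BASE + 0x0C))
#define DMA_STATUS      (*(volatile uint32_t *)(DMA_BASE + 0x10))
#define DMA_CTRL_START  0x1u
#define DMA_STATUS_DONE 0x1u

static uint32_t src_buf[64], dst_buf[64];

/* Returns 0 on pass, nonzero on fail; the same routine can be reused in
 * simulation, emulation, FPGA prototyping, or post-silicon bring-up. */
int dma_copy_test(void)
{
    for (uint32_t i = 0; i < 64; i++) {
        src_buf[i] = 0xA5A50000u + i;          /* known data pattern */
        dst_buf[i] = 0;
    }

    DMA_SRC  = (uint32_t)(uintptr_t)src_buf;   /* program the transfer */
    DMA_DST  = (uint32_t)(uintptr_t)dst_buf;
    DMA_LEN  = 64 * sizeof(uint32_t);
    DMA_CTRL = DMA_CTRL_START;

    while ((DMA_STATUS & DMA_STATUS_DONE) == 0)
        ;                                      /* poll for completion */

    for (uint32_t i = 0; i < 64; i++)
        if (dst_buf[i] != src_buf[i])
            return 1;                          /* data mismatch */
    return 0;
}
```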

Knowlson: Would you consider your non-Elvis software validation software?

Schirrmeister: Typically it’s not a branch of the production software. You need to have the production software validated, as well, and there’s such an interdependency between hardware and software that it’s a crucial aspect. Otherwise the product won’t work. The validation software is specifically developed for bring-up diagnostics, for making sure all the peripherals start in the right order. It’s a specific validation piece.

Pendharkar: A lot of customers we talk to say this is all good, but especially in simulation with a smaller processor, everything slows down. People are using software for verification, but in simulation you really don’t have a processor. You have some sort of bus master to route the transactions. So why is software-driven verification becoming important now? One trend we see is that if you look at a lot of the IP on an SoC, those are highly programmable. That wasn’t the case five years ago. The amount of programmability being packed in is incredible. If you are verifying those, it has to be done in a manner that reflects real life. Insofar as the intent is verification or validation, you really don’t want to use product software. You want something that is easier to use and more amenable to the problem you are trying to address.

Schirrmeister: The product software is part of what is to be verified. The HAL, the drivers, the OS become part of the DUT (device under test). The hardware doesn’t live without it.

Olen: I agree with the need and the challenge in that it’s getting more difficult to do, but I disagree with how long this has been an issue. This didn’t just start two or three years ago. We’ve been working since about 1995 with people using software to drive their hardware verification. They’ve been struggling to do it. There hasn’t been a great amount of automation around it. They’ve been using ISS (instruction set simulator) models to model the processor and swap out RTL. There are speed and performance issues. Ever since people have been designing with processors it’s been a challenge. And then maybe 10 years ago the hot-swap technology came along to be able to move from an accelerated mode using an ISS model to using RTL to trade off accuracy and speed. Now we’re starting to get into more clever and automated ways for using not the target system software, but more clever software for driving the stimulus. But this has been around for a long time. This is not new.

Knowlson: We’re talking about three kinds of software. We talked a little bit about production. We talked about what I’ll call validation software. And now we’ve just brought up models. We need to clarify what kind of software we’re talking about here, because what we do with it is very different.

Chappell: As things are getting more complex we’re adding more and more processors into these systems. We’ve been using software as part of the verification piece since the late 1990s. If you look at ARM’s peripheral offering around 2000, a mailbox was already there because they were planning to have communication between the test bench and the processor code that’s running. As we’re getting more processors and the complexity of the software increases, we need to be able to re-use a lot more. The more of that Elvis and non-Elvis software we can re-use, the less we have to throw away. If you can get to the point where you enable the software development sooner in the process, then you can get more reuse out of all the software you’re developing.
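The mailbox Chappell mentions is essentially a shared register block through which embedded test code and the test bench signal each other. Below is a hedged sketch of that handshake from the software side; the MBOX register layout and command codes are assumptions for illustration, not any vendor's actual peripheral interface.

```c
/* Sketch of a software-to-test-bench mailbox handshake. The register
 * layout and command encoding are hypothetical; the pattern is what
 * matters: software posts a request, the test bench (or a monitor in
 * emulation) reacts, and software polls for the acknowledgement. */
#include <stdint.h>

#define MBOX_BASE   0x4000F000u                          /* assumed address */
#define MBOX_CMD    (*(volatile uint32_t *)(MBOX_BASE + 0x0))
#define MBOX_ARG    (*(volatile uint32_t *)(MBOX_BASE + 0x4))
#define MBOX_ACK    (*(volatile uint32_t *)(MBOX_BASE + 0x8))

#define CMD_START_TRAFFIC  0x01u   /* ask the test bench to drive stimulus */
#define CMD_REPORT_PASS    0x10u
#define CMD_REPORT_FAIL    0x11u

/* Post a command and wait until the test bench acknowledges it. */
static void mbox_call(uint32_t cmd, uint32_t arg)
{
    MBOX_ARG = arg;
    MBOX_ACK = 0;
    MBOX_CMD = cmd;                 /* the test bench monitors this write */
    while (MBOX_ACK == 0)
        ;                           /* wait for the bench to respond */
}

void run_scenario(void)
{
    mbox_call(CMD_START_TRAFFIC, 0);        /* e.g. external bus traffic */
    /* ... run CPU-side checks while the bench drives the interfaces ... */
    mbox_call(CMD_REPORT_PASS, 0);          /* report the result outward */
}
```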

SE: If this is 20 years old, why now?

Schirrmeister: Some of the techniques have been around for years, but we’re now at the hockey stick, or at least looking at the hockey stick, where it goes more mainstream. There is a need to get more precision into the software. The models are one of the aspects used to represent the system for execution. They need to be verified themselves. It’s one of the engines you use to verify things. The co-verification is really an engine to execute the hardware, run the software on it, and then do verification. The second piece is the diagnostic software, and that’s really the piece that’s changing. We always had software in the system. Looking 15 years back, you had to bring up Symbian on OMAP 2. That had to be done on hardware. What’s changing now is that the need to re-use verification across the engines has become so big that software becomes the test bench. That’s the third piece of software. It doesn’t leave the building. Its sole intent is to verify the hardware-software integration, and sometimes the hardware itself, to make sure things run in the right order. If I find a bug in post-silicon and I have a hard time debugging and looking into my hardware, I want to reproduce it in emulation or simulation. If you have to rewrite SystemVerilog test benches all the time to do that, it’s a very long process. Doing this with an environment you can re-use is a big advantage.

Anderson: You’re mixing two points. Any software-centric verification is inherently reusable across platforms. Anywhere you have a processor or a model of a processor, you can run that code. If you’re trying to boot an OS, it’s going to be really inefficient in simulation. It may even be impossible to run it there. The differentiation is not so much the reuse; that is the contrast between test bench-driven verification, which you can’t re-use once the test bench is gone, and software-based verification. The differentiation is between using production software, having it ready early enough in the project and stable enough so it’s useful, versus developing this custom non-Elvis software. That’s what’s changed in the last two or three years.

Schirrmeister: In the spirit of debate, to me the production software is part of the DUT. Its interaction with the hardware needs to be verified. The re-usable software I’m referring to, which serves as the re-usable test bench, is the software written to hold all these things together and make them verifiable. That is the key change. That’s now becoming re-usable.

Knowlson: We’re seeing real tension between the validation software folks and the production software folks because we do have all these embedded processors. The validation guys don’t want to have to write massive amounts of validation firmware, and the production guys don’t want to debug RTL. They don’t want to go anywhere near it until it’s pretty mature. So we have this real tension.

Schirrmeister: As I’m talking to customers, you can draw sequence charts of the finger-pointing between hardware and software teams. Somebody figures out something doesn’t work, so the software guy says the hardware guy didn’t do what he was supposed to do. The hardware guy comes back and says, ‘I looked at my waveform and everything is fine.’ Then 20 sequences later they realize the hardware was in a low-power mode and didn’t correctly take the programming the software guy had initiated. The challenge we face is more of a capacity-complexity one. No one person understands all of the details on both the software side and the hardware side well enough to make a meaningful call. The system architect sits in between as the moderator, but he isn’t able to go into either one in detail and debug the RTL or low-level software. We’re trying to devise an environment where these people can interact better with each other. But we definitely see the tension.

Chappell: Whoever ends up generating the software for the test, whether through automation or by hand coding it yourself, the real value we can provide is creating that common language between the hardware guys, the software guys and all the different software teams. If you have a common environment where you can debug, that’s really the heart of it. Where did this finger-pointing come in? That’s the real starting point. Now you have to figure out whose problem it really is, and an environment that looks at this from virtual prototyping through simulation, emulation and physical prototyping, even into post-silicon debug, provides that foundation.

Schirrmeister: This goes back to the first question about whether this is a system-level issue. We are not talking about replacing UVM at the block level. This is not a block-level technique. This is about making sure you have the scenarios defined right. If you’re playing Angry Birds and a call comes in and someone wants to text you, all of these things need to work correctly. It’s about integrating all the components on an SoC.

To view part 2 of this discussion, click here.


