First of three parts: Interfaces, power requirements, software, open systems and the added complexity of flexibility.
By Ed Sperling
Low-Power Engineering sat down with Shabtay Matalon, ESL marketing manager in Mentor Graphics’ Design Creation Division; Bill Neifert, CTO at Carbon Design Systems; Terrill Moore, CEO of MCCI Corp.; and Frank Schirrmeister, director of product marketing for system-level solutions at Synopsys. What follows are excerpts of that conversation.
LPE: What’s the big problem in verification?
Matalon: In today’s modern designs, many functions are implemented in software. Obviously, performance-critical functions are implemented in hardware. We’re seeing designs moving to multicore, multiprocessor. When I look at verification, I look at how you verify that your requirements have been met at various levels of implementation. That includes functional verification, meeting performance requirements, and meeting low-power requirements. Those three are getting more and more intertwined unless the design is neither performance-critical nor low-power-critical, and we don’t see many of those kinds of designs these days.
Schirrmeister: The meanest bugs are at the hardware-software interface. This whole notion of getting something on which to develop the software as early as possible is critical. And it’s verification on both sides. It’s verification of the hardware, the software, and the hardware-software interface. We also see an increase in people using the software to verify the hardware, which augments the traditional SystemVerilog test benches. It’s not just the function and the driver. That’s a new trend.
Matalon: Why do you think that’s a new trend? People have been using software to verify hardware for as long as I’ve been in EDA, which is a long time. It has been used in emulation, in products like Seamless. Maybe what is new is that there are more advanced techniques like SystemC and TLM 2.0 models and more sophisticated modeling of software. But the need to verify the hardware in the context of the software has been there for quite a long time.
Schirrmeister: You are right. There are new techniques coming in earlier in the flow. But if you look at hardware and software and you think about the drivers, the OS and the application software, what Seamless used to cover was the low-level drivers. Now the same idea is being applied with new techniques further up the stack.
Neifert: Customers have always found hardware bugs by running software on top of the hardware, but mostly that was a by-product. It wasn’t a concerted effort to find bugs in the RTL with the software. What I’m seeing more and more is a blending, where they are spending time writing software specifically to test the hardware. There’s great block-level testing with SystemVerilog test benches, but the interplay between these blocks is really only seen when you get the software running on there. Direct software testing to get at the interplay of these blocks is the emerging area.
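To make the kind of software-driven hardware test Neifert describes concrete, here is a minimal bare-metal sketch in C++. Everything in it is invented for illustration: the DMA engine, the interrupt controller, the register addresses and the bit layouts are hypothetical stand-ins for whatever blocks a real design pairs together. The point is that the test exercises the interaction between two blocks rather than either block in isolation.

```cpp
#include <cstdint>

// Invented memory-mapped register addresses for this sketch.
constexpr uintptr_t DMA_SRC    = 0x4000'0000;
constexpr uintptr_t DMA_DST    = 0x4000'0004;
constexpr uintptr_t DMA_LEN    = 0x4000'0008;
constexpr uintptr_t DMA_CTRL   = 0x4000'000C;  // bit 0: start transfer
constexpr uintptr_t IRQ_STATUS = 0x4001'0000;  // bit 3: DMA-done interrupt

static inline void reg_write(uintptr_t addr, uint32_t val) {
    *reinterpret_cast<volatile uint32_t*>(addr) = val;
}
static inline uint32_t reg_read(uintptr_t addr) {
    return *reinterpret_cast<volatile uint32_t*>(addr);
}

// Returns 0 on pass, nonzero on fail. The cross-block bug this targets,
// a missed or never-raised completion interrupt, is exactly the kind of
// interplay that block-level test benches do not see.
int test_dma_completion_irq() {
    reg_write(DMA_SRC, 0x8000'0000);
    reg_write(DMA_DST, 0x8000'1000);
    reg_write(DMA_LEN, 256);
    reg_write(DMA_CTRL, 0x1);                // kick off the transfer

    // Poll for the DMA-done flag with a bounded spin loop.
    for (int i = 0; i < 100000; ++i) {
        if (reg_read(IRQ_STATUS) & (1u << 3))
            return 0;                        // interrupt fired: pass
    }
    return 1;                                // never saw the interrupt: fail
}
```

A real version of this would run from the processor model inside an emulator or virtual platform, with the addresses taken from the actual register map.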
Moore: We’re a software company, and we get involved in verification with our customers, who sell complete systems. What we see as the big problem is the 1+1=3 issue. If you have two systems talking to each other, typically you’re only designing the SoC for one of the two. You can’t get to some of the bugs without assembling the complete system and testing it. You can’t get the coverage. Certainly the hardware-software interactions are becoming easier with virtualization, but the stuff that really is a problem is when you take a Windows system, hook it up to your embedded system and find there’s something you overlooked.
LPE: In the past, many designs were static. Now the designs are in motion. Power, software and hardware all change along the way. How accurate is the verification?
Matalon: Software provides a lot of configurability. You can change it and implement different functions. We also see a lot of configurability in the hardware. One effect of configurability is that it increases complexity, particularly in verification. You basically need to deploy almost every tool in your inventory to tackle the verification problem. You have to start very early and use abstracted methodologies and technologies, such as transaction-level models, to model your hardware and run enough scenarios. You can use emulation. Verifying hardware and software on silicon isn’t going away. And even though it’s less prevalent, verifying software against RTL isn’t going away either. All of this is forcing people to combine every option they have to nail the system-level verification problem.
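The transaction-level models Matalon mentions are typically built on the SystemC/TLM 2.0 standard raised earlier in the conversation. The sketch below shows roughly what such an abstract model looks like: a traffic generator driving a one-register target through a blocking TLM 2.0 transport call. The module names, the register map and the 10 ns timing annotation are all invented for illustration.

```cpp
#include <cstdint>
#include <cstring>
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_initiator_socket.h>
#include <tlm_utils/simple_target_socket.h>
using namespace sc_core;

struct TrafficGen : sc_module {
    tlm_utils::simple_initiator_socket<TrafficGen> socket;
    SC_CTOR(TrafficGen) : socket("socket") { SC_THREAD(run); }
    void run() {
        uint32_t data = 0xCAFE;
        tlm::tlm_generic_payload trans;
        sc_time delay = SC_ZERO_TIME;
        trans.set_command(tlm::TLM_WRITE_COMMAND);
        trans.set_address(0x0);
        trans.set_data_ptr(reinterpret_cast<unsigned char*>(&data));
        trans.set_data_length(4);
        trans.set_streaming_width(4);
        trans.set_response_status(tlm::TLM_INCOMPLETE_RESPONSE);
        socket->b_transport(trans, delay);   // blocking, loosely timed
        sc_assert(trans.is_response_ok());
    }
};

struct RegTarget : sc_module {
    tlm_utils::simple_target_socket<RegTarget> socket;
    uint32_t reg = 0;                        // the single modeled register
    SC_CTOR(RegTarget) : socket("socket") {
        socket.register_b_transport(this, &RegTarget::b_transport);
    }
    void b_transport(tlm::tlm_generic_payload& trans, sc_time& delay) {
        if (trans.get_command() == tlm::TLM_WRITE_COMMAND)
            std::memcpy(&reg, trans.get_data_ptr(), 4);
        else
            std::memcpy(trans.get_data_ptr(), &reg, 4);
        delay += sc_time(10, SC_NS);         // approximate timing only
        trans.set_response_status(tlm::TLM_OK_RESPONSE);
    }
};

int sc_main(int, char*[]) {
    TrafficGen gen("gen");
    RegTarget  tgt("tgt");
    gen.socket.bind(tgt.socket);
    sc_start();
    return 0;
}
```

Because the model trades cycle accuracy for speed, it can run the scenarios Matalon describes long before RTL exists.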
Schirrmeister: There are so many moving targets between hardware and software that what the user is verifying has to be more specific. For example, if you have a configurable Tensilica processor and it replaces a hard-coded RTL block, that has a profound impact on verification. You don’t verify the processor beyond the test benches you got from the vendor. You need to verify the connectivity between the different modules. And a lot of the functional verification actually happens in software while the chip is being fabricated. You don’t need to verify all the PostScript processing at the hardware level; you verify the connections and the architecture. Now they’re doing software verification for the functionality, and it becomes more specific to what the user wants to verify and has to verify.
Neifert: When you look at the commoditization of the IP blocks, almost every phone has the same core set of chips in it. They really differentiate themselves based on the software. If you look at the ways consumer products are increasingly differentiating themselves, it’s based on the software. The stuff you’ve got to put into the hardware to support that is enormous. Everything out there has USB and graphics, and all the necessary software has to be integrated to support it. You may have software that only uses three-fourths of those features, and then you take that same IP that was working fine in another chip, put it in a new environment, and it exposes a whole new set of problems. You can have the same chip, and new software is going to create new problems.
Moore: With the increasing adoption of open standards, you don’t have a closed system anymore. In the old days, even if the chip was being used in unanticipated ways, it was still a closed environment. You could adjust what you were trying to do. With open standards, you can’t change what Microsoft is trying to do or what the cell phone is trying to connect to. You might not want to connect to just one model, and you don’t know what they’re going to do next year. And your verification requirements may not anticipate how it’s going to actually be used.
Schirrmeister: So it needs to be future-proof.
LPE: Isn’t that even harder with derivative chips? It’s too expensive to develop one chip, and it may have to last until the next node, so how do you future-proof it?
Neifert: But that’s exactly what’s happening. In the wireless space, they’re trying to design three or four years of cell phones in one chip. But few people anticipated four years ago that the iPhone was coming out. There are a lot of features being built in. And there’s a high cost if you guess wrong.
Schirrmeister: One of our customers told us they didn’t predict MP3 and they didn’t go with a programmable solution. Their chip wasn’t bought once the next standard came out. But on the verification side, it depends on what the user is looking to verify as well as the type of software. If you look at the range of software in the stack, the vehicle with which you verify the software changes depending upon the needs of the software. Starting at a very high level, if you’re downloading the iPhone SDK, you get something that’s hardware-independent. If you go lower, you want to verify that the register fields are okay. The next level down you get to software that needs to understand the performance, so you care about memory management and cache. And when it gets into safety-critical automotive, you need cycle-by-cycle accuracy.
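The “register fields are okay” level Schirrmeister mentions typically means driver-level software that checks reset values and read/write behavior of each field against the spec. Here is a minimal sketch of that idea; the register address, field table and version value are all invented for illustration.

```cpp
#include <cstdint>
#include <cstdio>

// Invented address for a hypothetical control register.
static volatile uint32_t* const CTRL_REG =
    reinterpret_cast<volatile uint32_t*>(0x4000'2000u);

struct Field {
    const char* name;
    uint32_t    mask;        // which bits belong to the field
    uint32_t    reset_value; // expected value out of reset
    bool        writable;    // per the spec
};

// Invented field layout for this sketch.
constexpr Field fields[] = {
    {"ENABLE",  0x0000'0001, 0x0000'0000, true},
    {"MODE",    0x0000'0006, 0x0000'0000, true},
    {"VERSION", 0xFF00'0000, 0x0200'0000, false},  // read-only ID field
};

// Call this once out of reset; returns the number of failing fields.
int check_register_fields() {
    int failures = 0;
    for (const Field& f : fields) {
        // 1. Reset value must match the spec.
        if ((*CTRL_REG & f.mask) != f.reset_value) {
            std::printf("FAIL %s: bad reset value\n", f.name);
            ++failures;
        }
        // 2. Writable fields must hold written data; read-only must not.
        uint32_t before = *CTRL_REG;
        *CTRL_REG = before ^ f.mask;              // try to flip the field
        bool changed = ((*CTRL_REG ^ before) & f.mask) != 0;
        if (changed != f.writable) {
            std::printf("FAIL %s: writability mismatch\n", f.name);
            ++failures;
        }
        *CTRL_REG = before;                       // restore original value
    }
    return failures;
}
```

The same test can run unchanged against a TLM model, an emulator or silicon, which is what makes it useful across the abstraction levels in Schirrmeister’s stack.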
Matalon: How easy it is to verify hardware against software varies at different stages. But it also depends on what you need to get out first. Sometimes you need to get the hardware out first and you can modify the software later. In many other cases, the software is your gating item. If you don’t have the software ready when the hardware is ready, you may not be able to tape out. The hardware may be inadequate to support the software. That’s one of the things we’re seeing today.
LPE: Inadequate in what way?
Matalon: Here’s an example. How would you know the performance of your final design if you are validating without the context of the software? You wouldn’t know the dynamic power being consumed by a device without something that really represents the workload of your application software. If you tape out your chip, you might find out later that you’re missing your performance or low-power targets. You cannot validate them without software. You need to validate all three in concert: the hardware, the software, and the interactions between them. You need to create a functional model that can show you are meeting functionality, power and performance, and that can be used as a reference.
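A simple way to picture the workload-driven power check Matalon describes: replay an activity trace produced by running the real application software against a per-state power model, and compare the resulting average against the budget. The power figures, trace and budget below are invented; in practice the trace would come from the software running on a model of the chip.

```cpp
#include <cstdio>
#include <vector>

struct Interval { double seconds; double watts; };

int main() {
    // Invented per-state power figures for a hypothetical SoC.
    const double P_ACTIVE = 0.450, P_IDLE = 0.120, P_SLEEP = 0.005;

    // A toy workload trace: (duration, power state) pairs. A different
    // workload would produce a different trace and a different answer,
    // which is exactly why the software has to be in the loop.
    std::vector<Interval> trace = {
        {0.010, P_ACTIVE}, {0.040, P_IDLE},
        {0.002, P_ACTIVE}, {0.948, P_SLEEP},
    };

    double energy = 0.0, time = 0.0;
    for (const Interval& iv : trace) {
        energy += iv.watts * iv.seconds;   // E = sum of P_state * t_state
        time   += iv.seconds;
    }
    double avg_power = energy / time;
    std::printf("average power %.4f W over %.3f s\n", avg_power, time);

    // Compare against a hypothetical power budget for this sketch.
    const double BUDGET_W = 0.050;
    return avg_power <= BUDGET_W ? 0 : 1;  // nonzero exit: target missed
}
```

For this toy trace the average works out to about 0.015 W, comfortably under the assumed 0.050 W budget; a workload that kept the device active longer could fail the same check, which is the kind of surprise Matalon warns about finding after tapeout.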