Experts At The Table: Verification At 28nm And Beyond

First of three parts: Who owns the power models and how accurate are they; can verification ever be tamed; functional vs. structural verification; block level vs. system level; the rising cost of a re-spin.


Low-Power Engineering sat down to discuss issues in verification at 28nm and beyond with Frank Schirrmeister, director of product marketing for system-level solutions at Synopsys, Ran Avinun, marketing group director at Cadence, Prakash Narain, president and CEO of Real Intent, and Lauro Rizzatti, general manager of EVE-USA. What follows are excerpts of that conversation.

LPE: Power seems to be an increasing portion of the whole design process. Who’s responsible for creating the power models?
Schirrmeister: The technology provider—TSMC or GlobalFoundries—has to characterize its libraries for power modeling. There are high-level models for power intent, and when it comes to libraries there has always been characterization. For high-level synthesis you have area, performance and power. Some companies, like ChipVision, enhanced that to include dynamic power, which basically meant you no longer had a single number for the power multiplier. It became dependent on how the inputs toggled. What we see today in TSMC’s reference flow 11 is that they characterize their libraries for low power to make them accessible for transaction-level modeling. Then you can add up meaningful power numbers that correlate to those technologies. This is in the early stages today and it’s proprietary. At some point it needs to be standardized.
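To make the transaction-level power idea concrete, here is a minimal sketch of how characterized energy-per-transaction numbers could be rolled up into an average power figure from a pre-RTL trace. The component names and energy values are assumptions for illustration only; they are not data from TSMC’s reference flow 11 or any real library.

```python
# Minimal sketch of transaction-level power accumulation, assuming hypothetical
# per-transaction energy numbers derived from library characterization.
# All figures and component names are illustrative, not real library data.

# Assumed characterized energy cost per transaction type, in picojoules.
ENERGY_PJ = {
    "bus_read":   12.0,
    "bus_write":  15.0,
    "dsp_mac":     3.5,
    "mem_access": 40.0,   # memory accesses tend to dominate, as noted in the discussion
}

def estimate_power(trace, duration_ns):
    """Sum per-transaction energy over a trace and return average power in mW."""
    total_pj = sum(ENERGY_PJ[kind] * count for kind, count in trace.items())
    return total_pj / duration_ns  # pJ per ns equals mW

# Example: transaction counts from a hypothetical 1 ms window of a TLM simulation.
trace = {"bus_read": 200_000, "bus_write": 80_000,
         "dsp_mac": 5_000_000, "mem_access": 150_000}
print(f"Average power: {estimate_power(trace, 1_000_000):.2f} mW")
```

The point of the sketch is that once the characterization exists, the transaction trace from a pre-RTL model is enough to produce a power number that can later be correlated against RTL and silicon.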
Avinun: There are two topics here. One is methodology—how to do this. The other is the format, and who owns the accuracy of the data behind the format. The methodology problem hasn’t been solved, and it goes beyond TSMC. We have compared our results with real-world results. When you tell the customer, ‘Compare your back-end flow with your libraries,’ that’s not good enough. That only gives you data about your SoC or ASIC. When customers look at power, they look at what they measure in the lab, in real environments, and with all the other devices—the noise, the environment beyond the chip. First, there’s a problem just measuring the power. Even the most advanced customers don’t have a good methodology. They also don’t know how to partition the die, so what they measure is the overall power, and there is no good way to model the individual components. We’re looking at the ASIC and the die level, but you need to model the whole system. And then, what are the key contributors to the power? No. 1 is the system power, including the chassis and the die. The second depends on what you are running. Beyond this, once the methodology is solved, there is the issue of formats and who controls the libraries. The other component that needs to be considered is memory. Memory consumes most of the power, and those models are controlled by the memory vendors.
Schirrmeister: And the access involves software.
Avinun: At the block level there are methodologies addressing these issues. The problem is at the system level. If you go to a high level of abstraction too early, you may have 300% error. Or if memory is 80% of your power consumption and you haven’t decided which memories you’re going to use, all you can do is tradeoff analysis. You can’t say it’s going to consume this much. Over time we will get better at simulating and emulating the different scenarios, but it’s all still in the very early phase. TSMC seems like one of the key companies to take the initiative here. It’s not just an ASIC issue.
Narain: I have to question why the spreadsheet approximation will break down. For power estimation you need the characterization of the libraries and all of this information from the vendor at the block level—that’s where you get more precise power estimation. But if you’re doing planning, why would you move away from spreadsheets if you’re going to be 300% off anyway?

LPE: Isn’t it just complexity? At 28nm you’ve got electromagnetic interference (EMI), electromagnetic compatibility (EMC), electrostatic discharge (ESD) and all these different power islands.
Narain: But you’ll still be off on power accuracy, so the simpler methodology will persist.
Schirrmeister: There’s complexity on the technology level. But the functionality is dynamic and cannot be predicted.
Narain: When you want precision you can’t do that at the system level. If you’re doing system-level planning, you’re doing first-order estimates and second-order estimates. How precise does your planning need to be? Spreadsheets should suffice.
Schirrmeister: We’ve already seen that breaking down. Within the model there is dynamic power, which depends on the inputs. And even in high-level synthesis you have schedules. You also need predictable success, and then you need to correlate back to the predictions you made at the beginning. There also are a lot of parameters beyond the system on chip. But if you contain it to the chip and the memory it accesses, a lot can be done pre-RTL if you really execute on transaction-level models. Is it as accurate as RTL? No, but then the layout designer laughs at the RTL designer, too. In the end it comes down to predictability and correlation.
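The spreadsheet-versus-dynamic-model argument comes down to switching activity. A short sketch using the standard CMOS dynamic-power relation P = α·C·V²·f, with assumed rather than characterized values, shows how a single spreadsheet number can be off by hundreds of percent once activity varies by scenario:

```python
# Illustrative contrast between a static spreadsheet-style estimate and an
# activity-dependent dynamic power estimate, P_dyn = alpha * C * V^2 * f.
# All numbers below are assumptions for illustration, not characterized data.

C_EFF = 2.0e-9      # effective switched capacitance in farads (assumed)
VDD   = 0.9         # supply voltage in volts (assumed)
FREQ  = 1.0e9       # clock frequency in hertz (assumed)

def dynamic_power_w(activity):
    """Dynamic power in watts for a given average switching activity factor."""
    return activity * C_EFF * VDD ** 2 * FREQ

# A spreadsheet typically carries one nominal activity number, e.g. 0.15.
spreadsheet_estimate = dynamic_power_w(0.15)

# Input-dependent scenarios make that single number misleading.
for scenario, alpha in [("idle", 0.02), ("video playback", 0.12), ("stress", 0.35)]:
    p = dynamic_power_w(alpha)
    error = (spreadsheet_estimate - p) / p * 100
    print(f"{scenario:15s} alpha={alpha:.2f}  P={p * 1e3:6.1f} mW  "
          f"spreadsheet off by {error:+.0f}%")
```

For the idle scenario the fixed number overshoots by several hundred percent, which is in the range of the 300% error cited earlier in the discussion.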
Avinun: You need to separate the planning from the analysis. For the planning you can do it with chip-planning approaches. We allow you to take data-sheet components—our legacy components—which are not the complete design, and do tradeoffs between memories and IP. For new IP, we use high-level synthesis, which allows you to make quick tradeoffs between area and performance. If you optimize for area, you give up some performance, and vice versa. And then, for full system-level analysis, you need a representation of your system. That could be RTL. It doesn’t have to be high-level synthesis. But as soon as you go to the full system, it’s not as accurate.

LPE: Verification has taken 70% of the NRE in design. Will that ever be brought under control at future nodes, or will it just get worse and worse?
Rizzatti: More and more, emulation is replacing simulation, especially for block-level verification. You can do a lot more work in a given amount of time, and I would expect that to be a major contributor to containing the cost of verification. In a given amount of time you get much more verification done. And because SoCs will have more and more embedded software, it will get more complicated. Just saving one re-spin at 28nm is worth $10 million or more.
Narain: If you look at design cycle times, there was a time when people taped out 1 million-gate designs in six to nine months. Then they went to 10 million-gate designs in the same time period. Now it’s approaching 100 million gates, and at some point in the future it will go to 1 billion gates. In six to nine months you can’t design 100 million gates from scratch. There’s a lot of re-use, and SoC methodologies have improved. Simulation is very important when designing gates from scratch. Emulation is important for system-level design. But at the end of the day you still have to tape out the chip, and the signoff requirements aren’t just about functional verification. The number of issues you need to look at with clock-domain crossing may increase 10x, so the signoff time would increase 10x. That’s not possible. These other methodologies are becoming much more important to the overall verification strategy. Verification is no longer just about simulation. It’s a lot of processes done in parallel, and they’re better served by independent methodologies for signoff. Power-intent verification is one item. Clock-domain checking is another. Methodology management is also going to become very important.
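As an illustration of the kind of independent, structural signoff check being described, here is a toy clock-domain-crossing scan that runs outside of simulation. The netlist encoding and the crude two-flop rule are assumptions made for this sketch; production CDC tools are far more sophisticated.

```python
# Toy structural CDC check over a hypothetical netlist.
# Netlist format: flop name -> (clock domain, list of flops it drives).
NETLIST = {
    "tx_data":  ("clk_a", ["sync1", "bad_rx"]),
    "sync1":    ("clk_b", ["sync2"]),      # first stage of a two-flop synchronizer
    "sync2":    ("clk_b", ["rx_logic"]),   # second stage
    "rx_logic": ("clk_b", []),
    "rx_fsm":   ("clk_b", []),
    "bad_rx":   ("clk_b", ["rx_logic", "rx_fsm"]),  # crossing that fans out immediately
}

def check_cdc(netlist):
    """Flag flops that receive data from another clock domain but do not look
    like the first stage of a two-flop synchronizer (a deliberately crude rule)."""
    violations = []
    for src, (src_clk, fanout) in netlist.items():
        for dst in fanout:
            dst_clk, dst_fanout = netlist[dst]
            if src_clk == dst_clk:
                continue  # same domain, not a crossing
            # Accept the crossing only if the receiving flop feeds exactly one
            # flop in its own domain, i.e. it plausibly starts a synchronizer.
            synchronized = (len(dst_fanout) == 1 and
                            netlist[dst_fanout[0]][0] == dst_clk)
            if not synchronized:
                violations.append((src, dst, src_clk, dst_clk))
    return violations

for src, dst, c1, c2 in check_cdc(NETLIST):
    print(f"Unsynchronized crossing: {src} ({c1}) -> {dst} ({c2})")
```

The check is a static netlist analysis, which is why it can scale independently of how many simulation cycles are run.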
Avinun: We’ve seen that integration and overall hardware-software verification are becoming the key problems. If you look at the block level, most vendors and customers know how to do it. It’s still challenging, but not as challenging as it used to be. The challenge now is in the integration, and we see several trends evolving here. One is on the emulation side. This may be the same solution we had 10 years ago, but it’s still one that customers are using. We also see major pressure from companies to migrate from block-level verification to SoC-level verification. Some are using acceleration, but they’re also becoming smarter about the way they verify the SoC and do simulation. They partition the problem into multiple domains, and they use more off-the-shelf IP and verification IP. In addition, they are moving to a higher level of abstraction. We don’t yet see this being as successful as emulation, but companies say it will be one of the ways to solve the problem. They won’t be able to continue to do RTL verification. They’re going to hit a wall, which means they will be spending too much money. It’s a major change, but it’s not going to happen overnight.
Schirrmeister: Simulation and hardware emulation are important, but they are essentially different machinery for doing the same job faster and better. There’s also the issue of becoming smarter about the verification itself. Verification is an unbounded problem, so the only solution is on the design side. You need to be smarter about how you design the components. With block-based design, a lot of verification relies on the fact that each block is pre-verified. IP qualification makes sure the IP will work in the new system context, and that has become an important part of the overall verification. On top of that, to make it smarter, you want to avoid re-verifying functionality in hardware. Instead of running all variations of an MPEG decoder on a hardware block, you make sure you have the right instructions. You need to separate the functional verification from the structural verification and move the functional verification into software.


