Experts At The Table: ESL Reality Check

First of three parts: What’s behind system-level tool adoption; Japan and Europe lead; complexity surpasses human capabilities; ESL sub-flows around TLM and HLS; the growing challenges of verification in context.

By Ed Sperling
System-Level Design sat down to discuss electronic-system-level design with Stephen Bailey, director of emerging technologies for the design verification technology group at Mentor Graphics; Michael McNamara, vice president and general manager of Cadence’s System-Level Division; Ghislain Kaiser, CEO of DOCEA Power, and Shawn McCloud, vice president of marketing at Calypto. What follows are excerpts of that conversation.

SLD: Where are system-level design tools being adopted and where will they be adopted in the future?
Kaiser: You need to understand the benefits and the use model for system-level design. So far, system-level design has been used for software validation, for early performance analysis, and for high-level synthesis to improve the hardware design. There are several reasons for it. The benefits are improved productivity with high-level synthesis and software validation, and the ability to manage complexity with regard to multicore design and the link between the software and the hardware. The companies that use it are those that have to develop complex systems, such as those in the wireless, set-top box and networking markets. There is a lot of software in those systems. It’s mostly large companies that have adopted it so far because a big investment is needed. There also is a lack of standards. When you have standards it’s easier to deploy solutions, even with small companies. The Japanese and European markets adopted it first. The U.S. market has been a follower.
Bailey: You have to define your terms. A system refers to scope, not an abstraction level. The abstraction levels would be RTL, gate, transaction-level, algorithmic/high-level synthesis, and some kind of hybrid in between.

SLD: Even the word ‘system’ is a little vague, right?
Bailey: Yes, system is contextual. But in some context, system means the entire thing that you’re going to deliver. System-level design is becoming more complex because systems are more complex. There are more people dealing with it. There are more details, more characteristics, more properties of the system that have to be analyzed and designed in. Abstraction is often linked with it. Because of the complexity, people have to move up in abstraction because they can’t deal with all the details at an RTL implementation level. You’re trying to explore the power and performance characteristics of a system and the overall functionality you think the product will need to be successful. So there are different things happening in the industry because of this. In the past you may have had one or two sharp guys who could carry most of the details in their heads and do the design and implementation. Now you have to spend a lot more time understanding the system—how will it all come together, how will it be integrated, will it meet your requirements from an architectural perspective? You still have an implementation team, too. When I first started, the level of re-use was probably no greater than the macro-cell level. Now people are looking at complete subsystem re-use. When you get to that level of implementation, if there’s something new we also see them trying to move up a level of abstraction. The time it takes to get an SoC-based system out to market is going to be limited by software and, on the hardware side, by how quickly you can implement those new functional blocks. We also have our own survey data. If you look at the bottleneck at the front end, it’s clearly verification. About 56% of the total project time is spent in verification.

SLD: We’ve been hearing 70%.
Bailey: It may actually be as high as 75%, depending on how you look at the data. If you ask that question straight out, it’s 56%. But there is roughly a 50-50 split in resources between design and verification on a project, and then designers say they spend more than half their time doing verification—especially for SoCs. So what’s driving it is complexity and keeping down design costs—including verification—and we’ll continue to see this trend.
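A rough way to reconcile those two figures, assuming the roughly 50-50 staffing split Bailey describes and designers spending about half of their time on verification (the arithmetic below is our illustration, not his):

    0.5 × 1.0 + 0.5 × 0.5 = 0.75

That is, if half the engineers on a project are dedicated verification engineers and the design half spends roughly half its time verifying, the blended share of total engineering effort spent on verification approaches 75%, even though the direct survey answer is 56%.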
McCloud: I agree that one of the primary motivations to move to ESL is verification. One of the benefits of high-level synthesis is reducing the whole verification effort, because the RTL methodology does introduce a lot of errors into the design flow. Where ESL is being adopted, from a geographical perspective, is Japan and Europe first. But there are also ESL sub-flows. One is around TLM platforms and prototyping. The other is around HLS. These ESL sub-flows are very complete solutions. On the TLM side they’re being used to do early system validation and optimization. Equally important, they allow early software development. On the HLS side, it’s taking these abstract models and implementing them in hardware, which includes all the verification pieces. There are hundreds of designs done in both of these sub-flows. What is changing in these two camps is that the modeling efforts for virtual platforms and HLS are coming together.
McNamara: At each design level there is a modeling task, a verification task, an analysis task that gets the information and moves data up to the next level, and then there’s a transformation task that takes the representation down to the next level. At the TLM/ESL level, high-level synthesis is the transformation that takes the model and turns it into something that will run at the next level down, which is RTL. The analysis takes you from design rules and characterization up to the higher level, which doesn’t care about any of the lower-level details. It just needs an indication of how much power is being used, what the area is and what the timing will be, leaving all the other details to the lower-level implementation. System-level is where you always have some level above you that needs to be taken into account and optimized and characterized. Everybody’s component is a system to them, and they all plug into something else. A cell phone is a component to the base station and the radios and towers. And all of that is a component to how I contact someone and say we can’t meet somewhere. Our industry has been all about RTL design. This is the next thing. That’s why we try to call it TLM verification. That’s the context to set it in. It’s a big challenge, with lots of opportunities. Our engineers can already type Verilog faster than we can verify it.
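As a concrete illustration of that transformation, here is a minimal sketch of the kind of untimed, algorithmic C++ a high-level synthesis tool might take as input. The FIR filter, its function name and its data types are hypothetical and not tied to any particular HLS product; the tool’s job would be to map the loops and array accesses onto registers, datapath logic and a schedule at the RT level, while the same source remains natively executable.

    #include <array>
    #include <cstddef>
    #include <cstdint>
    #include <cstdio>

    // Hypothetical untimed model of a 4-tap FIR filter. An HLS tool would
    // schedule the loops, allocate multipliers and adders, and emit RTL;
    // the same C++ also compiles and runs natively, so it can double as a
    // golden reference model when verifying the generated hardware.
    constexpr std::size_t kTaps = 4;

    int32_t fir_filter(int16_t sample,
                       const std::array<int16_t, kTaps>& coeffs,
                       std::array<int16_t, kTaps>& delay_line) {
        // Shift the delay line and insert the newest sample.
        for (std::size_t i = kTaps - 1; i > 0; --i) {
            delay_line[i] = delay_line[i - 1];
        }
        delay_line[0] = sample;

        // Multiply-accumulate across the taps.
        int32_t acc = 0;
        for (std::size_t i = 0; i < kTaps; ++i) {
            acc += static_cast<int32_t>(delay_line[i]) * coeffs[i];
        }
        return acc;
    }

    int main() {
        std::array<int16_t, kTaps> coeffs = {1, 2, 2, 1};
        std::array<int16_t, kTaps> delay_line = {0, 0, 0, 0};
        for (int sample : {10, 20, 30, 40}) {
            std::printf("out = %d\n",
                        static_cast<int>(fir_filter(static_cast<int16_t>(sample),
                                                    coeffs, delay_line)));
        }
        return 0;
    }

Because a model like this simulates orders of magnitude faster than RTL, it is also what a verification environment would typically compare the synthesized hardware against, which is where part of the verification savings McCloud mentions comes from.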

SLD: As mainstream moves down to 40nm and 28nm, does the need for system-level design grow for everyone?
McCloud: Absolutely, and for a number of reasons. One is that, as you move down to the lower geometries, power becomes a very big concern. The historical way of reducing power as you move to lower geometries has been to scale the supply voltage. That becomes extremely difficult after 45nm, so you must look at other techniques. If you don’t, the power density on your chip will grow non-linearly. Companies will face not only battery-life problems, but also faults due to thermal and supply-integrity issues. The further you get away from the implementation, the greater your ability to influence power. ESL can help because software can be a big consumer of power. Being able to model that up front and decide whether to implement something in software or hardware is an important tradeoff, because doing something in hardware can consume 20x to 100x less power. There’s clearly analysis to be done there. There’s also automated clock-gating insertion and techniques like that to help reduce hardware subsystem power.
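For illustration only, here is a back-of-the-envelope sketch of that hardware-versus-software energy tradeoff. The 20x to 100x efficiency range comes from McCloud’s comment above; the per-operation energy and the workload size below are invented placeholders, not measured numbers.

    #include <cstdio>

    // Illustrative-only comparison of the energy cost of running a function
    // in software on a processor versus in a dedicated hardware block.
    // The 20x-100x range is quoted from the discussion above; every
    // absolute figure here is a made-up placeholder.
    int main() {
        const double sw_nj_per_op  = 1.0;    // assumed software energy per operation (nJ)
        const double ops_per_frame = 1.0e6;  // assumed operations per processed frame
        const double hw_gain_low   = 20.0;   // low end of the hardware advantage
        const double hw_gain_high  = 100.0;  // high end of the hardware advantage

        const double sw_nj = sw_nj_per_op * ops_per_frame;
        std::printf("software : %.0f nJ per frame\n", sw_nj);
        std::printf("hardware : %.0f to %.0f nJ per frame\n",
                    sw_nj / hw_gain_high, sw_nj / hw_gain_low);
        return 0;
    }

Under those assumed numbers the hardware block would spend 10,000 to 50,000 nJ per frame where the software implementation spends 1,000,000 nJ, which is the kind of gap that makes early hardware/software partitioning analysis worthwhile.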
McNamara: And we’re not all marching from 250nm to 130nm to 65nm. For the last 20 years the EDA industry has been about the most advanced node, with everyone needing new tools for each node. That’s not happening anymore—130nm and 65nm will be useful technologies in 3D stacking, and you’re going to have a subset at 28nm and 22nm. The only things you can afford to move to the smaller geometries are things you can sell in quantities of hundreds of millions. You don’t need hundreds of millions of each type of sensor, but you probably do want the processing core to be standardized. That’s a dynamic the pricing model in EDA hadn’t contemplated. The old tools are still useful.
McCloud: And 3D does add a whole other level of opportunity. The first rounds of 3D will have ASIC and memory stacking, but one thing that has always been so attractive about FPGAs is the speed with which you can get products to market. There’s a drawback in terms of density and power consumption, of course. But eventually we’ll see these 3D devices start to layer even FPGAs. You may have ASICs, memories and FPGAs in a stack, so you get the best of both worlds.
Bailey: It has to go that way because of manufacturing costs. The benefits of going to smaller geometries are driven by economics. The counterbalance is that you need such a large market to justify it. You don’t need such a large market if most of it is hardened, but you also can program in some differentiating logic as well as the software that goes on it. That way you have economies of scale. With 3D, the complexity of system-level design continues to grow. What we’re focused on in EDA is, first, the hardware that the software is going to run on. Even just putting memory on top will create new challenges because now you’ll have a completely new network-on-chip architecture. People are going to be changing the architectures of processor subsystems because all the constraints that drove the caching strategies are relaxed a bit.


