First of Three Parts: Moving up several layers of abstraction to build a better semiconductor design.
By Ed Sperling
System-Level Design sat down with Simon Bloch, vice president and general manager of ESL/HDL Design and Synthesis at Mentor Graphics; Mike Gianfagna, vice president of marketing at Atrenta; and Jim Hogan, a private investor. What follows are excerpts of a lively, often contentious two-hour conversation.
SLD: Let’s start out with a point of reference before we get going on virtual platforms. What does ESL mean to you?
Jim Hogan: ESL often gets defined as behavioral synthesis, and that’s part of the problem. The market at the system level is a lot more than that.
Mike Gianfagna: The way we look at it, ESL is anything above physical implementation. If you have synthesis, place and route, and well-defined RTL, that isn’t ESL. Anything above that, where the abstractions are vaguer and your options for hardware/software partitioning and for what kind of IP to use in the system get broader, would all collectively be ESL.
Simon Bloch: It’s TLM (transaction-level modeling) platform-based design. It provides four different value functions. One is architectural design. The second is software-hardware co-design. The third is system-level verification, and the fourth is high-level synthesis.
SLD: ESL has been overhyped for years and now it seems to be almost mainstream. Has it gotten a bad rap?
Hogan: There is no brand equity in ESL. People equated it with RTL-level design. People are using platforms because people developing software applications really don’t care about the hardware as long as it gives them price, performance and functionality. The user experience, by and large, is in software, whether it’s a phone or a game console. When you buy a car it’s not what’s under the hood anymore. It’s how things work. The same engine that’s in a Cadillac is in a Chevy. We have to virtualize as much of that as we can for the software.
Bloch: I think we need to differentiate between hardware-dependent software and hardware-independent software. Hardware-independent software can run on anything. With hardware-dependent software, there’s a big cost element involved in making it run on certain chips. From the last numbers I’ve seen, at 32nm it’s estimated at about $20 million of the $60 million in total development cost, which goes hand in hand with architectural design and chip implementation.
Hogan: Those are the interesting SoCs to talk about.
Gianfagna: I don’t think you can find an SoC these days that isn’t dependent on software.
SLD: How much of the software stack now has to be included, given that the platform is now both hardware and software?
Hogan: On any given SoC, 70% to 80% of the IP blocks are sourced commercially. But they’re not verified the same way. Taking a heterogeneous mix of stuff and being able to verify and validate that is a big deal.
Gianfagna: It’s verifying, validating and stitching it together. But you can trace this problem back a long way. This Holy Grail of mixing and matching blocks isn’t new. Back when digital TV was introduced, Philips had a building-block concept where you could stitch together all these pieces. It didn’t work because even within the same company it was horrendous to take a chip and turn it into an IP block. There were different interfaces, voltage levels and protocols. We’ve gotten smarter about that. There has been incremental help with standards like IP-XACT. But you’d better get it to work together.
Bloch: You do need to understand the hardware to develop software, though. You have options for how you access cache or memory that affect the user experience. Today it’s closer to the hardware discipline than the software discipline.
Hogan: Software is dependent on the hardware and the hardware is dependent on the software. If you’re concerned about power, you can have fewer registers and you can schedule them differently. That saves a lot more power than clock gating and other techniques from the physical domain. You don’t have to have as much memory. You can stay on-chip with cache and you can be a lot more creative in terms of how you architect your chip. The problem is that typically that’s not available to the software guy to virtualize. You’d like to work interactively, but software engineers eventually will get to the point where there is a software signoff.
Bloch: I agree. You see some of this with FPGA prototyping. Most of that is facilitated by the need to run software fast. If it works, that’s great. The problem is that if it doesn’t, you can’t debug, so it has limited value. That’s why we believe the virtual platform is the way to go—for verification, design, analysis and optimization.
SLD: You’re moving everything up several levels of abstraction. Is the signoff now done by hardware or software engineers?
Hogan: I can get the hardware done a lot faster than the software because the software keeps changing and changing. It’s a lot harder to change hardware than software. If I could get my software group to test against a virtual platform, that would save lots of time to market. It’s like NASA. You can’t wait until you build the space shuttle and then load the software. You have to do it together. Chips are like that now, too. It’s just too complicated.
Bloch: For hardware-dependent software, the engineering management in these companies is coming under the same roof.
Hogan: Especially for the drivers and OSes.
Bloch: Yes. That creates a single point of contact, so the people building platforms are not separate from the ones building libraries and creating gate-level descriptions. They can debug and analyze what they’re doing. Having all of that under one roof helps.
SLD: Platforms suggest the idea of reusability. Can parts be re-used here?
Bloch: In this platform you have two elements. One is the TLM functions and the other is the connections between the functions. If there are parts that can be re-used, then you re-use them. If you have a TLM function for a DMA (direct memory access) controller, you can re-use that. How you put them together varies from one chip design to another.
Gianfagna: That’s an instantiation of a platform.
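[Editor’s note: To make that distinction concrete, below is a minimal SystemC/TLM-2.0 sketch, an editorial illustration rather than code from any of the panelists’ tools. The module names (Dma, Memory) and register map are hypothetical. The TLM functions themselves are the reusable pieces; the socket binding in sc_main is the design-specific connection that changes from one platform instantiation to the next.]

```cpp
// Minimal SystemC/TLM-2.0 sketch of a reusable "TLM function" (a DMA-style
// block and a memory model) and the design-specific connection between them.
// Hypothetical module names and addresses, for illustration only.
#include <cstring>
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_initiator_socket.h>
#include <tlm_utils/simple_target_socket.h>

using namespace sc_core;

// Reusable TLM memory function: services reads and writes over a target socket.
struct Memory : sc_module {
  tlm_utils::simple_target_socket<Memory> socket;
  unsigned char mem[4096];

  SC_CTOR(Memory) : socket("socket") {
    std::memset(mem, 0, sizeof(mem));
    socket.register_b_transport(this, &Memory::b_transport);
  }

  void b_transport(tlm::tlm_generic_payload& trans, sc_time& delay) {
    unsigned char* ptr = trans.get_data_ptr();
    sc_dt::uint64 addr = trans.get_address();
    unsigned len = trans.get_data_length();
    if (trans.is_read())
      std::memcpy(ptr, &mem[addr], len);   // loosely timed read
    else
      std::memcpy(&mem[addr], ptr, len);   // loosely timed write
    trans.set_response_status(tlm::TLM_OK_RESPONSE);
  }
};

// Reusable TLM DMA function: copies a block through its initiator socket.
// The block never changes across designs; only its wiring does.
struct Dma : sc_module {
  tlm_utils::simple_initiator_socket<Dma> socket;

  SC_CTOR(Dma) : socket("socket") { SC_THREAD(transfer); }

  void transfer() {
    unsigned char buf[16];
    sc_time delay = SC_ZERO_TIME;
    tlm::tlm_generic_payload trans;
    trans.set_data_ptr(buf);
    trans.set_data_length(sizeof(buf));
    trans.set_streaming_width(sizeof(buf));

    // Read 16 bytes from address 0x000 of whatever target is bound.
    trans.set_command(tlm::TLM_READ_COMMAND);
    trans.set_address(0x000);
    socket->b_transport(trans, delay);

    // Write them back to address 0x100.
    trans.set_command(tlm::TLM_WRITE_COMMAND);
    trans.set_address(0x100);
    socket->b_transport(trans, delay);
    sc_assert(trans.get_response_status() == tlm::TLM_OK_RESPONSE);
  }
};

int sc_main(int, char*[]) {
  Dma dma("dma");
  Memory mem("mem");
  dma.socket.bind(mem.socket);   // the design-specific connection
  sc_start();
  return 0;
}
```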
Hogan: Think about a physical network. You have routers and appliances, but you configure that to a certain service level. You may use different techniques and different service-level agreements for your enterprise needs. UBS needs real-time access to a transaction database, not reporting. So that will be a different configuration. In a different enterprise, you may need reports rather than real-time access.
SLD: You’re talking about prioritization, right?
Hogan: Yes. We arrange the components to give us more optimization for our system application.
Gianfagna: Everyone knows the cost of a 32nm or 45nm chip. It’s hard to find a single market that justifies all of that. But if you can put all of that into a reference design, then you have 60% of your ROI in that platform. From there you think about what derivative products and related markets might need better power profiles, DSP capability or more display-processing capability. If I can get derivatives done based on that base platform in dramatically less time (in one case it went from four engineers and nine months to one engineer and one month), that can be the difference between the success and failure of the project. Now I can take all the work I’ve done on the first platform, change the interfaces, change some of the IP and crank out a new derivative more quickly. I can build that chip and its family and get a positive return, but if I’m going to hit it out of the park with one chip, there aren’t that many markets for it anymore.