Second of three parts: TLM modeling, complexity and where the new opportunities will be.
By Ed Sperling
System-Level Design sat down with Simon Bloch, vice president and general manager of ESL/HDL Design and Synthesis at Mentor Graphics; Mike Gianfagna, vice president of marketing at Atrenta; and Jim Hogan, a private investor. What follows are excerpts of a lively, often contentious two-hour conversation.
SLD: What’s the starting point for designs in this world?
Hogan: In the virtual world, where you start is models. Those models have to accurately represent your concept at the level of abstraction you’re working at. The problem with models is they have to be accurate enough but not too heavy. I used to get SPICE models with 98 terms. I could only use 5, so I would fill in coefficients on 10 of the terms and zero everything else out and see what happens. We need more standardized concepts to make this work. Then we have to figure out what is the right level of abstraction, and then we can go off and build simulators and optimizers and analysis systems to take advantage of that.
Bloch: The SystemC TLM 2.0 standard enables us to go from light to heavy on an as-needed basis. You start with less accurate but fast, which is good enough for virtual prototyping. You don’t need a lot of information for prototyping. But at the next step, if you want to do exploration for performance or power, you need more accuracy. You’re okay to sacrifice some speed, and you can load in more data. The key is a scalable model, and it exists in TLM 2.0. Function, performance and power are separate. You start with function, then incrementally add timing.
Hogan: We used to live in a 2D world—performance and area. Now it’s a 3D model. And there’s a shape within that model that’s your design space. You have to optimize that 3D model for your design space. You can’t do this on an Excel spreadsheet. That’s only two dimensions. You can’t do this without a simulator.
SLD: How about software dependencies?
Bloch: Software dependencies are taken care of in the TLM model, which is composed of TLM functions and interconnections—which are called transactions these days. If you add software, it affects the transactors and propagates through the functions, and you can measure things like power, performance and how the architecture affects them. Software is part of the TLM platform.
SLD: As we head into even more complexity with more power islands all the way down to 22nm, you need to layer your model, right?
Bloch: Yes, and that’s the beauty of dealing with different levels of abstraction. When you talk about power islands, this is reflected at the RTL level and down. At the virtual platform level you care about how long it takes you to go from wake to sleep. That’s what you care about at that level. But if you made the wrong decisions for the user experience, it’s very hard to recover at the power island level.
Hogan: What will you tolerate in terms of sleep? Maybe half a second?
SLD: It depends on whether you can receive your phone call or not, right?
Hogan: Yes, and there are hardware requirements for that. The memory for that has to be powered up all the time. This is all hierarchical. We used to talk about timing models, and that’s where EDA stayed. When we got to RTL we talked about cycle-based models. Transactors are the next thing. Then there are threshold models from the applications. The abstraction has gone up to the level of the software.
SLD: Basically like a software stack, right?
Bloch: Yes, that’s a good analogy.
Hogan: The challenge is traversing the hierarchy from the abstraction down, and then back up. Power is an artifact of how your function wants to be.
Gianfagna: We have a customer now with 20 power domains. Of those 20, there are hundreds of ways you can slice them. Which one is right? There is a lot of ‘what if’ going on. Maybe you do a trial run through high-level synthesis down to RTL, and then go back and try it again. The good news is I can do that in days. If I go down to the gate level, it’s weeks. You’ve got to do these ‘what ifs’ at the higher level. New tools—some exist, some are still to be invented and released—are the only way to get there.
Hogan: Since the late 1970s, digital design has been sequential. That implies there’s a clock. That consumes a lot of power. Asynchronous doesn’t. I’ve seen chips with 90 power domains. They’re basically doing an asynchronous design, but they haven’t figured that out. You have to decide how you want your chip built at the architectural level. It’s been tough for investors to go there. Mentor has done that and so has Atrenta, but there aren’t many others. We have to invest in that.
Gianfagna: How much of EDA’s lifetime has been directly aligned with our customers delivering and accessing an end market? Very little. It goes into a product and when we ask a question about where our stuff is getting used, the answer is, ‘It doesn’t matter. That’s not your business.’ With this platform movement, the semiconductor companies are trying to figure out how to collaborate, how to build derivatives in a cost-effective manner, how to build hardware and software together and how to build a hierarchical model. It’s not cost-effective for them to do this on their own. If the EDA industry gets involved at that level, then they become a partner. We can work with TI, ST or Qualcomm to deliver a platform to their end customer that they can customize for their application. We’re starting to get involved in the business of producing semiconductors.
SLD: It sounds like you’re vertically integrating the entire process.
Gianfagna: Yes, which I think is new for EDA.
Bloch: Is that called Spectrum Design Solutions? (Spectrum was bought by Digi International last year.)
Hogan: Or (Cadence’s) Tality?
Gianfagna: No, it’s not a service at all. It’s not a design flow. It’s enabling technology to allow the semiconductor companies to deliver a complete applications environment. That requires EDA’s help.
Bloch: It goes in stages. There are standards first. There is methodology second. With methodology, then you know what to build. In this platform-based design space, we’re early. We tried things in the past for virtual platforms at the RTL level and they were too slow. Now we’re going up a level of abstraction. TLM is a new standard and we’re figuring out the methodology. It’s a good model to work with customers, but it may not be an ongoing business model. We need to understand the methodology to build the tools.
Gianfagna: I would argue that with some of the larger accounts and early adopters, that’s happening today. Is it hundreds of companies? No. Is it a dozen? Yes.
Hogan: If you’re a general contractor you have to build a scalable practice. Wherever you can on a methodology, you automate it. Automation takes people out and builds in scalability. But for a long time it’s going to have a significant people component. It’s not like analog. Analog isn’t chip design. It’s always block design. And something most people don’t think about, analog doesn’t differentiate a product. You can have good analog and bad analog, but functionally it doesn’t change things. Analog has been very resistant to automation. It takes a while. The first SPICE engine dates to the early 1970s. Everyone agrees on the need. System-level customers are giving us their permission. And those leading-edge customers that Mike talked about are system-level customers. They’ve moved up a notch. Our opportunity is not clear to everybody. We’re all comfortable doing place and route. It’s our comfort zone. We have to evolve.
SLD: It’s evolving, but isn’t it also a compromise on all sides based on an explosion in complexity?
Bloch: I think complexity is driving outsourcing and separation of functions. Semiconductor companies used to do it all. Automobile companies used to do it all. I am a big believer that system-based companies are going to use platform-based design as an executable spec.
Gianfagna: Yes, and that’s the opportunity for this stagnant EDA market. It’s a whole new customer base. Traditional EDA needs this methodology. There also are a lot of new users well above our traditional semiconductor guys.
Hogan: What we know as semiconductor implementation isn’t going away. The customers are going to squeeze as much money out of that as possible so there will be less money to share among the suppliers of that. There will be consolidation, and usually in a market only the top three survive. Semiconductor companies have short product cycles, they can’t amortize these tools across the life of the product anymore and they push the risk onto the subcontractors.