Models and hardware-software co-design will take center stage as the industry shifts to more re-use to satisfy faster time-to-market demands.
By Ann Steffora Mutschler
Looking at the biggest challenges for system-level design in 2012, model availability, IP integration and hardware/software co-design top everyone’s list.
Integrating IP, and enabling that integration from a system-level perspective, is a significant challenge for the industry. Moreover, it will become even more significant as the market moves further down the Moore’s Law road map, and as 2.5D and 3D stacking become more mainstream.
“We need to get closer to making the development and generation of the models part of the standard design flow, as opposed to knowing you are building a system and then saying, ‘Now I have to build a system-level model to enable my software.’ The model should fall out of the design flow naturally,” said Frank Schirrmeister, group director of product marketing for the system development suite at Cadence. He said that as part of model development efforts, addressing application-specificity will be key, because different application areas, such as mil/aero, wireless communications, networking and industrial automation, will require specific models.
Jon McDonald, technical marketing engineer for the design and creation business unit at Mentor Graphics, agreed. “Having the availability of the models, and having people deliver models to the standards, is going to be a continuing process of improving the number of models and improving their availability.”
Connected to that, he said the industry must make sure that standards are not corrupted. “One of the things that we’re starting to see is people changing the standards or extending the standards to provide some added benefit. But extending the standard breaks the standard, and is essentially an attempt to lock people in. That’s an issue that we really need to be concerned with, because if we segment standards and everybody ends up having their own X, Y, Z, it’s going to destroy all the benefits that we are trying to achieve with a common language.”
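McDonald did not name a specific standard, but in system-level modeling the common language he describes is typically SystemC with TLM-2.0 (IEEE 1666). Assuming that context, the sketch below shows what an interoperable model looks like: it exposes only the standard TLM-2.0 target socket and the standard generic payload, with no vendor-specific extensions, so any compliant initiator or simulator can bind to it. The module and its single counter register are hypothetical, for illustration only.

```cpp
// Minimal sketch of an interoperable transaction-level model, assuming the
// "standards" in question are SystemC/TLM-2.0 (IEEE 1666). The module name
// and its register map are hypothetical.
#include <cstdint>
#include <cstring>
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_target_socket.h>

struct SimpleTimer : sc_core::sc_module {
    // Standard TLM-2.0 target socket: any compliant initiator can bind to it,
    // which is the "common language" benefit McDonald describes.
    tlm_utils::simple_target_socket<SimpleTimer> socket;

    SC_CTOR(SimpleTimer) : socket("socket"), count_(0) {
        // Register the blocking-transport callback defined by the standard,
        // rather than a proprietary extension of it.
        socket.register_b_transport(this, &SimpleTimer::b_transport);
    }

    void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
        // One 32-bit counter register at offset 0 (illustrative register map).
        if (trans.get_address() == 0 && trans.get_data_length() == 4) {
            if (trans.is_read())
                std::memcpy(trans.get_data_ptr(), &count_, 4);
            else
                std::memcpy(&count_, trans.get_data_ptr(), 4);
            delay += sc_core::sc_time(10, sc_core::SC_NS);  // modeled access time
            trans.set_response_status(tlm::TLM_OK_RESPONSE);
        } else {
            trans.set_response_status(tlm::TLM_ADDRESS_ERROR_RESPONSE);
        }
    }

  private:
    std::uint32_t count_;
};
```

The extensions McDonald warns about typically show up as vendor-specific payload attributes or sockets that only one simulator understands; a model that stays inside the standard interface, as above, remains portable across flows.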
The importance of co-design
It’s hard to decipher the conflicting reports about growth, complexity and change, but there is talk about 2012 being an inflection point of sorts.
“There are two key challenges for system designers and for the system-design process,” said Mike Gianfagna, vice president of marketing at Atrenta. “One is this whole hardware/software co-design thing. We’ve talked about this for years, but I think the time is now.”
He pointed to companies like Apple that are leading the way in this area. “One of the things that strikes us about Apple’s strategy is they’ll lead with the software experience and they’ll lead with the application code and say, ‘How do we build a hardware architecture that makes software scream and makes it work efficiently?’ That reverse direction is a tremendous opportunity, though I won’t say it’s very widespread. Most design methodologies still go the other way, where you build a hardware architecture, you document the software interface, you throw it over the wall and the software guys figure it out from there.”
More companies are beginning to talk about software being at least as important as hardware. Whether they use software as a starting point for design is another matter.
“System design efforts that start to think about it the other way will wind up having a competitive edge in the market, in the sense that if you can start with the application layer and you can use that as the kind of living, breathing specification for the hardware and say, ‘OK, what kind of hardware is really responsive to this software application? How many processors do I need? What kind of throughput do I need? What kind of heterogeneous architecture really works for this kind of application? How do I deal with power domains and figure out which power domains are needed? How do I plan and ensure that the software can really have complete mastery over the hardware’s power consumption and really control the on/off sequences to get maximum battery life?’ That’s a winning strategy,” Gianfagna said.
The only problem is that there is no well-defined methodology for doing things that way today. While a lot of companies are working on it, the tight interaction and tight iteration between the hardware architecture and the software development process are still in the developmental stage. Underlying all of this is re-use, because none of it is really possible if the hardware design process starts from scratch and everything is built from the ground up.
On the other hand, if hardware design is really a function of mixing and matching IP and subsystem building blocks, those building blocks can be remixed to create a unique solution. Those subsystems, moreover, can be anything from millions of gates to a full chip, which is the current thinking in stacked die—most likely optimized for a specific market or function.
One way to think about this is kind of like a gargantuan version of the way people used to design PC boards, Gianfagna said. “You’d get out your LSTTL book, you’d kind of thumb through it, and you’d go pick out the functions that worked for your idea, you’d put them down on the board and you’d run a simulation. If it worked, you’d go build a board. The same thing is happening now, except on a massively more complex level. If I am building a new cell phone and I want fourth-generation LTE video download and call proxying at the same time, and to be able to browse the Web while I’m on the phone, and I want to have an ultrahigh-resolution video display, I basically go to the catalog and see what subsystems do those things and how well they talk to each other. That’s a different way of system design. It’s more like a cookbook, even though it’s amazingly difficult. It’s doing system design based upon pre-existing subsystem blocks.”
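There is no off-the-shelf flow that takes the application as the spec and emits an architecture, but the shape of the approach Gianfagna describes can be sketched. The fragment below is a toy illustration only, not any vendor’s methodology: the workload numbers, catalog entries and scoring rule are all invented. It treats a profile of the software as the specification and ranks pre-existing subsystem configurations from a catalog against it, asking the same questions as above: how much compute, how much throughput, and how finely software can gate power.

```cpp
// Toy illustration (not an existing tool flow): sizing a hypothetical SoC
// configuration from a software workload profile, in the spirit of letting
// the application layer act as the hardware spec. All structs, numbers and
// the scoring rule are invented for illustration.
#include <algorithm>
#include <cstdio>
#include <vector>

struct WorkloadProfile {          // extracted by profiling the application
    double peak_mips;             // compute demand
    double sustained_mbps;        // memory/interconnect throughput demand
    double idle_fraction;         // share of time the app can power down blocks
};

struct Candidate {                // one entry from an IP/subsystem catalog
    const char* name;
    int         cores;
    double      mips_per_core;
    double      bus_mbps;
    int         power_domains;    // how finely software can gate power
};

static bool meets(const Candidate& c, const WorkloadProfile& w) {
    return c.cores * c.mips_per_core >= w.peak_mips &&
           c.bus_mbps >= w.sustained_mbps;
}

// Crude merit function: prefer an architecture that meets the workload with
// little overprovisioning, rewarding finer power gating when the app idles.
static double score(const Candidate& c, const WorkloadProfile& w) {
    if (!meets(c, w)) return -1.0;
    double overprovision = (c.cores * c.mips_per_core) / w.peak_mips;
    double gating_bonus  = w.idle_fraction * c.power_domains;
    return gating_bonus - overprovision;
}

int main() {
    WorkloadProfile app{12000.0, 800.0, 0.6};   // hypothetical profiling data
    std::vector<Candidate> catalog = {
        {"2xBigCore",      2, 8000.0, 1200.0, 2},
        {"4xLittleCore",   4, 3500.0, 1000.0, 4},
        {"2xBig+2xLittle", 4, 5000.0, 1600.0, 6},
    };
    auto best = std::max_element(catalog.begin(), catalog.end(),
        [&](const Candidate& a, const Candidate& b) {
            return score(a, app) < score(b, app);
        });
    std::printf("pick: %s (score %.2f)\n", best->name, score(*best, app));
}
```

A real flow would replace the crude merit function with simulation of the actual software stack against each candidate, which is exactly where the transaction-level models discussed earlier come back in.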
Reality check
Undoubtedly, the industry agrees on the big-picture issues, but is the technology in place, and is the willingness there on the part of users to adopt it? Historically, the answer has been no. Things like logic synthesis took several years to really catch on. It takes time to get comfortable with new methodologies and ways of doing things.
“As we look back, maybe in 2015 or 2020, what I’m hoping we say is that 2012 was the year that paradigm shift started to happen in earnest and we started to see larger numbers of high-profile chips,” said Gianfagna. “Whether they go in cell phones or set-top boxes or compute servers, we saw the shift measurably happening where more of the informed folks out there started to really jump on the bandwagon and design a different way—and were rewarded by better market share and faster time-to-market. If 10% of the chips today are built this way, and a year from now 25% of the chips are built that way, I think that’s pretty good progress. Success breeds success, and like other paradigm shifts it’s kind of exponential. It starts slowly and then speeds up.”