Experts At The Table: Does 20nm Break System-Level Design?

Second of three parts: What system-level tools are still needed; software as a breaking point; capturing the use-case scenarios.


By Ann Steffora Mutschler
System-Level Design sat down to discuss design at 20nm with Drew Wingard, chief technology officer at Sonics; Kelvin Low, deputy director of product marketing at GlobalFoundries; Frank Schirrmeister, group director of product marketing for system development in the system and software realization group at Cadence; and Mike Gianfagna, vice president of marketing at Atrenta. What follows are excerpts of that discussion.

SLD: What do we have today as far as system-level tools? What do we still need?
Gianfagna: It’s unclear if this is really true, but I’ve seen some data suggesting that 20 nanometer might be the last node where you can actually build a transistor strong enough to drive off-chip. Below 20 nanometer you can’t get there from here. You can’t build a device strong enough to drive off-chip, which means you’ve got to go from your sub-20 nanometer technology to something more robust to actually drive a signal off-chip that will go somewhere. That demands 3D, which is an interesting discontinuity. Up to that point, 3D was interesting but not required. Below 20, it might be required…At 14 nanometer, can you build a transistor that can reliably drive a signal through a bond pad to the outside world?
Schirrmeister: That’s true, but it doesn’t break the system level. With 3D, the question becomes: can you actually design 3D without having done the right system-level things? The right system-level things in this case would mean going back to annotating power data and performance data into the transaction-level model, to be able to analyze the signal that I used to drive through a pad, through a pin and onto the board. What happens to my power and my performance if it’s now going through the vias? The technology behind that at the abstract level isn’t any different, meaning I’m plugging data into the transaction-level model and annotating it. But you could turn this around and say that at this level it’s just one more good reason why it’s really, really dangerous not to use system-level design.
Wingard: As we get to higher levels of integration, we have no choice; we must be able to divide and conquer. The abstractions are really fundamentally no different than thinking about the design when it was at the printed circuit board level. We have to have these components; we have to know these components are good. We have to trust that they’re good and only look at the interactions at their edges. This is a very comfortable domain for me because that’s what I’ve been doing for the last 15 years. The fundamental idea behind Sonics is essentially the original vision of VSIA from all those years ago, which was plug-and-play IP. How are we going to encapsulate these components in such a way that we can just worry about their behaviors? As a member of the IP community, I have to say that many IP companies haven’t done that very well. It’s still the case that their customers have to know far too much about how the internals of the IP work. They haven’t been well abstracted. With the subsystem trend, there is new hope. Most people’s definition of the subsystem is that it’s much better abstracted: it includes a number of IP blocks, it often includes a processor, and it almost always includes the software to really abstract it. I think we have a much better agreed-upon model for how we can actually think about these things at the system level.

SLD: What are the other breaking points?
Schirrmeister: Software is the other potential breaking point. From a cost perspective at those nodes, you have a terribly complex, hard-to-verify interconnect, and you have lots of processors on it. So the breaking point becomes: how do you actually put the functionality on whatever programmable device you have developed, given that cost structure? It’s so expensive that the functionality hops into the software even more than it already has.
Wingard: I have thought about SoCs in a slightly different way. From the first slide set we ever produced, the question we asked was whether the SoC, this new kind of device, was going to be system designers becoming chip designers or ASIC designers becoming system designers. This is the tension. Nine times out of ten, it’s been the ASIC guys who tried to become system designers. Fifteen years ago, when we started the company and tried to describe to them, ‘Hey, we’ve got this better mousetrap for doing this thing,’ [the response was,] ‘How do we do this?’ What became clear is that ASIC people did not have a very good view of a performance-driven methodology for making the decisions they were making. They were used to a design flow in which they could put the functions together, the functions were OK, and that was enough. And so to many of these accounts we introduced technology that we used to call ‘dataflow modeling’ and that today we call transaction-level modeling. ‘We want to do this, and we don’t care about what the data is. Who cares how much data there is? We don’t care about the specific addresses. We care about the relationship between the addresses, because that drives how the system performs. We care about the time domain a lot, because that helps us imagine the performance. We care enormously about capturing the different use-case scenarios the end device has to support, and now with this we’re overlaying requirements for power. We care enormously about which power states each of these subsystems is going to be in in each of these scenarios. What’s becoming really important is how quickly we can switch between these power states.’

SLD: So much of this is dependent on the end application. How do we capture that?
Wingard: That’s what I think these operating scenarios become. In all of the transaction-level modeling work that others have talked about, you have a system scenario that you’re considering. You try to estimate abstractly what the various tasks are that are going to be running on the different parts, and there’s a lot of work that’s been done in this task-based modeling. That view is unfortunately a little bit outside the grasp of many of the SoC design teams we work with, but the next level of refinement down from that task-based view is a transaction-level view that most chip architects can readily put together.
Gianfagna: It’s going to be unavoidable to talk about software in this discussion. That’s a big part of how you figure out the use cases and the scenarios, which is an interesting extension of the design methodology we’re talking about. What in the world happens to the hardware/software interface? It really needs to change; I believe that strongly. The software team should be at the top of the pyramid. That’s not true today, and I think it needs to change. What if the software guys are at the top of the stack and they’re driving the hardware architecture to be responsive to the software user experience and the resultant power consumption? That’s a different world, but I think it’s an interesting one.
Schirrmeister: I’m a strong proponent of what the ITRS is suggesting with those base platforms for the application domains, and I see this in the customers. If I go to my top five graphics customers for Palladium emulation and look at my top five wireless customers, their designs don’t actually look all that different. It’s a base topology that looks similar, and it goes back to Gary Smith’s multi-platform-based design. That’s what the ITRS has done already with the networking platform and the mobile platform [both with and without power]. I’m a very strong believer that this will be the case. And then you have these base platforms. The interconnect becomes the thing you really need to verify. With all those processors in there, you don’t necessarily care about the functionality. The functionality is defined later in software. You just need to have the right ranges of performance and power for those chips so they can actually do all those things.