Experts At The Table: Does 20nm Break System-Level Design?

First of three parts: Is 20nm the end—or just the beginning of the ESL era; managing cost in the process; raising the level of abstraction and building confidence in abstractions.


By Ann Steffora Mutschler
System-Level Design sat down to discuss design at 20nm with Drew Wingard, chief technology officer at Sonics; Kelvin Low, deputy director of product marketing at GlobalFoundries, Frank Schirrmeister, group director of product marketing for system development in the system and software realization group at Cadence; and Mike Gianfagna, vice president of marketing at Atrenta participated in the discussion. What follows are excerpts of that discussion.

SLD: Is 20nm the breaking point from a system-level perspective?
Wingard: The good news is we’ve seen a bit about 20nm already. We’re fortunate to have one customer, a large semiconductor company, that is already working at 20nm, and from the perspective of a soft IP supplier, they have managed to deal with enough of the physical stuff that we’re isolated from that. We do get to worry about the system things instead. The level of integration possible at 20nm is mind-boggling. The concerns are substantial. The requirements for minimizing current, for both power and thermal reasons, are absolutely fundamental. We’ve been talking about power as a first-class design concern for a long time, but at 20nm it’s everywhere and everything. There are incredible opportunities to use all this function and enable all these transistors, and really big challenges. I think the system partitioning questions are maybe the most interesting ones, because thinking about a design with that many gates available to you as a collection of 25,000-gate blocks is just impossible for us as humans. So the questions are: what kinds of hierarchies do we create, how do we describe those hierarchies, how do we model them, and how do they interact with each other? How do we prove to ourselves that they are going to work at a performance level and at a functional level? And then how do we pass them through to the back end? These are all really big challenges.
Low: From a foundry perspective we struggle to understand the system-level behavior and constraints. What we try to do is enable the technology, and technology from our perspective means certain barriers that we have to overcome. I’m sure you have heard of the cost concerns. We believe we have found an optimized definition of 20nm. If we drill down into the details of why 20nm is so expensive, it really comes from the double patterning, which means more masks on the back end. Do you really need that many masks? It’s a compromise, and we have found a good balance. What we have done is provide help on the design side, trying to understand both the system-level and design challenges, as well as post-manufacturing on the packaging side. We have figured out a cost-optimized solution for each of the different phases. For example, we have managed to minimize the design rules for double patterning. We have built an internal design capability, not intending to offer IP, but using that capability to exercise the technology definition and optimize design rules from there. We are actively acquiring in-house understanding. The game has changed. We’re collaborating much earlier than before.
Schirrmeister: We haven’t seen anything in 20nm that cannot be modeled, so 20nm is not breaking system-level design. What I think is at risk at 20nm is doing the design without system-level design: things like doing the right performance analysis to see that your bus or network interconnect is correctly assembled, doing the right simulation to figure out whether the power envelopes are met by annotating data from the technology all the way back, and making sure that the transaction level, the RT signal level and everything else is connected. So I think it’s a breaking point in the sense that if you don’t use those technologies you are increasing your risk unnecessarily.
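
To make Schirrmeister’s point about annotating technology data into transaction-level models more concrete, here is a minimal sketch. It is not any panelist’s flow; the class names, traffic figures and energy numbers are hypothetical. The idea is simply to attach per-transaction energy estimates, taken from technology characterization, to an abstract traffic profile and check the result against a power envelope.

```python
# Hypothetical sketch: annotate per-transaction energy estimates onto an
# abstract traffic profile and check the implied power against an envelope.
from dataclasses import dataclass

@dataclass
class TransactionClass:
    name: str
    energy_pj: float     # assumed energy per transaction, in picojoules
    rate_per_s: float    # expected transaction rate on the interconnect

def average_power_mw(classes):
    """Average power in milliwatts implied by the annotated traffic."""
    total_pj_per_s = sum(c.energy_pj * c.rate_per_s for c in classes)
    return total_pj_per_s * 1e-12 * 1e3   # pJ/s -> W -> mW

# Purely illustrative numbers.
traffic = [
    TransactionClass("dram_read",  energy_pj=450.0, rate_per_s=2.0e8),
    TransactionClass("dram_write", energy_pj=520.0, rate_per_s=1.2e8),
    TransactionClass("noc_hop",    energy_pj=35.0,  rate_per_s=1.5e9),
]
POWER_ENVELOPE_MW = 250.0   # assumed budget for this slice of the design

power = average_power_mw(traffic)
print(f"estimated {power:.1f} mW against an envelope of {POWER_ENVELOPE_MW} mW")
if power > POWER_ENVELOPE_MW:
    print("envelope exceeded: revisit the architecture or the annotations")
```
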
Gianfagna: Let’s talk about the continuum, which is what most of this is. Everything gets bigger: 500 million gates, who knows what the number is, but it’s a very big number. You just can’t run that flat anymore. You can’t do it as one block. You can’t do it as multiple blocks. You can’t even do it as IP. You do it as subsystems. What’s happening, and we see it happening very clearly, is that what used to be a ‘nice to have’ at 28nm and 45nm becomes a ‘need to have’ as you move down. And the ‘nice’ versus ‘need’ to have is reducing detail by moving to a higher level of abstraction, with a suitable amount of accuracy so you can actually still build a chip. If you can’t do that, you can’t get there from here. You wind up doing the same old thing at 20nm, and the cost of the silicon makes it uninteresting to go to 20nm. You have to extract a lot more value from that process node, and the way you do that is by adding more technology, more complexity and more features to the chip. And the only way to do that is at higher levels of abstraction. We’re seeing more and more focus on better qualification of IP and subsystems: things where you really need to figure out what you are buying and what you are getting in that IP block. Is it really going to work? What are the integration risks? Is it testable? Is it going to cause routing congestion or not? It’s unacceptable to find that out at place and route, when it’s too late. You have to learn how to deal with higher levels of modeling and trust that the analysis you get at that level is accurate enough to help you make decisions. Power is a perfect example. You’re not going to meet the power budget by shaving transistors and doing low-level things. You need to do it at the architectural level and worry about power domains: what’s on, what’s off? All this talk about dark silicon is real. The only way to accomplish that is at very, very high levels of abstraction, figuring out how to do the power-domain analysis, how to capture that early, and then flowing it all the way through the whole process. It’s really all about managing complexity, working at a higher level, and believing the high-level abstraction will tell you enough meaningful information that you can trust the direction and the advice you get from your tools.
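
Gianfagna’s architectural power-domain analysis can be sketched at a similarly high level of abstraction. The domains, scenarios and budgets below are invented purely for illustration; the point is that per-use-case on/off decisions, and the resulting ‘dark’ fraction of the chip, can be checked against a budget long before implementation.

```python
# Hypothetical sketch of early power-domain analysis: for each use-case
# scenario, record which domains are powered on and compare the implied
# power (and the fraction of dark silicon) against a budget.
DOMAINS = {              # assumed per-domain power when on, in mW
    "cpu_cluster": 180.0,
    "gpu":         300.0,
    "video_codec":  90.0,
    "modem":       120.0,
    "always_on":    15.0,
}

SCENARIOS = {            # which domains are powered in each use case
    "standby":        {"always_on"},
    "voice_call":     {"always_on", "modem", "cpu_cluster"},
    "video_playback": {"always_on", "cpu_cluster", "video_codec"},
    "gaming":         {"always_on", "cpu_cluster", "gpu"},
}

BUDGET_MW = {"standby": 20.0, "voice_call": 350.0,
             "video_playback": 400.0, "gaming": 600.0}

for scenario, on_domains in SCENARIOS.items():
    power = sum(DOMAINS[d] for d in on_domains)
    dark = 1.0 - len(on_domains) / len(DOMAINS)
    verdict = "OK" if power <= BUDGET_MW[scenario] else "OVER BUDGET"
    print(f"{scenario:>15}: {power:6.1f} mW, {dark:4.0%} dark  [{verdict}]")
```
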

SLD: What do we have today as far as system-level tools? What do we still need?
Gianfagna: It’s unclear if this is really true, but I’ve seen some data suggesting that 20nm might be the last node where you can actually build a transistor strong enough to drive off-chip. Below 20nm you can’t get there from here. You can’t build a device strong enough to drive off-chip, which says you’ve got to go from your sub-20nm technology to something more robust to actually drive a signal off-chip that will go somewhere. That demands 3D, which is an interesting discontinuity. Up to this point, 3D was interesting but not required. Below 20nm, it might be required. At 14nm, can you build a transistor that can reliably drive a signal through a bond pad to the outside world?
Schirrmeister: That’s true, but it doesn’t break system-level design, because with 3D the question becomes whether you can actually design in 3D without having done the right system-level things. The right system-level things in this case would mean going back to annotating power data and performance data into the transaction-level model to be able to analyze the signal that used to be driven through a pad, through a pin, onto the board. What happens to my power and my performance if it’s now going through vias? The technology behind that at the abstract level isn’t any different; I’m still plugging data into the transaction-level model and annotating it. But you could turn this around and say that, at this level, it’s just one more good reason why it’s really, really dangerous not to use system-level design.
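
The via example lends itself to the same kind of annotation: swap the assumed per-bit energy of a pad/pin/board path for that of a through-silicon via in the abstract model and compare what happens to power for the same traffic. The bandwidth and energy-per-bit figures below are hypothetical.

```python
# Hypothetical sketch: re-annotate an abstract off-chip link when the
# pad/pin/board path is replaced by a through-silicon via (TSV) in a
# 3D stack, and compare the implied power for identical traffic.
LINK_BANDWIDTH_GBIT_S = 25.6        # assumed link bandwidth, in Gbit/s
ENERGY_PER_BIT_PJ = {
    "pad_pin_board": 20.0,          # assumed off-chip driver + trace energy
    "tsv_3d_stack":   1.5,          # assumed TSV energy per bit
}

bits_per_s = LINK_BANDWIDTH_GBIT_S * 1e9
for path, energy_pj in ENERGY_PER_BIT_PJ.items():
    power_mw = bits_per_s * energy_pj * 1e-12 * 1e3   # pJ/s -> mW
    print(f"{path:>14}: {power_mw:7.1f} mW for the same traffic")
```
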
Wingard: I’m not going to undersell the ability of semiconductor companies to come up with solutions to those problems. They’ve already demonstrated some amazing skill in giving us flavors of transistors within one process technology, so I wouldn’t sell them short. They may have a stronger transistor that prevents their customers from having to make this jump. On the flip side, as we get to higher levels of integration, we must be able to divide and conquer. The abstractions are fundamentally no different from thinking about the design when it was at the printed circuit board level. We have to have these components. We have to know these components are good. We have to trust that they’re good and only look at the interactions at their edges.


