First of three parts: Double patterning and litho issues; multi-mode, multi-corner analysis; changes in signoff and timing; concurrent design; customer goals vs. EDA vendor investments.
By Ed Sperling
Low-Power/High-Performance Engineering sat down to discuss the challenges at 20nm and beyond with Jean-Pierre Geronimi, special projects director at STMicroelectronics; Pete McCrorie, director of product marketing for silicon realization at Cadence; Carey Robertson, director of product marketing at Mentor Graphics; and Isadore Katz, president and CEO of CLK Design Automation. What follows are excerpts of that conversation.
LPHP: At 20nm we have double patterning, new structures, physical effects, lower margins, and IP blocks that may not be fully characterized. What are the really big trouble spots and what do engineers need to do differently?
Geronimi: Double patterning is new. We did not deal with that before. Then there are effects that are getting worse. We are going to increase the size of the chip and are getting to a very large number of transistors. We will need to secure tapeouts. This isn’t new, but it is definitely more difficult.
Robertson: Customers are going to be faced with one of two things with double patterning. First, you can ignore it to a certain extent and just treat it as a DRC rule without knowing the reason why. So you could take a ‘business as usual’ approach and wrestle with additional DRC violations. But at 20nm these are serious designers and they’re going to want to know why. So why are they getting these DRC violations that they didn’t get at 28nm? And at 14nm it’s going to be multi-patterning. From a layout perspective you’re certainly going to have to wrestle with new effects and understand the reasons why. From a timing perspective, you can ignore those rules and create more corners, or you can understand where your nets are going to be printed, take a different tack with simulation and how you assign polygons to the masks, and develop new strategies for timing closure because of this explosion in corners. Otherwise you won’t have time to do all these additional corners and simulations.
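As a rough back-of-the-envelope illustration of the corner explosion Robertson describes, the sketch below multiplies out analysis runs; the mode, corner, and mask-shift counts are assumed for the example, not figures from the discussion.

```python
# Rough illustration: how double patterning multiplies timing-analysis runs.
# All counts below are assumed for the example.

modes = 4            # hypothetical operating modes: functional, scan, sleep, turbo
process_corners = 5  # e.g. SS, FF, TT, SF, FS
rc_corners = 3       # e.g. Cbest, Cworst, typical extraction corners
dp_mask_shifts = 2   # mask A printed shifted toward or away from mask B

runs_single_patterning = modes * process_corners * rc_corners
runs_double_patterning = runs_single_patterning * dp_mask_shifts

print(f"STA runs without double-patterning corners: {runs_single_patterning}")  # 60
print(f"STA runs with double-patterning corners: {runs_double_patterning}")     # 120
```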
McCrorie: We have to focus more on design efficiency than anything else. Double patterning is obviously one of the challenges. Multiple corners is another challenge. And then you start to look at the designs themselves. They’re much, much bigger, and you’ve got to go through multiple mode, multiple corner analysis. It’s a huge challenge for the design team. Our focus is trying to bring the signoff into the design platform as much as we can, avoiding mistakes in the first place. The other thing is DFM. You need to look at litho effects very early in the flow. If you’re not careful, the litho will affect timing. And then you need to go through this big litho check at the end, which is not the optimal way of doing designs.
Katz: There is a reason we’re going to 20nm and 14nm. You can do more at those nodes. The tendency is to think we’re losing margin, but there’s still a lot of margin available because we’re shrinking feature sizes. The problem is that current methodologies don’t necessarily allow you to address that. A lot of customers are starting to rethink how they go through signoff and timing. If you keep trying to do business as usual, you really do lose margin, although artificially. They’re also starting to rethink how they get signoff to correlate. There are problems that materialize with all the different voltage ranges, voltage conditions and temperature gradients. There’s a lot more that needs to be done to anticipate problems.
LPHP: The hardware has to work, but the software has to work with it as part of the same system. How much does that complicate things?
Katz: These flows we have inherited from 65nm to 28nm are highly partitioned in terms of the way they operate. They’re exceedingly complex, and they’re imposing new levels of complexity on the physical design. These new multi-core types of processors, in which the chip will go into a very low-voltage suspend mode or a very high-performance mode, impose new performance constraints on the physical side of things. But they don’t necessarily show up in the physical flow. The software folks make sure their software works with it at the system level. A good example is the Nvidia Tegra. It’s five cores, one of which goes into a very low-voltage mode to reduce power consumption. There are other flavors of that showing up where there are extreme operating ranges.
Robertson: A byproduct of that is on the verification side. To deal with these levels of abstraction, whether it’s power savings or just making the system-level designer’s life more sane, you have to verify them. You need to make sure signals are crossing from one domain to the next without violating their integrity. As designs get smaller, you have thinner and thinner oxide transistors, so with these power-domain issues, going from high Vdd to low Vdd, whether it’s consumer or automotive, you have to make sure you have the appropriate circuitry between these various islands on the chip.
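A minimal sketch of the kind of domain-crossing check Robertson is describing, assuming a simplified netlist model in which each crossing records its driver and receiver supply voltages; the net names, voltages, and data structure are hypothetical.

```python
# Minimal sketch of a power-domain crossing check: every net that crosses from one
# supply voltage to another should pass through a level shifter or isolation cell.
# The netlist representation below is hypothetical.

crossings = [
    # (net, driver_domain_vdd, receiver_domain_vdd, has_level_shifter)
    ("cpu_req",  1.0, 0.6, True),
    ("dma_ack",  0.6, 1.0, False),
    ("core_clk", 1.0, 1.0, False),   # same domain, no shifter needed
]

for net, drv_vdd, rcv_vdd, has_shifter in crossings:
    if drv_vdd != rcv_vdd and not has_shifter:
        print(f"VIOLATION: {net} crosses {drv_vdd}V -> {rcv_vdd}V without a level shifter")
```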
LPHP: What you’re getting at is to some extent concurrent design of lots of different facets, rather than doing them sequentially. Is that really happening, though?
Geronimi: At 65nm, we had to integrate a lot of things, so we have a very complex worksheet. It’s already there at 20nm, so it’s no longer a concern for us.
McCrorie: From a power aspect, you have to create multiple domains and put in various isolation logic between them. We’ve seen an increase in both CPF and UPF from our customer base. We’re starting to become agnostic at Cadence. But we’re seeing that the software is expected to sequence a certain way, and you need the logic to follow that sequence. So before you power on, you’d better make sure the voltage is stable, the clock is firing up, and a whole bunch of other preconditions are met. There’s a lot of extra work that needs to be done, driven by the software. But the information about how to power up can actually be driven by UPF and CPF.
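A conceptual sketch of the software-driven power-up sequencing McCrorie describes, checking the preconditions before a domain is enabled; the domain names, thresholds, and helper functions are hypothetical, and a real flow would capture the intent in UPF or CPF plus firmware.

```python
# Conceptual sketch of software-driven power-up sequencing: before enabling a power
# domain, confirm the preconditions (supply stable, clock running). Domain names,
# thresholds, and the measured values are hypothetical.

def supply_is_stable(measured_vdd: float, target_vdd: float, tolerance: float = 0.05) -> bool:
    """Supply counts as stable when it is within tolerance of its target."""
    return abs(measured_vdd - target_vdd) <= tolerance * target_vdd

def power_up(domain: str, measured_vdd: float, target_vdd: float, clock_running: bool) -> bool:
    """Enable a domain only when all preconditions hold; otherwise refuse."""
    if not supply_is_stable(measured_vdd, target_vdd):
        print(f"{domain}: supply not stable, aborting power-up")
        return False
    if not clock_running:
        print(f"{domain}: clock not running, aborting power-up")
        return False
    print(f"{domain}: preconditions met, releasing isolation and resets")
    return True

# Example: a GPU domain comes up only after its 0.9V rail settles and its clock is running.
power_up("gpu", measured_vdd=0.91, target_vdd=0.90, clock_running=True)
```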
LPHP: As we move from 2D to 2.5D and 3D, what changes from the design perspective?
Katz: It doesn’t really change that much. There are special reliability issues and noise and crosstalk issues, but for the moment chip-to-chip communications in a stack will look a lot like chip-to-chip communications the way it works today without the package. You’ll still have things like SerDes to get between different dies. There’s no way to get around that. It will use the same techniques, architectures and interfaces to connect two dies. That doesn’t go away, but you do get better power and performance.
McCrorie: Most of what we’re seeing now is two die being stacked together. If you stack three die, you have thermal issues.
LPHP: There has been a lot of talk about hybrid chips where there may be more than one process technology on a single chip. Is that real?
McCrorie: We’ve seen some of that, where people are putting 65nm analog technology on a 28nm digital chip.
LPHP: Can that be planar?
McCrorie: Yes. It just requires a high-speed interface.
Geronimi: You need to solve some critical issues and it does require more integration, but it’s not that complex at the end.
LPHP: There has always been a fair amount of tension between EDA vendors and leading-edge customers. Recently, EDA vendors have focused heavily on integration while customers want new tools at 20nm and beyond. How is that progressing?
McCrorie: Multi-mode, multi-corner is starting to stress the need for doing statistical analysis. We’re working with one large foundry on this. The concern is that if you don’t do it statistically, then when you get to triple patterning at 14nm, it will get even worse. There is going to be some point between 14nm and 20nm where statistical will be considered by most of the bigger companies.
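As a rough numerical illustration of why statistical analysis can recover margin compared with stacking worst-case corners, the sketch below compares a worst-corner path delay with a statistical combination of independent stage variations; the delay numbers and sigma values are invented for the example.

```python
# Rough illustration (invented numbers): summing worst-case stage delays assumes every
# stage is simultaneously at its worst corner, while a statistical root-sum-square
# combination of independent variations is less pessimistic.

import math

stages = 10              # identical stages on a hypothetical path
nominal = 100.0          # nominal stage delay, ps
worst_case_delta = 15.0  # worst-case per-stage variation, treated as 3 sigma

worst_corner_path = stages * (nominal + worst_case_delta)                 # 1150 ps
sigma_per_stage = worst_case_delta / 3.0
statistical_path = stages * nominal + 3 * math.sqrt(stages) * sigma_per_stage  # ~1047 ps

print(f"Worst-corner path delay:        {worst_corner_path:.0f} ps")
print(f"Statistical (3-sigma) path delay: {statistical_path:.0f} ps")
```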
Robertson: Our experience is a little different. Because of our link to manufacturing, we’re working with foundries and key customers on the manufacturing side all the way down to 14nm. Then we’re bringing in the DRC tools and the other physical verification products. Our approach is different in that we make sure we can model geometries and verify them first, and then make sure they’re well integrated in the flow. Our customers put a premium on the accuracy of the device extraction, the DRC, etc., and linking that to the manufacturing process. And then, once we’re accurate, we take on the integration and the efficiency. Our first rollouts are not fast. They’re optimized over time. We catch flak because they’re not well integrated in the beginning. We have to then optimize and tune and make sure they can be efficient. We can’t just be prototype-ready. We have to be production-ready.
McCrorie: And there are lots of rule changes going on.
Geronimi: There needs to be a very close working relationship between a company like ST and EDA. For double patterning, we don’t want to leave any margin. We want to benefit completely from this move, and we are doing that. The voltage description and the coloring—we are doing what is necessary. For that we need to work with the providers of the tools. In addition, with timing verification and extraction, it’s a much quicker way to get rid of margin. The technologies are there. But there’s also a question of design efficiency and accuracy. You don’t need accuracy everywhere.
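The coloring Geronimi mentions is the assignment of layout polygons to the two double-patterning masks: features spaced below the single-mask limit get an edge in a conflict graph, and the layout is legal for two masks only if that graph is 2-colorable. The toy conflict graph below is hypothetical and only illustrates the idea.

```python
# Minimal sketch of double-patterning coloring as 2-coloring of a conflict graph.
# Nodes are layout features; an edge means two features are too close to print on
# the same mask. The graph here is hypothetical.

from collections import deque

conflicts = {
    "A": ["B"],
    "B": ["A", "C"],
    "C": ["B", "D"],
    "D": ["C"],
}

def two_color(graph):
    """Return a mask assignment (0 or 1 per feature) if one exists, else None."""
    color = {}
    for start in graph:
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for neighbor in graph[node]:
                if neighbor not in color:
                    color[neighbor] = 1 - color[node]
                    queue.append(neighbor)
                elif color[neighbor] == color[node]:
                    return None  # odd cycle: the conflict cannot be split across two masks
    return color

print(two_color(conflicts))  # {'A': 0, 'B': 1, 'C': 0, 'D': 1}
```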