Experts At The Table: Challenges At 20nm

Last of three parts: More corners; hierarchical and incremental flows; half-nodes and longer time between nodes; competitive stakes; stacked die impacts; internally developed vs. commercial tools.


By Ed Sperling
Low-Power/High-Performance Engineering sat down to discuss the challenges at 20nm and beyond with Jean-Pierre Geronimi, special projects director at STMicroelectronics; Pete McCrorie, director of product marketing for silicon realization at Cadence; Carey Robertson, director of product marketing at Mentor Graphics; and Isadore Katz, president and CEO of CLK Design Automation. What follows are excerpts of that conversation.

LPHP: How much of a problem are all the corners?
Katz: You’re going to find out that you have a late-stage problem and it will have a ripple effect across the entire signoff and extraction flow. People have been working out how to do incremental modification, but they’re also saying they have to bring all of this back under control at 20nm or 14nm or they’ll never get to signoff. People are starting to look at strategies for that final stage of the flow, whether it’s the last stage of optimization to try to get more power out or the final stage of signoff. They’re breaking it down into smaller pieces and smaller sets of problems. Otherwise it will never get done. We’re going to be seeing 140 million to 200 million-instance circuits in the not-too-distant future. That’s an example of something you do not want to run across 100-plus corners every day under any circumstances. You’ll see increased use of hierarchy and increased use of incremental flows.
McCrorie: The block boundary conditions are established by looking at the full chip, and then you work on each block as a sub-entity and close it off as a sub-entity. You can’t look at everything full-chip. A hierarchical solution is the key.
Katz: That goes back to how system specifications end up interacting with the back-end flow. The pushback will be, ‘Don’t send me everything together.’ It’s like taking the cow, grinding it into hamburger, and then telling the physical design guys to reassemble the cow. That’s not going to work.
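To put the corner explosion Katz describes in perspective, here is a minimal sketch of how signoff corners multiply across independent dimensions. The corner names and counts below are illustrative assumptions, not any foundry’s actual signoff deck:

```python
from itertools import product

# Illustrative corner dimensions -- hypothetical, not a real foundry deck.
process     = ["ss", "sf", "fs", "ff", "tt"]           # device process corners
voltage     = ["0.9V", "1.0V", "1.1V"]                 # supply corners
temperature = ["-40C", "25C", "125C"]                  # temperature corners
extraction  = ["cworst", "cbest", "rcworst", "rcbest"] # RC extraction corners

# Every combination is a candidate signoff corner.
corners = list(product(process, voltage, temperature, extraction))
print(len(corners))  # 5 * 3 * 3 * 4 = 180 combinations before any pruning
```

Even after pruning dominated combinations, running a 100-million-instance design flat across a deck like this is exactly what the hierarchical and incremental strategies above are meant to avoid.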

LPHP: Are we moving to the next node at the same speed as we did in the past?
Geronimi: Power is one of the things we need to address. EDA needs to help. But in the end, it’s not that bad.
McCrorie: We saw some yield issues at 28nm. As a result, do we want to ramp up 20nm and 14nm as fast? I can see it slowing down because of that, but right now that’s not happening.
Robertson: But will there be an 18nm and a 16nm node because of the power benefits of fully depleted SOI? Components of the next technology may come in with a half step.
Geronimi: We will do that.
Katz: The markets we are serving with 28nm and 20nm products are the next-generation mobile and wireless communications platforms.

LPHP: Isn’t it also enterprise-level?
McCrorie: Yes, as well as graphics chips.
Katz: Correct, and these are the biggest chip market opportunities out there today. So long as you have 20 or more companies fiercely competing for the next available socket and trying to stay ahead on delivering to spec, no one is going to cry ‘uncle’ and slow down if it means they can’t get the next competitive product out. There are examples in every industry where the market leader who shows up first wins big, whether it’s dual-core or quad-core in the mobile market, or 10-gigabit and 40-gigabit optical. The pressure to win big is very high. And some people will drop out.
McCrorie: Yes, they change their strategy and shoot for a different end market.

LPHP: Does stacking of die change the stakes here?
Katz: It certainly relieves pressure on die sizes. If you can start to pull large pieces of your on-board cache or your analog functions off onto a separate die, there are immediate yield savings you can realize.
McCrorie: And an extension of that is, as you take the rest of the digital logic and partition it into two die instead of one, you effectively have twice the capacity of a single die. Is that enough to make the next generation? We don’t know.
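A rough sense of why splitting helps: under the simple Poisson defect model, yield falls exponentially with die area, and testing each small die before stacking (known-good-die) lets bad silicon be discarded in smaller pieces. A minimal sketch, with an illustrative defect density:

```python
import math

def poisson_yield(area_cm2: float, d0: float) -> float:
    """Simple Poisson defect model: Y = exp(-area * defect_density)."""
    return math.exp(-area_cm2 * d0)

D0 = 0.5  # defects per cm^2 -- illustrative assumption

y_mono = poisson_yield(2.0, D0)  # one monolithic 2 cm^2 die
y_half = poisson_yield(1.0, D0)  # one 1 cm^2 die after partitioning

# With known-good-die testing before stacking, each assembled product
# consumes one good die of each half, so good product per unit of wafer
# area scales with y_half rather than y_mono.
print(f"monolithic yield:         {y_mono:.1%}")          # ~36.8%
print(f"per-die yield after split: {y_half:.1%}")         # ~60.7%
print(f"good-silicon improvement:  {y_half / y_mono:.2f}x")  # ~1.65x
```

The ~1.65x figure ignores stacking and assembly yield loss, which is where the caveats that follow come in.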
Robertson: But you’ve also traded off some problems for others, such as thermal and different stress characteristics.
Katz: We already have temperature gradients that are pretty outrageous now.
Robertson: We may be introducing a whole other set of yield issues with TSVs. And for the most part the industry has ignored parasitic inductance. TSVs are highly inductive, so the modeling implications that designers will have to wrestle with will expand. It may improve their yield, but there are certainly some modeling and system-level concerns.
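Robertson’s point about TSV inductance can be made concrete with a back-of-the-envelope sketch using Rosa’s classic approximation for the partial self-inductance of a straight round conductor. The geometry and frequency below are illustrative assumptions, not figures from any specific process:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

def tsv_partial_self_inductance(height_m: float, radius_m: float) -> float:
    """Rosa's approximation for a straight round conductor:
    L = (mu0 / 2*pi) * h * (ln(2h/r) - 3/4)."""
    return (MU0 / (2 * math.pi)) * height_m * (math.log(2 * height_m / radius_m) - 0.75)

h = 50e-6   # 50 um tall TSV -- illustrative assumption
r = 2.5e-6  # 5 um diameter  -- illustrative assumption

L = tsv_partial_self_inductance(h, r)
f = 10e9    # 10 GHz -- illustrative frequency of interest
print(f"L ~ {L * 1e12:.0f} pH, |Z| ~ {2 * math.pi * f * L:.1f} ohm at 10 GHz")
```

Tens of picohenries per via translates into ohm-scale reactive impedance at multi-gigahertz frequencies, which is why parasitic inductance stops being something the industry can ignore.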

LPHP: As we get to 20nm and beyond, internally developed tools are running out of steam. Is the strategy to develop new ones or use more commercial tools?
Geronimi: It’s not easy to develop tools. For a long time we have found it better to partner with EDA vendors to get what we need. We add a lot of scripting and intelligence in the flow, but we are not developing timing or verification tools.
McCrorie: On a yearly basis we hear from customers who would like us to take over their internal development. Then the question is how specific it is to them and whether it’s valuable to anyone else. The majority of the time it’s a very specific application for the user, and we can’t afford to take that on.
Katz: One of the things we did as a company was to facilitate development of internal tools. We’re seeing a bunch of companies that want to build internal tools and we’re helping them build the next generation.
Robertson: We’ve got customers asking us to take over development, as well. Usually it isn’t because their tools are running out of steam. We hear they can’t fund it anymore and it would be great for the industry. Our experience is that it’s fairly esoteric and not extensible. But we are trying to provide APIs and scripting capabilities, because advanced customers may be able to use DRC or LVS in a way that is not traditional. It may not be a straightforward enhancement of the tool, but it might be something for which we can provide flexibility. There are a lot more APIs and accessible databases so customers can use our visualization tools. But taking over a customer’s proprietary implementation doesn’t work.
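As a sketch of the kind of nontraditional use Robertson describes, a scripting layer might post-process DRC results for purposes other than pass/fail signoff. Everything below is hypothetical: the file name, the whitespace-delimited report format, and the field layout are assumptions for illustration, not any vendor’s actual output:

```python
from collections import Counter

def violations_by_rule(results_path: str) -> Counter:
    """Aggregate DRC violations by rule name from a hypothetical
    whitespace-delimited results file: <rule> <layer> <x> <y>."""
    counts = Counter()
    with open(results_path) as f:
        for line in f:
            fields = line.split()
            if fields:
                counts[fields[0]] += 1
    return counts

# Rank rules by violation count -- e.g., to score layout styles or
# prioritize cleanup, rather than treating DRC as a binary gate.
for rule, n in violations_by_rule("drc_results.txt").most_common(10):
    print(f"{rule:20s} {n}")
```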
Katz: Everyone has different voltages, temperatures, and layers of metal that they work with. Our customers still want flexibility. It may not be a tool in its own right, but it is part of what they consider their intellectual property.
Geronimi: The definition of what’s a tool needs to be specific. Sometimes it’s a library or IP.


