Design Rules Explode At New Nodes

Experts at the table, part 2: Speeding up time to market; finFETs vs. 2.5D; partitioning designs; bounding complex problems with rules; better supply chain communication.


Semiconductor Engineering sat down to discuss changing design rules with Sergey Shumarayev, senior director of custom IP design at Altera; Luigi Capodieci, R&D fellow at GlobalFoundries; Michael White, director of product marketing for Calibre Physical Verification at Mentor Graphics; and Coby Zelnik, CEO of Sage Design Automation. What follows are excerpts of that roundtable discussion.

SE: A lot of engineers say their biggest problem is time—they can’t get designs done in the design windows. How do we solve that?

Capodieci: There is a food chain that needs faster performance and new form factors. The question for me is where it’s going to be solved. It will be solved in software and at the intermediate level. There are two aspects. One is the delays in 28nm and the subsequent nodes might cause people to reconsider what they’re going to deploy and when. That’s not a solution, but it’s a reality. It doesn’t mean we’re going to slow down. We’re still going to work with the time-to-market requirements. But we might not be able to deliver the same performance at the same cost factor. That’s the key.

White: Everyone is trying to accelerate the introduction of the new node as early as possible. But the time from that initial introduction to when it becomes a volume node is the same as it’s always been, or growing. We’re now starting to work on 7nm. At 16/14nm we’re starting to see a ramp into initial volume. The 10nm folks are just starting initial IP and test chips. Now we’re working on 7nm. They’re bunching up. We’re working on initial IP and process development every year for the next node. But from the IC side, that rate of adoption is longer.

SE: We’ve seen homogeneous 2.5D as a way of improving yield, and now we’re starting to hear about heterogeneous 2.5D. Is it a solution to this problem?

Shumarayev: The finFET is another consideration in design enablement and verification. As you partition designs into 2.5D, how do you verify it and how do you netlist from two different foundries? It’s a big discussion point at Altera right now. How do you move forward? How do you divide and conquer? We do see more push toward 2.5D. There is no question about it.

White: Everyone has different flavors.

Shumarayev: The question now is how you put a half-dozen chips into a single package. These are designs that are underway right now. It’s not just mechanically how you put those chips together and known good die. It’s also the whole package co-design.

White: From an EDA perspective, there are solutions to do physical verification and connectivity checking and to create a system netlist of the assembly you’re trying to stitch together with all these different things, and to do package co-design and optimization. By and large, though, there are EDA solutions for those things. Different foundries are at different places insofar as being able to offer you a PDK or a relatively easy-to-consume reference flow for you to use those solutions. Some of the foundries have struggled because there are so many different options they have not been able to home in on one.

SE: It seems for the first time, in a long time, that EDA is not the bottleneck. It’s the complexity of the design, how to partition it, what comes next, what process technology to use. Is that a correct assessment?

Capodieci: The current partitioning of the industry, which has served us well for many generations, is showing its limitations. There’s an issue with the full ownership of the problem. That’s why we have to see more partnerships, more joint risk projects where we share the risk. Foundries are not ready for 2.5D and 3D processes with a full enablement kit because there is no universally acknowledged way of doing these designs. This industry won’t suddenly morph into anything. But what might happen is that small startups, maybe even captive startups, will be launched to take some of this risk. There are lots of people with new ideas and they may be working on these problems. It’s not all doom and gloom. If problems fall between boundaries, people will create new entities to solve the problem with multiple levels of expertise across domains.

SE: What seems to be happening is that we’re trying to bound a complex problem with rules, but the big customers—the ones who are moving to the most advanced nodes—are the ones that bend the rules. Is that correct?

Capodieci: Or they’re the ones that take well-established components and put them together. Patterns can be seen at multiple levels of abstraction. We all work with standard cell libraries or blocks. But routers don’t work like that. They create enormous complexity in trying to connect everything. Why can’t routers be reformed with different ways to assemble connectivity in a manufacturable way, rather than doing anything you want and then putting restrictions on it? That’s why we have the problem we do today. This speaks to new methodologies for looking at the design rules, as well.

Zelnik: We also need some infrastructure to do that. Traditionally, design rules were created at the foundry and you would read them and try to understand them. But it’s not just the rules from a verification point of view. It’s better electrical behavior, a better methodology to do things more efficiently. There is cross-domain knowledge, but you need the ability to communicate this knowledge. We may want to bend this rule a little, but to enable this conversation efficiently you need tools and methodology. And you need to do it in a much easier way than it is done today. If you look at design rule manuals, it’s really tough to understand them. And how do you even pose a question to the foundry about how to change something, especially if it’s going to change in the next month? You need a better way to communicate.

SE: Has communication between the foundries and the chipmakers gotten better?

White: It changed significantly at 20nm, partly out of necessity. At 20 we had a new type of patterning. It was a completely different type of design rule. Most designers didn’t understand how to deal with them. We then started to have three-way NDAs between the foundry, the fabless company and EDA. More often than not, we knew more about how to debug multi-patterning errors than the foundries. So we found ourselves in training mode for debugging multi-patterning, providing strategies for layout, and that’s continued and has grown for the next couple of nodes. It’s not just debugging. It’s also helping the fabless company handle their computing environment so they can handle the volume of data they’re going to need to process. That three-way collaboration has changed quite a bit over the last few years.

Shumarayev: Yes, we’ve seen that, too. Engagements are way different than they used to be.

White: What’s happening now is people are trying to use the current tools and infrastructure or deck and make all of that functional. Once those partnerships are started, maybe we can figure out how to extend the tools for a more efficient methodology for doing debugging.

Capodieci: That’s definitely true. Given the distributed level of expertise, there is not one single entity that has all the answers. The communication infrastructure is three-way, and everything we do is through a three-way infrastructure. We cannot have any project that is individually run. The infrastructure is in place to co-spec new tools and methodologies. With enough time and some joint risk—meaning joint money to share the risk—that will be the best approach for 10nm and 7nm. Nobody knows what is in store in terms of the process.
