Design Rules Explode At New Nodes

Experts at the table, part 1: The number and complexity of rules have been increasing since 28nm. Whose problem is it to solve? Plus, training and capabilities vary greatly across teams within the same companies.


Semiconductor Engineering sat down to discuss changing design rules with Sergey Shumarayev, senior director of custom IP design at Altera; Luigi Capodieci, R&D fellow at GlobalFoundries; Michael White, director of product marketing for Calibre Physical Verification at Mentor Graphics; and Coby Zelnik, CEO of Sage Design Automation. What follows are excerpts of that roundtable discussion.

SE: As we move below 20nm, where are you starting to see problems and what are they?

Zelnik: What we see is exponential growth in design rules. The issue that needs to be addressed is that it takes much more effort, and more time, to get to design enablement: the point where you have a DRC deck, the deck is verified, and the customer understands what the rules are. The whole process takes a long time.

Capodieci: From the foundry perspective, the main problem is the disconnect that is happening with the complexity of the process requirements, that is, the actual descriptive language that is part of the design rules. The complexity of the process is one element, and that’s inescapable, because complexity will increase as we try to shrink to the limits using advanced litho. But the representative power of the current methodologies also is showing its limitations. We cannot completely represent the issues of multiple patterning, for example. And we cannot enumerate well all the different cases where two lines meet and there is a via on top of them. That results in enormously long runtimes. There’s an analogy with dark matter: there are places where we don’t know the process will fail. Unexpected things will come up.
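
To make the enumeration problem concrete, here is a toy Python sketch. All parameter choices are hypothetical (they are not drawn from any foundry deck); the point is that even a single "two lines meet with a via on top" interaction fans out into more than a hundred contexts a rule writer would have to cover explicitly.

```python
from itertools import product

# Toy illustration of the enumeration problem: one "two lines meet with
# a via on top" situation fans out into many distinct contexts a rule
# writer must cover explicitly. All parameter choices are hypothetical.
line_widths  = ["min", "1.5x-min", "wide"]
end_shapes   = ["butt", "extended", "hammerhead"]
via_coverage = ["centered", "offset", "edge"]
neighbor_gap = ["min-space", "relaxed"]
mask_color   = ["same", "different"]  # multi-patterning doubles the cases

contexts = list(product(line_widths, end_shapes, via_coverage,
                        neighbor_gap, mask_color))
print(len(contexts))  # 108 variants of a single interaction
```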

Shumarayev: New processes are very conducive to ASIC design, but when it comes to custom or mixed-signal design, it’s prohibitively difficult.

SE: Because of the difficulty in shrinking the analog portion?

Shumarayev: It’s the automation part. Place-and-route automation moves custom design from custom craftsmanship to semi-custom. To meet high-performance or mixed-signal requirements, you used to be able to put in an elevator shaft of vias to connect from transistors to metal or other structures. As we move to finFETs, there are structured gate-array systems. You cannot just put in elevator-shaft vias to meet your design specification. That has an effect on how much manpower it takes to do a custom layout. We were benchmarking finFET technology and found a complexity factor of two to three times what it was at previous nodes. There’s a great infrastructure in the ASIC domain.

White: One of the challenges on the EDA side is the amount of computation required, given the explosion in design rules. This has been going on for a long time. When 28nm was introduced, it followed the historic node-over-node increase of about 15% to 20% in checks, and in the operations to implement those checks. But 28nm didn’t go so well for a lot of folks. We saw a huge increase in the number of design rules after the introduction of 28nm; foundry to foundry, it could be anywhere from 500 to 1,000 additional design rules introduced after the fact. So 28nm marked a discontinuity in the rate of growth of checks node over node, as well as of operations. We’ve continued to see that kind of behavior for every node since: at 20nm, at 16/14nm, and on to 10nm. All of those rules ultimately turn into additional operations for EDA tools. Combined with at least a 2X increase in transistor count per node, that has resulted in a 3500X overall increase in computation.
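
As a rough illustration of how this compounds, here is a back-of-the-envelope Python sketch. The 15% to 20% historical check growth and the 2X transistor count per node come from the comments above; the post-28nm growth rates and per-check operation costs are assumed for illustration only, so the output demonstrates the compounding effect rather than reproducing the exact 3500X figure.

```python
# Back-of-the-envelope compounding of DRC computation, node over node.
# The ~15-20% historical check growth and ~2X transistor count per node
# come from the discussion; the rates below are illustrative assumptions.
nodes             = ["28nm", "20nm", "16/14nm", "10nm"]
check_growth      = [1.40, 1.40, 1.40, 1.40]  # assumed: faster than the historic 1.15-1.20
ops_per_check     = [1.50, 1.50, 1.50, 1.50]  # assumed: each new rule needs more operations
transistor_growth = [2.00, 2.00, 2.00, 2.00]  # per the discussion: at least 2X per node

total = 1.0
for node, c, o, t in zip(nodes, check_growth, ops_per_check, transistor_growth):
    total *= c * o * t
    print(f"{node}: cumulative DRC computation ~{total:,.1f}x the pre-28nm baseline")
```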

Capodieci: This phenomenon is very well known. It’s the bane of EDA, and of customers as well. We need to add those rules as a catch-up for what was not explored in the design space. We are applying so much duct tape that we are overwhelming the system. The problem will get worse before it gets better, but at least we are at that bifurcation point. We have to remove all these restrictive rules and go more in the direction of prescriptive rules, as sketched below. But this also involves partitioning of the industry. Whose job is it? It can only come from collaboration. It’s not exclusively the job of the foundry. Its customers have to play a more active role. You need qualitative automation. You want more speed and more automation, but what does that mean? The EDA industry has all the tools ready to build this prescriptive language, but they’re not leading the industry. We need to spec it, and then they will build it. But we need to spec out how to build a physical design, rather than only how not to build it.
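
As a minimal sketch of the distinction, assuming a hypothetical pattern library: a restrictive flow lets designers draw anything and then checks the result against thousands of rules, while a prescriptive flow only allows constructs that are already qualified as buildable.

```python
# Hypothetical contrast between the two styles of design enablement.
# Restrictive: draw anything, then check it against thousands of rules.
# Prescriptive: only pre-qualified constructs may enter the layout.
LEGAL_PATTERNS = {"via_stack_a", "m1_route_straight", "m1_route_l_bend"}  # hypothetical library

def prescriptive_place(pattern: str) -> bool:
    """A prescriptive flow rejects anything outside the qualified library."""
    return pattern in LEGAL_PATTERNS

print(prescriptive_place("via_stack_a"))     # True: known-good construct
print(prescriptive_place("elevator_shaft"))  # False: never reaches DRC at all
```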

SE: But isn’t part of the problem that even version 1.0 of a process isn’t the final version? Design rules are supposed to be a stopgap measure.

Zelnik: We’ve reached an inflection point. We can make the algorithms a little better, of course. But that’s just the tip of the iceberg. We need to tackle the rest of it. There are so many things that need to happen before you add design rules. It may have taken two years to get to the deck you began with at version .01. First, we need a proper way to specify design rules. Second, we need automation for producing the design rule deck. And third, we need a way to verify it.

Capodieci: Design rules are like sampling points. They illuminate different parts of this multidimensional design space, but we don’t know the parts that are not illuminated. When a new design comes in, it may touch an area we’re not checking, even though we thought our process supported it.

Shumarayev: Everyone says functional verification is the biggest problem in design, and yet here we have a rule deck with 100,000 lines of code and no good methodology to determine whether we’re covering everything. So we’re reviewing our designs against things we didn’t test for.

White: I’m not so sure we’re on the cusp of anything. We have exponential growth in rules, but EDA tools by and large have been able to improve the efficiency and scalability of the engines, so our customers can get the overnight turn time they’ve always wanted. That’s our job: delivering more efficient tools that allow our customers to meet turnaround-time and time-to-market objectives. Our customers are getting designs to market at 16/14nm, and EDA tools continue to improve. But on the back end, while the tools can process this data and make it easy enough for designers to know what to do when they have, for example, a DRC failure, that’s becoming more challenging. Given the complexity of the design rules, it is becoming harder to provide more automation and more hints to the custom designer, showing them a FIFO (first in, first out) as they make modifications in real time that introduce double-patterning odd cycles. It requires more training on the part of the customer.
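
For context, an odd cycle is what makes a layout impossible to decompose for double patterning: features too close to share a mask become edges in a conflict graph, and the layout splits across two masks only if that graph is two-colorable. The sketch below, using a hypothetical three-feature conflict graph, shows the standard breadth-first check.

```python
from collections import deque

# Why "odd cycles" break double patterning: features too close to share
# a mask become edges in a conflict graph, and the layout splits across
# two masks only if that graph is 2-colorable (bipartite). The conflict
# graph below is hypothetical.

def has_odd_cycle(graph):
    """BFS 2-coloring; returns True if the conflict graph has an odd cycle."""
    color = {}
    for start in graph:
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for neighbor in graph[node]:
                if neighbor not in color:
                    color[neighbor] = 1 - color[node]  # assign the other mask
                    queue.append(neighbor)
                elif color[neighbor] == color[node]:
                    return True  # two conflicting features forced onto one mask
    return False

# Three mutually conflicting features form a triangle, i.e. an odd cycle.
conflicts = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
print(has_odd_cycle(conflicts))  # True: no legal two-mask assignment exists
```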

Capodieci: That’s a very good point. We need to resist finger-pointing over whether it’s an EDA, fabless, or foundry problem, and go beyond it. When you add manpower, those people do not all have the same skill set. With new analytical flows built on top of EDA flows, we can actually see the difference between sub-teams of our customers from a quality perspective. We see certain layouts from certain layout teams performing much better at the verification level; they verify those designs and they yield better. So there is variability in the methodologies and skill sets people are using across all companies, and training certainly plays a role there. That’s something we can work on.

Shumarayev: If you look at large-scale execution, another element is globalization. The design and layout teams may be separated. The complexity of design rules should drive the need for co-development, which means the design and layout teams sit side by side. When you get to 100 people, there is a big problem with the efficiency of this global model. There is a need for co-development instead of throwing more foot soldiers at a problem. You need to enable them with high-quality tools. Another aspect (and this comes from analog design guys who like their freedom) is that design rules specifying a pattern to get to good yield are not a very good setup for the human brain. In some ways this reminds me of gate arrays. Drawing polygons is not sustainable. Maybe we need a new breed of physical designers who are halfway into the design space.


