Design Rule Complexity Rising

Total number of rules is exploding, but that’s only part of the problem.


Variation, edge placement error, and a variety of other issues at new process geometries are forcing chipmakers and EDA vendors to confront a growing volume of increasingly complex, and sometimes interconnected, design rules to ensure chips are manufacturable.

The number of rules has increased to the point where it’s impossible to manually keep track of all of them, and that has led to new problems. Among them:

Bloat. Rule decks have become so large, and the processes so complex, that no one is sure why some rules still exist or how extensive this problem has become. This increases the number of required checks, and it makes debug more difficult.
Dependencies. Some rules rely on other rules, a problem that is growing at some foundries and some processes, though not universally.
Uniqueness. Below 28nm, design rule decks are unique to each foundry. That makes it especially difficult to second-source a design, a problem that is exacerbated by the fact that IP vendors may not support all of the new processes because it’s too complicated and time-consuming.

To put all of this in perspective, at 28nm it was still possible to manually keep track of every design rule. With the introduction of finFETs and multi-patterning, the number of rules exploded to the point where automation is now essential. Violations can cause other violations, leading to a string of problems that can limit yield and affect reliability—which is why restrictive design rules (RDRs) were developed in the first place.


Fig. 1: Growth of design rules. Red line shows increase in rules after 28nm. Dotted red line shows EUV’s expected impact due to a reduction in multipatterning and masks. Source: Mentor, a Siemens Business

“Advanced nodes are much more complex, and the number of rules is exploding,” said Subramani Kengeri, vice president and general manager of GlobalFoundries’ CMOS Platforms Business Unit. “But it’s not the number of rules that’s the biggest concern. It’s how they’re all related. If one rule has a dependency on other rules, that can create challenges in layout.”

The level of dependency varies from one foundry to the next, and from one node to the next within the same foundry. Equally daunting is the impact of different polygons on each other.
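
To see why those dependencies are hard to manage by hand, consider a minimal sketch of the bookkeeping involved. The rule names below are hypothetical, not from any foundry deck; the point is simply that once rules form a dependency graph, a change or waiver to one rule has to be traced through every rule that builds on it.

```python
from collections import defaultdict, deque

# Hypothetical rule identifiers; real decks use foundry-specific names.
# An edge A -> B means rule B depends on (is derived from or constrained by) rule A.
dependencies = {
    "M1.SPACE.MIN": ["M1.SPACE.EOL", "M1.DENSITY.LOCAL"],
    "M1.SPACE.EOL": ["V1.ENCLOSURE"],
    "M1.WIDTH.MIN": ["M1.DENSITY.LOCAL"],
}

def affected_rules(changed_rule, deps):
    """Return every rule transitively impacted if `changed_rule` is modified or waived."""
    graph = defaultdict(list, deps)
    seen, queue = set(), deque([changed_rule])
    while queue:
        rule = queue.popleft()
        for dependent in graph[rule]:
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return seen

print(affected_rules("M1.SPACE.MIN", dependencies))
# Contains M1.SPACE.EOL, M1.DENSITY.LOCAL and V1.ENCLOSURE
```

With a handful of rules this traversal is trivial. With the thousands of interacting rules in a modern deck, it is exactly the kind of bookkeeping that makes automation essential.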

“The number of shapes within a design that interact is the biggest driver for why we have more design rule checking,” said Michael White, director of product marketing for Calibre Physical Verification products at Mentor, a Siemens Business. “The foundries capture this in one of two ways. The first is through characterized design rules at a new technology node. The second is through learned design rules. But the net is there is a growth in design rules everywhere.”
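
White's point can be illustrated with a toy spacing check. The coordinates and the spacing value below are invented for illustration; the takeaway is that a naive checker compares pairs of shapes, so the cost tracks the number of interactions rather than the shape count alone.

```python
from itertools import combinations

# Each shape is an axis-aligned rectangle (x_min, y_min, x_max, y_max) in nm.
# Coordinates and the spacing value are purely illustrative.
metal1 = [(0, 0, 40, 200), (60, 0, 100, 200), (150, 0, 190, 200)]
MIN_SPACE = 32  # hypothetical same-layer minimum spacing

def spacing(a, b):
    """Simplified edge-to-edge separation between two rectangles (0 if they touch or overlap)."""
    dx = max(b[0] - a[2], a[0] - b[2], 0)
    dy = max(b[1] - a[3], a[1] - b[3], 0)
    return max(dx, dy)  # real DRC uses corner-to-corner distance when shapes are offset diagonally

# A naive checker visits every pair, so the work grows with the number of
# interacting shapes, not just the number of shapes.
violations = [(a, b) for a, b in combinations(metal1, 2)
              if 0 < spacing(a, b) < MIN_SPACE]
print(len(violations))  # 1 -- only the 20nm gap between the first two shapes flags
```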

An increase in the number of design rules at the most advanced nodes is generally expected, but the rule decks are increasing at older nodes, as well. “A 65nm process 10 years ago is very different than today,” White said. “Application processors are still being built at the bleeding edge, but there is a lot of IC content at relatively large process nodes.”


Fig. 2: Increasing complexity at 65nm. Source: Semico Research/Mentor, a Siemens Business

What are design rules?
Design rules—often referred to as restrictive design rules—are a set of known approaches to improve yield. In effect, they capture industry knowledge about how to manufacture semiconductors as a way of improving time to revenue for both foundries and chip companies. If the chips don’t work, neither side makes money.

These rules are process- and foundry-dependent. They vary from one foundry to the next, and one process node to the next, and those differences have become more pronounced at each new node as the processes themselves have become more complex.

Prior to the era of finFETs and multiple patterning, the typical scenario was that design rules would start out more stringent and then be relaxed over time. As more manufacturing data was collected, it became obvious where the real problems were and how to address them, allowing some loosening of the rules.

That trend no longer applies. “Around 20nm was the last time where design rules were relaxed over the last node,” said Mentor’s White. “We do still see yield learning as the foundry builds new chips, but in addition to pitches they’re now looking at corner cases they didn’t necessarily explore and were not captured at the beginning. The other thing we’re finding is that at older nodes, there could be wrong-way routing, which was a competitive advantage. 10nm was the first time we saw uni-directional routing layers. Design flexibility is going away. Design rules are becoming more restrictive and staying that way.”

That doesn’t mean design freedom is gone. But it does mean that the standard way of achieving design freedom, namely by shrinking nodes, is becoming more difficult.

“The number of levels of what you can do with a process has diminished,” said Mark Richards, technical marketing manager for physical implementation at Synopsys. “The key metric is yield, and the ramp-up of yield in a controlled fashion is not happening anymore. From the tools side, you have to increase the complexity of rules to make things not happen in a design. There are complex interacting rules, and a lot of that gets pushed down to us in EDA.”

How aggressive foundries are with new lithography options has an impact, as well.

“On the leading edge, the design rules across the foundries are very different, driven in part by the different strategies with regard to the adoption of EUV,” said John Kibarian, president and CEO of PDF Solutions. “Foundries are realizing that for the fabless, while PPA is important, the cost of R&D is a driving force. So maximizing re-use is increasingly important. Hence, squeezing out PPA at the expense of a complete re-design for a derivative node has not succeeded as much as incremental improvements to a platform. 28nm is a great example of this. There are many derivatives, and from a design rule perspective the changes are minor.”

Shifting boundaries
RDRs in the past created what amounts to a class structure within the industry, where the highest-volume vendors such as Apple could push foundries to modify the rules, create their own exceptions, and sometimes alter the processes. Smaller chipmakers, meanwhile, complained they had to deal with a growing volume of pessimistic rules. But those disparities have largely disappeared at 10nm. If companies can afford to push down to the advanced nodes, everyone is working off the same rule deck.

The downside of RDRs is that they limit design freedom. That includes the shapes that can be printed, how signals are routed, and the overall floorplanning of how various blocks are laid out.

“From the implementation perspective, most designers associate increasing design rules with an extra burden on the final routing,” said Rod Metcalfe, product management group director in the Digital & Signoff Group at Cadence. “While this is true, and routers do have to support more complex rules, there is also an increasing impact on placement.”

There has always been a tug of war between design and process engineers, but the number of design rules began increasing much more quickly after 28nm. Foundries have tried to balance all of this by market, providing a growing list of options for specific applications based upon node, including various flavors of the same node and different substrate materials. For example, Samsung Foundry and GlobalFoundries offer fully depleted SOI processes in addition to bulk CMOS, and virtually all foundries now offer advanced packaging options that can help minimize the effects of advanced-node rules by limiting what has to be developed at those nodes.

“In some cases area is more important than anything else,” said Kengeri. “And if there are a lot more transistors, then presumably you can get a higher premium. In IoT and low-end mobile, though, the key metric is cost. In the past, area and cost were the same. It was a linear progression. Manufacturing costs did not increase as much at older nodes. But at 14/10/7/5nm, manufacturing cost and area scaling are on different curves. We spend a lot more time going back and forth on EDA, digital research and design because there are some very complex tradeoffs between power, performance, cost, area, schedule and ease of design.”

Rules of the road
As chips become more prevalent in safety-critical markets, additional rules are being introduced regarding reliability, as well. In the automotive market, for example, OEMs and Tier 1 suppliers are demanding zero failure rates over 10 years. Some of that can be addressed with more complete simulation of physical effects, but it’s only part of the picture. Fabs also need to be qualified to manufacture these parts.

“Because they’re coming onto technology nodes that already exist, we can leverage existing process and design rules,” said Walter Ng, vice president of business management at UMC. “Where we see a difference is from a reliability and manufacturing standpoint. There is a lot less tolerance for marginality. Historically, for a mainstream foundry, a lot of the manufacturing is consumer-related. The level of reliability is good, but it’s not that strenuous. In a manufacturing environment, you can rework pieces of manufacturing to add reliability. But with industrial and automotive, there are pieces of the flow where that’s not allowed. The stringency level has gone up dramatically.”

That has a significant impact on the cost of manufacturing these chips, because there is far more scrap than with other processes.

“Grade zero, one, two and three dictate the stringency and the expected scrap rate,” said Ng. “This is an additional cost or premium on those parts. On top of that, there are traceability and back-room requirements. Automotive standards require all foundries to be more educated about these requirements. But when you support a particular customer with automotive requirements, those requirements can be different from one customer to the next. It’s not all black-and-white. There is a level of customization for particular applications, and that requires going down a laundry list of rules for each project.”

It’s not that high reliability is foreign to foundries. But not everything requires it.

“If you take a step back and look at some of the high-performance computing products that have been in the market for years, some of them go in your laptop,” said David Fried, CTO at Coventor, a Lam Research Company. “It’s a high-performance CPU or GPU. But some of those products also go into satellites and into the most advanced mainframe computers that are allowed to have 5 minutes of downtime every 30 years. There have always been products distributed throughout the reliability distribution. There is some stuff where if it gets to the customer and it doesn’t work, nobody cares. But IBM and Intel have been making high-performance, high-reliability CPUs at the leading edge for generation after generation.”

There are addenda to design rules to support this, as well. “With automotive, you’re looking for things like ESD and electrical overstress,” said Mentor’s White. “But now you’re adding high reliability at the most advanced nodes, so you need checks that are symmetrical and orientation-specific, and that ensure consistent fill. That’s being captured, but not in DRC.”
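
The kind of check White describes can be sketched, in a simplified way, as verifying that matched instances mirror each other about a common axis with the expected orientations. The data model and coordinates below are hypothetical, and real flows evaluate this with far more context (cell widths, routing, fill), but the shape of the check is the same.

```python
from dataclasses import dataclass

@dataclass
class Instance:
    name: str
    x: float        # placement origin, microns
    y: float
    orient: str     # "R0", "MY" (mirrored about the Y axis), etc.

def is_mirror_pair(a, b, axis_x, tol=0.001):
    """True if a and b sit symmetrically about the vertical line x = axis_x.

    Cell width is ignored for simplicity; a production check would account for it.
    """
    mirrored_x = 2 * axis_x - b.x
    same_row = abs(a.y - b.y) <= tol
    mirrored = abs(a.x - mirrored_x) <= tol
    orientations_ok = {a.orient, b.orient} == {"R0", "MY"}
    return same_row and mirrored and orientations_ok

left = Instance("diff_pair_a", x=10.0, y=5.0, orient="R0")
right = Instance("diff_pair_b", x=14.0, y=5.0, orient="MY")
print(is_mirror_pair(left, right, axis_x=12.0))  # True
```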

Rules at 7/5nm and beyond
As more designs move to smaller geometries, including the AI chips that will be used in autonomous vehicles, the number of rules goes up. EUV provides some relief, but overall complexity continues to rise, and the rule count rises with it.

“At 40nm and above, legalizing the standard cell placement did not require many design rule checks,” said Cadence’s Metcalfe. “Adding a filler between standard cells was just a matter of inserting the appropriately-sized filler cell. As designers began moving to 7nm, more and more cell placement rules were required. Placement rule checks are now needed between cells in the same row, between cells in different rows, between cells of different VT classes, width of cell dependency, specific site assignments and many others. This has made the whole placement legalization process much more complex, and a whole constraint language has been developed to capture these cell-based placement rules. These placement effects also impact physically aware timing optimization. As timing optimization engines add new cells or resize existing cells, all these placement rules need to be considered when legalizing any modified cells to ensure the updated cell location is valid. This is one of the less-visible side effects of increasing design rule complexity, but it has a significant effect on the implementation flow.”
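
One of the cell-based placement rules Metcalfe describes, spacing between cells of different VT classes in the same row, can be sketched as follows. The site pitch, cell sizes, and the one-site minimum gap are invented for illustration; production constraints are captured in the foundry's own constraint language rather than in ad hoc scripts.

```python
from dataclasses import dataclass

SITE_WIDTH = 0.057  # microns; illustrative site pitch, not a real library value

@dataclass
class Cell:
    name: str
    row: int
    x: float      # left edge, microns
    width: float  # microns
    vt: str       # "LVT", "SVT", "HVT"

def vt_spacing_violations(cells, min_gap_sites=1):
    """Flag same-row neighbors of different VT class placed closer than the (hypothetical) rule allows."""
    violations = []
    by_row = {}
    for c in cells:
        by_row.setdefault(c.row, []).append(c)
    for row_cells in by_row.values():
        row_cells.sort(key=lambda c: c.x)
        for left, right in zip(row_cells, row_cells[1:]):
            gap_sites = round((right.x - (left.x + left.width)) / SITE_WIDTH)
            if left.vt != right.vt and gap_sites < min_gap_sites:
                violations.append((left.name, right.name, gap_sites))
    return violations

row = [Cell("U1", 0, 0.000, 0.456, "LVT"),
       Cell("U2", 0, 0.456, 0.342, "HVT"),   # abuts U1 with a different VT class
       Cell("U3", 0, 0.855, 0.228, "HVT")]
print(vt_spacing_violations(row))  # [('U1', 'U2', 0)]
```

Every time the optimizer resizes or moves a cell, checks of this kind have to be re-evaluated before the new location can be accepted, which is why the placement rules ripple into timing optimization as well.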

At 7nm, the addition of trim metal shapes requires even more design rule checks. “This affects a router in two ways,” Metcalfe explained. “First, the router has to understand how to check for these complex rules, which now include effects from layer shapes, edges and mask number. However, the more important task for a router is to prevent any of the design rule checks being violated during routing itself. This is more challenging than just performing the check itself, and is the difference between signoff and implementation.”

All of the EDA vendors competing in this space are working to keep pace with the explosion in rules, and the growing complexity of the rules themselves. But there’s even more complexity ahead.

“At the next node there will be divergence from the past with triple and quadruple patterning,” said GlobalFoundries’ Kengeri. “That adds significant complexity. There are electrical issues, too, because you may have variation across three or four layers. The number of rules will go through the roof with triple patterning and quadruple patterning. There are alternate approaches with SADP/SAQP, where you have very regular structures, but that leads to additional complexity and constraints. So now you have to decide, do you put complexity into the process or into the design? Where you draw that line makes a big difference.”
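
The jump in difficulty Kengeri points to shows up in the decomposition step itself. For double patterning, deciding whether nearby shapes can be split across two masks is a two-coloring test on a conflict graph, which is fast; for three or four masks the corresponding coloring problem is far harder, which is one reason the constraints spill back into the design rules. The sketch below uses hypothetical shape IDs.

```python
from collections import deque

# Conflict graph: an edge joins two shapes too close to share a mask (hypothetical IDs).
conflicts = {"A": ["B", "D"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C", "A"]}

def two_colorable(graph):
    """True if the conflict graph can be split across two masks (i.e., is bipartite)."""
    color = {}
    for start in graph:
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for nbr in graph[node]:
                if nbr not in color:
                    color[nbr] = 1 - color[node]
                    queue.append(nbr)
                elif color[nbr] == color[node]:
                    return False  # odd cycle: needs a third mask or a layout change
    return True

print(two_colorable(conflicts))  # True: A and C on one mask, B and D on the other
```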

The growth in complexity is one of the reasons that foundries have introduced so many half-nodes and quarter-nodes.

“We’re referring to these as market-driven nodes or boutique nodes,” said Synopsys’ Richards. “But what makes them attractive is that you shrink the logic and all of the rest of the chip stays the same, so you can ramp production very quickly. About 99% of the flow is the same. There are some additional rules as the lower levels shrink, but you’re providing additional routing density just for the digital portion. The cost of moving from 16/14nm down to 7nm is massive, but from 16/14nm to 12nm is not so bad. It costs less and allows these companies to turn around a design quicker to hit the next market window and to differentiate from their competitors.”

And finally, there is some movement on the number of rules that get carried over, although the degree to which this problem is fixed will never be fully known outside of the fabs.

“There are many myths, particularly in manufacturability rules, that get carried over for many nodes,” said PDF’s Kibarian. “These myths have been based on bad assumptions about what drives manufacturability or yield. For example, for many nodes (starting around the 130nm node), fabs recommended doubling vias. The assumption behind a double via is that the failure of a via is a random event, spatially uncorrelated to neighboring vias. However, if the actual failure rate is layout-dependent, it matters how you implement the doubling of vias. In some cases, a well-landed via with plenty of metal around it had a lower failure rate. Characterizing what is actually designed matters. We also have found that minimizing pattern count is particularly important on the leading edge, as yield has moved from being driven by random failures to narrow process windows. Minimizing pattern count allows for more process latitude. Fabs and fabless have now adopted a feedback loop between design rules and monitoring the margin of those rules in production designs. Designs now include rich monitoring content specifically targeting critical rules. This information is now being used to inform how design rules evolve.”
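
Kibarian's via example can be put in rough numbers. The probabilities below are invented purely for illustration: under the independence assumption, doubling a via squares the failure probability, but if a share of the failure mechanisms is layout-dependent and common to both vias, a single well-enclosed via can come out ahead.

```python
# Back-of-the-envelope sketch with invented numbers, not measured data.
P_FAIL_NOMINAL  = 1e-6   # assumed failure probability of a minimally enclosed via
P_FAIL_ENCLOSED = 1e-9   # assumed failure probability of a well-landed via
CORRELATION     = 0.5    # assumed fraction of failure mechanisms shared by adjacent vias

# (a) Doubled vias under the independence assumption: both must fail.
p_double_independent = P_FAIL_NOMINAL ** 2

# (b) Doubled vias when neighboring failures are partly correlated
#     (crude model: the shared mechanisms do not get the squaring benefit).
p_double_correlated = (CORRELATION * P_FAIL_NOMINAL
                       + (1 - CORRELATION) * P_FAIL_NOMINAL ** 2)

# (c) One well-landed via with generous metal enclosure.
p_single_enclosed = P_FAIL_ENCLOSED

print(f"double, independent: {p_double_independent:.1e}")  # 1.0e-12
print(f"double, correlated:  {p_double_correlated:.1e}")   # ~5.0e-07
print(f"single, enclosed:    {p_single_enclosed:.1e}")     # 1.0e-09
```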

Conclusion
EDA vendors, chipmakers, IP developers and foundries have been collaborating for at least the past several process nodes, starting at version 0.1 of a process and working all the way up to version 1.0. But until 10/7nm, these were still somewhat separate worlds. That’s not true anymore, and the growing, increasingly complex body of design rules now amounts to a collection of the best ways around the problems all of them are encountering.

How much of that shifts to one group versus another varies by foundry and by node, but what’s clear is that complexity is no longer just about the design or the manufacturing. It’s about the entire flow, from initial architecture all the way through to final manufacturing. And the rules to facilitate all of that are growing steadily and becoming increasingly complex.


