More Design Rules Ahead

Increased complexity, limitations on tools and processes, and increased cost all mean less freedom for designers.


By Ed Sperling & Mark LaPedus
For those companies that continue to push the limits of feature shrinkage, designs are about to become more difficult, far more expensive—and much more regulated.

Two converging factors will force these changes. First, the limits of current 193nm immersion lithography mean companies now must double pattern at 20nm, and potentially quadruple pattern at 14nm. If the schedule for extreme ultraviolet lithography slips further, it could add even more layers of patterning—the number now being bandied about is eight—significantly increasing the cost and the time it takes to produce wafers. That has led to exploration of new options for manufacturing chips using directed self-assembly and gridded design rules with gratings, both of which require very regular approaches to shapes and placement.
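To see why multi-patterning weighs so heavily on cost, consider a rough sketch of the arithmetic in Python. The per-pass cost and pass counts below are illustrative assumptions, not published figures; the point is simply that the patterning cost for a critical layer grows roughly in step with the number of exposure-and-etch passes.

    # Illustrative sketch only: rough lithography cost per critical layer as
    # exposure passes multiply. The normalized per-pass cost and the pass counts
    # are placeholder assumptions, not published figures.
    COST_PER_EXPOSURE = 1.0   # normalized cost of one 193i exposure-and-etch pass

    def layer_cost(passes, cost_per_pass=COST_PER_EXPOSURE):
        """Cost of patterning one critical layer scales with the number of passes."""
        return passes * cost_per_pass

    for node, passes in [("28nm", 1), ("20nm", 2), ("14nm", 4), ("no-EUV worst case", 8)]:
        print(f"{node}: {passes} pass(es) -> {layer_cost(passes):.1f}x relative cost")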

Second, physical effects are becoming so pervasive and problematic—heat, electromigration, electrostatic discharge and electromechanical interference—that designs must concurrently deal with more pieces of a complete system than in the past. That includes multiple levels of the software stack, more hardware components, more I/O, and even verification planning. Those physical effects only increase as die are stacked together, forcing companies to pre-integrate more blocks into subsystems and platforms and to do much more up-front planning.

New process nodes always have carried more restrictive design rules until those processes are well tested. But the number of rules is increasing to the point where they are no longer even comprehensible to many people, with no end in sight.

“We had people at 40nm who claimed they could understand all the rules in the book,” said ARM fellow Rob Aitken. “At 32 and 28nm they were starting to admit there were a few rules they couldn’t remember. And at 20nm, nobody admits to knowing all the rules. There are single rules that take up pages of the design-rule manual for one tool. So when you look at all these things, the complexity eventually gets to the point where it’s not clear how to meet them in the first place, and then even if you do meet the rules that it will still be manufacturable.”

That, in turn, has left designs with far less flexibility than they had at older nodes.

“We have this iteration between design people who want to design specific shapes and fab people who want to outlaw them,” said Aitken. “The design people look for loopholes and then the fab people patch the rules. It’s past the point where you can check everything against the design rule manual.”
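What “checking everything against the design rule manual” means in the simplest possible case is shown in the sketch below: a single minimum-spacing rule applied to a handful of rectangles. Real 20nm rule decks contain thousands of context-dependent rules; the shapes, spacing value, and data model here are illustrative assumptions.

    # Minimal sketch of one design-rule check: minimum spacing between rectangles
    # on the same layer. The 50-unit spacing value and the rectangles are assumed
    # for illustration; real rule decks are vastly larger and context-dependent.
    from itertools import combinations

    MIN_SPACING = 50  # illustrative rule value, in arbitrary layout units

    def spacing(a, b):
        """Edge-to-edge distance between axis-aligned rectangles (x1, y1, x2, y2)."""
        dx = max(b[0] - a[2], a[0] - b[2], 0)
        dy = max(b[1] - a[3], a[1] - b[3], 0)
        return (dx ** 2 + dy ** 2) ** 0.5

    def check_spacing(shapes):
        """Return every pair of shapes that sits closer than the rule allows."""
        return [(a, b) for a, b in combinations(shapes, 2) if spacing(a, b) < MIN_SPACING]

    violations = check_spacing([(0, 0, 100, 20), (130, 0, 230, 20), (0, 40, 100, 60)])
    print(violations)  # pairs of rectangles closer than MIN_SPACING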

Uncertainty over new approaches
Perhaps the biggest uncertainty is what happens next in manufacturing. The benefits of classic scaling, where chipmakers could simply rely on the next node to improve power efficiency and performance, ended at 130nm. At 14nm, new materials such as fully depleted silicon-on-insulator may be necessary, and entirely new ways of building transistors and memory components are being researched and tested. At 7nm or 8nm (the exact number is still in flux) the entire concept of a transistor could change, with electrons tunneling through walls.

To get there could require new manufacturing approaches. The best known of these is directed self-assembly, whereby patterns are created and components literally align themselves on a substrate. A second approach is 1-D gridded design rule layouts, in which the metal on each layer runs in a single direction, perpendicular to the metal on the adjacent layer.
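A minimal sketch of the gridded idea follows, assuming hypothetical layer names and a simple segment format: each metal layer allows wires in only one direction, alternating from layer to layer, and anything else is illegal by construction.

    # Sketch of the 1-D gridded-design-rule idea: each metal layer permits wires
    # in only one direction, alternating layer to layer. The layer names and the
    # (x1, y1, x2, y2) segment format are illustrative assumptions.
    LAYER_DIRECTION = {"M1": "horizontal", "M2": "vertical", "M3": "horizontal"}

    def segment_direction(x1, y1, x2, y2):
        return "horizontal" if y1 == y2 else "vertical" if x1 == x2 else "jog"

    def legal(layer, seg):
        """A segment is legal only if it runs in its layer's single allowed direction."""
        return segment_direction(*seg) == LAYER_DIRECTION[layer]

    print(legal("M1", (0, 10, 80, 10)))   # True: horizontal wire on a horizontal layer
    print(legal("M2", (40, 0, 40, 60)))   # True: vertical wire on a vertical layer
    print(legal("M2", (0, 10, 80, 10)))   # False: wrong direction for this layer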

There is a two-step manufacturing process to pattern wafers with 1-D layouts: gratings and cuts. Yan Borodovsky, a fellow and director of advanced lithography at Intel, refers to this process as “complementary lithography.” The first step is to create uniform lines and spaces using 193nm immersion lithography. Then, an etch tool trims the sides of the lines to form small and exact grating structures. “We can use 193nm lithography for (several generations) to make the gratings,” Borodovsky said. “We can extend gratings for a very long time.”

The challenge is to find the right lithography technology to handle the cuts. In this step, the idea is to form vias, contact holes or other structures using a separate “cut mask.” In effect, the vias connect one grating to another. There are a number of options for performing this step: 193nm immersion, 193nm immersion plus DSA, EUV, or direct-write e-beam.
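One way to picture the two-step flow is as a set of uniform lines that a cut mask later breaks into segments. The sketch below models that, with an assumed track count, line length, and cut window; it is an illustration of the gratings-plus-cuts idea, not Intel’s actual process.

    # Sketch of the two-step "complementary lithography" flow described above:
    # a uniform grating printed first, then a cut mask that breaks selected lines
    # into segments. Track count, length, and cut positions are assumed values.
    def make_grating(num_tracks, length):
        """Each track starts as one unbroken line from 0 to length."""
        return {track: [(0, length)] for track in range(num_tracks)}

    def apply_cut(grating, track, cut_start, cut_end):
        """Split any segment on the given track that the cut window overlaps."""
        new_segments = []
        for seg_start, seg_end in grating[track]:
            if cut_end <= seg_start or cut_start >= seg_end:
                new_segments.append((seg_start, seg_end))   # cut misses this segment
            else:
                if seg_start < cut_start:
                    new_segments.append((seg_start, cut_start))
                if cut_end < seg_end:
                    new_segments.append((cut_end, seg_end))
        grating[track] = new_segments

    layer = make_grating(num_tracks=4, length=200)
    apply_cut(layer, track=1, cut_start=80, cut_end=100)   # one cut breaks track 1 in two
    print(layer[1])  # [(0, 80), (100, 200)]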

Each lithographic option has its advantages and disadvantages. “We are working on multiple options in parallel,” Borodovsky said. The ultimate decision for “high-volume manufacturing depends on the defects and cost-of-ownership.”

But from a design standpoint, it also simplifies things—and restricts them.

“Once we’re at a point where the only things we can really pattern reliably are gratings, then this whole distinction of logic and memory goes away,” said Lars Liebmann, an IBM distinguished engineer. “Everything is a grating. How you pattern that grating no longer really matters. There are a lot of design efficiency benefits from that.”

Still, the only option for getting there right now is 193nm immersion, as the other solutions are not ready. In a 40nm pitch scenario, for example, the gratings could be handled by 193nm immersion in one mask step, but it would take four additional masks to handle the cuts using 193nm immersion, for a total of five mask steps, he said. To reduce costs, the industry needs EUV or e-beam. At the same 40nm pitch, EUV would require only one mask step for the cuts, bringing the total to two: one for the gratings and one for the cuts.
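That mask-count arithmetic can be laid out directly, with an assumed normalized cost per mask step to make the comparison explicit; real cost-of-ownership figures vary widely.

    # The mask-count comparison from the paragraph above, with an assumed
    # (illustrative) normalized cost per mask step.
    COST_PER_MASK_STEP = 1.0  # normalized; actual cost-of-ownership varies widely

    options = {
        "193i gratings + 193i cuts": 1 + 4,  # one grating mask, four cut masks
        "193i gratings + EUV cut":   1 + 1,  # one grating mask, one cut mask
    }

    for name, masks in options.items():
        print(f"{name}: {masks} mask steps -> {masks * COST_PER_MASK_STEP:.1f}x relative cost")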

The big question is clear: When will EUV be ready for high-volume manufacturing (HVM)? “In my opinion it is critical for EUV HVM introduction to attain 100 wafers per hour productivity with resists capable to limit (the) impact of stochastic effects,” Borodovsky said at a recent event. “This most probably will require sources 4X more powerful than currently under development.”
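Read literally, and under the simplifying assumption that throughput scales roughly linearly with source power at a fixed resist dose, the 4X figure implies that sources now under development would deliver on the order of 25 wafers per hour. That baseline is an inference from the quote, not a published number.

    # Back-of-envelope only: assumes throughput scales linearly with EUV source
    # power at a fixed resist dose, and infers the baseline from the 4X comment above.
    target_wph = 100           # the 100 wafers-per-hour productivity target
    power_multiplier = 4       # "sources 4X more powerful than currently under development"
    implied_baseline = target_wph / power_multiplier
    print(implied_baseline)    # 25.0 wafers per hour, an inferred (not published) figure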

EDA’s role
From a design engineer’s standpoint, many of these changes will be transparent. But there certainly will be more rules to follow, those rules will be more complex, and there will be far fewer options to bend them.

“The implication of all of this is on physical implementation, not on things like synthesis and the placement of blocks,” said Juan Rey, senior director of engineering for Calibre in Mentor Graphics’ Design to Silicon Division. “All technologies that have been developed over time have been done with a minimal amount of disruption. They have only produced minor changes.”

Rey noted there are no major disruptions on the horizon on the design side, despite the potentially radical changes on the manufacturing front. Even die stacking is relatively well understood from a design perspective. The real impact there is on the packaging and test front.

But every side of this is still getting more complicated as physics restricts the amount of wiggle room in each discipline. In the past, complications in one area didn’t necessarily cause complications in another. They are now becoming so intertwined that dependencies will emerge in places where they never existed. It will take time to sort out exactly what this means for design, but in the short term it almost certainly will mean more rules to deal with and less freedom for experimentation and differentiation.


