Node Within A Node

Reducing process margin could provide an entire node’s worth of scaling benefits.


Enough margin exists in manufacturing processes to carve out the equivalent of a full node of scaling, but shrinking that margin will require a collective push across the entire semiconductor manufacturing supply chain.

Margin is built into manufacturing at various stages to ensure that chips are manufacturable and yield sufficiently. It covers everything from variation in how lines are printed on a wafer, to edge placement for masks, to contaminants in materials. Typically this shows up in the design rule deck that chipmakers use as a guide for designing chips. But as manufacturing equipment becomes more precise due to better sensors and analytics, and as new technology is introduced across a broad spectrum of process steps, including inspection, metrology, deposition, etch and lithography, not all of that margin remains essential. Variation can be controlled much more tightly than in the past, which should improve predictability and yield and ease some of the restrictions on designs.

From a high level, this adds another knob for device scaling. As the benefits of Moore’s Law scaling erode at each new node after 16/14nm, different approaches are required to achieve power, performance and area/cost improvements. (Even the PPA acronym has been stretched to PPAC, adding cost, in recent conference presentations.)

“The economics of Moore’s Law are not as compelling as in the past, when you used to get 0.7X improvement, and the cost of development is increasing,” said Ajit Paranjpe, CTO at Veeco. “Moore’s Law will continue, but just going along Moore’s Law will not give you the cost benefit. You have to do more. To move to the next node you need to improve process margin. In fact, without an improvement in process margin you can’t even move to the next node.”

Improvements in density have largely been dealt with one node at a time, although not necessarily in a straight line. Over the past couple of years, the design world has added significant power and performance benefits through architectural changes. Rather than a single processor, complex chips now typically include multiple different processor types, which is essential for power and performance in AI/machine learning chips. And rather than keeping everything on a single die, more is being offloaded through high-speed interfaces and advanced packaging, which deliver PPAC improvements by different means.

Reducing process margin is another big knob to turn, but this one is being driven from the equipment side rather than by chip architects or the fabs.

“A lot of this will be evolutionary nodes,” said David Fried, vice president of computational products at Lam Research. “Those are new nodes, but they’re scaled nodes. Everybody is going after gate-all-around, and they will continue in that direction. But in between now and then, this opens up the opportunity to add a node in the middle after ‘radical’ variation reduction. When you total up all of the variation, people look at it and say, ‘Yes, we knew it was bad, but we didn’t know it was that bad.’ There’s a whole node here. But you have to attack it holistically. It’s not just the responsibility of the etch guys or any other single group. It’s a whole flow that eliminates variation, sub-line by sub-line by sub-line.”

In a presentation at Semicon, Fried noted that many of the possible reductions look small when viewed individually, but collectively they add up to a potentially significant change.

That includes a number of different process steps, such as lithography, etch, deposition, cleaning, CMP, doping and materials variation. “In addition to all of this, everything must be self-aligned,” said Fried. “And process control needs to be part of the solutions much earlier.”
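Part of the reason a holistic attack is required is the way independent variation sources combine. A rough Python sketch, using invented 3-sigma contributions purely for illustration, shows the root-sum-square behavior: improving a single step barely moves the total budget, while improving every step moves it substantially.

```python
import math

# Invented 3-sigma contributions (in nm) to an overall edge-placement variation
# budget, spread across several process steps. Values are illustrative only.
sources = {
    "litho overlay": 2.5,
    "CD uniformity": 2.0,
    "etch bias variation": 1.8,
    "deposition thickness": 1.2,
    "CMP variation": 1.0,
}

def rss(values):
    # Independent contributors combine as the root sum of squares.
    return math.sqrt(sum(v * v for v in values))

baseline = rss(sources.values())

# Improve only the etch step by 30%.
etch_only = dict(sources, **{"etch bias variation": 1.8 * 0.7})
# Improve every step by 30% (the "holistic" approach).
all_steps = {k: v * 0.7 for k, v in sources.items()}

print(f"baseline total: {baseline:.2f} nm")                  # ~3.99 nm
print(f"etch-only fix:  {rss(etch_only.values()):.2f} nm")   # ~3.78 nm
print(f"all steps:      {rss(all_steps.values()):.2f} nm")   # ~2.79 nm
```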

Where the savings are
Still, the collective payoff can be substantial. Tightening up processes could provide much-needed relief for the semiconductor industry at a time when the costs of developing chips at the next node are soaring. Estimated costs for a new 5nm chip are between $210 million and $680 million, depending upon the complexity of the design, according to Gartner. On top of that, it costs an estimated $1 billion to develop new processes, and the cost of equipping an advanced-node fab is more than $10 billion.

This already is showing up in some of the most advanced nodes, where benefits in PPAC are largely based upon improvements in process control rather than shrinking of feature sizes.

“The more you reduce process variation, the more you can push existing design rules on existing scaled processes without having to aggressively scale gate and metal pitches,” said Chet Lenox, director of process control solutions for new technology and R&D at KLA. “You already see examples of that in how TSMC scaled its 7nm process node. If you look at their fin and metal pitches, they weren’t that aggressively scaled. Gate pitch went from 66 to 57, and metals from 44 to 40. But what you do notice is significant enhancements to the per-fin transistor performance. So their fins are much narrower, their gate lengths are much shorter. By doing this they reduced the number of fins required in the SRAM cell because of variation improvement and overall transistor performance improvement. That allowed them to very aggressively scale the SRAM. It’s the same with logic cell heights. They were able to go from 8.25 tracks to 6 tracks. The fin pitch is only slightly reduced, but they were able to significantly reduce total track height via fin depopulation.”
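Those quoted figures can be turned into a back-of-the-envelope density comparison. The sketch below treats cell width as proportional to gate pitch and cell height as the track count times the metal pitch, which ignores fin count and other design-rule details, but it shows how modest pitch scaling plus track-height reduction still approaches a full node’s worth of area.

```python
# Approximate relative standard-cell footprint from the figures quoted above
# (gate pitch 66 -> 57 nm, metal pitch 44 -> 40 nm, 8.25 -> 6 tracks).
# Simplification: width ~ gate pitch, height ~ tracks * metal pitch.

def cell_area(gate_pitch_nm, metal_pitch_nm, tracks):
    return gate_pitch_nm * (tracks * metal_pitch_nm)

old = cell_area(66, 44, 8.25)
new = cell_area(57, 40, 6.0)

# Prints roughly 0.57x, close to a traditional full-node 0.5x area shrink,
# even though the pitches themselves scaled by only about 10% to 14%.
print(f"relative area: {new / old:.2f}x")
```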

This benefits the process itself, and it also allows for increases in density.

“Variation mainly speaks to the design rule offsets,” Lenox said. “For example, if you have a gate cut, the overlay and CD variation affect the edge placement error of the gate cut. If all of that variation is quite large, then your design rules have to take that into account. You need larger offsets, and that generates more area. If you’re able to reduce that variation in the gate cut edge placement, then you can have tighter design rules and get incremental improvements in area scaling. If you can do that across 6 to 10 key features in the logic and SRAM cells, you can get significant area improvement.”
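A hypothetical worked example of that logic: size the design-rule offset to cover some multiple of the gate-cut edge placement variation, then compound the resulting per-feature area savings across the 6 to 10 critical features Lenox mentions. All of the numbers below are assumptions chosen for illustration.

```python
import math

def rule_offset(overlay_sigma_nm, cd_sigma_nm, k=3.0):
    # Offset sized to cover k-sigma of the edge placement error, treating
    # overlay and half of the CD variation as independent contributors
    # (a common simplification, not a foundry rule).
    return k * math.sqrt(overlay_sigma_nm**2 + (cd_sigma_nm / 2)**2)

before = rule_offset(2.0, 2.4)   # hypothetical baseline control
after = rule_offset(1.4, 1.7)    # the same feature with ~30% tighter control
print(f"offset before: {before:.1f} nm, after: {after:.1f} nm")   # ~7.0 -> ~4.9 nm

# If each of ~8 critical features in the logic and SRAM cells gives back
# roughly 1% of area when its offset shrinks, the compounded gain is
# a mid-single-digit percentage.
print(f"compounded area: {0.99 ** 8:.3f}x")   # ~0.92x
```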

This already is happening in the memory world.

“A good example of using process margin for advancing capability is 3D NAND,” said Veeco’s Paranjpe. “That technology ran into issues with scaling. First of all, dimension control was problematic. Second, the cells would not work because they were too close to each other. So they went from a 2D architecture to a 3D architecture, and it happened very quickly. But adding layers is not scaling. You can etch things more precisely. It’s all about exploiting the process to add more layers. There’s no physical scaling. It’s all about process window scaling. They have really figured out how to refine the process steps. In some cases they have had to move to slightly new processes to get the new process window they need, from CVD to ALD, for example.”

DRAM, likewise, used to follow the traditional logic nodes until manufacturing moved into the 1x node. “They haven’t gone from 28nm to 14 to 7 to 5,” he said. “They’ve gone 19 to 18 to 17 to 15 to 14, which is the 1x, 1y, 1z, 1a, 1b node. They’re incremental shrinks, and all of the incremental shrinks are about pushing the process window on lithography, as well as all of the process steps. That’s another great example where over four or five generations the benefit isn’t coming from Moore’s Law scaling. It’s coming from process scaling.”
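The cumulative arithmetic behind those incremental DRAM steps is easy to check. Using the half-pitch numbers in the quote, each step is a small shrink, but multiplied together they approach the roughly 0.7x linear (0.5x area) jump of a traditional full node:

```python
# Half-pitch progression quoted above (nm): 1x, 1y, 1z, 1a, 1b.
nodes_nm = [19, 18, 17, 15, 14]

steps = [later / earlier for earlier, later in zip(nodes_nm, nodes_nm[1:])]
linear = nodes_nm[-1] / nodes_nm[0]

print("per-step linear shrinks:", [f"{s:.3f}" for s in steps])   # each ~0.88x to 0.95x
print(f"cumulative linear: {linear:.3f}x, cumulative area: {linear**2:.3f}x")   # ~0.74x, ~0.54x
```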

Using more and better data
One of the big changes on the equipment side is the inclusion of more sensors everywhere, which provide a view into what is happening at any given time in a manufacturing process. In addition, those sensors are faster than they were in the past, and significantly smaller.

“We’re moving to nanosensors, which are two to three times faster,” said Subodh Kulkarni, president and CEO of CyberOptics. “These are going into capital equipment and being used for advanced packaging. Sampling is going on today at fabs and OSAT levels. What this will do is allow for a functional check of packages and let you understand how much fallout there is before you get to 100% inspection.”
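The statistical idea behind sampling ahead of 100% inspection can be sketched in a few lines. The sample size, defect count, and escalation threshold below are illustrative assumptions, not CyberOptics parameters.

```python
import math

def fallout_estimate(defects, sample_size, z=1.96):
    # Point estimate of the defect fraction plus an approximate 95% upper
    # bound (normal approximation, adequate for reasonably large samples).
    p = defects / sample_size
    margin = z * math.sqrt(p * (1 - p) / sample_size)
    return p, p + margin

p, upper = fallout_estimate(defects=7, sample_size=2000)
print(f"estimated fallout: {p:.2%}, ~95% upper bound: {upper:.2%}")

# Escalate to 100% inspection only if the upper bound exceeds the fallout budget.
if upper > 0.005:
    print("fallout may exceed 0.5% of packages; consider 100% inspection")
```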

On the inspection side, what these sensors look for is unexpected distortions in light reflected off the surface of the wafer. “If you put a projector on top and a camera on the side, you can look at the silicon surface,” Kulkarni said. “If you use two cameras with two channels, you get a complete specular geometry and you diffuse it at the same time. You can use this to detect all sorts of new structures, such as bumps and pillars. But the pillars are getting skinny and tall, which is creating a new challenge because the cameras are at a fixed angle. You can move them, but every time you do that you have to make new calibrations.”
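The fixed-angle problem comes down to simple geometry: a side-looking camera can only see the base of a pillar if its elevation angle clears the neighboring pillar. The dimensions below are hypothetical, chosen to contrast a short, wide bump with a tall, skinny pillar.

```python
import math

def min_elevation_deg(height_um, pitch_um, diameter_um):
    # Minimum camera elevation above the wafer plane needed to see the base
    # of a feature past its nearest neighbor (ignores optics and tilt effects).
    clear_gap = pitch_um - diameter_um
    return math.degrees(math.atan2(height_um, clear_gap))

print(f"short, wide bump:    {min_elevation_deg(50, 150, 80):.0f} degrees")   # ~36
print(f"tall, skinny pillar: {min_elevation_deg(100, 60, 25):.0f} degrees")   # ~71
```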

So as chips become more sophisticated and structures become more unusual, they may become more difficult to inspect. This is where an extra rev of a node, or a nodelet, could help significantly, because it will take time to develop an array of new manufacturing technology and to iron out the processes with all the new structures and equipment for the next full node. Regardless, a lot more analysis of data from all of these sensors will be required for both nodelets and the next full nodes, and that work is already underway.

“The key is to connect more sources of information,” said Ram Peltinov, Patterning Control Division head at Applied Materials. “If you find similar structures and you see a certain problem showing up over and over, you can reduce the amount of data you need to sort through. But the key building block is a reliable data source. Otherwise, even if you have multiple sources, potentially you will have multiple errors on each. This is the way to decrease variability. There are multiple ways of doing everything, but each process step has different tolerances. We need to be able to create more good information at the same cost.”
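One generic way to combine measurements of the same quantity from sources of varying reliability is inverse-variance weighting, sketched below. This is a standard statistical technique, not a description of Applied Materials’ method, and the values are invented.

```python
def fuse(measurements):
    # measurements: list of (value, sigma) pairs from different data sources.
    # More reliable sources (smaller sigma) get proportionally more weight.
    weights = [1.0 / (sigma * sigma) for _, sigma in measurements]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, measurements)) / total
    return value, (1.0 / total) ** 0.5

# Three CD readings of the same feature (nm) from tools with different noise levels.
cd_sources = [(24.8, 0.3), (25.3, 0.5), (24.2, 1.5)]
value, sigma = fuse(cd_sources)
print(f"fused CD: {value:.2f} nm, uncertainty: {sigma:.2f} nm")   # ~24.91 nm, ~0.25 nm
```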

The whole industry is beginning to see the value in that.

“We’re seeing a whole shift to smart manufacturing where deep learning, deep neural networks and AI are built into the manufacturing process,” said Timothy Kryman, senior director of corporate marketing at Rudolph Technologies. “This is basically yield management, and what you’re trying to do is figure out how to tighten the margin to improve yield. For yield, this is well-known. But we’re also broadening this out to improve reliability. So you may yield a good die, but if the process is not tight enough, then six months down the road you may encounter a latent defect. This is particularly challenging if you have a heterogeneous package. First, you need to track the defect to the die in question or the package process used and then ascertain liability for the defect. The packaging house may not be liable for just the defective package itself, but instead the value of the device it went into.”
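A minimal sketch of what die-level genealogy for a heterogeneous package could look like, assuming one record per packaged die. The data model, identifiers, and process steps are hypothetical; the point is that a field return months later can be traced back to the wafer, die, and packaging steps involved.

```python
from dataclasses import dataclass

@dataclass
class DieRecord:
    wafer_id: str
    die_xy: tuple        # die coordinates on the wafer
    package_id: str
    package_steps: list  # packaging process steps (and tools) this die saw

# Genealogy keyed by the unit identifier stamped on the packaged part.
genealogy = {
    "PKG-0042/U3": DieRecord("W17-LOT88", (14, 9), "PKG-0042",
                             ["die attach: tool A", "underfill: tool C", "reflow: oven 2"]),
}

def trace_field_return(unit_id):
    rec = genealogy.get(unit_id)
    if rec is None:
        return "no genealogy on record; liability cannot be assigned"
    return (f"wafer {rec.wafer_id}, die {rec.die_xy}, package {rec.package_id}; "
            f"review steps: {', '.join(rec.package_steps)}")

print(trace_field_return("PKG-0042/U3"))
```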

That, in turn, needs to be coupled with expertise in a particular facet of manufacturing.

“The data are very valuable, but what’s just as valuable is the domain knowledge,” said Rick Gottscho, CTO of Lam Research. “The challenge is conditioning the data and knowing how to filter it, massage it, and transform it into something that’s useful for a given application. That’s all about the domain knowledge.”
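A generic example of that conditioning step: drop gross outliers from a raw sensor trace, then normalize it so traces from different tools or chambers can be compared. The thresholds stand in for the domain knowledge Gottscho describes and are illustrative only.

```python
import statistics

def condition(trace, outlier_k=3.0):
    # Filter: drop points far from the median (robust to the outliers themselves).
    med = statistics.median(trace)
    mad = statistics.median(abs(x - med) for x in trace) or 1e-9
    kept = [x for x in trace if abs(x - med) <= outlier_k * 1.4826 * mad]
    # Transform: standardize so different tools/chambers are comparable.
    mean = statistics.fmean(kept)
    std = statistics.pstdev(kept) or 1e-9
    return [(x - mean) / std for x in kept]

raw = [1.02, 1.01, 0.99, 1.03, 9.70, 1.00, 0.98]   # one spike from a sensor glitch
print(condition(raw))   # the 9.70 reading is dropped; the rest are standardized
```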

Used correctly, this can create significant change across the semiconductor manufacturing world. “The way that equipment is operated and maintained in the fab will get disrupted,” Gottscho said. “The customers clearly will put more emphasis on what a company’s data offerings are, what their data strategy is, and how it aligns with their own strategy. This is a big challenge for the industry.”

Materials
Process improvements are not confined just to the equipment side. They also involve the materials used to create chips. As a result, there is a widespread effort to modify some of the materials used in semiconductor manufacturing processes and improve the purity of all of them.

This adds its own level of complication, though. Srikanth Kommu, executive director of the semiconductor business at Brewer Science, said that in the past materials used in production were sacrificial, meaning they were burned off or chemically removed in the processing of wafers. That’s beginning to change.

“We’re now working with some customers to leave certain materials behind,” Kommu said. “This is selective modification. The line features are so small that you can optimize the chemistry of a compound. So in some cases, you do not deposit anything. In others, you segregate materials and flow them, and then use temperature and chemistry to determine what gets left behind. Instead of a laborious deep etch, you leave a polymeric material with very specific physical properties.”

This has a direct impact on scalability, because many of the advanced node benefits are coming from the process rather than the feature shrinks. “This is already happening,” said Kommu. “It’s now node+, or node++. And that will be the differentiator. In the past, it was all about performance. That’s no longer true.”

This is particularly true in markets such as automotive, where 7nm designs are being developed for AI control systems. The key there is reliability. German automakers and Tier 1s are demanding defect-free chips that can perform to spec for 18 years. That has raised purity standards from 5 parts per billion to 5 parts per trillion, a number that may be largely symbolic because there is no way to measure it today. However, the emphasis on reliability is very clear, and it is spreading into a variety of markets, such as smartphones. In the past, those chips were only expected to last for two years. That has since increased to four years.

“Impurities can impact reliability,” said Anand Nambiar, executive vice president and global head of performance materials at Germany’s Merck Group. “That makes it critical to manage upstream suppliers, but it’s a problem because startup companies don’t really understand what reliability means. So if you think about CMP, there are particles in the slurries. Those small particles have to be sloped and rounded consistently so that you have a consistent removal rate. With more and more stacking and multiple patterning, you have more CMP, and then you have to get this gunk out. So we’re seeing new particles coming into the market, and better characterization of the materials for our customers. Basically what we’re doing is improving the materials innovation cycle. There is a richer information set and better pre-screening of materials for electrical characteristics.”

IP/business considerations
Whether this turns into multiple nodelets or helps smooth the transition to the next major node is less of a concern for the manufacturing side. But it has a direct impact on what IP will be available and when.

“We struggle with that,” said Mike Gianfagna, vice president of marketing at eSilicon. “Do we move to 5nm right away, do we optimize on 7nm, or do we go in a different direction and develop additional IP at older nodes? It’s not always clear where you point your resources. The obvious answer is that 5nm is the next big thing, but there are also 11nm and 12nm. If you invest in IP there, how does that compare to other markets? At the same time, if we wait on 5nm, we miss that window. 5nm is more complex in some ways, but the strange thing is the design rule manual is thinner at 5nm than at 7nm because there is less multipatterning with EUV.”

Conclusion
Being able to reduce process variation and utilize more data more quickly will go a long way toward improving the value proposition for migrating to the next node or a nodelet, or for extending existing leading-edge nodes. Much of that will be a business decision by the foundries and their largest customers, and by the IP vendors that must choose between nodes because they cannot develop IP for everything.

The realization that enough margin exists in manufacturing processes is only a first step. There are many more tweaks that can be made across the supply chain, from materials to sensors to new equipment, to make chips faster, lower power, and more reliable.



