Differences in equipment under scrutiny as tolerances tighten.
Variation between different manufacturing equipment is becoming increasingly troublesome as chipmakers push to 10/7nm and beyond.
Process variation is a well-known phenomenon at advanced nodes. But some of that is actually due to variations in equipment—sometimes the exact same model from the same vendor. Normally this would fall well below the radar of the semiconductor industry. But as tolerances for variation become tighter at advanced nodes, the impact is beginning to spread out to many more companies across the manufacturing ecosystem.
From the foundry side—the actual users of equipment—equipment variation always has been a concern. It can affect everything from uptime in a fab to wafer yields, as well as chip performance and post-production reliability of chips.
“We’ve always had issues matching tools,” said Gary Patton, CTO at GlobalFoundries. “This has always required at least some effort, even with the same brand of tools. With finFETs, you want to have exactly the right profile and you need to tune chemistries to get everything to match.”
What’s different is that variation between tools, and collectively across multiple tools, is becoming more problematic as node numbers decrease. “Variation is a lot higher than in the past,” said Walter Ng, vice president of business management at UMC. “You need to do qualification every time you bring up a new tool because it can be a capacity limiter.”
Equipment makers recognize these concerns. Most are working on solutions.
“Reducing variation through system matching and chamber matching is becoming increasingly important to meet the ever-stringent variability requirements,” said Jason Shields, vice president of process control at Lam Research. “Advanced techniques, such as data analytics, subsystem diagnostics, and machine learning, are being developed to ensure that each system produces dies and wafers with exactly the same process results. Verification of results and real-time feedback with new sensor capability will enable advanced analytics with more information for endpoint control, fault detection, drift control, and fast matching.”
But effectively solving this problem requires changes across the entire supply chain, because there is no single cause. It is essentially a lot of little problems that add up to a much bigger one, spanning everything from the purity of gases and the pressure in different etch chambers to the consistency of power supplies and even the power itself. But across the supply chain, some factors are more obvious than others.
“Process tools are the main source of variation on the wafer,” said Neeraj Khanna, senior director of customer engagement at KLA-Tencor. “Process tools can add particulate matter during a deposition, etch or cleaning process. They can drift from the center of the process or perform their process step non-uniformly. These process inconsistencies result in within-die, within-wafer or wafer-to-wafer deviations in feature shape, overlay and CD, or cause a wide range of defect types. Process tools of a given type can be poorly matched to one another, creating differences among wafers that have passed through different process tools. And all of these issues can affect yield.”
This is particularly evident with overlay issues, which have a cumulative effect because they result from multiple steps in the process flow.
“All equipment from the scanner and mask through your metrology tool and materials will eat up your error budget,” said Regina Freed, senior director of pattern technology at Applied Materials. “The maximum error budget, or edge placement error (EPE), allowed is approximately a quarter of the pitch. For the critical layers and at advanced nodes, feature pitch will be below 40nm. This means that the total variability of the process to create these small features and align them to the next layer has to be below 10nm. If you look, for example, at a via, this via is created using deposition, etch and lithography. The via will need to align to the metal layer below, which will be created using multiple patterning. Considering that many process steps are involved in creating the line/space metal patterns, the count quickly adds up to well over 10 process steps. That means that on average each process step has a very small budget. So all process tools and metrology tools are affected. Looking at 5nm and beyond, the problem becomes so difficult to solve that the only path to yield will be self-aligned patterning.”
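Freed's arithmetic can be made concrete with a short back-of-the-envelope calculation. The quarter-pitch rule and the 40nm pitch come from her comments above; how the per-step budgets are split (worst-case linear vs. statistical root-sum-square) is an illustrative assumption, since the article only says each step gets "a very small budget."

```python
import math

def epe_budget(pitch_nm: float) -> float:
    """Maximum allowed edge placement error: roughly a quarter of the pitch."""
    return pitch_nm / 4.0

def per_step_budget_linear(total_nm: float, n_steps: int) -> float:
    """Worst-case split: every step's error adds directly."""
    return total_nm / n_steps

def per_step_budget_rss(total_nm: float, n_steps: int) -> float:
    """Statistical split: independent error sources add in quadrature."""
    return total_nm / math.sqrt(n_steps)

total = epe_budget(40.0)                          # 10 nm at a 40nm pitch
worst_case = per_step_budget_linear(total, 10)    # 1.0 nm per step
statistical = per_step_budget_rss(total, 10)      # ~3.2 nm per step
```

Even under the more forgiving statistical assumption, each of 10 contributing steps gets only about 3nm of budget; under worst-case addition, roughly a single nanometer.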
Lithography
Variation is particularly evident on the lithography side, where one EUV scanner is not necessarily the same as the next—even if they are marketed as identical pieces of equipment. The difference is akin to a manual typewriter, where one key may strike differently on one machine than another even if they were sold as the same model. But at 10/7nm, just one or two nanometers can make a difference.
“We used to wrestle with tolerances that were 50 to 100 times greater,” said John Sturtevant, director of technical marketing at Mentor, a Siemens Business. “In the future, you may see manufacturers splitting who gets what because of variations in the CD (critical dimension) budget. That could include everything from line-edge placement to pattern placement, and it could create demand for selective deposition and selective etch.”
Sturtevant noted that equipment always has been qualified by process and equipment engineers so that when preventive maintenance is required, other equipment can be substituted.
“What’s new with EUV is that each lens has its own ‘fingerprint,’ and there is a transient component to the lens over time. The scanner providers can measure aberrations, and those aberrations have an effect on imaging. On top of that, the nature and magnification of shifts depend upon the illumination scheme. But the whole paradigm for OPC is that you make one model that represents all models. You might need two copies of a reticle, but the design data is the same. If there are differences between tools, even one or two nanometers is substantial. Right now the range in edge placement is 6nm. And with scanners, aberration levels may be 2.5 to 3nm off. If we start seeing 5nm, that would be catastrophic.”
Fig. 1: Tool-to-tool aberration variability in EUV lithography. Source: Mentor
ASML, which at this point is the sole provider of EUV scanners, is viewing this issue from a “sub-component” level, according to Chris Spence, ASML’s vice president of Advanced Technology Development. “You can find 1nm here, 0.5nm there across many processes. With computational lithography, we have developed new models, which are necessary because all of the features on the mask are getting smaller. 193i was well characterized. You could squeeze the specs to get better control. With EUV, there are new materials and challenges. There are multi-layer masks, new imaging challenges because it is not symmetric, so it has to be compensated. You have flare, so you have to back out to make the mask print correctly. So it’s a combination of tool and overlay and CD control.”
These problems will only get worse at future nodes, too. “At the 3nm node, EUV stochastics will exert an impact beyond lithography steps,” said Ofer Adan, Applied’s director of metrology. “Once EUV resist lines serve as the mandrel for spacer-based pitch splitting (to achieve the 3nm node), the line edge roughness (LER) of EUV lines can transfer to the spacers, causing line wiggle, aka line width roughness (LWR). That impacts CD and overlay – namely, EPE. Process techniques will be needed to suppress the transfer of roughness from the lithography step to the spacer. Or perhaps it may be more worthwhile to split the pitch more times based on non-EUV lithography, with less stochastics.”
Fig. 2: EPE margin shrinks with each node. Source: Applied Materials
Persistent metrology
One idea that seems to be gaining ground is what basically amounts to persistent metrology. It goes under a variety of names. TSMC calls it industrial IoT. In Europe, it falls under the heading of Industry 4.0. By constantly monitoring these systems for changes, variation can be minimized and yield can be increased.
“This is the whole idea behind the industrial IoT, which can have a big impact for fabs at these nodes,” said Joanne Itow, managing director for manufacturing at Semico Research. “You need to sense and monitor constantly and feed back results. If there is any deviation, you need to go in and adjust. Fabs used to do this, but they got to the point where they only did measurements over a span of time until the next maintenance. Now you have to measure this constantly. TSMC has been talking about this with their implementation of industrial IoT. Nothing is standard anymore. It’s all very customized, and it requires more attention.”
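The "sense and monitor constantly and feed back results" loop that Itow describes is, in statistical process control terms, a drift detector on a continuous stream of tool measurements. A minimal sketch, assuming an exponentially weighted moving average (EWMA) control chart as the detection mechanism (one common choice; the article does not specify any particular method), with illustrative parameter values:

```python
import math

class DriftMonitor:
    """Flags when a monitored tool parameter (e.g., a CD measurement)
    drifts outside EWMA control limits around its process target."""

    def __init__(self, target: float, sigma: float, lam: float = 0.2, k: float = 3.0):
        self.target = target   # process center
        self.sigma = sigma     # known short-term measurement noise
        self.lam = lam         # EWMA smoothing weight (0 < lam <= 1)
        self.k = k             # control-limit width, in sigmas
        self.ewma = target     # start the tracker at the target

    def update(self, measurement: float) -> bool:
        """Fold in one new measurement; return True if drift is flagged."""
        self.ewma = self.lam * measurement + (1 - self.lam) * self.ewma
        # Asymptotic EWMA control limit: k * sigma * sqrt(lam / (2 - lam))
        limit = self.k * self.sigma * math.sqrt(self.lam / (2 - self.lam))
        return abs(self.ewma - self.target) > limit
```

In use, each in-line metrology reading would be fed to `update()`, and a `True` result would trigger the "go in and adjust" step rather than waiting for the next scheduled maintenance. The smoothing makes the monitor sensitive to small sustained shifts while ignoring single-wafer noise.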
There is strong support for this from equipment vendors, in part because it opens up new opportunities for them and in part because it reduces the number of surprises for their customers.
“In leading-edge fabs today, process variation needs to be carefully controlled by tool monitoring so that quick corrective action can be taken,” said KLA’s Khanna. “Otherwise, errors add and convolve as a wafer continues in the process, leading to process window shrinkage, yield loss and the inability to trace the source of the problem. With advanced nodes, process windows are getting vanishingly small. The challenge has grown beyond identifying the process window, to needing to monitor metrology and defect parameters at multiple points, so that process shifts are identified and corrected as quickly as possible. To monitor a dynamic process, inspection and metrology tools need to have sensitivity to a wide variety of defect types plus robustness to process variation, so that they can capture all defect types or measure all parameters even as the process center moves. In many cases, metrology and inspection systems founded on broadband imaging enable both wide coverage for defect type detection (or complex shape/overlay metrology) and high robustness to process variation.”
The bigger picture
Understanding where problems arise requires more than just monitoring the equipment, though. There are so many variables in semiconductor manufacturing that it’s impossible to keep track of all of the aberrations and shifts.
One way that companies have dealt with variations—particularly process variation—in the past is through guard-banding. But guard-banding eats up area, and at advanced nodes it has a direct impact on performance and power. The result is that guard-banding can no longer be used to the extent that it has been used in the past, even though the problems it seeks to address are more severe.
“At lower nodes, guard-banding starts eating into the headroom for turning on a device,” said Anil Bhalla, senior manager at Astronics. “The margin for error is decreasing. At 14nm, if you’re using 100 millivolts for guard-banding, that eats up 25% of the headroom and 12.5% of the Vdd. At 10nm, it eats up 33% of the headroom and 14.3% of the Vdd at a typical threshold voltage of 0.4 volts. The flip side of this is that now a chip needs to operate in a certain range, so there is less room for process variation. Parts are more expensive, too, so you don’t want to throw them away.”
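Bhalla's percentages can be reproduced directly. The nominal supply voltages (0.8V at 14nm, 0.7V at 10nm) are inferred from the figures he quotes, given the stated 0.4V threshold voltage and 100mV guard-band; they are assumptions for this sketch, not numbers stated in the article.

```python
def guardband_cost(vdd: float, vth: float, guardband: float) -> tuple[float, float]:
    """Return the guard-band as a fraction of (headroom, Vdd).

    Headroom here means the voltage available above threshold: Vdd - Vth.
    """
    headroom = vdd - vth
    return guardband / headroom, guardband / vdd

# 14nm: Vdd ~0.8V, Vth 0.4V, 100mV guard-band -> 25% of headroom, 12.5% of Vdd
h14, v14 = guardband_cost(vdd=0.8, vth=0.4, guardband=0.1)

# 10nm: Vdd ~0.7V, same Vth and guard-band -> ~33% of headroom, ~14.3% of Vdd
h10, v10 = guardband_cost(vdd=0.7, vth=0.4, guardband=0.1)
```

The same 100mV costs proportionally more at each node because Vdd shrinks while Vth does not scale with it, which is exactly why guard-banding stops being a viable catch-all for variation.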
Testing for variation, though, is getting more difficult, which is why the new wave of testing involves system-level test. “Whether you’re dealing with tool variation or temperature variation or metrology variation, you don’t have enough data to determine if there’s a problem, so you have to look for patterns in different lots,” Bhalla said. “It’s not like you’re going to get a message. There is a lot of work being done to define standards, which is being driven by SEMI, for how data needs to be formatted and how the industry can share that data. There is a lot of talk about big data analytics.”
Advanced packaging adds another wrinkle to all of this, because variability now can happen on multiple levels and across multiple die, making it harder to trace a problem back to its source.
“The industry trend of moving to 3D architectures adds additional challenges for variation in the third dimension,” said Lam’s Shields. “Successful manufacturing of next-generation devices will require equipment suppliers to step up variability control to the atomic scale and at extremely high aspect ratios. 3D modeling and computational capability will be essential to developing solutions on a timely basis in this challenging environment.”
Viewing the problem differently
Finally, solving some of these variability problems will require new approaches.
“As technology scaling continues to advance, the increasing number of process steps and overall complexity add significantly to the challenge of reducing process-induced within-die, within-wafer, and wafer-to-wafer variations,” said Lam’s Shields. “At the device level, controlling variation to within a few atoms will increasingly require the application of technologies such as atomic layer deposition (ALD) and atomic layer etching (ALE). At the wafer level, equipment suppliers are designing capabilities such as fast tuning for chemical and electrical gradients across the wafer.”
He noted this also will require greater collaboration across the semiconductor manufacturing ecosystem. That seems to be a growing sentiment in the industry.
“What’s needed is a comprehensive view, from design to manufacturing,” said Mentor’s Sturtevant. “That starts with design rules and involves what happens inside the development team all the way upstream to manufacturing. If blocks are out of spec, what should the spec be? You can’t look at past history here. If you have an isolated metal line with a via at the end, if you change the alignment what are the electrical implications? That’s a combination of the CD of the via, the metal and the alignment. If you can squeeze out fractions of control there is some hidden yield opportunity.”
Sturtevant suggested splitting up available budgets for variation across groups. “If we start dividing up the overall edge placement error budget among all the various contributors, it may be best to discuss how many atoms each contributor gets as their budget. So 1 nm is approximately 4 silicon-silicon bonds, so let’s call it 5 silicon atoms with 4 bonds between them. Let’s imagine for argument’s sake that 1 nm is the overall budget. What do we do if there are more than 4 process/equipment/model/OPC/mask parties who need some budget allocation?”
Fig. 3: Perspective on sizes. Source: Mentor
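Sturtevant's atom-counting exercise translates into a few lines of arithmetic. The Si-Si bond length used here (~0.235nm) is an assumed physical constant consistent with his "1nm is approximately 4 silicon-silicon bonds"; the equal split among contributors is the naive allocation his question implies.

```python
SI_SI_BOND_NM = 0.235  # approximate silicon-silicon bond length (assumed)

def bonds_in(budget_nm: float) -> float:
    """How many Si-Si bond lengths fit in a given error budget."""
    return budget_nm / SI_SI_BOND_NM

def equal_share(budget_nm: float, n_contributors: int) -> float:
    """Naive equal split of the budget among contributors, in bond lengths."""
    return bonds_in(budget_nm) / n_contributors

# A 1nm total budget is ~4.3 bond lengths. Split equally among the four
# parties Sturtevant names (process/equipment/model/OPC/mask overlap into
# roughly four groups here), each gets about one bond length -- and a fifth
# contributor would push each share below a single bond.
```

The point of the exercise is that once the budget is expressed in atoms rather than nanometers, it becomes obvious why more than a handful of contributors cannot each claim a meaningful allocation.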
Semico Research’s Itow likewise believes a different approach is required. “It’s time to figure out what tools to use, how many to use, and how to combine different tools to get to more efficient manufacturing. It’s not just about the performance of the tools. It’s also the combination of those tools. Companies need to work up a solution inside a fab to match the process in order to use tools more efficiently.”
In this equation, variability is just one factor. But at 10/7nm and beyond, it’s an increasingly important one, and one that potentially can provide significant improvements in yield and scaling benefits if it can be addressed effectively.