Controlling IC Manufacturing Processes For Yield

There is no single solution, but there is plenty of room for improvement, and lots of investment in making better use of data.


Equipment and tools vendors are starting to focus on data as a means of improving yield, adding more sensors and analysis capabilities into the manufacturing flow to circumvent problems in real time.

How much this will impact the cost of developing complex chips at leading-edge nodes, and in 2.5D and 3D-IC packages, remains to be seen. But the race to both generate data during manufacturing and analyze it quickly enough to be able to impact yield and time to market has begun. That includes more customized sensors, machine learning and AI systems that can separate out critical data quickly enough to impact ongoing processes, as well as longer-term data collection to identify patterns. It also involves being able to quickly eliminate data that is unnecessary.

The driver behind these efforts is basic math. The projected cost of developing 5nm ASICs is roughly $500 million, and at 3nm that number could reach $1.5 billion, according to International Business Strategies. Included in that are one or more re-spins to improve functionality and to make sure that what gets printed on a chip matches the initial design. There are a number of possible ways to whittle down that cost and reduce time to revenue, and there is growing demand by the developers of those chips to implement those capabilities.

“We are doing more, especially as you continue to shrink the geometries,” said Kevin Zhang, vice president of business development at TSMC. “When you shrink the geometries, your circuits become more sensitive. We are implementing more advanced defect detection in our fabrication lines.” Last quarter, TSMC had to scrap wafers due to a resist problem.

Those types of issues are compounding due to increased density and smaller feature sizes. What used to be a non-issue at 28nm, such as an imperfectly printed feature, can cause a real or latent defect at 7nm.

“At advanced nodes, you have to tie process control to design margin, whether that’s a finFET or a finFET plus 3D,” said Jonathan Holt, manager of volume manufacturing solutions at PDF Solutions. “Today, FDC (fault detection and classification) and sensors are separated from the design. But moving forward, you have to look at layout and margin and tie that to the process capability. There has been some success with Industry 4.0 and AI, but now people are looking at whether they have the right sensors and the right data to tie it to the process. That has a big impact on time-to-market.”

So does an understanding of what data is useful and what is not. “If you’re just acquiring the data, the big question is what are you going to do with it,” said David Fried, CTO of Coventor, a Lam Research company. “So you see a ton of in-situ sensors, ex-situ sensors, metrology sensors, tool sensors, even sensors for stuff that’s not in the chamber—the mass flow controllers that are feeding gas into the chambers. There’s tool/equipment-level monitor data. Then there’s just functional data. Which wafer did the robot arm send to which chambers? There’s a certain set of that data that doesn’t need to go much further than the tool or a compute environment nearby.”
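Fried's breakdown suggests a simple routing policy: the closer a signal is to run-time chamber control, the closer to the tool it should be processed. The sketch below is illustrative only; the categories follow the description in the quote, but the class and function names and the destinations are invented rather than taken from any vendor's software.

```python
# Illustrative routing policy for tool and sensor data, following the
# categories in the quote above. Class and function names are invented for
# this example and do not correspond to any vendor's actual software.
from dataclasses import dataclass

@dataclass
class SensorRecord:
    source: str       # e.g. "in_situ", "mass_flow", "ex_situ", "metrology", "robot_log"
    tool_id: str
    value: float

def route(record: SensorRecord) -> str:
    """Decide how far a record needs to travel before it is analyzed."""
    # Chamber-level signals feed run-time control, so they stay on or next to
    # the tool, where latency is lowest.
    if record.source in ("in_situ", "mass_flow"):
        return "tool_local_controller"
    # Metrology and ex-situ measurements feed run-to-run and lot-level
    # statistics, so they go to a fab-level (on-premises) server.
    if record.source in ("ex_situ", "metrology"):
        return "fab_server"
    # Purely functional logs (which wafer went to which chamber) are archived
    # for traceability and longer-term pattern mining.
    return "archive"

# Example: a pressure reading from inside an etch chamber stays near the tool.
print(route(SensorRecord("in_situ", "etch_07", 12.3)))  # -> tool_local_controller
```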

Separating data into various buckets isn’t so simple, though, particularly when it entails multiple data types. It’s also difficult to improve process control as a central operation, because the various processes that make up chip manufacturing are a series of finely tuned independent steps rather than part of a single unified process. What is considered important in one area and for one application may be less important in others, so control has to be added at every one of these steps without disrupting the movement of wafers through a fab.

“Different process steps have different tolerances,” said Ram Peltinov, patterning control division head at Applied Materials. “In the past, you could give each process step its own budget, and when you added up everything it all worked. Today, with every patterning step, the error budget adds up, so you need much more control. On top of that, there is a need for more sampling and information. In the past, you could handle each layer with 10 to 30 data points to characterize a layer. Today, you need to do more sampling and feed that data back and forward. So you have more data to characterize variability across the whole wafer or die.”
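Peltinov's point about error budgets can be seen with some simple arithmetic. The sketch below compares a worst-case (linear) stack-up with a statistical root-sum-square stack-up for a hypothetical multi-patterning sequence; the step names and numbers are invented for illustration.

```python
# Illustrative arithmetic only: how per-step error budgets combine across a
# multi-patterning sequence. Step names and numbers are invented for the example.
import math

# Hypothetical 3-sigma overlay contributions, in nanometers, from individual steps.
step_errors_nm = {"litho_1": 2.0, "etch_1": 1.5, "litho_2": 2.0, "etch_2": 1.5}

# Worst case: every step errs in the same direction (simple linear sum).
worst_case = sum(step_errors_nm.values())

# Statistical stack-up: independent errors add in quadrature (root-sum-square).
rss = math.sqrt(sum(e ** 2 for e in step_errors_nm.values()))

print(f"worst case: {worst_case:.2f} nm, RSS: {rss:.2f} nm")
# Adding more patterning steps grows either estimate, so each individual step
# has to be held to a tighter tolerance to keep the total inside the budget.
```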

That generates a lot of data, which requires local servers or clouds. In most industrial operations this would be a middle step between the sensors and cloud storage, but there has been resistance on the part of foundries to storing any data in the public cloud. As a result, the fabs themselves need to intelligently parse what data needs to be acted upon immediately and what data can be analyzed later.

“A given smart process tool could have 20 or more process-controlling parameters, or ‘knobs.’ The resulting structures on-wafer are held to angstrom-level specifications, so the tolerances for variation are much tighter,” Fried said. “On the control side, variation emerges in many ways: lot-to-lot, wafer-to-wafer, cross-wafer, on-die, and LER/LWR/LCDU (line-edge roughness/line-width roughness/local critical dimension uniformity). Some of the knobs we use today are seen as an opportunity to control the process better. But there are challenges.”
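One way to see what Fried is describing is to split measured variation into its nested components. The sketch below is a minimal, illustrative decomposition of synthetic critical-dimension data into lot-to-lot, wafer-to-wafer, and within-wafer pieces; it is a simple breakdown for illustration, not any particular fab's or vendor's analysis.

```python
# Minimal sketch on synthetic critical-dimension (CD) data: split the observed
# variation into lot-to-lot, wafer-to-wafer (within lot), and within-wafer
# components. This is a simple illustrative decomposition, not any particular
# vendor's analysis, and the magnitudes below are invented.
import random
import statistics

random.seed(0)
data = {}  # data[lot][wafer] -> list of site-level CD measurements (nm)
for lot in range(5):
    lot_offset = random.gauss(0, 0.4)            # lot-to-lot component
    data[lot] = {}
    for wafer in range(4):
        wafer_offset = random.gauss(0, 0.3)      # wafer-to-wafer component
        data[lot][wafer] = [20.0 + lot_offset + wafer_offset + random.gauss(0, 0.2)
                            for _ in range(9)]   # within-wafer (site) noise

within_wafer = statistics.mean(
    statistics.pvariance(sites) for lot in data.values() for sites in lot.values())
wafer_to_wafer = statistics.mean(
    statistics.pvariance([statistics.mean(s) for s in lot.values()])
    for lot in data.values())
lot_to_lot = statistics.pvariance(
    [statistics.mean([m for s in lot.values() for m in s]) for lot in data.values()])

print(f"within-wafer variance:   {within_wafer:.3f}")
print(f"wafer-to-wafer variance: {wafer_to_wafer:.3f}")
print(f"lot-to-lot variance:     {lot_to_lot:.3f}")
```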

One such challenge is understanding how to use that data. Not all fabs are equally adept at deciding what to do with data once they acquire it.

“The key is to connect more sources of metrology and inspection information,” said Applied’s Peltinov. “But if you can stabilize the process in R&D, the number of parameters/sampling you need to ensure reliability can be lower.”

Variability
Still, the semiconductor manufacturing world has little choice but to move in this direction. Designs are becoming bigger and more complex, and the number of variables—and potential for variation in each one of those variables—is exploding.

“You can imagine doing a lot of compensation in different process control schemes, particularly to get around the variability problem,” said Rick Gottscho, CTO at Lam Research. “The industry is just at the beginning stages of that. If you look at the finFET, the three-dimensional nature of that device has challenged us and the industry to come up with robust solutions for high-volume manufacturing. You have to worry about residues in little corners, the selectivity of etching one material versus another, the conformality of the depositions. Everything has become more complicated.”

And it becomes even more complicated after 5nm. “When you start the next generation of gate-all-around at 3nm and below, that’s another order of magnitude in complexity,” said Gottscho. “At first, it looks like a modification of a finFET. But the requirements are getting tightened, and the complexity of that gate-all-around architecture is significantly greater than the finFET. It’s a more complex device than we’ve ever seen, and we keep saying that node after node. Yet we, as an industry, keep moving forward. Along with that, there are so many sources of variability, and all of them will matter.”

Those sources range from materials impurities to how materials are applied in the manufacturing process. At each new node, thin films need to be applied with much greater precision because the tolerances are tighter. At the same time, identifying the aberrations is becoming more difficult using existing inspection tools.

“Today, the big issue is coverage,” said Applied’s Peltinov. “You’re not sure where the problems are, so you need more sampling and more information. But if you think about a 1nm resolution at the wafer level, that’s 10¹² pixels. It’s an enormous amount of data. That needs to be filtered along the way. Some needs to be analyzed, and some needs to be compared to other data.”

The key is knowing where to look for problems, because not everything can be inspected and measured. However, that’s not always obvious.

“Several years ago we started to analyze variation starting from the standpoint of operator error,” said Jim Korich, engineering systems manager at Brewer Science. “What we found, though, was the variation wasn’t due to operator error. It was in the control of the process. In our world, everything is automated, including all of the recipes. Even the cleaning of the blender has been automated, so variability has gone away there. But there is still variation, so we added in machine learning for adaptive control, because once a customer establishes a product in the market they don’t want any variability. To achieve that, you need to understand adaptive variability. So now you’re dealing with variability of the process versus the product.”
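Korich's distinction between process and product variability is what run-to-run control schemes are designed to handle. The sketch below shows exponentially weighted moving average (EWMA) run-to-run control, a common adaptive-control technique in semiconductor manufacturing; it is not necessarily the machine-learning approach Brewer Science uses, and the target, gain, and process model are illustrative assumptions.

```python
# Minimal sketch of exponentially weighted moving average (EWMA) run-to-run
# control, a common adaptive-control scheme in semiconductor manufacturing.
# This is not necessarily the machine-learning approach described above; the
# target, gain, and linear process model are illustrative assumptions.

class EwmaRunToRunController:
    """Adjust a recipe setpoint after each run, assuming output ~ gain * setpoint + disturbance."""

    def __init__(self, target=100.0, gain=1.0, lam=0.3):
        self.target, self.gain, self.lam = target, gain, lam
        self.disturbance = 0.0   # running estimate of the process disturbance

    def next_setpoint(self):
        # Choose the setpoint so the predicted output lands on the target.
        return (self.target - self.disturbance) / self.gain

    def update(self, setpoint, measurement):
        # Blend the newest run's apparent disturbance into the running estimate.
        self.disturbance = (self.lam * (measurement - self.gain * setpoint)
                            + (1 - self.lam) * self.disturbance)

# Example: the process drifts upward by 0.5 per run; the controller steadily
# lowers the setpoint to keep the measured output near the 100.0 target.
ctrl = EwmaRunToRunController()
drift = 0.0
for run in range(5):
    setpoint = ctrl.next_setpoint()
    drift += 0.5
    measured = 1.0 * setpoint + drift   # simulated process response
    ctrl.update(setpoint, measured)
    print(f"run {run}: setpoint {setpoint:.2f}, measured {measured:.2f}")
```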

New market demands
All of this needs to happen in real time. Delays in the fab increase costs and can affect market windows, not only for chips that are being manufactured, but also for designs waiting their turn to be manufactured. But allowing defects to pass through can cause other problems, particularly for chips developed at advanced nodes for markets such as automotive.

Rob Cappel, senior director of marketing at KLA, pointed to several methods for reducing defectivity. One is to closely control the process with continuous improvement programs to reduce random defectivity, using baseline yield improvement techniques such as tool monitoring. A second is to ensure the process is sampled enough to provide traceability. The third approach is still being developed.

“A method that is receiving increasing interest is the utilization of inline defect information not only to control the process, but also to identify die at risk for reliability problems while they are still in the fab, where the cost of correcting the problem is the lowest,” Cappel said. “Automotive fabs have long relied on ‘screening,’ where a high-throughput tool inspects 100% of the die on all wafers at a handful of final layers late in the manufacturing process. Die that meet the defined failure criteria (defect size/type/location) are excluded or ‘inked.’ While effective for large defects, this method alone is inadequate for smaller, latent defects. A new inline technique, called I-PAT (inline parts average testing), may be the answer. It leverages a 20-year-old automotive industry technique known as parametric parts average testing (PPAT). This original method, based on e-test, identifies any die whose test results lie outside of the normal distribution of the population, even if they are within the operating specifications. For a small sacrifice of 0.5% to 2.5% yield, significant improvements in reliability are gained, with some seeing 20% to 30% improvement when these outlier die are culled. I-PAT moves this concept inline, looking for die with outlier defect populations across the multiple stacked inspection steps normally performed at many process steps throughout manufacturing.”
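The screening logic Cappel describes can be illustrated with a small amount of code. The sketch below applies the parts-average-testing idea to inline defect counts, flagging die whose aggregated count is a statistical outlier relative to the rest of the population even if it would pass a fixed spec; the threshold and the use of median/MAD statistics are assumptions for illustration, not KLA's actual I-PAT algorithm.

```python
# Minimal sketch of the parts-average-testing idea applied to inline defect
# counts: flag die whose aggregated count is a statistical outlier relative to
# the rest of the population, even if it would pass a fixed spec. The threshold
# and the use of median/MAD statistics are assumptions for illustration, not
# KLA's actual I-PAT algorithm.
import statistics

def flag_outlier_die(defect_counts, k=6.0):
    """Return die IDs whose defect count exceeds median + k * MAD."""
    counts = list(defect_counts.values())
    med = statistics.median(counts)
    mad = statistics.median(abs(c - med) for c in counts) or 1.0  # guard against zero MAD
    return sorted(die for die, count in defect_counts.items() if count > med + k * mad)

# Example: defects summed per die across several inspection steps.
per_die_defects = {"die_01": 2, "die_02": 3, "die_03": 1, "die_04": 2,
                   "die_05": 28, "die_06": 3, "die_07": 2}
print(flag_outlier_die(per_die_defects))  # ['die_05'] would be culled as an outlier
```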

The outlier die are statistically more likely to contain the latent defects that the industry wants to eliminate, Cappel said, adding that results can be combined later with electrical outlier methods to improve the overall go/no-go decision for die.


Fig. 1: Comparing center vs. edge yield during new device ramp as a function of time. Source: KLA

As more systems companies get involved in developing advanced chips, there is also more emphasis on making sure that what gets designed is the same as what gets manufactured. This goes well beyond just the design, however. Increasingly, it encompasses the entire supply chain for advanced nodes.

“If you look at the mask blanks that are used to make the substrate, you’re not going to have a defect-free mask if you don’t have a defect-free substrate,” said Ajit Paranjpe, CTO at Veeco. “We are coming down to a countable number of defects on the entire wafer. Once you know what those are, you find an area that is aligned with the pattern so that whatever few defects there are, those are in the non-printing areas. These defects are minuscule, but they’re still there. You can repair some of the defects, too. The fabs are investing a lot of money to understand what’s printable and what’s not printable.”
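The defect-avoidance step Paranjpe describes is commonly called pattern shift: because the blank's defects can be counted and located, the pattern can be placed so they fall in non-printing areas. The sketch below is a heavily simplified illustration of that search; the defect coordinates, printing-region rectangles, and shift grid are invented, and production tools use far more detailed printability models.

```python
# Heavily simplified sketch of pattern-shift defect avoidance on a mask blank:
# search candidate pattern offsets so the known blank defects fall outside the
# printing regions. The defect coordinates, printing rectangles, and shift grid
# are invented; production tools use far more detailed printability models.

def in_rect(x, y, rect):
    x0, y0, x1, y1 = rect
    return x0 <= x <= x1 and y0 <= y <= y1

def printable_hits(defects, printing_rects, dx, dy):
    """Count defects that land inside printing regions if the pattern is shifted by (dx, dy)."""
    # Shifting the pattern by (dx, dy) is equivalent to shifting the defects by (-dx, -dy).
    return sum(1 for (x, y) in defects
               if any(in_rect(x - dx, y - dy, r) for r in printing_rects))

defects = [(12.0, 30.0), (52.0, 67.0)]                       # blank defect positions (um)
printing_rects = [(10.0, 25.0, 40.0, 60.0), (50.0, 65.0, 90.0, 95.0)]

best = min(((printable_hits(defects, printing_rects, dx, dy), dx, dy)
            for dx in range(-5, 6) for dy in range(-5, 6)),
           key=lambda t: t[0])
print(f"best shift ({best[1]}, {best[2]}) um leaves {best[0]} defect(s) in printing areas")
```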

This is similar to an issue that has plagued chip design teams for the past few process nodes. “From a design perspective, the focus is always on printability,” said PDF’s Holt. “The reason we have OPC (optical proximity correction) models is to correct for that and print something that’s recognizable. But with finFETs, it’s more complicated. You need to etch and then examine the shape of the structure after etch. You’re looking at the thickness of the materials, the pattern uniformity and things like implants. You can’t correct for manufacturing uniformity with OPC. There are issues like end point detection, OES (optical emission spectroscopy) signals, and thicknesses measured on wafers. All of those factors come into play.”

Conclusion
Process variation, rising complexity, and the laws of physics are making chip design increasingly difficult. Yield and faster time to revenue have always been intertwined, but at 5nm and below, the stakes are significantly higher. Even minor defects can cause major problems, and there is a growing recognition that the best way to handle any of these issues is through more sensors and better data.

That’s a huge issue to solve, however, and it spans the entire supply chain, from front-end manufacturing to post-silicon analysis. But the best chance of solving those issues lies on the manufacturing side, across a series of process steps.

“Those companies that can tie manufacturing to design process are the ones that will be the most successful,” said Holt. “That’s where the battle is being fought—a collection of data sensors and tool sensors that can feed forward and tie to the design process. That’s the focus of a lot of equipment companies right now—how to use advanced analytics. They already collect a lot of data. Now the challenge is how to make sense of that data.”

—Mark LaPedus contributed to this report.



