Yield Management Embraces Expanding Role

From wafer maps to lifecycle management, yield strategies go wide and deep with big data.

Competitive pressures, shrinking time-to-market windows, and increased customization are collectively changing the dynamics and demands for yield management systems, shifting left from the fab to the design flow and right to assembly, packaging, and in-field analysis.

The basic role of yield management systems is still expediting new product introductions, reducing scrap, and delivering greater quantities of good devices. What’s new is the need to integrate the right data at the right time, and to do that in a way that enables better yield while accommodating all types of users and needs. This includes new approaches to analyzing wafer maps that go beyond recognizing outliers and instead focus on determining the root cause of yield detractors.

One of the greatest threats to semiconductor productivity is yield excursions, which unfortunately can go undetected until wafer electrical testing is performed. “So there is expected manufacturing yield that is sort of your baseline, and then an excursion could be positive or negative from that. So the question is really around when do you react,” said Marc Hutner, director of product management, Yield Learning Solutions at Siemens EDA. “You look at how much of a financial impact the excursion would be to your organization – is it half a percent, quarter of a percentage point? And then there’s always the question of how quickly can you react to it.”

“We define yield excursions as sudden, significant, and unexpected low yield on multiple lots,” said Dieter Rathei, CEO of DR Yield. “The root cause of yield excursions is typically something that is not controlled during the manufacturing process and is therefore most often detected only at wafer functional test.”

“There are a number of best practices that can help detect yield excursions earlier than we’ve been able to in the past,” said Melvin Lee Wei Heng, senior manager, Enterprise Software Applications Engineering at Onto Innovation. “A smart sampling plan can be deployed inline to ensure maximum defect scan detection. In a similar vein, inline process monitoring could be used to deploy smart sampling plans in areas of interest. Another option is fault detection and classification (FDC) software on process tools inline. This method could be used to monitor critical tool recipe parameters or sensors and interdict when there is a process drift based on monitored parameters and sensors. A new method of interest is wafer fingerprinting or wafer sleuth that allows fabs to fingerprint every single process step down to the individual wafer level.”
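
As a rough illustration of the inline monitoring Wei Heng describes, the sketch below tracks a single tool sensor trace with an EWMA control check and flags candidate runs for interdiction when the parameter drifts. The sensor, target, and limits are illustrative assumptions, not Onto Innovation’s FDC implementation.

    # Hedged sketch of an FDC-style drift check on one monitored tool parameter.
    # All names, targets, and limits are illustrative assumptions.
    import numpy as np

    def ewma_drift_alarms(readings, target, sigma, lam=0.2, k=3.0):
        """Flag runs where the EWMA of a monitored parameter leaves target +/- k*sigma_ewma."""
        ewma = target
        limit = k * sigma * np.sqrt(lam / (2.0 - lam))  # asymptotic EWMA control half-width
        alarms = []
        for i, x in enumerate(np.asarray(readings, dtype=float)):
            ewma = lam * x + (1.0 - lam) * ewma
            if abs(ewma - target) > limit:
                alarms.append((i, round(float(ewma), 2)))  # candidate run to interdict the tool
        return alarms

    # Example: a chamber-pressure trace that drifts upward after run 30.
    rng = np.random.default_rng(0)
    trace = np.concatenate([rng.normal(100.0, 0.5, 30), rng.normal(101.5, 0.5, 20)])
    print(ewma_drift_alarms(trace, target=100.0, sigma=0.5))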

There are also prescriptive methods to improve yield using AI-powered advanced process control (AI-APC). “Because of the human effort required, it just takes a ridiculously long time to detect excursions that usually go on for far too long before corrective actions can take place, leading to either scrap or rework, both of which increase your cost of goods sold,” said David Park, vice president of marketing at Tignis. “What customers can do with AI-APC is control the process inputs using AI-based modeling so that the outputs of the process do not drift and they stay within the upper and lower spec limits, maintaining the critical dimensions that you need for individual processing steps, whether it’s litho, etch, or deposition, and maintaining that for a longer period of time. The end result is higher yield with less scrap and rework.”
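
Park’s description of AI-APC points to model-based control of process inputs. The toy run-to-run controller below conveys the idea at a much smaller scale: fit a simple linear process model from recent runs and nudge one recipe input so the predicted CD comes back to target. It is a sketch under stated assumptions, not Tignis’ modeling approach.

    # Toy run-to-run controller: steer one recipe input so CD stays on target.
    # The linear model and gain are illustrative assumptions, not a production AI-APC model.
    import numpy as np

    class RunToRunController:
        def __init__(self, history_x, history_cd, target_cd):
            # Fit cd ~ a*x + b from recent runs (a real system would use a richer model).
            self.a, self.b = np.polyfit(history_x, history_cd, 1)
            self.target = target_cd

        def next_setpoint(self, last_x, last_cd, gain=0.7):
            error = self.target - last_cd
            if abs(self.a) < 1e-9:
                return last_x                      # no modeled sensitivity; leave input alone
            return last_x + gain * error / self.a  # partial correction toward target

    ctrl = RunToRunController(history_x=[50, 52, 54, 56],
                              history_cd=[19.2, 19.9, 20.6, 21.3],
                              target_cd=20.0)
    print(ctrl.next_setpoint(last_x=56, last_cd=21.3))  # suggests lowering the input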

Today’s yield management systems use a mix of in-cloud and on-premise solutions, leveraging inexpensive data storage and multiple data analytics capabilities to enable the exchange of critical information on the fab floor. Designers, process engineers, and yield specialists focus on correlations, efficient analysis flows, and learning across big data platforms (see figure).

“Feedback loop analysis can be used to adjust proven parameters of the tool as soon as the process output data is collected,” wrote Jeffrey David, vice president of AI Solutions at PDF Solutions. In a similar manner, feed-forward predictive models can predict the impact of process drift on yield for a given wafer or lot. The company built a hierarchical model using detailed FDC data that traced yield loss and parametric IDDQ leakage in one region of the wafers back to a tool-related overlay excursion. [1]
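
The feed-forward side of this idea can be sketched as a regression from per-wafer FDC summary features to sort yield, used to flag at-risk wafers before they reach test. The features, model, and threshold below are illustrative assumptions, not PDF Solutions’ hierarchical model.

    # Feed-forward sketch: predict sort yield from per-wafer FDC summaries and flag risky wafers.
    # Feature names, data, and the 0.88 review threshold are made up for illustration.
    import numpy as np
    from sklearn.linear_model import Ridge

    # Rows = wafers; columns = hypothetical FDC features
    # (mean chamber temp, overlay residual, etch endpoint time).
    X_train = np.array([[350.1, 2.1, 41.0], [350.3, 2.4, 41.2], [351.0, 3.8, 42.5],
                        [350.2, 2.2, 41.1], [351.4, 4.1, 42.9]])
    y_train = np.array([0.94, 0.93, 0.81, 0.94, 0.78])      # observed sort yield

    model = Ridge(alpha=1.0).fit(X_train, y_train)

    X_new = np.array([[350.2, 2.3, 41.0], [351.2, 4.0, 42.8]])  # wafers still in the line
    for wafer_id, pred in zip(["W07", "W08"], model.predict(X_new)):
        flag = "review before test" if pred < 0.88 else "ok"
        print(wafer_id, round(float(pred), 3), flag)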


Fig. 1: The semiconductor lifecycle data chain. Source: PDF Solutions

A critical function of yield management systems is probing potential anomalies in wafer maps to identify yield loss early and prevent RMA (return material authorization) incidents. An RMA happens when a device fails during field use and is returned to the IC maker for root cause analysis.

RMAs can be extremely costly financially, and they can damage a company’s reputation. MediaTek said it saved $9 million over a 6-month period by pinpointing 35 adverse incidents in its production lines. En Jen and colleagues in MediaTek’s computing department took a model that has traditionally been used in natural language processing and adapted it to provide image recognition using an unsupervised clustering technique to better identify outlier die in parametric wafer maps. [2]
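
Reference [2] adapts a BERT-style transformer, but the underlying goal of flagging outlier die from parametric wafer data without labels can be shown with a far simpler unsupervised stand-in, such as DBSCAN over per-die parametric vectors. The parameters and thresholds below are assumptions for illustration and not MediaTek’s method.

    # Simplified stand-in for outlier-die detection: cluster per-die parametric vectors and
    # treat DBSCAN noise points as candidate outliers for wafer-map review (not the BERT approach).
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.cluster import DBSCAN

    rng = np.random.default_rng(1)
    normal_die = rng.normal([1.00, 0.80, 35.0], [0.02, 0.01, 0.5], size=(300, 3))  # Vdd, Idd, Fmax
    suspect_die = np.array([[1.00, 1.10, 28.0], [0.98, 0.79, 25.0]])               # unusual die
    params = np.vstack([normal_die, suspect_die])

    labels = DBSCAN(eps=0.8, min_samples=5).fit_predict(StandardScaler().fit_transform(params))
    print("candidate outlier die indices:", np.where(labels == -1)[0])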

Large language models, most notably ChatGPT, have garnered immense attention in the industry as a way to expedite certain tasks, such as electrical testing. “When you write a test program, it’s a programming exercise,” said Ken Butler, senior director of business development at Advantest. “You see in the press today that people are talking about using large language models for software development, and a test program is a piece of software. We already hear customers talking about, ‘How can I apply this kind of technology?’ If I’m going to have a modern device with 20,000 or more different tests, and I have to write a big test program, I need to develop a large, complex system. Can I apply LLM technology to be able to do that faster? If I have a large digital device, and it’s got an embedded A-to-D converter, I need to write a linearity test for that. And so I’m basically going to describe the parameters of a linearity test to a GPT-type model, and it’s going to go off and write the code for me. It’s probably not 100%, and it may not be the most efficient thing, but it’s a good starting point. And now I can go and iterate on that and make it better, faster, and more efficient.”

In that example, though, the model must comprehend the architecture of the analog-to-digital function. “That can vary a lot from one product space to the next,” Butler said. “So you have to know something about the architecture of that block before it goes off to write an effective test, because no one linearity test fits all. That’s another place where ChatGPT can come in: to say, ‘Okay, I’m going to give you a certain description. This is the structure and the architecture of my A/D. Now go off and write me a linearity test for that technology function.’”
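
To make the linearity-test example concrete, the sketch below shows the kind of code such a prompt might produce: a standard histogram-based DNL/INL calculation for an ADC driven by a slow ramp. It is a generic illustration of a linearity measurement, not Advantest test-program code or actual LLM output.

    # Generic histogram-based ADC linearity sketch: DNL and INL in LSBs from a slow-ramp capture.
    import numpy as np

    def dnl_inl_from_codes(codes, n_bits):
        """Compare per-code hit counts against the ideal uniform count (histogram method)."""
        hist = np.bincount(np.asarray(codes), minlength=2 ** n_bits).astype(float)
        core = hist[1:-1]                 # drop end codes, which absorb ramp over/under-range
        ideal = core.sum() / len(core)
        dnl = core / ideal - 1.0          # deviation of each code width from 1 LSB
        inl = np.cumsum(dnl)              # accumulated deviation of the transfer curve
        return dnl, inl

    # Synthetic 8-bit capture of a noisy linear ramp, for demonstration only.
    rng = np.random.default_rng(2)
    ramp = np.linspace(0, 255, 100_000) + rng.normal(0, 0.3, 100_000)
    codes = np.clip(np.round(ramp), 0, 255).astype(int)
    dnl, inl = dnl_inl_from_codes(codes, n_bits=8)
    print("max |DNL| =", round(float(np.max(np.abs(dnl))), 3),
          " max |INL| =", round(float(np.max(np.abs(inl))), 3))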

He noted that while it’s early to talk about LLMs in terms of what is commercially available, they likely will play a role in test program creation — in addition to more traditional roles of analyzing and inferring from data.

Another example from the test world comes from engineers at Emerson Test & Measurement. When a yield excursion came up at a particular fab and the developer of that specific test program was no longer with the firm, the engineers decided to see whether a large language model could help decipher the program. While it did not provide all the answers they needed, the model did give them a way to understand the data, and the engineers took it from there.

Machine learning models, even though they are capable of learning, need much direction to provide the answers that engineers seek. “You need to prioritize, organize and label the data,” said Nitza Basoco, technology and marketing director at Teradyne. “A lot of times people say, ‘We could just use AI to do this.’ Okay, but what kind of inputs are you giving it? How are you labeling it? You have to teach it what is important and what’s not important. And maybe you put some structure in place and say, ‘From this point forward, we do these things.’ So it’s a concentrated strategy that needs to be implemented.”

Care goes into these implementations because the cost of yield loss is so high. “The key factor here is speed,” said DR Yield’s Rathei. “To enable swift analyses, it is essential to have all data readily accessible in a yield management system. When you have to start collecting data from various tools to address yield excursions, the delays caused by this may be very expensive. For example, we had a case where about 3,000 wafers could be saved by fast analysis of all potentially affected material and a swift isolation of the problem material. This example demonstrates the power of analytical data insight and the money that can be saved.”

When a yield excursion does occur, fabs can take different approaches to identify the cause. “First, an equipment commonality analysis or one-way analysis of variance (ANOVA) can be used to identify which tool set or process step is contributing to the excursion,” said Wei Heng. “Onto Innovation’s Discover Yield software has a Yield Mine analytics module that easily enables yield excursion analysis with a lot list and wafer equipment history. Analysis is done quickly via a proprietary supervised algorithm. Results can be easily shared to other parties in the fab by enabling workspace profile sharing, which includes analytics results.”
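
A minimal version of that first step, equipment commonality analysis via one-way ANOVA, simply groups wafer yield by the tool used at each process step and tests whether any tool group differs significantly. The sketch below uses made-up equipment history and is not Onto Innovation’s Yield Mine algorithm.

    # Commonality sketch: one-way ANOVA of wafer yield grouped by tool, per process step.
    # The equipment history and p-value cutoff are illustrative assumptions.
    from collections import defaultdict
    from scipy.stats import f_oneway

    # Hypothetical wafer equipment history records: (step, tool, wafer yield)
    history = [
        ("litho_M1", "LITHO_A", 0.95), ("litho_M1", "LITHO_A", 0.94), ("litho_M1", "LITHO_A", 0.96),
        ("litho_M1", "LITHO_B", 0.81), ("litho_M1", "LITHO_B", 0.83), ("litho_M1", "LITHO_B", 0.80),
        ("etch_M1",  "ETCH_1",  0.95), ("etch_M1",  "ETCH_1",  0.82), ("etch_M1",  "ETCH_1",  0.94),
        ("etch_M1",  "ETCH_2",  0.81), ("etch_M1",  "ETCH_2",  0.96), ("etch_M1",  "ETCH_2",  0.83),
    ]

    by_step = defaultdict(lambda: defaultdict(list))
    for step, tool, yld in history:
        by_step[step][tool].append(yld)

    for step, tools in by_step.items():
        if len(tools) < 2:
            continue                                  # nothing to compare at this step
        stat, p = f_oneway(*tools.values())
        print(f"{step}: F={stat:.1f}, p={p:.4f}", "<- suspect step" if p < 0.01 else "")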

“From a test perspective, we are gathering a lot of data, and one of the products under that is test and diagnosis,” said Hutner. “So what we’re doing is we collect up all of the scan data, so ATPG scan at logic test, and we analyze that data. For a number of our customers, they have goals that they must get that data analyzed within a certain number of hours, or a day, from when the wafers and packages get tested, and then they can take a look at the trends to see if there is a systematic problem across that material.”

“So from Siemens EDA’s side, what we’ve done is we’ve defined a workflow that includes, ‘How does the test data get collected and formatted?’ We provide patterns that have both drive data and expects. We then ask the customer to format the data in a standard test data format like STDF, and then that feeds directly into our volume diagnosis workflow, where it can be analyzed using a bunch of computers to go figure out where the yield problems exist. Then we end up providing reports or analysis, or a human goes in and looks at them,” said Hutner. “But the whole idea about having a volume diagnosis flow is that you get those reports out of the tool about the consistent failures that are coming out. Then that information gets fed back to our customers, who feed it back to their fab or their test house to let them know the kinds of things we’re seeing. Is it a problem that can be fixed? Is it a fabrication problem? It can be a consistent problem in a particular area of the die or a layer. We can give them all that detail within our YieldInsights as well as our test and diagnosis tools.”
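
The value of such a flow comes from rolling individual diagnosis callouts up into consistent signatures. A hedged sketch of that aggregation step is shown below; the report fields and the 30% cutoff are assumptions, not Siemens’ data model.

    # Sketch: roll per-die scan diagnosis callouts up into candidate systematic signatures.
    # The callout fields (layer, region) and the 30% cutoff are illustrative assumptions.
    from collections import Counter

    # Hypothetical diagnosis callouts: (die_id, suspect_layer, suspect_region)
    callouts = [
        ("d01", "M2", "blockA"), ("d02", "M2", "blockA"), ("d03", "M2", "blockA"),
        ("d04", "M5", "blockC"), ("d05", "M2", "blockA"), ("d06", "V1", "blockB"),
    ]

    signature_counts = Counter((layer, region) for _, layer, region in callouts)
    total = len(callouts)
    for (layer, region), n in signature_counts.most_common():
        share = n / total
        tag = "<- systematic candidate" if share >= 0.30 else ""
        print(f"{layer}/{region}: {n} failing die ({share:.0%}) {tag}")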

Inside wafer maps
Engineers traditionally have used computer-aided tools to localize and classify defect types in wafer maps, and that continues today. But many fab and process engineers also are exploring whether ML models can automate parts of this time-consuming and costly process. How successful that will be remains to be seen. A high level of data customization, coupled with the limited accuracy of ML approaches, is a common challenge. In addition, it’s sometimes difficult to predict the bounding box around a given defect (localization). Such defects also vary greatly in size, from micro-residues to scratches spanning several millimeters, which adds complexity.

Prashant Shinde and colleagues at Samsung Semiconductor’s India R&D Center reported on a YOLO (You Only Look Once) object detection architecture that achieved 94% accuracy in defect classification, while providing defect localization that often is not available with other architectures. [3] More recently, Sanghyun Choi, a software development engineer at Siemens EDA, and colleagues used version 8 of YOLO to detect defects in SEM images, tie defect locations to the design layout, classify the defects, and provide clues to related process issues. [4] In this study, five models were used to capture all defect types, and a strategy called ensemble voting assigned votes to cases where different models produced overlapping bounding boxes, thereby improving localization accuracy. The ML approach is expected to increase the accuracy of root cause analysis and speed problem solving between manufacturing and design layouts.
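
The ensemble-voting step can be sketched with a simple intersection-over-union (IoU) vote across the boxes reported by different models; the thresholds and box format below are assumptions, not the exact scheme in reference [4].

    # Sketch of IoU-based ensemble voting over bounding boxes from several defect detectors.
    # Boxes are (x1, y1, x2, y2); the 0.5 IoU threshold and 2-vote minimum are assumptions.
    def iou(a, b):
        """Intersection-over-union of two axis-aligned boxes."""
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        union = area(a) + area(b) - inter
        return inter / union if union else 0.0

    def ensemble_vote(model_boxes, iou_thr=0.5, min_votes=2):
        """Keep boxes that at least `min_votes` models agree on, suppressing duplicates."""
        kept = []
        for i, boxes in enumerate(model_boxes):
            for box in boxes:
                votes = 1 + sum(any(iou(box, other) >= iou_thr for other in model_boxes[j])
                                for j in range(len(model_boxes)) if j != i)
                if votes >= min_votes and not any(iou(box, k) >= iou_thr for k in kept):
                    kept.append(box)
        return kept

    # Three models detect roughly the same scratch; one also reports a spurious box.
    m1, m2 = [(10, 10, 50, 20)], [(12, 11, 52, 21)]
    m3 = [(11, 9, 49, 19), (200, 200, 210, 212)]
    print(ensemble_vote([m1, m2, m3]))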

A single yield management hub?
Yield management systems (YMSs) always have played a fundamental role in a semiconductor manufacturer’s toolbox, but their role may be expanding. One way of managing the massive data collection and analysis needed to facilitate higher yields is to establish a central hub for data and analysis. Silicon lifecycle management (SLM) is one such hub, spanning from design through manufacturing and packaging and out into the field.

SLM goes after some of the most difficult yield problems — systematic failures. “With our silicon lifecycle management tools, we have various ways to identify systematic defects in production, using spatial pattern recognition, for instance, to identify pattern systematics,” said Matt Knowles, product management director for hardware analytics and test at Synopsys. “So we don’t have to actually have a hypothesis per se, which is one of the nice things about having a unified data analytics system. And since we go all the way to failure analysis, we have a closed loop, so customers can create their own learning in the solution. You can take that FA and the miss data, and you can feed that back into the design features, whether they be systematic patterns or spatial signatures. And you can create your own internal model to learn about the systematics in your particular product and process. This is hugely valuable, because there’s no kind of first principles way of identifying some of these systematics ahead of time. It really has to be an iterative learning process in a platform that has all this data connected.”
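
A very reduced example of a spatial signature is a zone comparison: if the fail rate at the wafer edge is far above the center, that points to a spatial systematic rather than random defectivity. The zone split below is an assumption for illustration, not Synopsys’ spatial pattern recognition.

    # Reduced spatial-signature check: compare die fail rates by wafer zone.
    # The 60% radius split and synthetic wafer are illustrative assumptions.
    import numpy as np

    def zone_fail_rates(die_xy, fails, radius):
        """Split die into center vs. edge zones and report the fail rate per zone."""
        r = np.linalg.norm(np.asarray(die_xy, float), axis=1)
        fails = np.asarray(fails, bool)
        zones = {"center": r <= 0.6 * radius, "edge": r > 0.6 * radius}
        return {name: float(fails[mask].mean()) for name, mask in zones.items() if mask.any()}

    # Synthetic wafer: baseline 5% fails, elevated fails near the edge.
    rng = np.random.default_rng(3)
    xy = rng.uniform(-100, 100, size=(2000, 2))
    xy = xy[np.linalg.norm(xy, axis=1) <= 100]                 # keep die on the wafer
    near_edge = np.linalg.norm(xy, axis=1) > 80
    fail = rng.random(len(xy)) < np.where(near_edge, 0.25, 0.05)
    print(zone_fail_rates(xy, fail, radius=100))               # edge rate well above center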

SLM systems represent a marked change from the status quo in semiconductor manufacturing, where different functions are performed in silos within fabs and OSATs, including metrology and inspection, wafer sort testing, FDC systems, advanced process control systems, SPC, and packaged device testing. With all the activity around 2.5D, 3D-ICs, and other types of advanced packaging, there’s a growing need to share data from manufacturing in the fab with packaging houses and test providers. SoCs, meanwhile, increasingly need data sharing throughout the supply chain.

At the same time, cloud storage enables remote data analysis, yield monitoring, online and off-line analytics, and alarms. “There is a growing trend toward moving yield analysis off-site, both for fabless users (foundry customers) and for IDMs,” wrote Helen Yu, vice president of foundry operations and yield engineering at Renesas. [5]

Other substantial changes are coming about as a result of chiplets in advanced packages. “With 2.5D and 3D integrations, you have more content to verify, more to test and it becomes an ecosystem of multi-die,” said Guy Cortez, principal product manager of SLM Production Analytics at Synopsys. “One thing we’ve seen is we’ve had some customers who strongly suggest that other companies in their supply chain use the same platform so that they can rate the root causes and accelerate resolution of yield problems, which is much simpler than weeklong back-and-forth data file transfer solutions.”

Cortez emphasized that SLM enables a broad set of priorities. “It’s not just looking at certain tests in manufacturing, because there are certain tests — like scan test and others — where you can’t test everything in order to do diagnostics on that. You need to look at the entire test and look at the entire yield to get the complete picture. You can have a bunch of systematics, but the result could be just a 0.01% improvement in yield, and that may not move the needle. You really need a complete picture in an analytics environment. You want to target those systematics that are causing the biggest yield impact, and our tool actually prioritizes that for the customer. If you know each of these systematics, we’ll give you the biggest hitter, and then the next one, and the next one. So if you go in that order, then you get to higher yield faster. That’s the goal.”
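
That prioritization is essentially a Pareto ranking: estimate the yield recovery each systematic signature would buy, then work down the list. A toy sketch with invented numbers follows; it illustrates the idea, not the Synopsys tool’s ranking.

    # Toy Pareto ranking of systematic signatures by estimated yield recovery (numbers invented).
    systematics = {                      # signature -> estimated yield loss it causes (fraction)
        "M3 via chain opens":      0.021,
        "edge-die scan fails":     0.008,
        "SRAM column shorts":      0.0001,   # the 0.01% case that may not move the needle
        "overlay-related leakage": 0.013,
    }

    cumulative = 0.0
    for name, loss in sorted(systematics.items(), key=lambda kv: kv[1], reverse=True):
        cumulative += loss
        print(f"{name:26s} +{loss:.2%} yield if fixed (cumulative {cumulative:.2%})")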

Hutner agrees, adding that no one company can provide all the necessary data. “We also partner with other yield tools, such as our partnership with PDF Solutions. And the systematics are really the interesting bit from the yield excursion side of it, because if there’s a consistent problem on a layer, then you can start to really impact the yield excursion.”

Conclusion
Yield excursions are among the most disruptive and costly events in semiconductor manufacturing, and IC manufacturers go to great lengths to contain yield loss and to fully learn the lessons from each excursion. Data sharing and partnering among players in the design, test, and yield improvement areas are enabling companies to react more quickly and effectively when yield loss does occur. Ongoing advances in communications, yield tools, and management systems such as silicon lifecycle management promise improved results going forward.

“The knowledge gained from yield excursions is a tremendous benefit,” said Onto Innovation’s Wei Heng. “The information learned can be used to deploy monitoring, control and FDC strategies that can benefit fab yield. For example, parameters that might have been deemed low risk in the past might give subtle signals before a tool fails, making them critical parameters.”

References

  1. H. Yu, S. Martin, et al., “Expediting manufacturing safe launch with Big Data AI/ML analytic solutions on the cloud,” 2024 8th IEEE Electron Devices Technology & Manufacturing Conference (EDTM), Bangalore, India, 2024, pp. 1-3, doi: 10.1109/EDTM58488.2024.10512266.
  2. E. Jen, et al., “Using BERT Pre-Trained Image Transformers to Identify Potential Parametric Wafer Map Defects,” 2024 35th Annual SEMI Advanced Semiconductor Manufacturing Conference (ASMC), Albany, NY, USA, 2024, pp. 1-5, doi: 10.1109/ASMC61125.2024.10545454.
  3. P. Shinde, P. P. Pai, and S. P. Adiga, “Wafer Defect Localization and Classification Using Deep Learning Techniques,” IEEE Access, vol. 10, pp. 39969-39974, 2022, doi: 10.1109/ACCESS.2022.3166512.
  4. S. Choi, et al., “Machine Learning Based SEM Image Analysis for Automatic Detection and Classification of Wafer Defects,” 2024 35th Annual SEMI Advanced Semiconductor Manufacturing Conference (ASMC), Albany, NY, USA, 2024, pp. 1-4, doi: 10.1109/ASMC61125.2024.10545512.
  5. H. Yu, et al., “Expediting manufacturing safe launch with Big Data AI/ML analytic solutions on the cloud,” 2024 8th IEEE Electron Devices Technology & Manufacturing Conference (EDTM), Bangalore, India, 2024, pp. 1-3, doi: 10.1109/EDTM58488.2024.10512266.

Related Reading
Defect Challenges Grow At The Wafer Edge
Better measurement of edge defects can enable higher yield while preventing catastrophic wafer breakage, but the number of possible defects is increasing.


