Using AI To Improve Metrology Tooling

Virtual metrology shows benefits in limited trials, but much work still needs to be done.

Virtual metrology is carefully being added into semiconductor manufacturing, where it is showing positive results, but the chip industry is proceeding cautiously.

The first uses of this technology have been to augment existing fab processes, such as advanced process control (APC). Controlling processes and managing yield generally do not require GPU processing or advanced algorithms, so these deployments are more a test than a stamp of approval. And as with any AI/ML technology, virtual metrology carries a level of industry overexuberance that will take time to justify across a wider set of use cases.

Nevertheless, initial implementations are showing some benefits. Among the examples:

  • SK hynix recently deployed virtual metrology from Gauss Labs in multiple fabs, providing a 22% reduction in process variation across multiple deposition tools;
  • Siemens EDA and GlobalFoundries are fine-tuning process and design interactions for new layouts [1,2];
  • NXP has qualified deep learning by Lynceus to monitor trench etch depth and profile, a key parameter for automotive chips;
  • Synopsys is tying its process monitors to design, and
  • Metrology, inspection, and yield management tools are starting to incorporate data analytics in various forms.

Still, the practical implementation of virtual metrology (VM) in semiconductor manufacturing is challenging on multiple fronts, starting with the data, which often is scattered among different companies, or different groups within the same company.

“It’s important to understand the consequences of having fragmented data,” said Andres Torres, distinguished engineer at Siemens EDA. “Virtual metrology requires very good alignment between the physical models and the machine learning models. Because data can fool you, with incomplete data, machine learning will give you a best guess on correlation. But if it doesn’t match with the physical parameters you measure, it is worthless.”

Fig. 1: A Shapley (SHAP) analysis of 16 design models illustrates which elements of design content contribute the most to the final output, together with the process parameters used to set up the process for the designs. Source: Siemens EDA

Siemens EDA and GlobalFoundries demonstrated how VM models can increase metrology coverage for every wafer being processed. They showed a 50% reduction in RMS error when moving from a model that excluded process details to one that included adjacent metrology measurements. They also showed it is possible to encode design information in a way that allows the model to learn across multiple designs.
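
Conceptually, an analysis like Fig. 1 can be reproduced with open-source tooling. The sketch below is a minimal, hypothetical version: it trains a tree model on combined design and process features for synthetic wafers, then uses the shap package to rank which features drive the predicted output. The feature names are illustrative, not the actual Siemens/GF inputs.

    import numpy as np
    import pandas as pd
    import shap
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    n = 400
    X = pd.DataFrame({
        # design-content features (hypothetical)
        "pattern_density": rng.uniform(0.2, 0.8, n),
        "min_pitch_nm": rng.uniform(40, 120, n),
        # process-setup features (hypothetical)
        "dose_offset": rng.normal(0, 1, n),
        "focus_offset": rng.normal(0, 1, n),
    })
    # Synthetic output driven mostly by design content, partly by process setup
    y = 3 * X["pattern_density"] + 0.5 * X["dose_offset"] + rng.normal(0, 0.1, n)

    model = RandomForestRegressor(n_estimators=200).fit(X, y)
    shap_values = shap.TreeExplainer(model).shap_values(X)
    # Mean |SHAP| per feature = its contribution to the model output, as in Fig. 1
    print(pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns))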

Interestingly, Siemens and GF determined that the main challenge in extending the approach to more applications is the effort required to collect, process, and organize different sources of historical data.

What is VM?
Virtual metrology is the use of algorithms to predict process results based on previous metrology measurements, on-wafer monitors, and/or process equipment sensor data, instead of measuring the results directly. Because only a small fraction of wafers and dies are measured with metrology tools (sampling), VM is all about predicting reliable outputs from appropriate inputs. First, data needs to be converted into a machine-learning format, a process called feature engineering. Then the model can predict what is happening on the wafers and dies that are not inspected, alerting the process engineer to drift in parameters before they go out of spec.
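
In code, the core loop is small. The following is a minimal sketch of this idea, assuming a pandas DataFrame of per-wafer engineered sensor features in which only a sampled fraction has a physical CD measurement; all column names, coefficients, and spec limits are made up for illustration.

    import numpy as np
    import pandas as pd
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(0)
    n = 1000
    runs = pd.DataFrame({
        "rf_power_mean": rng.normal(1500, 20, n),   # engineered sensor features
        "pressure_mean": rng.normal(35, 0.5, n),
        "temp_slope": rng.normal(0.1, 0.02, n),
    })
    # Synthetic "true" CD; only ~10% of wafers get a physical measurement
    cd_true = 32 + 0.01 * (runs["rf_power_mean"] - 1500) + rng.normal(0, 0.1, n)
    runs["cd_nm"] = cd_true.where(rng.random(n) < 0.10)

    features = ["rf_power_mean", "pressure_mean", "temp_slope"]
    measured = runs.dropna(subset=["cd_nm"])
    model = GradientBoostingRegressor().fit(measured[features], measured["cd_nm"])

    # Predict the unmeasured wafers and flag drift before it goes out of spec
    unmeasured = runs[runs["cd_nm"].isna()]
    pred = model.predict(unmeasured[features])
    target, tol = 32.0, 1.0                          # hypothetical spec window
    alerts = unmeasured.index[np.abs(pred - target) > tol]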

Ideally, VM can shorten root cause analysis, predict out-of-control parameters more quickly, improve APC, and boost yield. Several sources emphasized that the goal of VM is not to replace metrology tools, but rather to augment their use while working alongside TCAD in design, and APC and fault detection and classification (FDC) in fabs.

“Unlike humans, AI has the ability — an advantage in practice — to deal with multivariate factors impacting yield,” said Miki Banatwala, director of software development at Onto Innovation. “In today’s factories, tuning a single parameter will not move the needle. This work has been done and, to a large extent, optimized. Further optimization requires the ability to look at multiple variables, their interactions, and their impacts in a more holistic sense. Comprehending how three or more variables interact to drive performance or yield can only be done with AI.”
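
The point about three-way interactions can be made concrete with a toy example. In the sketch below (synthetic data, illustrative variable roles), a response driven purely by a three-variable interaction is invisible to a main-effects model but is recovered once interaction terms are included.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(1)
    X = rng.normal(size=(2000, 3))          # e.g., scaled temp, pressure, flow
    y = X[:, 0] * X[:, 1] * X[:, 2] + rng.normal(0, 0.1, 2000)  # pure interaction

    main_effects = LinearRegression().fit(X, y)
    print("main effects R^2:", main_effects.score(X, y))   # near 0: invisible

    X3 = PolynomialFeatures(degree=3, interaction_only=True,
                            include_bias=False).fit_transform(X)
    full = LinearRegression().fit(X3, y)
    print("with interactions R^2:", full.score(X3, y))     # near 1: recovered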

But pulling all of that data together is a challenge. “The data ecosystem is fragmented right now,” said Mark Laird, senior staff application engineer at Synopsys. “For example, you have end users that are using our sensors, but it’s for a connected car. The car manufacturer buys the chip, which is being manufactured at one of the major fabs, so the fab houses the data. Or maybe it’s different, and a fabless company that sold you the part in the first place has its own data set. The ecosystem will start to unify, to a large extent, within the next 10 years. It has to, because at the end of the day we’re making a prediction that you’re going to have a brake control failure, for instance, and you need to take an action — derate the performance and lower the voltage so you keep that part working well enough until a recall. But then you need to be able to do a proactive recall of the other parts, find the data at the fab, and fix the problem. So somehow this ecosystem has to unify.”

A familiar scenario shows how machine learning algorithms already are simplifying root cause analysis, whether it is performed only on electrically probed wafers that fail or on RMA devices.

“We have very concrete examples of where machine learning is used to automatically classify defects using cluster analysis,” said Dieter Rathei, CEO of DR Yield. “For instance, you might have 10,000 wafers in production, and our clustering analysis finds maybe 400 wafers out of them that have a specific pattern. Then you can submit the 400 wafers to a tool commonality analysis, for instance, and say, ‘Okay, these wafers have been automatically classified by the system as having a certain pattern. They have a certain amount of yield loss on them, they all have been manufactured on a certain tool, and for this tool there’s a high likelihood it’s responsible for the pattern.’ Or they were processed on a combination of tools. So you can get quite close to the root cause analysis in this way.”

ML modeling for automatic defect classification (ADC) requires choosing the right model, whether that is re-training a convolutional neural network (CNN) transfer-learning model, deep learning methods, auto-encoders, ensembles, or other approaches, said Onto’s Banatwala. “A best-known method for ADC is to provide the end user the confidence that an ADC-classified defect is correctly classified. If ADC does not have the confidence to classify a defect, then it must make the call that the defect is an unknown. Then experts can incorporate these ‘unknown’ defect types into a future model.”
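
The confidence gate Banatwala describes can be expressed in a few lines. This is a hedged sketch, not Onto’s implementation: it assumes a trained PyTorch classifier and a hypothetical set of defect classes, and routes any prediction below a softmax-confidence threshold to an “unknown” bin for expert review.

    import torch

    CLASSES = ["particle", "scratch", "bridge", "residue"]  # hypothetical bins
    THRESHOLD = 0.90                                        # tunable per fab

    def classify(model: torch.nn.Module, image: torch.Tensor) -> str:
        """Return a class label, or 'unknown' if confidence is too low."""
        model.eval()
        with torch.no_grad():
            logits = model(image.unsqueeze(0))              # add batch dim
            probs = torch.softmax(logits, dim=1).squeeze(0)
        conf, idx = probs.max(dim=0)
        # Low-confidence defects become "unknown" and feed the next training round
        return CLASSES[int(idx)] if conf.item() >= THRESHOLD else "unknown"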

Multivariate analyses and interactions between multiple process tools demand an intelligence level beyond what humans provide. “If the 100 wafers are problem material, then you can say, ‘Okay, they have been in this etch chamber, and probably this etch chamber is responsible for that.’ But sometimes there is a combination of tools that is causing an issue,” said Rathei. “We have an algorithm that can go through all the material and the flow of the materials for the fab. And then if you say, ‘I’m interested in these wafers. What do they have in common?’ it can be a combination that they have been on that litho tool and that etcher, and maybe this cleaning tool. And you can define the depth of analysis from a combination of two or more tools, because this is recorded at the same time.”
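
A tool-commonality search of this kind boils down to counting tool combinations over wafer histories. The sketch below assumes a hypothetical schema (columns wafer_id and tool), not DR Yield’s actual algorithm, and ranks combinations by lift, i.e., how over-represented they are among the flagged wafers.

    from collections import Counter
    from itertools import combinations
    import pandas as pd

    def commonality(history: pd.DataFrame, flagged: set, depth: int = 2):
        """history columns: wafer_id, tool. Rank tool combos by lift."""
        routes = history.groupby("wafer_id")["tool"].apply(frozenset)
        all_c, bad_c = Counter(), Counter()
        for wafer, tools in routes.items():
            for combo in combinations(sorted(tools), depth):
                all_c[combo] += 1
                if wafer in flagged:
                    bad_c[combo] += 1
        n, n_bad = len(routes), max(len(flagged), 1)
        # lift > 1 means the combo is over-represented among flagged wafers
        lift = {c: (bad_c[c] / n_bad) / (all_c[c] / n) for c in bad_c}
        return sorted(lift.items(), key=lambda kv: kv[1], reverse=True)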

IC manufacturing is in the early stages of deep learning. “A minority of our customers use machine learning with GPU processing, but it’s coming because people know AI applications are the future,” said DR Yield’s Rathei. “It’s probably some mixture of the semiconductor industry wanting to be in an advanced technology position and an issue of competitiveness.”

Metrology also has become considerably harder with shrinking CDs and 3D measurements. Chipmakers need to make increasingly detailed measurements, such as CDs at multiple depths or ion implantation profiles. “They want to make sure they utilize the metrology in the customers’ fabs to its maximum capability, and this is driving virtual metrology,” said Torsten Stoll, vice president of marketing and business development at Nova. Some customers do their own big data analysis; others outsource it to the manufacturers of the metrology equipment or the manufacturers of the process equipment and have them work together. “The motivation to implement virtual metrologies is well understood by fab manufacturers, but there are technology challenges and constraints that limit this kind of usage.”

Nonetheless, several startup firms are targeting big data analytics, including virtual metrology. “The idea — and it’s not just our idea — is that metrology data is obviously very valuable, and there’s all this sensor data from the process machines,” said Mike Young-Han Kim, CEO of Gauss Labs. “So if you can correlate the data, you can predict film thickness, CD, overlay, edge placement error, etc.”

Implementations
Kim points to four difficult challenges with ML-based or physics-based predictions:

  • The data dimension is so high that one can pick up spurious relationships among many parameters (flow rates, chamber pressure, chamber temperature, etc.);
  • Sampling rates are so low that it is very difficult to observe meaningful correlations;
  • Data drifts over time (for example, from residue buildup in chambers) and shifts abruptly after maintenance events and recipe changes, and
  • The sheer number of devices, recipes, process steps, and chambers makes it futile to handcraft models for individual cases.

“So the input-output relationship is always changing, and your model has to be flexible to manage those changes,” he said.
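
How a model “manages those changes” is an open design choice. One common pattern, sketched below under assumed column names, is to retrain on a rolling window of recent measured wafers and to reset that window at each maintenance event, so the model never mixes pre- and post-maintenance behavior. This is a generic sketch, not Gauss Labs’ method.

    import pandas as pd
    from sklearn.linear_model import Ridge

    WINDOW = 300  # trailing measured wafers to train on (tunable)

    def rolling_vm(measured: pd.DataFrame, features, target, pm_times):
        """Retrain on recent data, truncated at the last preventive maintenance.

        Assumes `measured` is sorted by a timestamp column `ts`.
        """
        last_pm = max((t for t in pm_times if t <= measured["ts"].max()),
                      default=measured["ts"].min())
        recent = measured[measured["ts"] >= last_pm].tail(WINDOW)
        return Ridge(alpha=1.0).fit(recent[features], recent[target])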

In its work with SK hynix, Gauss Labs developed AI technology for PVD, CVD, and other single-wafer processes, which correlates key metrology parameters with automatically identified features from equipment sensor data. On average, Gauss Labs’ virtual metrology system enabled a 22% reduction in process variation (the distribution shown in Fig. 2 is generic, but reflects realized gains). A key advantage is that the software performs real-time recipe control independent of the fab’s APC system, which is treated as a black box.

Fig. 2: Virtual metrology is deployed in high-volume manufacturing at SK hynix, integrated with its run-to-run APC system. Source: Gauss Labs

One key result of VM is better tool allocation for metrology. “The VM performance tells you if you’re oversampling, so perhaps a 10% sampling rate may be scaled to 5%. Then you can reallocate resources to areas that are undersampled,” said Kim.
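
As back-of-envelope logic, the reallocation Kim describes can be driven by comparing the VM residual error to the spec window. The sketch below is purely illustrative; the thresholds and the factor-of-two steps are assumptions, not a published policy.

    def new_sampling_rate(current_rate, vm_rmse, spec_tolerance, safety=0.25):
        """Halve sampling when VM error is small relative to the spec window;
        raise it when VM cannot be trusted."""
        if vm_rmse < safety * spec_tolerance:
            return current_rate / 2              # e.g., 10% -> 5%
        if vm_rmse > spec_tolerance:
            return min(1.0, current_rate * 2)    # VM unreliable: measure more
        return current_rate

    print(new_sampling_rate(0.10, vm_rmse=0.3, spec_tolerance=1.5))  # 0.05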

Others agree. “With metrology, there’s always a capacity issue,” said Siemens’s Torres. “Some measurements do provide more information regarding the final value you care about, so the fab can focus on efficiency. And it might show you that in the sixth step you get a very good prediction of test results with a particular error rate. That is significant because it’s not an assumption. It’s based on the data.”

Is VM like social media learning?
The machine learning many people are familiar with involves understanding human behavior through social media. But that is very different from the requirements of fabs.

“A lot of the techniques that have been developed by Google and Amazon are focused on trying to understand human behavior. They rely on a relatively small number of measurements per user and many users per month, so the data sets are narrow and deep,” said Torres. “Semiconductor manufacturing is different because the number of samples is not high, but we have physics on our side. This leads to wide and shallow data matrices. So the machine learning techniques are going to be slightly different.”
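
The wide-and-shallow point has a concrete statistical consequence: with far more features than wafers, an unregularized fit memorizes noise, which is why fab-side ML leans on sparsity or physics-informed feature selection. A minimal synthetic demonstration:

    import numpy as np
    from sklearn.linear_model import LinearRegression, Lasso

    rng = np.random.default_rng(2)
    n_wafers, n_features = 60, 500               # shallow and wide
    X = rng.normal(size=(n_wafers, n_features))
    y = 2.0 * X[:, 0] - 1.0 * X[:, 3] + rng.normal(0, 0.1, n_wafers)

    X_test = rng.normal(size=(200, n_features))
    y_test = 2.0 * X_test[:, 0] - 1.0 * X_test[:, 3] + rng.normal(0, 0.1, 200)

    ols = LinearRegression().fit(X, y)
    lasso = Lasso(alpha=0.05).fit(X, y)
    # Expect OLS to generalize poorly (memorized noise), lasso to recover
    # the two true drivers and score well on held-out wafers
    print("OLS test R^2:  ", ols.score(X_test, y_test))
    print("Lasso test R^2:", lasso.score(X_test, y_test))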

In machine learning, feature engineering converts physical information into a format the models can understand. For example, NXP recently qualified machine learning to predict four deep trench metrology measurements per wafer, including trench depth at the wafer center and wafer edge. The goal was to detect and diagnose outliers faster, reduce the sampling rate of physical metrology, and shorten recovery time after preventive maintenance.
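
Feature engineering for a case like this typically means collapsing each raw sensor trace for each process step into a handful of physically meaningful numbers. The sketch below shows one hypothetical summarization; the specific features are assumptions, not NXP’s or Lynceus’ actual inputs.

    import numpy as np

    def trace_features(t: np.ndarray, signal: np.ndarray) -> dict:
        """Summarize one sensor trace (one wafer, one process step)."""
        slope = np.polyfit(t, signal, 1)[0]      # drift within the step
        return {
            "mean": float(signal.mean()),
            "std": float(signal.std()),
            "slope": float(slope),
            # time to reach 95% of the final value (assumes a ramp-up signal)
            "settle_time": float(t[np.argmax(signal > 0.95 * signal[-1])]),
            "duration": float(t[-1] - t[0]),
        }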

After the model was deployed into production, David Meyer of Lynceus reported several key findings: early and ongoing engagement with engineering teams is essential, managing the data poses the expected challenges, adoption in manufacturing requires the ability to interpret the data, and building a production-worthy data pipeline and predictive model is hard.

Fig. 3: One of the first steps in virtual metrology involves feature selection based on historic data and engineering domain knowledge. Source: Lynceus

“Virtual metrology is a relatively well-defined use case of AI: you use data describing how the process is running and try to anticipate the metrology outcome. Our approach is to do this partnering with the fabs, where all the necessary data is sitting, rather than with tool manufacturers or other types of players who are naturally more limited in their ability to deploy those solutions in production,” said Meyer. “Today, we focus on predicting failures caused by equipment malfunction (which is captured by existing sensor data). The question though is: what other types of data do you want to start including in order to be covering more potential failure modes? The next natural step would be to try and catch failures which are related to defectivity. Which data sources should then be integrated and how do you go at it? We believe the answer here is not to try and include all the data that might be relevant at the same time, but rather start with a manageable scope and add more data iteratively. At some point this will involve using data external to the fab and will require data sharing agreements between multiple players in the ecosystem.”

Virtual metrology is widely used for process tool monitoring, according to Jiangtao Hu, senior technology director at Onto Innovation. “Today’s fab managers and process engineers would like to increase the use of VM due to its advantages, such as a high sample rate (measure every wafer) and in-situ and/or instant feedback (i.e., no metrology queue time delay). However, there are also concerns about reliability, limited measurement capability (spatial resolution, impact from incoming variation, etc.), and the cost of sensors.”

Hu explained that the next generation of virtual metrology development will continue to focus on improving reliability, predictability, and efficiency using the latest technology advancements, including:

• Using more and better on-tool sensors;
• Incorporating in-situ metrology sensors at critical process steps;
• Focusing on high-yield-impact areas, such as edge yield loss;
• Improving ML algorithms, confidence indices, and noise filters;
• Auto-training to update VM recipes as a process evolves (see the sketch after this list), and
• Incorporating context information, such as incoming variation.
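
The auto-training item can be made concrete: because some wafers are still physically measured, the VM residuals on those wafers form a natural retraining trigger. The sketch below (illustrative thresholds, not Onto’s method) applies an EWMA noise filter to the residuals and requests retraining when it breaches a k-sigma limit.

    import numpy as np

    def needs_retrain(residuals: np.ndarray, sigma0: float, k: float = 3.0) -> bool:
        """EWMA of |VM - measured| versus a k-sigma limit from the training era."""
        ewma, lam = 0.0, 0.2                     # lam: EWMA smoothing weight
        for r in np.abs(residuals):
            ewma = lam * r + (1 - lam) * ewma
        return ewma > k * sigma0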

Targeted improvements
From here, the chip industry needs to do what it does best — fine tune everything.

“We developed our YieldWatchDog system for the electrical test data because I don’t know of any other system tailored to apply SPC rules and other statistical analyses to electrical test data,” said Rathei. “So I always thought about it as a complementary tool — doing the front-end fab’s inline metrology SPC back on the test floor. Of course, the other direction is easier. So besides the automatic feedback from the statistical point of view, some of our customers are taking the inline metrology data as an additional data source into our yield management system.”

Many chipmakers are moving to more targeted metrology methods as needed. “There is a direct correlation between the nitrogen content in hafnium oxide and the leakage current, for instance,” said Stoll. “So by using XPS and in-line SIMS, customers can correlate film composition with transistor performance. At the same time, some customers are binning chips for performance, and relevant metrology helps them to make better decisions.”

Stoll refers to XPS (X-ray photoelectron spectroscopy) as an “atom counter,” because it separates species by their atomic structure. “The bread and butter for XPS is the work function metal and the high-k dielectric, which are correlated to the electrical properties of your transistor. Customers can optimize the tantalum nitride deposition process because there are five different nitridation states. By measuring these ratios you can see the nitridation state, which plays directly into the electrical performance of the device.”
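
The “atom counter” arithmetic is standard XPS quantification: each element’s peak area is divided by a relative sensitivity factor (RSF) and normalized, and the resulting N/Ta fraction tracks the nitridation state. The RSF values below are placeholders, not calibrated numbers for any specific tool.

    def atomic_fractions(peak_areas: dict, rsf: dict) -> dict:
        """Standard XPS quantification: x_i = (I_i/S_i) / sum_j (I_j/S_j)."""
        scaled = {el: area / rsf[el] for el, area in peak_areas.items()}
        total = sum(scaled.values())
        return {el: v / total for el, v in scaled.items()}

    # e.g., the N/Ta ratio tracks the nitridation state of a TaN film
    print(atomic_fractions({"Ta4f": 120.0, "N1s": 80.0},
                           {"Ta4f": 8.6, "N1s": 1.8}))  # placeholder RSFs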

Conclusion
In addition to virtual metrology, chip manufacturers are improving efficiency by combining multiple process steps and measurements, such as CD, overlay, and edge placement error. But virtual metrology likely will play an increasingly important and wider role, particularly at advanced nodes and in complex, heterogeneous designs.

There are still kinks to be worked out of these systems. Chipmakers need to be able to combine data from multiple sources, understand where VM works best and where it adds little value, and understand the range of variables and where to look for what they don’t yet understand. Nevertheless, given the early positive results and the massive amount of data that needs to be digested, AI may offer some useful improvements for metrology.

References
1. N. Greeneltch, et al., “Design-Aware Virtual Metrology and Process Recipe Recommendations,” SPIE Advanced Lithography, Feb.-March 2023, Paper 12495-75. https://spie.org/advanced-lithography/presentation/Design-aware-virtual-metrology-and-process-recipe-recommendation/12495-75
2. S. Schueler, et al., “Virtual Metrology: How to Build the Bridge Between Different Data Sources,” Proc. SPIE 11611, Metrology, Inspection, and Process Control for Semiconductor Manufacturing XXXV, 116112D (2021). https://doi.org/10.1117/12.2588467
