Making it simpler to analyze large amounts of tool data to identify the root cause of problems.
Diagnosis of any kind requires synthesizing information. As humans, we are much better at this when we can visualize information rather than stare at numbers and statistics, especially when the data represents something happening over time. Think of a GPS tracker that records where you have walked in a day: the raw data are coordinates and time stamps, but a map that displays that same information is far more helpful for understanding where you went.
To diagnose its process tools, Lam built a utility called the Lam Data Analyzer (LamDA), which reads the tool data logs and visualizes events. It is particularly useful for comparing a good wafer run to a bad one, or a good chamber to a bad one, and it is very widely used by both customers and Lam engineers, with over 8,000 active licenses in the field. LamDA is at its best when you have a hypothesis about what may have happened and want to look more closely to confirm or refute it. But what if you know something is wrong and don’t know where to look (the “needle-in-a-haystack” problem), or face the even more challenging situation where you don’t know what you don’t know?
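As an illustration of the kind of run-to-run comparison LamDA enables (its actual log format and interface are proprietary), here is a minimal Python sketch that overlays one logged signal from a good run and a bad run. The file names, column names, and units are hypothetical:

```python
# Minimal sketch of a good-run vs. bad-run comparison on logged tool data.
# Assumes hypothetical CSV logs with a "time_s" column plus sensor columns
# such as "chamber_pressure"; the real LamDA log format is proprietary.
import pandas as pd
import matplotlib.pyplot as plt

good = pd.read_csv("good_run.csv")  # known-good wafer run
bad = pd.read_csv("bad_run.csv")    # suspect wafer run

fig, ax = plt.subplots()
ax.plot(good["time_s"], good["chamber_pressure"], label="good run")
ax.plot(bad["time_s"], bad["chamber_pressure"], label="bad run")
ax.set_xlabel("time (s)")
ax.set_ylabel("chamber pressure (mTorr)")
ax.set_title("Run-to-run comparison of one logged signal")
ax.legend()
plt.show()
```

Overlaying the two traces makes a timing shift or a pressure excursion jump out immediately, which is exactly the confirm-or-refute workflow described above.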
To address these bigger challenges you need a different approach; to paraphrase the movie ‘Jaws’, you need “a bigger boat” to deal with a bigger problem. This is where the latest generation of big data analytics comes in, called Diagnostics to Chamber Matching, or D2CM. It is a machine learning approach that looks at all the tool data, across all the tools, over long periods of time. It finds the needle in the haystack in a field of haystacks across many seasons, even when you didn’t know there was a needle to be found. It is a kind of “Super LamDA.”
D2CM is typically used to assess the performance of a fleet of tools running the same process (also called an “application”). Customers in volume manufacturing use control limits to judge whether the tools are within a manufacturing specification, but there can still be significant tool-to-tool performance differences that are hard to identify and often even harder to trace to a root cause. D2CM uses a multivariate (many-dimensional) approach to identify statistically significant differences that can point to how to make things better. A big advantage of this approach is that it takes the interactions between different signals into account (remember that pressure, temperature, power, etc. are all interrelated), which gives a much better signal-to-noise ratio and points to the root cause much faster.
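Lam has not published D2CM’s internal algorithms, but one standard multivariate technique that captures interactions between signals in exactly this way is the Mahalanobis distance, the statistic behind Hotelling’s T² control charts. The sketch below is a minimal illustration using made-up fleet data, not a description of D2CM itself:

```python
# Sketch of a standard multivariate technique (Mahalanobis distance,
# as used in Hotelling's T^2 charts), not necessarily what D2CM uses.
# Interactions between signals enter through the covariance matrix,
# which is what gives a multivariate approach its signal-to-noise
# advantage over checking each sensor against its own control limits.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fleet baseline: 200 runs x 4 correlated signals
# (e.g., pressure, temperature, RF power, gas flow).
baseline = rng.multivariate_normal(
    mean=[50.0, 60.0, 1500.0, 200.0],
    cov=[[4, 1, 10, 2],
         [1, 2, 5, 1],
         [10, 5, 400, 20],
         [2, 1, 20, 9]],
    size=200,
)

mu = baseline.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(baseline, rowvar=False))

def mahalanobis_sq(run):
    """Squared distance of one run from the fleet baseline,
    accounting for correlations between the signals."""
    d = run - mu
    return d @ cov_inv @ d

# A run where every signal is individually within ~2 sigma of its own
# mean, yet the *combination* breaks the usual correlation pattern,
# can still score high on the multivariate statistic:
odd_run = mu + np.array([3.0, -2.0, -25.0, 5.0])
print(f"Mahalanobis^2 of odd run: {mahalanobis_sq(odd_run):.1f}")
```

The design point is the one made in the paragraph above: a univariate control chart would pass each of those four readings, while the multivariate statistic flags the run because the signals moved against their normal correlations.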
Machine learning is typically complicated, and it takes a fair amount of training to use it well. It can take even more skill to translate the outcomes of the analysis into repairs or corrections, because to do that well you need to understand the process tools themselves.
This is where a development program now being launched comes into play. It takes many of the fundamental use cases around the big data approach and distills them into much simpler “apps.” Examples include an app that audits all the configuration settings on every tool in your fleet (sketched below), a “subsystem health model” app, and an app that helps you interpret the daily tool health checks that are run. The program also creates a “sandbox” environment where process engineers can prototype different analytics approaches or data visualizations with easy access to the tool data and the core engine of D2CM.
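As a rough illustration of the configuration-audit idea, the sketch below flags any setting that is not identical across a fleet. The tool names, setting names, and values are hypothetical, not the actual app’s logic:

```python
# Hedged sketch of what a fleet configuration-audit app might do:
# flag any setting whose value differs across tools that are supposed
# to be running the same application identically.
from collections import defaultdict

# Hypothetical per-tool configuration snapshots.
fleet_configs = {
    "tool_01": {"esc_temp_setpoint": 60, "rf_match_preset": "A", "sw_version": "3.2"},
    "tool_02": {"esc_temp_setpoint": 60, "rf_match_preset": "A", "sw_version": "3.2"},
    "tool_03": {"esc_temp_setpoint": 62, "rf_match_preset": "A", "sw_version": "3.1"},
}

def audit(configs):
    """Return settings whose values are not identical across the fleet,
    mapped to the per-tool values so a mismatch is easy to trace."""
    values = defaultdict(dict)
    for tool, cfg in configs.items():
        for key, val in cfg.items():
            values[key][tool] = val
    return {k: v for k, v in values.items() if len(set(v.values())) > 1}

for setting, per_tool in audit(fleet_configs).items():
    print(f"MISMATCH {setting}: {per_tool}")
```

Running this on the sample data flags `esc_temp_setpoint` and `sw_version`, showing how even a simple app can surface the subtle fleet differences that make chambers behave differently.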
The development program launches in 2020, with the first apps available to the field in the second half of the year. There is a roadmap of many more apps to come, some of which will come from users themselves via the sandbox. Turning tool data into actionable analytics improves process tool performance and productivity.