Predictive Fab Management

The future of fab management lies in effectively managing process and equipment variation.


Managing variation requires a different approach in fab management, moving from reactive to predictive methodologies.

This is easier said than done, however. Predictive fab management requires a much more detailed understanding of everything happening in the fab, including process variation, equipment variation, mix variation—all of which must be managed with dispatch strategies to produce predictable fab performance.

One way to break down the problem is to first consider the “bottom-up” challenges of variation in equipment and process, and then the “top-down” challenge of exchanging data and scheduling the fab.

This first article examines the bottom-up approach: how to identify problems in equipment before they affect the process. The industry has become very sophisticated in using statistical process control (SPC) and run-to-run control to manage process outcomes. Bill Ross, manufacturing manager at Sematech, said his group has met with teams at United Technologies, which develops monitoring systems for jet engines. Some 900 sensors in each engine are monitored in real time. The status and issues are uploaded during flight so that appropriate maintenance or replacement equipment is waiting as soon as the plane has landed.

This is a great example of real-time predictive maintenance. The jet engine problem is simpler because there is only one type of engine per plane, whereas a semiconductor fab may run more than 30 different types of equipment. On the other hand, the penalty for a reliability failure in a jet engine is far more severe.

Sematech has a do-it-yourself approach to identifying critical issues. The group looks at unscheduled maintenance reports to select targets. They add sensors and logging software, and pick up synchronizing trigger signals from the equipment using jumper wires. They also have access to time-stamped event files to correlate with their data. Once they have data showing cause and effect, they use it to persuade suppliers to add specific monitoring functions to their equipment.
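The core of this do-it-yourself approach is aligning sensor readings with the tool's time-stamped event log to see which equipment state precedes an excursion. A minimal sketch of that alignment, using hypothetical event names and made-up readings (none of this data is from the article):

```python
from bisect import bisect_right

# Hypothetical time-stamped tool event log (seconds, event name) and
# sensor readings (seconds, value) -- illustrative data only.
events = [(0.0, "pump_on"), (5.0, "rf_on"), (12.0, "rf_off")]
readings = [(4.8, 1.0), (5.2, 9.5), (6.0, 9.7), (13.0, 1.1)]

def last_event_before(t, events):
    """Return the name of the most recent tool event at or before time t."""
    times = [e[0] for e in events]
    i = bisect_right(times, t) - 1
    return events[i][1] if i >= 0 else None

# Count anomalous readings (above a chosen threshold) per preceding event --
# a crude way to see which equipment state the excursions cluster behind.
THRESHOLD = 5.0
counts = {}
for t, v in readings:
    if v > THRESHOLD:
        ev = last_event_before(t, events)
        counts[ev] = counts.get(ev, 0) + 1

print(counts)  # expect the anomalies to cluster after the "rf_on" event
```

In practice the event log and sensor streams come from separate clocks, which is why Sematech's jumper-wire trigger signals matter: they give a common time base for exactly this kind of join.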

The power of this sort of understanding applied to semiconductor processing is obvious. Ross sees the challenge as identifying the correct items to invest in monitoring.

Bill Conner, principal at Inficon, described a commercial implementation of a high-bandwidth parallel network and sensor package called FabGuard. The system collects data from custom network-enabled sensors, along with trigger signals from the host system. The host software allows off-line inspection for correlations. Conner illustrated the importance of bandwidth for certain failure modes with an example of arcing in etchers and deposition systems, which requires high-frequency sampling to see the onset of problems. These are system- and process-specific analyses based on user-installed sensors.
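Why sampling bandwidth matters for arc detection can be shown with a toy simulation: a millisecond-scale current spike is caught at kilohertz rates but stepped over entirely at typical slow polling rates. The waveform and numbers below are purely illustrative, not from FabGuard:

```python
def current(t):
    # Baseline 1.0 A with a 2 ms arc spike to 8.0 A starting at t = 0.5005 s.
    return 8.0 if 0.5005 <= t < 0.5025 else 1.0

def max_sampled(rate_hz, duration=1.0):
    # Sample the trace at a fixed rate and report the peak value seen.
    return max(current(i / rate_hz) for i in range(int(duration * rate_hz)))

fast = max_sampled(1000)  # 1 kHz sampling: at least one sample lands in the spike
slow = max_sampled(10)    # 10 Hz sampling: 100 ms spacing steps over the spike

print(fast, slow)
```

A monitor polling at 10 Hz reports a flat 1.0 A trace and sees nothing, while the 1 kHz stream captures the 8.0 A transient, which is the basic case for high-bandwidth sensor networks on arc-prone chambers.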

Jimmy Iskandar, data scientist at Applied Materials, described a fully integrated, generic predictive maintenance (PdM) solution. The company’s TechEdge PdM product consists of an offline data analysis system and a real-time prediction system that uses the understanding developed offline. The company points to examples of beta testing at customers, while suggesting that Applied is moving to deploy the solution more widely.

According to Iskandar, the PdM solution can be summed up as follows: “The modeler collects data from any of the existing sensors inside the system. Custom sensors can be added by the Applied team and virtual sensors can be created by correlating multiple data sources. The data is summarized into a series of metrics. Afterward, the PdM software looks for correlations between the summary metrics and the system maintenance log.”
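The two-step workflow Iskandar describes, summarizing raw traces into metrics and then correlating those metrics with the maintenance log, can be sketched in a few lines. The run data, metric choices, and threshold below are hypothetical stand-ins, not Applied's actual method:

```python
import statistics

# Hypothetical per-run sensor traces plus a maintenance outcome flag
# (1 = unscheduled maintenance followed the run). Illustrative data only.
runs = [
    {"trace": [1.0, 1.1, 1.0], "failed": 0},
    {"trace": [1.1, 1.0, 1.2], "failed": 0},
    {"trace": [1.4, 1.6, 1.5], "failed": 1},
    {"trace": [1.5, 1.7, 1.8], "failed": 1},
]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Step 1: summarize each raw trace into compact metrics (mean, peak).
means = [statistics.fmean(r["trace"]) for r in runs]
peaks = [max(r["trace"]) for r in runs]
failed = [r["failed"] for r in runs]

# Step 2: correlate each summary metric with the maintenance outcome.
print(round(pearson(means, failed), 2))
print(round(pearson(peaks, failed), 2))
```

Metrics that correlate strongly with the maintenance log become candidate health indicators for the real-time prediction side; metrics that don't are dropped, which is what keeps the online system cheap compared to mining full traces.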

Applied has developed PdM models that generalize across chambers (of the same tool type), products, and processes rather than being tied to one specific configuration, thereby increasing model robustness and lowering model maintenance costs.

Diagnostic solutions discussed at Semicon West this summer ranged from “do it yourself” addition of sensors to look for correlations, through commercial custom sensor networks, to an integrated generic solution that looks for correlations in statistical data summaries. The different solutions look for correlations at different levels of detail in the data.

Taking control
Another challenge is ensuring that the control limits for equipment and process deliver the desired control in devices. Rohit Lal, Lean Six Sigma team leader at GlobalFoundries, said the foundry uses a combination of process models, factorial designed experiments, and internally developed device models to connect all the dots. “In a finFET, 20 process steps determine gate height, so they use correlations to identify the first problem step,” said Lal. This strategy will need to be extended back further to connect equipment parameter control to process.

The focus of the Semicon West session was measuring variation to predict failures in process equipment, but there are other aspects to managing variation that are part of the ITRS focus. In spite of progress using SPC and run-to-run control, there is plenty of opportunity to further reduce process variation. The future of this sort of control is to increase granularity to wafer-to-wafer and within-wafer correction. Also, run-to-run control today relies on statistical methods for prediction, and the goal is model-based predictors.
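A common statistical run-to-run scheme is an EWMA controller: estimate the process offset from each run's output and fold it back into the next run's recipe. The sketch below uses invented gains, targets, and a constant disturbance purely to illustrate the idea; real controllers face noisy, drifting processes:

```python
# Minimal EWMA run-to-run controller sketch -- all numbers are illustrative.
TARGET = 50.0   # desired output (e.g., a critical dimension in nm)
GAIN = 1.0      # assumed linear process gain: output = GAIN * input + offset
LAMBDA = 0.4    # EWMA smoothing weight

def run_process(recipe, true_offset):
    # Stand-in for the real tool: output is shifted by an unknown offset.
    return GAIN * recipe + true_offset

offset_est = 0.0
recipe = TARGET
outputs = []
for run in range(8):
    y = run_process(recipe, true_offset=3.0)   # constant 3.0 nm disturbance
    outputs.append(y)
    # EWMA update of the offset estimate, then feed it back into the recipe.
    offset_est = LAMBDA * (y - GAIN * recipe) + (1 - LAMBDA) * offset_est
    recipe = (TARGET - offset_est) / GAIN

print(round(outputs[0], 2), round(outputs[-1], 2))
```

The first run misses the target by the full disturbance, and successive runs walk back toward it. The model-based predictors the ITRS roadmap calls for would replace the purely statistical `offset_est` update with a physical model of where the process is headed.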

Chris Mack, self-styled “litho guru” and a pioneer of lithography modeling, believes high-quality physical models are the key to predictive and feed-forward control, rather than searching for correlations in big data. He cites the success of lithography modeling, which was based on the fundamental physics of image formation plus a more heuristic model of resist dissolution. He sees multi-patterning as driving new opportunities.

“I have heard that complete 20nm Metal 1 logic patterns at foundries still require lines running in both X and Y and need up to six lithography steps,” said Mack. The six steps all interact to determine final geometries, both critical dimensions and pattern placement. Feed forward control for overlay and line width will require models for patterning, including lithography and etch. In his opinion, etch models are a serious weakness today.

The industry is just starting to appreciate the opportunity presented by better monitoring of equipment health. The “bottom-up” picture is of a need for a much deeper understanding of equipment and process to deliver predictable fab performance.


