Using Fab Sensors To Reduce Auto Defects

Fab sensing technology coupled with analytics provides a path to improve reliability of autos.

The semiconductor manufacturing ecosystem has begun collaborating on ways to effectively use wafer data to meet the stringent quality and reliability requirements for automotive ICs.

Silicon manufacturing companies are now leveraging equipment and inspection monitors to proactively identify impactful defects prior to electrical test. Using machine learning techniques, they combine the monitor data with feedback from subsequent manufacturing steps to improve yield, quality, and reliability.

Wafer fabrication and wafer test have long been rich in data that engineers use to support their work. Fabrication relies on metrology data for statistical process control (SPC) charts. Wafer test, meanwhile, sorts good die from bad using pass/fail limits, which started off as simple fixed values.

Over the past two decades the adoption of statistically driven test methods, such as part average testing (PAT), has been effective in improving quality while reducing false negatives. Now, engineers leverage wafer test data at subsequent test steps to simultaneously reduce test time and decrease escapes.
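In outline, PAT replaces fixed pass/fail limits with limits derived from the statistics of the population being tested. The sketch below is a minimal illustration of that idea using a robust (median/MAD-based) variant so that the outliers being screened do not inflate the limits; the specific values and the choice of k are illustrative, not any test house's production recipe.

```python
import statistics

def robust_pat_limits(readings, k=6.0):
    # Robust part-average-test limits: median +/- k * (MAD-based sigma),
    # so a few outliers don't widen the very limits meant to catch them.
    med = statistics.median(readings)
    mad = statistics.median([abs(x - med) for x in readings])
    sigma = 1.4826 * mad  # scales MAD to sigma for normally distributed data
    return med - k * sigma, med + k * sigma

# Hypothetical parametric readings from one lot, with one outlier part.
readings = [9.8, 9.9, 10.0, 10.1, 10.2] * 4 + [14.0]
lo, hi = robust_pat_limits(readings)
outliers = [x for x in readings if not lo <= x <= hi]
```

A part reading 14.0 passes a naive fixed limit of, say, 0 to 15, but falls well outside the population-derived limits, which is exactly the kind of statistical outlier PAT screens as a reliability risk.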

What’s new is using wafer equipment sensor data and wafer inspection monitor data to feed forward to test steps. The converse occurs, as well, with test data fed back to manufacturing steps. These forays into big data analytics spanning more manufacturing steps provide another tool for silicon providers to meet the defectivity rate of 10 parts per billion demanded by automakers.

That’s also becoming more complicated as the automotive industry shifts from mature to bleeding-edge processes, particularly for the centralized logic needed to avoid objects and make split-second decisions for assisted and autonomous driving. That is driving significant changes in the mindset of the automotive IC ecosystem.

“Fabs used to get defectivity down to the point where the chips would yield at final test,” said Doug Sutherland, principal scientist in technical marketing at KLA. “Now they need to get the defectivity down to the point where the chips continue to perform 5 and 10 years after they have left the fab. Yield impacts reliability, measured over the lifetime of the chip.”

This use of less mature 10/7/5nm processes requires automotive IC manufacturers to manage not only random defectivity, but also an increasing level of residual systematic defectivity. “We have to understand all the sources of systematic defectivity and resolve them faster because the time between a process coming to market and that part going into a car is now measured in months, not years,” observed Jay Rathert, senior director of strategic collaborations at KLA.

Connecting data silos becomes essential to learning faster. “The semiconductor industry has historically been really good at collecting data,” said Doug Elder, vice president and general manager of the Semiconductor Business Unit at OptimalPlus. “However, this data was very specific to an operation and/or device family. Traditionally, this siloed data could not be easily correlated against other process steps or historical data. There was no common data format, platform or tools in which to perform these analyses. However, with the advances in sensor technology (all forms) and improved data collection, engineering, analysis and storage capabilities, this is all changing.”

It’s changing for everyone in the semiconductor manufacturing business. “Equipment sensors typically report data as a single point for wafers, while reliability is influenced by the radial performance of the equipment and its changes over time, especially at the wafer edge,” said Jason Shields, vice president of equipment intelligence at Lam Research. “Multi-variate models constructed with a composite of these single-point sensors, correlated to yield or in-line metrics, can be a powerful tool in enabling higher reliability and yield.”

Data analytic frameworks now make connecting fab equipment monitoring, wafer inspection, and test data possible. As with any complex system, the details matter as much as the overall framework.

Sensors: measure in-situ, check health, inspect wafers
Semiconductor manufacturing sensors come in several flavors. Equipment monitoring sensors check equipment properties that influence manufacturing correctness. In-situ sensors take measurements in the equipment during wafer processing. Between wafers, health check sensors take measurements inside the equipment. Inspection monitors examine production wafers for anomalies.

Jon Holt, senior director of fab applications at PDF Solutions, noted that equipment measurement types and equipment sensor numbers have grown with wafer size and process complexity. Consider these on-average numbers:

  • 6-inch wafers: 20 to 30 sensors
  • 8-inch wafers: 60 sensors
  • 12-inch wafers: 200 sensors

Equipment sensors sample data at a defined frequency, producing a time series. Manufacturers store this data both in raw form and as an equation; mathematical techniques extract features of the time series to derive that equation. For a large wafer manufacturer, one day generates a petabyte (10^15 bytes) of sensor data.
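Feature extraction of this kind can be sketched simply: reduce each trace to a handful of summary statistics that stand in for the raw samples. The feature set below (mean, spread, extremes, drift) is illustrative only, not any fab's actual feature library.

```python
import statistics

def trace_features(trace, dt=1.0):
    """Reduce a sensor time series to summary features.

    Illustrative set only: mean, spread, extremes, and a least-squares
    slope capturing drift over the process step (dt = sample interval).
    """
    n = len(trace)
    mean = statistics.mean(trace)
    times = [i * dt for i in range(n)]
    t_mean = statistics.mean(times)
    # Least-squares slope of reading vs. time: drift during the step.
    slope = (sum((t - t_mean) * (y - mean) for t, y in zip(times, trace))
             / sum((t - t_mean) ** 2 for t in times))
    return {
        "mean": mean,
        "stdev": statistics.stdev(trace),
        "min": min(trace),
        "max": max(trace),
        "slope": slope,
    }
```

Storing five features per trace instead of thousands of raw samples is one way the petabyte-per-day firehose gets compressed into something models can consume, while the raw data is retained for later drill-down.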

Equipment requiring significantly more control naturally requires more sensors. For example, EUV lithography equipment has thousands. Plasma tools fall into this category, as well. “There are more sensors on plasma tools and aftermarket companies provide sensors that can be retrofitted especially for plasma tools,” Holt noted.

When engineers perform periodic equipment health checks, they have two options. One method is to run blank wafers (i.e., non-product) through a tool and optically scan them for defects. Another method relies upon optical sensors to take measurements within the equipment.

“Our sensors basically are used to improve yield and productivity,” said Subhodh Kulkarni, CEO of CyberOptics. “Built out of carbide material, they are not supposed to be physically present when the wafer is being processed. They provide diagnostic information rather than in-situ process information.”

This isn’t exactly how industry experts expected things to evolve. “Just looking at how many sensors we have shipped over the last 5 years, our gapping sensor, which measures spacing between a plasma chamber’s electrodes, makes up our highest volume of sales,” said Kulkarni. “The lowest volume sensor we sell is our particle sensor. If you would have asked me 7 to 8 years ago which one we would have sold the most of, I would have predicted exactly the opposite.”

Inside fabs, particle counts always have been of keen interest, including size, position on the wafer, and occurrence at each process step. But equipment health checks can provide only an inferred measurement, such as what happened between wafer steps. Inspecting production wafers, long used to detect process excursions, represents another inferred measurement for particles.

Engineers use optical measurement techniques to look for anomalies. Historically, the technology has been applied on a sampled basis. But two things have changed: the technology to scan wafers has become faster, and the demand for near-perfect quality from automotive now requires it.

“We used to believe that final test was ground truth of whether a die would be reliable or not,” Rathert said. “What we’ve learned is that our in-line defectivity information is just as important in helping to figure out which die are going to be reliable in the long term.”

This all adds up to a lot of data being collected that may or may not have anything to do with latent defects, which is the automakers’ chief concern.

It also goes hand in hand with a concerted push across the supply chain to track devices manufactured with specific equipment. “What we see more and more is traceability,” said John O’Donnell, CEO at yieldHUB. “You have to be able to regenerate wafer data and be able to search for a specific die. Our customers have data in modules, in final test and in wafer sort. You cannot test everything, but with good data you can test for one parameter or another and get something out of the data that you couldn’t without it. You also can check databases for trends and go well beyond what’s available today. This is an important trend.”

Where do latent defects come from?
The automotive ecosystem recognizes that more needs to be done to find reliability defects. A latent defect may show up as a failure after electrical stress during manufacturing test, or it may fail in the field. So what causes them? Where in the equipment do these defects manifest themselves? And how can you detect them sooner?

Let’s start at the very beginning. Defects that impact product quality can be due to intrinsic or extrinsic forces.

“ESD is representative of latent defects that are caused by intrinsic damage to the material properties,” Holt said. “Knowing the defect mechanism, then you’re probably looking at a plasma-generating tool like an etcher or deposition tool. At least as far as ESD, that’s one of the most important generators of latent defects.” Plasma tools can impact both gate oxide integrity and the insulation layers between metal interconnects.

“Extrinsic defects are a lot harder to find,” he said. “The reason they are harder to find is these defects tend to be random. You hope that your in-line inspection can detect follow-on defects.”

Particles primarily cause these extrinsic defects, which motivates using particle counting sensors to assess equipment and optical scanners to inspect wafers. Due to the latter’s slow speed, engineers typically inspect a sampled wafer population.

Fortunately, wafer inspection technology has gotten faster — and just in time to address the higher expectations of the auto industry. “OEMs and Tier 1s are telling us that final test and burn-in just isn’t enough to stop all of the escapes, especially the latent reliability defects that are getting through,” said Rathert. “It is driving us to blend data across domains that have been siloed in the past. So in-line fab defectivity data is definitely part of the reliability solution going forward.”

New connections lead to new ways to use equipment and inspection data
Silicon manufacturing equipment contributes to intrinsic and extrinsic defects, and in mature processes these defects appear to randomly occur. But as multiple industry experts noted, if you look deeper at data from multiple sources, they are not necessarily random. That is driving the industry’s interest in using this data to detect issues sooner and respond in a deliberate manner.

There has been focused effort on both equipment analytics and analytics spanning multiple manufacturing steps.

“Equipment sensors are unlikely to predict individual die failures,” said Lam Research’s Shields. “However, data from equipment sensors can be used to assess whether the wafer output of the process equipment meets quality and yield requirements. In fact, the sensor sensitivity, data collection density, and sampling frequency enable the construction of models to assess the performance of equipment on a real-time basis. These equipment models enable higher quality wafers at lower cost and cycle time than traditional verification with metrology or electrical data.”
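One simple way to build a composite from single-point sensors, sketched below, is to combine per-sensor z-scores into one multivariate distance per wafer and flag wafers whose score stands out. This is a deliberately simplified stand-in (a real model would account for correlation between sensors, e.g. via Hotelling's T², and would be trained against yield or in-line metrics as Shields describes); all the numbers are hypothetical.

```python
import statistics

def composite_index(sensor_matrix):
    """Combine several single-point sensor readings per wafer into one score.

    Score = sum of squared z-scores across sensors: a simplified
    multivariate distance. Real models would handle sensor correlation
    (e.g. Hotelling's T^2) and calibrate against yield data.
    """
    n_sensors = len(sensor_matrix[0])
    col_stats = []
    for j in range(n_sensors):
        col = [row[j] for row in sensor_matrix]
        col_stats.append((statistics.mean(col), statistics.stdev(col)))
    return [
        sum(((x - mu) / sd) ** 2 for x, (mu, sd) in zip(row, col_stats))
        for row in sensor_matrix
    ]

# Hypothetical readings: two sensors per wafer, five wafers; the last
# wafer drifts on both sensors at once.
wafers = [[1.0, 10.0], [1.1, 10.1], [0.9, 9.9], [1.0, 10.0], [2.0, 12.0]]
scores = composite_index(wafers)
```

The point of the composite is that the last wafer's deviation on either sensor alone might look tolerable, but combined across sensors it clearly separates from the population.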

This didn’t happen overnight, however. Silicon device manufacturers have evolved their approaches from SPC charts to advanced multivariate models. A decade ago, manufacturers started learning that single-variable SPC charts could not be straightforwardly applied to equipment sensor data.

“They started implementing it on sensor data,” said Holt. “But as sensor count grew and data sampling increased, even if you were to implement 6-sigma control limits (99.97% variability) to detect abnormal variation in the process, you shut down the factory.”
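Holt's scaling argument can be made concrete with a back-of-the-envelope calculation: under an idealized assumption of perfectly Gaussian in-control data, the expected number of false alarms per day is just the tail probability beyond the control limits times the number of samples. The fleet sizes below are hypothetical, chosen only to show the order of magnitude.

```python
import math

def expected_false_alarms(k_sigma, samples_per_day):
    # Two-sided tail probability of a normal variate beyond +/- k sigma.
    # Assumes perfectly Gaussian in-control data (an idealization).
    p = math.erfc(k_sigma / math.sqrt(2))
    return p * samples_per_day

# Hypothetical fab scale: 200 sensors x 500 tools, sampled once per second.
samples = 200 * 500 * 86_400

three_sigma_alarms = expected_false_alarms(3, samples)  # tens of millions/day
six_sigma_alarms = expected_false_alarms(6, samples)    # a handful/day
```

Even at 6-sigma limits the fab sees some false alarms every day, and any real process drift or non-Gaussian tail behavior inflates that further, which is why per-sensor univariate limits at this sampling density effectively "shut down the factory" and multivariate approaches became necessary.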

When engineers apply advanced analytics, equipment sensor data can then be used to reduce defectivity. First, manufacturers need to know which sensor data to monitor, and that can only happen with feedback from downstream data sources spanning multiple manufacturing steps.

“We can take the data from the front-end piece of equipment and analyze this data,” said Elder. “We can then correlate it against the test data and model the performance of the silicon from the front-end step with the test performance/device parametric test results at test. This has been done with ML models to help predict the performance (latent defects, infant mortality, etc.)”

Even with health check data, device manufacturers build ML models. “Fabs like TSMC and Samsung are doing more with this aftermarket sensor data, more than just monitoring the equipment,” Kulkarni said. “Data from sensors is fed forward and backward in the manufacturing process. They have their own high-level AI/ML processes.”

Where and how far up or down stream does feedback come from?

“It depends where the next metrology point is,” said Holt. “Sometimes it’s directly from a measurement after the tool. Some metrology is in-situ in the tool, like CMP, and some is further downstream.”

Engineers want to know which sensors to focus their attention on. With domain expertise, they can identify the obvious ones. With ML techniques, a more sophisticated data analysis can point to less obvious yet impactful sensors.

Wafer inspection data on all production wafers provides a new avenue for screening parts to meet automotive’s high quality and reliability requirements. On layers with greater reliability risks, fabs are doing just that. “They will use a high-speed inspector and look at 100% of the wafers on reliability critical layers looking for outlier wafers, and increasingly looking for individual outlier die — especially if you aggregate their defectivity across several layers,” said Rathert.
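Aggregating defectivity across layers, sometimes called defect stacking, can be sketched as counting inspection hits per die coordinate across layers and flagging dies whose cumulative count crosses a threshold. The threshold and coordinates below are illustrative, not any fab's actual disposition rule.

```python
from collections import Counter

def stack_defects(layer_maps, threshold=3):
    """Aggregate per-layer inspection hits by die coordinate.

    layer_maps: one list of (x, y) die coordinates per inspected layer.
    Dies whose cumulative hit count reaches the threshold become
    ink-out candidates (illustrative rule only).
    """
    totals = Counter()
    for hits in layer_maps:
        totals.update(hits)
    return [die for die, count in totals.items() if count >= threshold]

# Hypothetical hits from three reliability-critical layers.
layers = [
    [(3, 4), (7, 1)],
    [(3, 4), (2, 2)],
    [(3, 4), (7, 1)],
]
suspects = stack_defects(layers)  # die (3, 4) is hit on all three layers
```

A die that picks up a hit on every critical layer is a far stronger reliability risk signal than any single-layer hit, which is why the aggregation matters even when each individual layer's defect looks benign.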

The part average testing screening methodology has now moved upstream to wafer inspection steps. In addition, inspecting all wafers enables production test and field failures to be traced back to the associated wafer process steps.

Conclusion
Process complexity has driven device manufacturers to increase both equipment sensor sampling and wafer scanning for anomalies, while reliability continues to be a challenge for automotive ICs. Those two worlds have now converged.

Causes of time-zero and field failures are well known. Tying an equipment-generated cause to such a failure requires modeling. Sorting out which sensor matters could not be done 10 years ago. Engineers had neither the compute power nor the modeling techniques to find the needle in the field of haystacks that is a complex semiconductor manufacturing facility.

Connecting data between multiple sources and using machine learning techniques to proactively identify issues to fix, defects to screen or test flows to modify has attracted attention across the semiconductor industry.

“Not surprisingly, device manufacturers are increasing their investment in big data management and data science teams to develop in-line models which assess quality for every wafer,” said Lam’s Shields. “Process equipment suppliers are collaborating with device manufacturers to enable predictive equipment models that can deliver the required wafer quality at the same or lower risk and cost than traditional metrology approaches.”

Investment in smart manufacturing frameworks enables data sharing across manufacturing steps and provides feedback, creating a continuous learning system. There is still more work ahead. “Nobody has been 100% successful yet with these approaches,” said PDF’s Holt. “We have to become more successful if we are going to meet the reliability criteria from automotive of 10 parts per billion.”

KLA’s Rathert agrees. “We have to merge data across domains. It’s all hands on deck to get this done.”

