
One Test Is Not Always Enough

Combining two test parameters to make a pass/fail decision is getting easier.


To improve yield and quality while reducing cost, two separate test parameters can be combined to determine whether a part passes or fails.

The results gleaned from that approach are more accurate, allowing test and quality engineers to fail parts sooner, detect more test escapes, and ultimately to improve yield and reduce manufacturing costs. New data analytic platforms, combined with better utilization of statistically derived values, have made this kind of approach easier to implement than in the past.

Recent progress in testing has been based on a defect-oriented approach, which goes beyond standard data-sheet testing. This is especially true for semiconductor devices used in safety-critical applications such as automotive, medical, and aerospace. Using methods like part average testing (PAT) and dynamic part average testing (DPAT), engineers apply simple statistics to an electrical test’s values.
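To make the PAT/DPAT idea concrete, here is a minimal sketch of deriving a dynamic limit from a lot's own distribution. The data and the sigma multiplier are hypothetical; production flows (e.g., per AEC-Q001) use robust sigma estimates and per-wafer recalculation.

```python
import statistics

def dpat_limits(values, k=6.0):
    """Dynamic PAT sketch: derive pass/fail limits from this lot's own
    distribution (median +/- k * sigma). Simplified for illustration."""
    med = statistics.median(values)
    sigma = statistics.pstdev(values)
    return med - k * sigma, med + k * sigma

# Hypothetical leakage readings with one mild outlier that a loose
# characterized datasheet limit (say, 10.0) would never catch.
readings = [1.0, 1.1, 0.9, 1.05, 0.95, 1.2, 0.85, 1.0, 1.1, 4.0]
lo, hi = dpat_limits(readings, k=3.0)
outliers = [v for v in readings if not (lo <= v <= hi)]
```

The statistically derived limit flags the 4.0 reading even though it sits well inside the datasheet limit, which is the essence of the defect-oriented approach.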

But these methods are no longer sufficient for finding system test failures and early life failures (a.k.a. latent defects). In addition, PAT limits still can fail good parts.

“Static and dynamic PAT methods are not sensitive enough to decouple ICs with latent defects from natural process variation,” said Alex Burlak, vice president of test and analytics at proteanTecs. “Therefore, ICs with latent defects escape production testing and fail over time in the field.”

More sophisticated statistical approaches narrow the population under consideration to a die and its nearest neighbors. But results can be improved further if two parametric test results are combined to find outliers. That effectively reduces test escapes and latent defects. In addition, it makes use of existing tests, which avoids developing a new test that would need to be run through the whole test program qualification process. Together, two test values enable finer discernment between good and bad die/units. To do so effectively, engineers need data analytic tools to assist in identifying effective test pairs, recommending pass/fail limits, and verifying them against large volumes of historical data for their product.
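Identifying candidate test pairs typically starts with a correlation scan across the available parametric tests. A minimal sketch, using hypothetical test names and data, might look like this:

```python
import itertools
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two test-value lists."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return sxy / (sx * sy)

# Hypothetical per-die measurements for three parametric tests.
tests = {
    "gain":    [1.0, 2.0, 3.0, 4.0, 5.0],
    "dc":      [2.1, 3.9, 6.1, 8.0, 9.9],
    "leakage": [0.5, 0.4, 0.6, 0.5, 0.4],
}

# Keep only strongly correlated pairs as bivariate-limit candidates.
pairs = [
    (a, b) for a, b in itertools.combinations(tests, 2)
    if abs(pearson_r(tests[a], tests[b])) > 0.9
]
```

A real platform would scan hundreds or thousands of tests and rank the pairs, but the screening criterion is the same.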

“Many of the commonly used data analytic platforms come pre-equipped with tried-and-true outlier detection methods that can be easily applied to any data set,” said Mike McIntyre, director of software product management at Onto Innovation. “Several of the more sophisticated systems even allow them to be used in various logical (and/or) combinations.”

Two tests are better than one
Combining two tests adds value, but it’s non-trivial to implement. Fundamentally, a good device and a bad device exhibit different electrical behaviors. The test program includes a set of tests to take advantage of these differences. Whether for a large system-on-chip (SoC) with 80% digital circuitry or a small RF device with trimming and a long list of analog tests, the number of parametric test values available can range from tens to thousands.

Bad devices are due to process excursions, greater than expected intra-die process mismatch, or defects. In theory, one set of individual tests can detect all failures. That’s generally not the case, though. Bad parts get missed at wafer test and can cause failures in the field. In addition, stringent pass/fail limits may fail many good parts, which adds to the overall cost equation.

Combining two test values can effectively address these troubles. Engineers find this approach extremely effective in preventing field returns, which often measure in the range of tens to hundreds of parts per million. In effect, determining root cause for yield improvement is far less important than finding a test that detects pesky test escapes.

But which tests and how do you combine the values? Starting two decades ago, engineers looked to algebraic combinations — subtraction and division. These classic measurement techniques remove background noise, which is effectively the expected range of process variation observed at wafer test, from wafer to wafer and lot to lot.

The evolution of IDDQ testing exemplifies these techniques. At the beginning of the millennium, sub-micron CMOS processes resulted in higher variability in leakage current. Engineers foresaw that IDDQ testing with characterized limits would no longer work. However, for digital circuitry they valued IDDQ’s ability to detect defects that escaped ATPG patterns. This prompted engineering teams to investigate and ultimately use multiple IDDQ test values to discriminate between good and bad parts. They subtracted two IDDQ values measured at different voltages. They also used ratios between two IDDQ measurements. In both cases, they used two test values after one algebraic operation.
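A delta-IDDQ screen of this kind can be sketched in a few lines. The readings below are hypothetical; the robust median/MAD statistics stand in for whatever limit-setting method a team actually uses:

```python
import statistics

# Hypothetical IDDQ readings (uA) for six dice at two supply voltages.
# The defective die leaks disproportionately more at the higher
# voltage, so the per-die delta separates it even though each raw
# reading could pass its own characterized limit.
iddq_v_low  = [10.0, 11.0, 9.5, 10.5, 12.0, 10.2]
iddq_v_high = [15.0, 16.3, 14.2, 15.6, 17.9, 22.0]

deltas = [hi - lo for lo, hi in zip(iddq_v_low, iddq_v_high)]

# Robust center/spread (median and MAD) so the outlier itself
# does not inflate the screening limit.
med = statistics.median(deltas)
mad = statistics.median(abs(d - med) for d in deltas)
robust_sigma = 1.4826 * mad
suspect = [i for i, d in enumerate(deltas)
           if abs(d - med) > 3 * robust_sigma]
```

Here the last die's delta stands far outside the population even though its individual readings are unremarkable.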

With two different analog parameters, such as gain and DC levels, or DC levels and background current, engineers can find outliers. For some test pairs, a defective part and a good part will each pass the individual test limits. But for some defects, a difference can be found if the two tests are strongly correlated. Such a relationship enables fitting the raw data to an equation (e.g., Y = aX + b) using standard linear regression techniques.

By plotting two highly correlated values against one another, the outliers stick out. If a die/unit is significantly off the line, the engineers have found a test value combination that works.

Fig. 1: Bivariate outlier graph and associated wafer map. Source: National Instruments
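The fit-and-flag procedure behind a bivariate plot like Fig. 1 can be sketched as follows. The test values are hypothetical, and the median/MAD residual screen is one of several reasonable choices:

```python
import statistics

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

# Hypothetical correlated test pair (e.g., gain vs. DC level).
# The last die sits well off the trend line.
gain = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]
dc   = [2.05, 3.98, 6.02, 8.01, 9.97, 12.03, 14.0, 15.99, 18.02, 14.0]

a, b = fit_line(gain, dc)
residuals = [y - (a * x + b) for x, y in zip(gain, dc)]

# Flag dice whose residual is far outside the population spread.
med = statistics.median(residuals)
mad = statistics.median(abs(r - med) for r in residuals)
robust_sigma = 1.4826 * mad
outliers = [i for i, r in enumerate(residuals)
            if abs(r - med) > 3 * robust_sigma]
```

Each residual measures how far a die sits off the fitted line, which is exactly what "sticks out" on the scatter plot.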

Why would two unrelated test values detect defective parts? “When you find correlations, everybody always expects there to be an absolute easy to understand reason why the correlation exists,” said Ken Butler, strategic business creation manager at Advantest America. “We used to try to defend why these sets of measurements find this defect. There is a lot of semiconductor physics that comes into play. But in reality, it could be any of a number of things.”

With a customer-returned part in hand, engineering teams can focus on finding a means of detection. Most field returns measure in tens or hundreds of parts per million (ppm). Determining a root-cause to increase yield is not a focus area. In fact, a two-value based pass/fail determination may fail some good parts. If it’s not a significant yield loss, an engineer revises the test program with the new pass/fail limit.

By taking a two-test measurement approach, the new limits exhibit robustness to normal process variations. Yet sometimes even that is not good enough. Over the past 10 years, some engineers have shifted to using statistical values instead of the measured values, expressing the pass/fail limits in terms such as greater than two standard deviations from the median or mean. That’s significantly different from either PAT or DPAT, both of which set a pass/fail limit on the measured test value based upon a statistical analysis.

“One of the least well understood aspects of outlier detection is how to data log the results to drive the most efficient and effective screening,” said Greg Prewitt, director of Exensio solutions at PDF Solutions. “Most outlier detection mechanisms are statistical in nature, and therefore an outlier screen is a statistical test made in statistical units where the current and voltage are no longer germane. The best practice is to data log each measurement in terms of standard deviation of each measurement considered. Then you can properly analyze the variability of outlier limits across multiple wafers and lots. This best practice was pointed out by those who traveled the path before me. Specifically, I learned this enlightened approach to data logging statistical tests from Jeff Roehr at a SEMI CAST meeting on outlier detection years ago.”

Roehr, an IEEE senior member, elaborated further. “It is quite evident that if each wafer were processed as a single entity (lot), and values in terms of RSD (residual standard deviation) calculated for each data value, then multiple wafers (lots) can be combined using the RSD scale, and the NNR (nearest neighbor residual) method could be used to find bivariate outliers even in large populations from multiple lots. Basically, transform all the data from measured values to RSD variations, then do all the outlier data analysis only using the RSD values.”

Fig. 2: Correlation between two tests using Residual Standard Deviation. Source: Jeff Roehr
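The per-wafer normalization Roehr describes can be sketched as below. This is a simplification: it standardizes raw values rather than model residuals, and the wafer data is hypothetical, but it shows why sigma-scaled values from differently centered wafers can be pooled for outlier analysis:

```python
import statistics

def to_sigma_units(values):
    """Express each measurement as its deviation from the wafer's own
    median, in units of that wafer's robust sigma (MAD-based)."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    sigma = 1.4826 * mad
    return [(v - med) / sigma for v in values]

# Two hypothetical wafers whose raw distributions are centered very
# differently; in sigma units they become directly comparable.
wafer_a = [10.1, 10.0, 9.9, 10.2, 9.8, 13.0]
wafer_b = [20.3, 20.0, 19.7, 20.1, 19.9, 20.2]

pooled = to_sigma_units(wafer_a) + to_sigma_units(wafer_b)
outliers = [i for i, z in enumerate(pooled) if abs(z) > 4]
```

The 13.0 reading on wafer A is flagged many sigma out, while wafer B's shifted-but-normal population contributes no false outliers.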

Carefully discerning pass vs. fail
Statistically based test limits, be it one, two or more test parameters, should be applied only after the performance limit has been met. Focusing just on the statistically derived limit can lead to test escapes.

“The most obvious example is where the acceptable PAT distribution fell within and outside of a performance spec limit,” said Onto’s McIntyre. “The engineer chose to include the parts because they fell within a PAT distribution ‘limit’ even though some of the distribution fell outside of a defined performance control limit.”

Thus, for analog and mixed signal parts with a high percentage of parametric tests, the test program maintains the data sheet limit tests and then applies the outlier limits.
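The ordering of that dispositioning logic, spec limits first, then statistical limits, can be expressed as a small illustrative helper (not any vendor's production flow):

```python
def final_disposition(value, spec_lo, spec_hi, stat_lo, stat_hi):
    """Datasheet (spec) limits are applied first; the statistically
    derived outlier limits only further tighten the screen,
    never relax it."""
    if not (spec_lo <= value <= spec_hi):
        return "fail_spec"
    if not (stat_lo <= value <= stat_hi):
        return "fail_outlier"
    return "pass"

# A part inside the statistical window but outside the spec limit
# must still fail -- the scenario McIntyre warns about.
disposition = final_disposition(5.4, spec_lo=0.0, spec_hi=5.0,
                                stat_lo=2.0, stat_hi=6.0)
```

Reversing the order, or applying only the statistical window, is precisely what lets out-of-spec parts slip through.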

Another perspective on determining pass vs. fail can be viewed through the lens of on-die monitors, also known as chip telemetry circuits. With large SoCs, this internal data can be used with great effect to separate passing and failing parts.

“With on-die parametric data, outliers can be detected on a personalized basis, not population based,” said proteanTecs’ Burlak. “Chip telemetry allows for advanced estimators to be built early on, allowing for another dimension to be added to the measurements. Now, test engineers can detect outliers that are seemingly within PAT limits without losing good yield, even reclaiming potentially lost yield. Since it’s precision-based, we eliminate false positive and false negative outliers, and this can be verified. One of the approaches is to run HTOL batches containing detected outliers vs. the normal population. The expectation is that the group containing outliers will fail at a higher percentage relative to the baseline.”

Easing the burden with data analytic platforms
Engineers began using bivariate outlier detection techniques 20 years ago. “Going back to about 2005, the concept of outlier was out there,” said Roehr. “But commercial software and methods were not around. Anybody who was doing outlier detection methodologies back then was writing their own code. You had to invent it yourself. These data analytic companies didn’t exist. Today if you want to do it, the software and tools are now commercially available. You can use them to turn the crank and at least get an entry level solution done overnight — literally.”

After that entry level solution, engineers need data analysis capability to refine the algorithm parameters and verify the impact of those parameters.

“With analytics, it is looking at historical data, discovering issues, and trying to understand them,” said Paul Simon, group director for silicon lifecycle analytics at Synopsys. “Then you implement algorithms to improve either product quality, yield, or test time. Now when you implement an outlier algorithm, it has a certain number of parameters. You want to have the algorithm deployed in such a way that you don’t lose too much yield, because there’s a tradeoff with quality, and that’s what the product engineers need to decide. Depending on the product, are they ready to lose 10% of yield to gain a little bit of quality, or the other way around? So that requires simulation of very complex algorithms over historical data, and then tuning those algorithms. Then you deploy the tuned algorithms on the test floor.”
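The tuning loop Simon describes, replaying a candidate screen over historical data to quantify the yield-vs-quality tradeoff, can be sketched in miniature. The history and sigma multipliers below are hypothetical:

```python
import statistics

def simulate_k_sweep(history, ks):
    """Replay a candidate outlier screen over historical test data and
    count how many previously passing parts each sigma multiplier k
    would reject (a toy version of tuning before deployment)."""
    med = statistics.median(history)
    mad = statistics.median(abs(v - med) for v in history)
    sigma = 1.4826 * mad  # robust sigma estimate
    return {k: sum(1 for v in history if abs(v - med) > k * sigma)
            for k in ks}

# Hypothetical history: 100 well-behaved parts plus two marginal ones.
history = [0.9, 0.95, 1.0, 1.05, 1.1] * 20 + [1.3, 2.5]

# A tighter limit (k=3) rejects both marginal parts; k=6 keeps one.
# That reject-count difference is the yield cost of extra quality.
rejects = simulate_k_sweep(history, ks=(3.0, 6.0))
```

Production platforms run this kind of simulation across full wafer and lot histories before the tuned limits ever reach the test floor.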

Customer returns often prompt engineering teams to consider bivariate test limits. The returned parts have passed all the individual parametric test limits. By exploring test data values in pairs for a correlated relationship, engineers often find an effective discriminator that detects such parts. Discerning pass/fail in this manner benefits quality and yield.

By using derived statistical values in place of raw test values, engineers make a test’s pass/fail limits more robust to normal process variation, which increases significantly as transistor features and metal line widths shrink.

Twenty years ago, engineering teams wrote custom code for applying these methods, both data analysis and test program application. With the arrival of data analytic platforms, engineering teams focus on exploring the possible test combinations and on analyzing the new pass/fail limits. In the end, sometimes two test values are better than one.

Related Stories

Adaptive Test Gains Ground

Part Average Tests For Auto ICs Not Good Enough

Geo-Spatial Outlier Detection

Chasing Test Escapes In IC Manufacturing
