Adaptive Test Gains Ground

Demand for improved quality at a reasonable cost is driving big changes in test processes.

Not all devices get tested the same way anymore, and that’s a good thing.

Quality, test costs, and yield have motivated product engineers to adopt test processes that fall under the umbrella of adaptive test, which uses test data to modify a subsequent test process. But executing such techniques requires logistics that support data analysis, as well as the ability to change a test based upon the decision tree product engineers set in place.

Analyses range from the simple, such as determining which speed bin a part falls into, to the complex, such as identifying latent defects using multivariate analysis. Since the first applications of speed binning in the 1990s, techniques and tools have evolved, but barriers to wide implementation remain throughout the test industry.

The fabless/foundry business model makes it more difficult to share data across the manufacturing test flow, and it creates a logistical challenge for testing identical devices differently. Several industry experts highlight the need to change the status quo of testing all devices in exactly the same way.

In their 2004 ITC paper, researchers from LSI Logic and Portland State University articulated how adaptive test changes the status quo:

“The key to adaptive test is to utilize data generated from the tester or relevant data from previous processes or measurements in predicting the process for the future tests in order to reduce or increase testing as and when required. The ultimate goal is to apply only the minimum set of tests required to screen the ICs that will fail in the system either as shipped or over time.”

Early work with LSI Logic devices demonstrated that historical and local test data can be used to adapt test processes to achieve these results. Other product engineering groups listened.

“It was really an interesting way for many of us to start thinking about the problems that we had with the test economics,” said John Carulli, DMTS Director Fab 8-Test at GlobalFoundries. “Why do we want to keep on testing stuff that we think may already be good statistically? Why do we not want to test more of this stuff that may be a little bit more challenged? Why do we have this kind of antiquated view of the world from a historical quality perspective? Why must we always do everything the same way all the time?”

What is adaptive test?
Adaptive test encompasses macro, micro, offline, and real-time decisions based upon data from three different stages (combined in the sketch that follows the list):

  • Data that was just collected,
  • Data taken at a previous test step, and
  • Data from a set of previously tested die/parts.
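
As a rough illustration of how those three data stages can combine into a single decision, consider the minimal sketch below. The field names, thresholds, and the `route_die` helper are hypothetical, not a real test-floor API.

```python
# Hypothetical sketch: routing a die using the three data stages above.
# All field names and thresholds are illustrative only.

def route_die(current, upstream, population):
    """Pick the next test step from the three data stages.

    current    -- measurements just collected at this insertion
    upstream   -- data carried forward from a previous test step
    population -- summary statistics from previously tested die/parts
    """
    # Real-time decision: the die just failed, so stop immediately.
    if not current["passed"]:
        return "scrap"
    # Feed-forward decision: an upstream step flagged this die as marginal.
    if upstream.get("marginal"):
        return "extended_test"
    # Statistical decision: the lot is running cleaner than the baseline,
    # so a reduced flow is acceptable.
    if population["lot_yield"] > population["baseline_yield"]:
        return "reduced_test"
    return "standard_test"

print(route_die({"passed": True}, {"marginal": False},
                {"lot_yield": 0.97, "baseline_yield": 0.95}))
```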

Figure 1 (below) shows selected techniques and where they can be applied.

Fig. 1: Selected adaptive test techniques. Source: Anne Meixner/Semiconductor Engineering

Product engineers play a key role in implementing adaptive test, as they set up the test plans and manufacturing test flows. They enable adaptive test methods by understanding the tradeoffs of the cost/quality/yield triad, by using analytic solutions to build their test plans, and by implementing those methods within the device’s manufacturing test flow.

Because of the beneficial ROI, engineers implemented adaptive test despite a lack of available tools. In the beginning, the implementations were highly customized and required a lot of human glue. Automation has made this easier, but it still requires understanding the logistics of moving data and devices from one manufacturing test process to another. It also requires product engineers to understand various statistical relationships to fully leverage the power of all the available semiconductor test and manufacturing data.

Both wafer sort and package test facilities need to support the data collection and facilitate moving the data to the next decision point. This often requires an IT investment in MES (manufacturing execution system) and test cell software. Thus, there needs to be motivation, often in the form of return on investment (ROI).

Semiconductor data analytics companies provide the data delivery systems, rule implementations, and related infrastructure that enable that ROI.

“The value add is automating that process, automating the experience, and taking the learning on the test floor into actionable events, then identifying patterns and the methods which deliver an ROI,” said Brian Archer, silicon lifecycle management solutions architect at Synopsys. “You need an analytics background to really zero in on what those issues might be, including how to reach higher quality and reduce test escapes while optimizing test content. From our view that’s what adaptive test is about.”

Adaptive test methods have evolved over the decades to balance quality, yield, and cost, all of which product engineers must consider in meeting the product’s goals. And analytics companies now provide the tools and the data management framework that eliminate the need for a large team of product engineers to make this happen.

That leaves four possible options: test more, test less, test differently, or test with different limits. These strategies can be understood further by looking at some of the adaptive test techniques that have been deployed over the past 25 years.

Modifying test process steps
Making downstream test step choices for a die or packaged part based upon test data has often been a product engineer’s first taste of adaptive test. It presents a trade-off that is easy both to assess and to implement. It is easy to assess because the economic benefits of avoiding burn-in or segregating parts into performance bins are simple to calculate, and easy to implement because the data analysis can be done after the test step and directing a device to a different next step is straightforward.

A basic set of test process steps, as shown in Figure 2 (below), provides a context for the modifications that product engineers apply in speed binning and in reducing burn-in.

Fig. 2: Basic semiconductor device test process steps. Source: Anne Meixner/Semiconductor Engineering

Multiple product engineers interviewed for this article cited speed binning as their first experience with adaptive test. In the early days, it was quite cumbersome to implement.

“Going back to the early ’90s at Texas Instruments, we had to manage a lot of the early speed bin distributions for the Sun microprocessors. Their need for higher speed devices motivated identifying them,” said Carulli. “There weren’t any systems back in the day for doing that. We did a lot of manual network and data file coding, etc. This supported configuring the testers to decide which test program to apply based upon the wafer test results (see Figure 3, below).”

Fig. 3: Speed binning test flow using separate final test programs. Source: Anne Meixner/Semiconductor Engineering

There is more than one way to implement speed binning.

“It starts at wafer probe,” said Preeti Prasher, principal ASIC test architect at LeddarTech. “Running structural at-speed test content in the probe environment is completely feasible. At the next step, different implementation choices can occur. One company that I’ve worked at would test at wafer probe, and earmark each wafer for specific products, with one final test program for each product. At another company, wafer test results would provide insight on a wafer’s possible distribution. Yet the final test program used a multi-binning flow, where you would then end up with multiple outcomes for good parts. Bin 1 would be for one specific product or target customer, and Bin 2 for another product/target customer.”
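
A minimal sketch of the binning step Prasher describes, assuming a maximum operating frequency (Fmax) has been measured with at-speed content at probe; the cutoffs, spec minimum, and bin labels are hypothetical:

```python
# Hypothetical speed-binning sketch: map a measured maximum operating
# frequency to a product bin. Cutoffs and labels are illustrative.

SPEED_BINS = [                 # (minimum Fmax in MHz, bin label)
    (3200, "bin1_premium"),    # one product / target customer
    (2800, "bin2_standard"),   # another product / target customer
]

def assign_speed_bin(fmax_mhz, spec_min_mhz=2400):
    if fmax_mhz < spec_min_mhz:
        return "fail"              # below the datasheet minimum
    for cutoff, label in SPEED_BINS:
        if fmax_mhz >= cutoff:
            return label
    return "bin3_value"            # passes spec but misses both cutoffs

for f in (3350, 2900, 2500, 2300):
    print(f, "->", assign_speed_bin(f))
```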

Reducing device burn-in continues to be highly attractive to product engineers. The cost/quality/yield triad shows significant gains in all three areas when burn-in can be eliminated or reduced to a smaller sample, e.g., 20%. Supporting a burn-in reduction test flow requires a statistical analysis of devices at the end of the flow. The results identify wafer-level test measurements that correlate to burn-in failures. Depending upon the confidence level, burn-in can be fully eliminated, or the population of devices that require it can be significantly reduced.
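
The sketch below illustrates the sampling half of that decision: only the riskiest fraction of die is routed to burn-in, based on one wafer-probe measurement. The use of IDDQ as the correlating parameter, the simulated data, and the 20% fraction are all assumptions for illustration.

```python
# Hypothetical burn-in sampling sketch: route only the highest-risk die
# (here, highest wafer-probe IDDQ) to burn-in. In practice the correlating
# measurement comes from statistical analysis of historical burn-in fails.

import numpy as np

rng = np.random.default_rng(0)
iddq_ua = rng.lognormal(mean=3.0, sigma=0.3, size=1000)  # simulated per-die IDDQ (uA)

def select_for_burn_in(measurements, fraction=0.20):
    """Flag the riskiest `fraction` of die for burn-in."""
    cutoff = np.quantile(measurements, 1.0 - fraction)
    return measurements >= cutoff

burn_in_mask = select_for_burn_in(iddq_ua)
print(f"{burn_in_mask.mean():.0%} of die routed to burn-in")
```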

Fig. 4: Adaptive test flows for burn-in reduction. Source: Anne Meixner/Semiconductor Engineering

The LSI Logic/Portland State University researchers provided empirical data using real products to show this can be done. Others became motivated to adopt it for their products.

“Bob Madge made very strong statements that they could go without burn-in entirely,” said Ken Butler, Test Systems Architect at Texas Instruments. “Now, [LSI Logic] made ASIC devices and we made custom SoC devices, some of which received 100% burn-in in the early phases of production. We partnered with Rob Daasch of Portland State University to develop the core set of ideas, and then worked on our own to improve them and put them into production. The benefits were that we were able to achieve sample burn-in on several key high-volume products.”

A similar path to sample burn-in had been pursued at IBM 20 years ago. “On a per-die basis, we picked the subset that was most likely to fail and burned those in,” said Phil Nigh, R&D test engineer at Broadcom. “So their reliability rose to be similar to the remaining parts. By cherry-picking the worst die based upon specific wafer-test measurements, we saved cost and improved reliability.”

Making the most of test content choices
In terms of cost/quality/yield, the test content applied dictates both cost and quality. Yet these attributes pull in opposite directions, so product engineers need to determine the acceptable tradeoff at any one particular test step.

Historically, engineers have made this tradeoff by applying all test content and analyzing the fallout to identify non-failing tests (often called test time/pattern reduction). Yet to mitigate risk, the paranoid product engineer still runs a sample of units through the full test program. One option is to add a separate test step that retains all the removed tests and sends a sample of product through it. But this represents an added expense when you consider the logistics of segregating product, dedicating factory floor space for an ATE, and maintaining yet another test program.

What if you could sample product through the full test suite within the normal test program?

“I first became involved with the development and support of adaptive test in 2005 while at PinTail Technologies. The primary customer benefit at that time was to reduce test time for large complex mixed signal devices without negatively impacting quality,” said Greg Prewitt, director of Exensio Solutions at PDF Solutions. “The approach used reduced test time by sampling rather than simply removing test coverage to meet cost-of-test objectives. By sampling, one could achieve roughly 90% of the time savings without total removal of the tests, thereby reducing the quality risk. Up to a 25% reduction in test time could be achieved for complex SoC devices.”
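
A minimal sketch of sampling in that spirit: rarely-failing tests run on every Nth unit instead of being removed outright. The test names and the 1-in-10 rate are hypothetical.

```python
# Hypothetical test-sampling sketch: keep low-fallout tests in the
# program, but execute them only on every Nth unit.

ALWAYS_RUN = ["continuity", "leakage", "functional_core"]
SAMPLED = ["long_adc_linearity", "extended_scan"]   # rarely-failing tests

def tests_for_unit(unit_index, sample_rate=10):
    suite = list(ALWAYS_RUN)
    if unit_index % sample_rate == 0:   # every 10th unit gets the full suite
        suite += SAMPLED
    return suite

print(tests_for_unit(7))    # reduced suite
print(tests_for_unit(10))   # full suite
```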

Another aspect of test content is looking at test results and adjusting the subsequent test content to apply.

“Depending on the product segment and the economics, it may be advantageous to recover that die,” said Synopsys’ Archer. “A product engineer can choose to apply more tests and make sure that this is truly a passing die. So now, at least in the digital sphere, the engineer can apply high-resolution patterns or additional tests (e.g. a high-voltage stress) to guarantee that part is good.”
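
The sketch below shows that kind of result-driven choice, assuming a simple guard band around the passing limits; the 5% band width and the extra test names are hypothetical.

```python
# Hypothetical sketch: a die that passes but sits near a limit gets
# extra content before being declared good. The guard band is illustrative.

def next_tests(measured, lo, hi, guard_band=0.05):
    """Return extra test content for a passing-but-marginal result."""
    if not (lo <= measured <= hi):
        return ["fail_bin"]                 # outright failure
    span = hi - lo
    # Within 5% of either limit: confirm the pass with more coverage.
    if measured - lo < guard_band * span or hi - measured < guard_band * span:
        return ["high_resolution_patterns", "high_voltage_stress"]
    return []                               # comfortably passing, no extras

print(next_tests(1.02, lo=1.0, hi=1.5))     # marginal -> extra tests
print(next_tests(1.25, lo=1.0, hi=1.5))     # solid pass -> none
```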

Adding test coverage boosts a product engineer’s confidence that the quality goals are met. The opposite applies as well: based on test results, a product engineer can confidently apply less test coverage. Some product engineers have used wafer-test data to segregate parts into longer or shorter test times at system-level test.

Changing test limits and identifying outliers
Die uniformity has driven automotive IC manufacturers to apply outlier detection techniques, which often change the limits on a wafer-by-wafer basis.

“In automotive, we try to make sure that every outlier is identified, even when it meets datasheet spec,” said LeddarTech’s Prasher. “Outlier identification can be thought of in terms of die uniformity. You test for a specific parameter, and when you characterize your silicon you observe a tight distribution. So because you care about the uniformity, you adjust the limits to be more stringent than the spec limit. However, you can have a variation on that distribution, so instead of a fixed upper/lower screening limit, you consider the distribution for that particular wafer or lot and then evaluate the parts.”
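
A minimal sketch of per-wafer dynamic limits in the spirit Prasher describes, akin to part average testing: screening limits are recomputed from each wafer’s own distribution and kept at least as tight as the datasheet spec. The six-sigma width and the spec limits are assumptions.

```python
# Hypothetical dynamic-limits sketch: per-wafer screening limits derived
# from that wafer's distribution, never looser than the spec limits.

import numpy as np

def dynamic_limits(wafer_values, spec_lo, spec_hi, k=6.0):
    mu, sigma = np.mean(wafer_values), np.std(wafer_values)
    lo = max(spec_lo, mu - k * sigma)   # tighter of spec and statistical limit
    hi = min(spec_hi, mu + k * sigma)
    return lo, hi

rng = np.random.default_rng(1)
wafer = rng.normal(loc=1.20, scale=0.01, size=500)  # tight on-wafer spread
lo, hi = dynamic_limits(wafer, spec_lo=1.0, spec_hi=1.4)
outliers = (wafer < lo) | (wafer > hi)
print(f"limits: ({lo:.3f}, {hi:.3f}), outliers flagged: {outliers.sum()}")
```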

Data analytic solutions can assist with identifying outliers and choosing the appropriate implementation.

“With analytics, engineers can look at the historical data, discover issues and understand them further, and then implement algorithms to improve either quality, yield, test time, or data quality at a good cost,” said Paul Simon, group director of SLM analytics at Synopsys. “When you implement an outlier algorithm, for example, good die/bad neighborhood, that algorithm has a certain number of parameters. You want to have the algorithm deployed in such a way that you don’t lose too much yield, because there’s a tradeoff with quality. The product engineers need to decide if they are ready to lose 10% of yield to gain a little bit of quality, or the other way around. That requires simulation with the algorithms based upon the historical data, then tuning those algorithms to achieve the desired yield versus quality tradeoff before deploying the tuned algorithms on the test floor.”
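
A minimal sketch of the good die/bad neighborhood idea Simon mentions appears below. The 8-die neighborhood and the threshold of four failing neighbors are exactly the kind of tunable parameters he describes; the values here are illustrative, not recommendations.

```python
# Hypothetical good die/bad neighborhood (GDBN) sketch: a passing die
# surrounded by too many failing neighbors is flagged as an outlier.

import numpy as np

def gdbn_flags(fail_map, threshold=4):
    """fail_map: 2-D bool array, True where a die failed test."""
    rows, cols = fail_map.shape
    flagged = np.zeros_like(fail_map)
    for r in range(rows):
        for c in range(cols):
            if fail_map[r, c]:
                continue                    # only passing die can be flagged
            neighborhood = fail_map[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
            if neighborhood.sum() >= threshold:
                flagged[r, c] = True        # good die in a bad neighborhood
    return flagged

fails = np.array([[1, 1, 1],
                  [1, 0, 1],
                  [0, 0, 0]], dtype=bool)
print(gdbn_flags(fails))   # the center die is flagged despite passing
```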

Conclusion
Adaptive test uses data to make smarter test decisions. Yet it means product engineers choose to test identical devices differently, and for some this remains a new approach.

“Product engineers need to see the challenges, and improvements, to learn from them,” emphasized André van de Geijn, business development manager at YieldHub. “They are the drivers in the processes that deliver lower costs, better quality, reduced time to market, and new functionality.”

So far, those benefits have not been fully realized throughout the semiconductor industry.

“Adaptive test is becoming more widely accepted for optimizing both test efficiency and quality,” said PDF’s Prewitt. “While it is being used more broadly, overall, it is still in the early adopter phase. Typically, it is only applied by larger, more mature companies that can rationalize the cost/quality tradeoffs, or those participating in regulated market segments.”

That has limited the uptake of this approach. “The whole adoption rate for adaptive test has been a lot slower than I expected,” said Broadcom’s Nigh. “If there were good-enough applications, or ways to productively use that data — good ROI stories — there would be more adoption.”

Building better systems would help smaller companies adopt the approach more easily.

“If one really wants to get into adaptive testing, I don’t know that we’ve done the rest of our technical homework,” noted GlobalFoundries’ Carulli. “Can we actually log the level of detail needed on every single die exactly to track and analyze the different flows? I don’t think we need to invent more techniques. There are things we need to build, like the data systems to truly and easily track in a fully adaptive world.”

To surpass the activation energy of implementation, engineers will need to document more proof points. Still, expect more adaptive testing to be used for more products in the future as demands for reliability, speed, and cost continue to dominate semiconductor production.

Related stories:

New Data Format Boosts Test Analytics

Why Data Format Slows Chip Manufacturing Progress


