Testing More To Boost Profits

With performance binning, chipmakers profit more from test.

Not all chips measure up to spec, but as more data becomes available and the cost of these devices continues to rise, there is increasing momentum to salvage and re-purpose chips for other applications and markets.

Performance-based binning is as old as color-banded resistors, but the practice is spreading — even for the most advanced nodes and packages. Over the last three decades, engineers have applied test processes to segregate products based on performance, and the effort is only becoming more granular as new capabilities are added.

Data and costs have driven the need to refine performance binning processes. First, with higher granularity of data, chipmakers can charge a premium for higher performance, lower power, and greater precision. Second, the cost of design and production has become so high for some of these devices that it has increased the value of less-than-perfect devices, as well. In addition, performance binning largely has been available only to IDMs or large fabless companies, because they could afford the engineers and the engineering effort required to put it in place. With the promise of end-to-end analytics, the industry is moving toward sharing data across the fabless/foundry supply chain. That enablement, along with increased automation capability, means smaller semiconductor companies can consider adopting this method and thereby increase their profits.

Adaptive test flows help maximize the yield of higher-performance parts while balancing the test costs associated with identifying them. Using analytics, test data, and factory automation, companies determine each device's performance bin.

The decision can be made at wafer acceptance test (WAT), wafer test, final test, and even at the warehouse with offline configuration. Depending upon the scope of the manufacturing execution system (MES), these decisions can be made in real time or after the fact — which is sometimes referred to as virtual test. A product may have one, two, or even more parameters on which to base the bin selection.

For microprocessors, performance binning began with the maximum clock frequency, or Fmax. With the advent of mobile computing, product engineers began to bin based upon power consumption, as well. For mixed-signal devices used in audio applications, the quality of the analog-to-digital converters (ADCs) and digital-to-analog converters (DACs) comes into play. In RF, base stations require matched pairs in both gain and phase shift, another example of two-parameter binning.
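The matched-pair case hints at how such a selection might be automated. The sketch below pairs known-good RF parts whose gain and phase agree within a tolerance; the tolerances, part values, and simple greedy pairing strategy are assumptions made up for illustration, not any particular manufacturer's flow.

```python
# Hypothetical two-parameter matched-pair binning for RF gain blocks.
# Tolerances and part data are invented for illustration.

GAIN_TOL_DB = 0.1     # maximum gain mismatch within a pair (dB), assumed
PHASE_TOL_DEG = 1.0   # maximum phase mismatch within a pair (degrees), assumed

def match_pairs(parts):
    """Greedily pair parts whose gain and phase agree within tolerance.

    `parts` is a list of (part_id, gain_db, phase_deg) tuples for good devices.
    Matched pairs sell at a premium; unmatched singles go to a standard bin.
    """
    remaining = sorted(parts, key=lambda p: p[1])   # sort by gain
    pairs, singles = [], []
    i = 0
    while i < len(remaining):
        if i + 1 < len(remaining):
            a, b = remaining[i], remaining[i + 1]
            if abs(a[1] - b[1]) <= GAIN_TOL_DB and abs(a[2] - b[2]) <= PHASE_TOL_DEG:
                pairs.append((a[0], b[0]))
                i += 2
                continue
        singles.append(remaining[i][0])
        i += 1
    return pairs, singles

pairs, singles = match_pairs([("U1", 20.02, 45.1), ("U2", 20.05, 45.4), ("U3", 20.9, 47.0)])
print(pairs, singles)   # [('U1', 'U2')] ['U3']
```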

Wherever someone is willing to pay more for a certain measured value, engineers will figure out how to deliver it in as cost-effective a manner as possible. And when the number of product bins becomes greater than three, engineers often use adaptive test flows to guide die to the appropriate final test program. The key factors in enabling performance-based binning for IC test include:

• Having the right data on which to base a decision;
• Making the best decision based upon current market demands;
• Moving that data or decision to the next logical test step; and
• Taking minimal manual inputs from the engineering team.

These performance test processes represent the feed-forward decisions that Industry 4.0 touts as essential to maximizing profits and to responding to manufacturing data in an agile fashion. It has not always been so automated, and there is still room for improvement.
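To make those factors concrete, here is a minimal sketch of a feed-forward routing decision. The die record fields, demand table, and test program names are hypothetical; the intent is only to show a downstream choice driven by upstream data and current market demand, with no manual input from engineering.

```python
# Hypothetical feed-forward routing of a die to a final-test program,
# based on upstream wafer-test data and current demand per product bin.

from dataclasses import dataclass

@dataclass
class DieRecord:
    die_id: str         # unique die identifier carried forward from wafer test
    predicted_bin: str  # e.g., "premium" or "standard", derived upstream

DEMAND = {"premium": 4_000, "standard": 12_000}   # current orders per bin (assumed)
SHIPPED = {"premium": 0, "standard": 0}           # running count routed so far

FINAL_TEST_PROGRAM = {                            # invented program names
    "premium": "ft_premium_rev3",
    "standard": "ft_standard_rev3",
}

def route_to_final_test(die: DieRecord) -> str:
    """Pick the final-test program for one die, down-binning once premium demand is filled."""
    bin_choice = die.predicted_bin
    if bin_choice == "premium" and SHIPPED["premium"] >= DEMAND["premium"]:
        bin_choice = "standard"
    SHIPPED[bin_choice] += 1
    return FINAL_TEST_PROGRAM[bin_choice]

print(route_to_final_test(DieRecord("W12-X04-Y17", "premium")))   # ft_premium_rev3
```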

Testing motivated by profit
Performance binning existed before integrated circuits. The motivation is purely economic. A component has a range of performance. Certain customers require a specific performance range and are willing to pay more to get it. The remaining good components can be sold to other customers.

Consider the humble resistor. For four-color-band resistors, the final color band indicates the tolerance of the resistor value, while the first three bands indicate the resistance value in ohms. Typical tolerance bands are 5%, 10%, and 1%. The resistor manufacturer does not have a fab process specific to the resistor tolerance. Instead, testing selects the parts that go into the three distinct tolerance bins.


Fig. 1: Resistor binning of a uniform distribution into 1%, 5%, and 10% bins, with >10% rejected as bad. Source: Pixabay/Anne Meixner/Semiconductor Engineering

Looking at the distribution of good resistors by performance leads to an interesting observation (see Fig. 1). A set of 1,000-ohm resistors sold with 5% tolerance will never include values in the range of 990 to 1,010 ohms, because those parts belong in the 1% tolerance bin. Likewise, a 10% tolerance resistor will have a value between 900 and 950 or between 1,050 and 1,100 ohms.
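A rough sketch of that sorting logic, using the nominal value and bin limits from the example above, could look like the following; the specific measured values are invented for illustration.

```python
# Sort a measured 1 kOhm resistor into the tightest tolerance bin it satisfies.
# Bin limits follow the example in the text; test values are invented.

NOMINAL_OHMS = 1000.0

def tolerance_bin(measured_ohms: float) -> str:
    """Return the tolerance bin for one measured resistor."""
    error = abs(measured_ohms - NOMINAL_OHMS) / NOMINAL_OHMS
    if error <= 0.01:
        return "1%"       # 990 to 1,010 ohms
    if error <= 0.05:
        return "5%"       # 950 to 990 and 1,010 to 1,050 ohms
    if error <= 0.10:
        return "10%"      # 900 to 950 and 1,050 to 1,100 ohms
    return "reject"       # outside +/-10%

for r in (1002.0, 968.0, 1083.0, 1180.0):
    print(r, tolerance_bin(r))   # 1%, 5%, 10%, reject
```

Because the test always awards the tightest bin a part qualifies for, the looser bins never contain parts near the nominal value, which is exactly the gap visible in Fig. 1.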

Performance binning can be done on any measurable parameter. Digital signal processing relies upon data converters, which have a long list of specifications. Depending upon the application, one specification may warrant performance bins. Such performance bin distinctions are based upon wafer test results, yet additional calculations (virtual test) and data can be added to the wafer-level test results. For traceability and data integrity purposes, engineers preserve the original wafer map and its associated data.

“We always store the original wafer map,” explained Andre van de Geijn, business development manager at yieldHub. “Then you perform the analysis on the wafer test data. Then we copy the wafer map and then change the copy with the new data.” This new wafer map can move directly to assembly, or it can be merged with data from other process steps prior to moving it to the assembly house.
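Below is a minimal sketch of that copy-and-update pattern, assuming a simple dictionary-based wafer map and an invented speed cut point. It shows only that the original map stays untouched while a copy, annotated with virtual-test results, is what moves on toward assembly.

```python
# Preserve the original wafer map; annotate a copy with virtual-test bins.
# Map layout, bin codes, and the 3.2 GHz cut point are all assumed.

import copy

original_map = {
    # (x, y) -> per-die wafer test results
    (0, 0): {"hard_bin": 1, "fmax_ghz": 3.4},
    (0, 1): {"hard_bin": 1, "fmax_ghz": 2.9},
    (1, 0): {"hard_bin": 8, "fmax_ghz": None},   # failing die
}

def apply_virtual_test(wafer_map: dict) -> dict:
    """Return a new map with performance soft bins; the original is left untouched."""
    new_map = copy.deepcopy(wafer_map)
    for die in new_map.values():
        if die["hard_bin"] != 1:
            die["soft_bin"] = "fail"
        elif die["fmax_ghz"] >= 3.2:             # assumed speed cut point
            die["soft_bin"] = "fast"
        else:
            die["soft_bin"] = "slow"
    return new_map

assembly_map = apply_virtual_test(original_map)   # this copy is forwarded to assembly
```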

For simple binning, the added test cost is essentially non-existent. Costs increase only when engineers need more bins than their test cell can support. Also, the economic impact differs when selecting the final test or package on a per-die basis, as opposed to a per-wafer basis. This evolution toward more complex test and assembly flows has prompted product and test engineers to work more closely with factory automation engineers. Data analytics and automation solutions support feeding data forward from wafer test to assembly and final test.

Microprocessor performance-based binning illustrates the evolution of the technology improvements made to automate feed-forward decisions.

Binning in the early days
For decades microprocessor companies, like Intel and AMD, have binned on maximum frequency. Product engineers call this selection process speed binning.

Handler constraints and test program time limits drive the speed binning implementation. A test cell’s handler typically has four to five output trays. One is reserved for bad parts, and the remaining trays can be used for good parts, which limits the number of available good bins. To guarantee performance at a particular frequency, the test program must run all functional tests at that frequency. With 10 possible performance bins, there are neither enough trays in the handler nor enough test time budget to sort all 10 good bins in one insertion.

These limits were already apparent in the early 1990s. Product engineers started using data from process monitors and wafer test to predict speed bins at final test. To pull it off, they had to build the IT infrastructure from scratch.

“At TI for the Sun Sparc microprocessors, we had to manage the early speed bin distributions,” said John Carulli, DMTS Director Fab 8-Test at GlobalFoundries. “There weren’t any systems for doing that. We did a lot of manual network and data file coding. We worked across a mainframe system that had access to process data, tool types, and metrology information. At that time, Unix systems handled the test integration aspects of the test floor. Once we custom-coded the databases and glued all that stuff together, we delivered it through a cron job that would just run all the time. We assessed the wafer’s likelihood to be in different speed bins and then direct them to the appropriate test program at final test. At that step they would be differentiated further.”


Fig. 2: TI’s Sparc microprocessor test flow, from wafer test to final test. Source: Anne Meixner/Semiconductor Engineering

The arrival of mobile computing in the 2000s prompted product engineers to bin on power, on frequency, and sometimes on both. With multiple product bins to support, the test flows looked straightforward on a whiteboard. Implementing them remained a complex affair, requiring audits to guarantee the test results for each product family. Two-dimensional binning naturally created some complicated flows, which in turn created new challenges related to data integrity.
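A rough sketch of how two parameters multiply the number of product bins is shown below; the frequency and power cut points, along with the bin names, are invented for illustration only.

```python
# Hypothetical two-dimensional binning on maximum frequency and power.
# Cut points and names are assumed; each combination maps to a product bin.

FREQ_CUTS = [(3.2, "fast"), (2.8, "typ")]               # GHz thresholds, highest first
POWER_CUTS = [(4.0, "low_power"), (6.0, "std_power")]   # watts, lowest first

def product_bin(fmax_ghz: float, power_w: float) -> str:
    speed = next((name for cut, name in FREQ_CUTS if fmax_ghz >= cut), "slow")
    power = next((name for cut, name in POWER_CUTS if power_w <= cut), "high_power")
    return f"{speed}/{power}"   # e.g., "fast/low_power" might be a premium mobile SKU

print(product_bin(3.3, 3.8))   # fast/low_power
print(product_bin(2.9, 5.1))   # typ/std_power
print(product_bin(2.5, 6.6))   # slow/high_power
```

With three speed grades and three power grades, there are already nine good bins, each of which may need its own test flow and audit path.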

Preeti Prasher, most recently principal ASIC test architect at LeddarTech, noted that in the past, performance binning processes for mobile applications often were confined to speed and voltage. “Test time in the test program was not much of an issue. The biggest challenge came with the need to audit the test results after final test,” she said. “When binning based upon speed and power, a device can go through different test flows. Each bin requires different tests. You can audit all the different paths that a part took to being finally considered good. You often give different test numbers to these tests to make your audit easier.”

Facilitating feed-forward to reach die-level predictions
Most performance binning begins at wafer test, though engineers use wafer acceptance test or performance chip monitoring, as well. This enables engineers to select a specific final test program, as well as to segregate the wafers or die prior to the assembly process. For microprocessors, faster wafers/dies can be directed to packages for workstations and data centers, while slower-speed parts can be designated for desktops and laptops. Add power consumption in with frequency, and the diversity of package choices increases.

To segregate at a finer granularity, product engineers started using functional tests at wafer test. Even with a low pin count tester, engineers can execute microprocessor functional tests at wafer test by doing what Intel called structurally based functional test (SBFT). Intel briefly described it in a 2002 International Test Conference (ITC) paper, and Sun engineers described a similar method in a 2006 ITC paper. Other ways to predict frequency performance at unit test from wafer test results include power leakage and on-chip ring oscillator measurements.
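One way such a prediction might look is sketched below. The linear model, its coefficients, and the bin cut points are invented for illustration; in practice these would be fit from characterization data for the specific product.

```python
# Hypothetical prediction of a speed bin at wafer test from proxy measurements
# (on-chip ring-oscillator frequency and leakage). Coefficients are invented.

def predicted_speed_bin(ring_osc_mhz: float, leakage_ma: float) -> str:
    # Assumed linear model: a faster ring oscillator implies faster silicon,
    # while high leakage pushes the shipping frequency down at a power limit.
    fmax_ghz = 0.004 * ring_osc_mhz - 0.02 * leakage_ma
    if fmax_ghz >= 3.2:
        return "bin_fast"   # routed to the fast final-test program
    if fmax_ghz >= 2.8:
        return "bin_typ"
    return "bin_slow"

print(predicted_speed_bin(850, 10))   # predicted 3.2 GHz -> bin_fast
print(predicted_speed_bin(760, 15))   # predicted 2.74 GHz -> bin_slow
```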

To leverage wafer test results consistently, you need to automate the data decision. Yet implementation challenges mean this is not as simple as the flow written on a whiteboard.

“One challenge is data latency – having the upstream data available at the downstream point in a timely fashion,” said Ken Butler, test systems architect at Texas Instruments. “It can be particularly difficult in a disaggregated test flow where the upstream and downstream test insertions are at separate facilities and/or companies.”

Data integrity plays a role here, as well. “It is difficult to do adaptive tests because of manufacturing challenges with data integrity,” said Stacy Ajouri, senior member of the technical staff at Texas Instruments and co-chair of the RITdb Taskforce. “You have to be sure you’re getting the right data to that test cell. We are trying to address this by having good data provenance in the RITdb standard. How do I add robustness to the integrity and goodness of the data by the nature of what I put into this test?”

Engineers at Intel specifically described how they assessed the data integrity when sharing their experiences implementing feed-forward for microprocessors at the 2006 International Test Conference. Requiring a data integrity value of at least 99.995%, they defined data integrity as a function of data availability and data accuracy. “In essence Data Integrity is product of data availability and accuracy and can be tracked at the unit or batch level. Data accuracy is generally a measure of either the source data (tester output files) accuracy and the ability of the automation infrastructure to correctly load or retrieve the data…Data availability is a simple measure of presence of data for a unit.”
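As a toy calculation of that metric, with hypothetical lot counts, the arithmetic might look like this:

```python
# Hypothetical data-integrity calculation: integrity = availability x accuracy,
# checked against the 99.995% target. The counts below are invented.

def data_integrity(units_with_data, units_total, records_correct, records_loaded):
    availability = units_with_data / units_total   # fraction of units with upstream data present
    accuracy = records_correct / records_loaded    # fraction of records loaded/retrieved correctly
    return availability * accuracy

integrity = data_integrity(units_with_data=99_998, units_total=100_000,
                           records_correct=99_999, records_loaded=100_000)
print(f"{integrity:.5%}", integrity >= 0.99995)    # 99.99700% True
```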

By having this level of data integrity, Intel engineers could support unit-level decisions at downstream process steps — wafer sort at assembly, burn-in, delayed product configuration. With any change in a test program, feed-forward automation, or infrastructure upgrade, they could verify that the change did not adversely impact the data integrity. Naturally, this implementation required a unique identifier for each die/unit.

Implementing an adaptive test flow correctly takes a significant engineering effort. This explains why only IDMs or large fabless companies have reported doing so. Bringing such flows to other fabless companies requires overcoming some barriers.

First, product engineers need to connect the data from disparate manufacturing sources. Then, engineers need to contend with ownership of the test equipment and information flow. Data analytics solutions offered by multiple companies facilitate data sharing between wafer production, wafer test, assembly, and final test. They also may provide agents for manipulating a tester program in real time.

Yet if the data is not aligned between these manufacturing steps, you can’t make a go of it. Not all test floors have the rigor in data governance, data movement, and data analysis that an IDM such as Intel has to facilitate the automation of performance bin test processes.


