How Much Testing Is Enough?

Wedged between rising complexity and an industry that refuses to let test costs rise, test equipment makers are raising important questions about how much test coverage is enough.


As chipmakers move toward finer geometries, IC designs are becoming more complex and more expensive. Given the enormous risks involved, chipmakers must ensure the quality of their parts before they go out the door, and that quality assurance process requires a sound test strategy.

But for years, IC makers have faced the same dilemma. On one hand, they want a stringent test methodology to prevent unwanted field returns. On the other, they are unwilling to pay a premium for test and sometimes consider it a necessary evil.

With those trends in mind, IC makers and their customers often wrestle with the same basic question: How much test is enough for a given part?

The answer is not so simple, as the term “enough test” means different things to different people. It could mean test times and test coverage. It also could involve the number of testers and EDA tools a vendor throws at the problem. “To me, it always translates into cost, which is driven by test times, and secondarily, cycle times,” said Ira Feldman, a test veteran and principal at Feldman Engineering. “Higher coverage can often be achieved by more test times and more engineering times to write the tests.”

If “enough test” is measured by cost alone, then it boils down to mere pennies. On average, a square centimeter of silicon in an IC package costs about $4. Typically, a chipmaker is willing to spend only about 5% of that figure, or about 20 cents, for test in high-end devices. “This could be lower or higher,” said Ralf Stoffels, director of product marketing and solution architecture at ATE vendor Advantest. “It depends on the device and competitive situation.”
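
As a rough back-of-the-envelope check, the budget implied by those figures can be computed directly. The die area below is a hypothetical value chosen only for illustration; it is not from the article.

```python
# Back-of-the-envelope test-cost budget using the figures quoted above.
# The die area is a hypothetical example; real budgets vary by device and market.

silicon_cost_per_cm2 = 4.00   # average cost of packaged silicon, $/cm^2
test_cost_fraction = 0.05     # ~5% of that figure allotted to test (high-end parts)
die_area_cm2 = 1.0            # hypothetical 1 cm^2 die

test_budget = silicon_cost_per_cm2 * die_area_cm2 * test_cost_fraction
print(f"Test budget per device: ${test_budget:.2f}")   # -> $0.20
```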

Greg Smith, vice president of SOC marketing at Teradyne, looks at the issue from another angle. “Enough test is defined by the level of defects that a customer is willing to tolerate,” Smith said. “They will do enough test to achieve that level. Unless they are mismanaging their process, they won’t do much more than that, because there is a relatively causal relationship between the amount of test and how much test costs.”

The key metric is defective parts per million (DPPM). “It used to be that the automotive industry demanded that DPPM was below 100. Actually, they are now pushing their suppliers down to 10 defective parts per million,” Smith said. “The rest of the industry, like PCs and other consumer devices, would be significantly higher than that. But if you look at the kind of volumes that Samsung and Apple do on mobile phones, they need their suppliers to hit DPPMs that are close to automotive-quality levels.”
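
DPPM itself is a simple ratio. The sketch below shows the calculation with invented shipment volumes, purely to put the 100-versus-10 targets in context.

```python
# Defective parts per million (DPPM): field escapes per unit shipped, scaled to a million.
# Shipment volumes below are invented for illustration.

def dppm(defective_units: int, shipped_units: int) -> float:
    return defective_units / shipped_units * 1_000_000

print(dppm(1_000, 10_000_000))   # 100.0 -> the older automotive target
print(dppm(100, 10_000_000))     # 10.0  -> the level suppliers are now pushed toward
```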

So all told, for these and other chips, IC makers must follow the same formula. They need to boost their test coverage and test times to meet the desired DPPM levels, but they must do so without taking a hit on their overall manufacturing costs. “There is a balance you want to keep,” said Steve Pateras, product marketing director for Silicon Test Solutions at Mentor Graphics. “You need to do more high quality testing, but the efficiency has to improve. So, you have to be more intelligent how you do your testing.”

Testing the flow
Over the years, the IC test industry has experienced some dramatic changes. In the 1990s, for example, chipmakers declared a war on test. At the time, chipmakers believed that test costs were too high and they refused to pay for the expensive automatic test equipment (ATE) in the market. Generally, many of those ATE systems provided functional test capabilities, which is a costly way to test the entire function of a chip in a package.

The strategy worked. Responding to the demands of customers, Advantest, LTX-Credence and Teradyne developed a new class of low-cost modular testers. In fact, the days of the giant and expensive ATE systems are over.

At that time, there was also a major shift from functional to structural test in the industry. Unlike functional test, structural test deals with issues at the chip level. Using design-for-test (DFT) technologies like fault models and test compression, structural test looks for manufacturing defects and ensures the device has been fabricated correctly.

In structural test, the chip also is partitioned into smaller sub-blocks. The two most common structural test methods are scan and built-in self-test (BIST). Both scan and BIST make use of on-chip logic to diagnose, monitor and test a design. Today’s ATE systems from Advantest, LTX-Credence and Teradyne can handle both functional and structural test.
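
For readers unfamiliar with fault models, here is a minimal sketch of the single-stuck-at model that scan-based structural test is built around. The tiny circuit and test vectors are invented for illustration and are not tied to any real design or tool.

```python
# Minimal illustration of a single-stuck-at fault on a small logic cone:
#   out = (a AND b) OR c
# A test pattern "detects" a fault if the faulty circuit's output differs
# from the good circuit's output for that pattern.

def good(a, b, c):
    return (a & b) | c

def faulty_net_ab_stuck_at_0(a, b, c):
    # Model the internal net (a AND b) stuck at logic 0.
    return 0 | c

def detects(pattern):
    a, b, c = pattern
    return good(a, b, c) != faulty_net_ab_stuck_at_0(a, b, c)

print(detects((1, 1, 0)))   # True  -> this vector exposes the fault
print(detects((1, 0, 0)))   # False -> the fault is not observable here
```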

“Ten years ago, there was a difference in the complexity of designs. You could get away with more back then. Functional test was more doable for many devices,” said Mentor’s Pateras. “More and more, it’s no longer the case. Now, we’ve moved to structural-based test. We are looking for ways to get better coverage based on structural test.”

Generally, DFT-based structural techniques represent about 70% to 80% of the overall test coverage in a device. The remaining test coverage is handled in the test and assembly flow. After a wafer is processed in the fab, it goes through the following steps: wafer acceptance test, wafer probe/sort, IC packaging, and final test.

On top of that, many chipmakers have implemented a technology called adaptive test into the flow. In simple terms, adaptive test makes use of software analysis tools, which provide information about the performance of each test over a multitude of parts. For example, a given tool could determine which tests can be safely removed from the flow, thereby reducing costs.
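
The mechanics vary by vendor, but one core idea can be sketched in a few lines. The pass/fail-history heuristic, the threshold, and the data format below are assumptions for illustration, not how any particular adaptive-test product works.

```python
# Highly simplified sketch of one adaptive-test idea: examine pass/fail history
# across many parts and flag tests that (almost) never uniquely catch a defect
# as candidates for removal. Threshold and data format are assumptions.

from collections import Counter

def removal_candidates(history, min_unique_fail_rate=1e-5):
    """history: list of per-part dicts mapping test name -> True (pass) / False (fail)."""
    unique_fails = Counter()
    for part in history:
        failed = [name for name, passed in part.items() if not passed]
        if len(failed) == 1:                 # only this test caught the defect
            unique_fails[failed[0]] += 1
    total_parts = len(history)
    return [name for name in history[0]
            if unique_fails[name] / total_parts < min_unique_fail_rate]

# Tests returned here are candidates only; removal still needs engineering review.
```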

“Typically, the way test is done is binary. If you meet the criteria, the part goes through. If not, the part fails,” said David Park, vice president of marketing for Optimal+, a supplier of adaptive test and manufacturing intelligence tools. “There is also a more sophisticated way of looking at it. In fact, chipmakers have to have a heck of a lot more quality built into the system prior to shipping. The way we look at it is: ‘Is good really good?’ ”

Going back to functional test?
Today, a given test methodology and flow depends on the chip design and application. For example, in wafer-level packages and 2.5D/3D stacked die, a large part of the test coverage is conducted during wafer sort with a prober.

For many complex chips, chipmakers still use structural test, but they haven’t quite given up on functional-like testing techniques. “The trend was moving to a point where everybody thought you would require less and less coverage. The world was going to structural test and you could do anything with scan,” said Advantest’s Stoffels.

“With the advent of the smartphone, that has changed,” Stoffels said. “From my perspective, it has a lot to do with the large players in mobile. They really can’t afford to have factory returns for their phone. So it’s more important to have good test coverage. Of course, they don’t say, ‘Whatever cost is okay.’ No one will give (ATE vendors the) right to raise prices on test coverage. But there are new considerations. That means test coverage. It can mean new methodologies in terms of the data of test.”

Clearly, the shift towards more complex chips in smartphones and other products has prompted the need for more test coverage. And in some cases, a chipmaker will even require an extra test step, dubbed system-level test.

For example, suppose a chipmaker wants to test an application processor. The IC maker would develop a board that resembles the PC board in a mobile phone. On that board, a test socket is configured for the processor. Power is applied to the board and the system is booted up. The part either passes or fails. Then a handler places the next chip in the socket and the process is repeated.

“Almost all digital IP is primarily tested through structural test at this point of time. It’s only the external interfaces and the mixed-signal IP blocks in a device that have functional test associated with them,” said Teradyne’s Smith. “With very complex devices like multicore and application processors, the reliance (of a device) is tested using structural test. But that ends up not getting to the level of quality that end customers want. So for at least part of the cycle, the device goes through a system-level test insertion as well.”

Since system-level testing is expensive, the process is only conducted during the engineering phase or the early ramp of a device. “Over time, the yields at system-level test go up. When the yields go high enough, then they will take away the insertion of system-level test,” Smith said.

Moving to finFETs
The test flow is expected to see more gyrations amid the shift from planar devices to finFETs. “FinFETs have two attributes, which require more test,” said Advantest’s Stoffels. “One attribute is they use more test patterns. The other aspect is that those designs run at even lower supply voltages. Those require a much more careful interface design.”

For finFETs, the test engineer has several options. “One method is to go broader and use more pins for scan access. That limits you in multi-site,” Stoffels said. “The other way is narrow access and higher speeds. Technically, from the tester point of view, that is a better option, because it is cheaper to build high speed channels. In both cases, you speed up test. You can test more vectors in and out. That does not mean the test costs go up. We can keep test costs constant.”
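
A rough model shows why the two options can end up in the same place. The bit counts, channel counts, and data rates below are hypothetical; the point is only that scan shift time scales with total scan bandwidth, not with pin count alone.

```python
# Rough scan shift-time model for the two access styles described above.
# shift_time ~= total_scan_bits / (channels * bits_per_second_per_channel)
# All numbers are hypothetical.

total_scan_bits = 2_000_000_000          # compressed scan data for the device

def shift_time(channels, mbps_per_channel):
    return total_scan_bits / (channels * mbps_per_channel * 1e6)

wide_slow = shift_time(channels=64, mbps_per_channel=50)     # broad, slower access
narrow_fast = shift_time(channels=8, mbps_per_channel=400)   # narrow, high-speed access

print(f"wide/slow:   {wide_slow:.2f} s")    # 0.62 s
print(f"narrow/fast: {narrow_fast:.2f} s")  # 0.62 s -> same bandwidth, far fewer pins
```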

The test times depend on the methodology. For finFETs and other complex devices, the surge in vectors will test the limits of DFT. Basically, DFT-based test consists of various components, such as fault models, automatic test pattern generation (ATPG) and compression. Fault models are required for defect detection. ATPG is needed to devise test content with high fault coverage. Test compression techniques are then used to reduce the volume of pattern data that must be stored and applied on the tester.

As chip complexity increases, the question is whether DFT can stay one step ahead of the curve. “Ten years ago, people did straight ATPG. They would create test patterns and then store them on a tester. And they would be done with it. But as the test patterns kept increasing, you no longer want to do that,” said Mentor’s Pateras.

“Compression came into play about 10 years ago, where you could compress all of the unwanted data away and store the critical data. In the past 10 years, we’ve been improving how well we can compress and de-compress data. Ten years ago, the compression ability was maybe 3X to 5X over regular ATPG. If you look at it today, it’s not uncommon to get 200X or 500X compression. And that keeps growing. So as the designs become bigger, you need to maintain that efficiency,” he said.
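
To see what those ratios mean on the tester, here is a simple calculation using the ratios Pateras cites; the uncompressed pattern volume is a hypothetical baseline chosen only for illustration.

```python
# Effect of test-data compression on tester pattern memory, using the rough
# ratios quoted above. The baseline volume is hypothetical.

uncompressed_gb = 500     # assumed ATPG pattern volume without compression

for ratio in (3, 5, 200, 500):
    print(f"{ratio:>4}x compression -> {uncompressed_gb / ratio:8.2f} GB on the tester")
```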

Still, the question remains: How much test is enough and at what cost? “The economics dictate that. It’s a question of how much do you want to spend on DFT versus how much do you want to spend on ATE equipment versus how much do you want to spend on field returns,” Pateras said. “But if you look at the cost per transistor for test, that has to keep on going down.”


