Chip Test Shifts Left

Semiconductor testing moves earlier in the process as quality and reliability become increasingly important.


“Shift left” is a term traditionally applied to software testing, meaning to take action earlier in the V-shaped timeline of a project. It has recently been touted in electronic design automation and in IC design, verification, and test.

“Test early and test often” is the classic maxim of software testing. What if that concept could also be implemented in semiconductor testing, to reduce the number of chip failures at various stages of testing, and thereby reduce cost and time for weeding out defective components?

One approach to this concept may be portable test and stimulus. The Accellera Systems Initiative formed a Portable Stimulus Working Group two years ago with the goal of creating an industry standard for portable test and stimulus. “When completed and adopted, this standard will enable a single specification that will be portable from IP to full system and across multiple target implementations,” the group said in its 2015 announcement statement.
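While the working group is still defining the language itself, the underlying idea can be illustrated with a small sketch: a single abstract test scenario is written once and then rendered for different targets, such as a pre-silicon simulation testbench or bare-metal code on first silicon. The Python below is purely conceptual; the class and method names are hypothetical and are not Accellera syntax.

```python
# Conceptual illustration of portable stimulus: one abstract scenario, multiple targets.
# Names and structure are hypothetical, not the Accellera working group's syntax.

class Scenario:
    """Abstract test intent: a named sequence of actions, with no target details."""
    def __init__(self, name, actions):
        self.name = name
        self.actions = actions

class SimulationTarget:
    """Renders the scenario as stimulus calls for a pre-silicon testbench."""
    def render(self, scenario):
        return [f"do_in_testbench({a})" for a in scenario.actions]

class SiliconTarget:
    """Renders the same scenario as bare-metal calls for post-silicon validation."""
    def render(self, scenario):
        return [f"{a}();" for a in scenario.actions]

dma_test = Scenario("dma_loopback", ["config_dma", "start_transfer", "check_crc"])

for target in (SimulationTarget(), SiliconTarget()):
    print(target.__class__.__name__, target.render(dma_test))
```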

At DVCon U.S. in February, the organization presented the “Creating Portable Stimulus Models with the Upcoming Accellera Standard” tutorial. While the standard is being crafted and reviewed, there are other paths to achieving “shift left for chip test,” according to industry executives.

“In a classic V diagram, you can shift left in the sense that you can do some kind of ‘test,’ in the most general sense, at the same level of the design or the subsystem or the device,” says George Zafiropoulos, vice president of solutions marketing for the VWR group of National Instruments. “What would be its pre-manufactured analogy? What I mean by that is, in a V diagram for an electronic product, the upper left-hand corner is typically a behavioral system model. Then you refine it further down the V to the lower-level circuit model, for example, where you’re doing simulation pre-silicon. Then you fabricate the device at the bottom of the V. As you start coming up the right side of the V, you’re now in device characterization, and then subsystem assembly, and at the top of the V you’ve created the physical implementation of your chip, subsystem, etc., and you want to compare the results from your early top left of the V to the top right of the V.”
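One way to picture that left-to-right comparison is a simple check of post-silicon characterization data against the numbers the early behavioral model predicted. The sketch below assumes made-up metrics and tolerances purely for illustration.

```python
# Hypothetical comparison of behavioral-model predictions (top left of the V)
# against post-silicon characterization results (top right of the V).

model_predictions = {"gain_db": 20.0, "bandwidth_mhz": 150.0, "idd_ma": 12.0}
silicon_measured  = {"gain_db": 19.4, "bandwidth_mhz": 142.0, "idd_ma": 13.1}
tolerance_pct     = {"gain_db": 5.0,  "bandwidth_mhz": 10.0,  "idd_ma": 15.0}

for metric, predicted in model_predictions.items():
    measured = silicon_measured[metric]
    deviation = abs(measured - predicted) / predicted * 100.0
    status = "OK" if deviation <= tolerance_pct[metric] else "MISMATCH"
    print(f"{metric}: predicted={predicted}, measured={measured}, "
          f"deviation={deviation:.1f}% -> {status}")
```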

Another way to look at this is that the design industry already is doing testing of one form or another from the very beginning of the chip design process.

“We call it algorithmic validation, or we call it design verification,” Zafiropoulos says. “Or we tape out and call it validation and characterization and ultimately production test. We already are doing testing all through the flow. In my mind, it’s more a matter of whether it’s efficient. Is it an optimized flow? I would suggest at this point that it’s not, and there’s an opportunity for improvement there. Think about it this way: if we designed a chip, never simulated it, manufactured it, and the first time we ever really tested it was when we had first-article silicon, you could say, ‘Gee, maybe we should be testing this before we actually fabricate it.’ Well, they already do. They already do simulate the circuit pre-silicon. But it’s a very inefficient flow, and there are a lot of things that could be done to improve it.”

Zafiropoulos took note of the standard-setting by Accellera’s Portable Stimulus Working Group. The challenge to such efforts, he says, is that the industry is broken down into too many silos.

Karthik Ranganathan, director of engineering for semiconductor test products at Astronics Test Systems, agrees. “In general, test has always been broken out into what’s been done on wafer sort and what’s done on final test. That’s the traditional way of looking at test.”

It’s also a growing problem because there are now so many different functions, components and IP blocks that must work together in a complex design. System-level test has been added to the test menu in the past decade as an afterthought.

“More and more of the speed test is done at wafer sort,” Ranganathan says. “Final test hasn’t been as thorough, and people are trying to move more and more into system-level test to compensate for the fact that final test doesn’t catch all the defects that may be hiding. Part of the test is moving upstream toward wafer sort, and part of testing is moving downstream toward system-level test, as these nodes shrink and more and more customization is added on.”

There are roles for automatic test pattern generation, built-in self-test, design for test, and JTAG, according to Ranganathan.

“EDA companies are evolving,” he says, and are addressing test issues with their design tools. But it’s the end-use customers, more than automated test equipment vendors, EDA firms, or chip suppliers, that are driving the use of bare-metal testing and “thinking beyond the vectors for test.”

That’s especially true for automotive manufacturers and consumer electronics companies. System-level testing is especially suited for the standalone chips that will be used in autonomous vehicles, Ranganathan says. With IoT devices, most testing is done at wafer sort, “and not so much at final test,” he says.

Manufacturing intelligence
How best to solve these issues has created some debate within the semiconductor industry. For the most part, manufacturing quality has been a question of what is “good enough” for a particular application. If a consumer device no longer performed as well as it did when it was first purchased, that was simply one factor to be weighed in a cost/quality analysis.

But as devices enter into new markets, where some parts are expected to last a decade or more, reliability—which is a measure of quality over time, or mean time between production and failure—has taken on a whole new dimension.

David Park, vice president of worldwide marketing for Optimal+, believes companies need to look at test data from their global supply chain, analyze it in real time, and make data-driven decisions as a result. He calls that “manufacturing intelligence,” which is the application of big data analytics to semiconductor and electronics manufacturing.

“It’s just like the ‘test early and test often’ concept for the left side of the ‘V,’” Park says. “And most companies test every device exactly the same way because it’s easier. But if you are using big data analytics, you can create ‘test more, test less’ populations. The concept is based on the fact that most companies plan around a fixed test budget (money and time). Test is a ‘cost’ to the company. So how do you take best advantage of the test budget you have? Well, if you test everything the same, then you have the status quo. You know your test costs, and your quality and yield are what they are. But what if you could test your really good devices ‘less’ and test your questionable devices ‘more’? Basically, apply your test time and resources where they are most needed, and don’t waste them where they aren’t.”

This kind of granularity is beginning to show up throughout the design process, from rightsizing memories and processors through heterogeneous integration to utilizing the best IP for a particular application. But in test, this is a different way of looking at the quality problem.

“Using big data and analyzing devices as they move through manufacturing test (wafer sort, final test and system-level test), you can build up a ‘DNA profile’ for each device tested,” Park says. “If you have a device that is so good, so perfect in its test results, do you really need to apply the full battery of tests to that device? At some point, mathematically, you know that the device just isn’t going to fail. So save that unnecessary test time and apply it to devices that are not quite so perfect, and be sure that you can safely send those devices to your customers. This is the concept behind ‘test more/test less’ populations. Using big data analytics, it is possible to determine which devices really need more tests to ensure quality vs. ones that are rock-solid good devices and don’t need as much testing. This allows companies to maintain, or even lower their cost of test operations, while improving overall quality.”
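A minimal sketch of that idea, assuming made-up parametric margins from wafer sort and final test: each device gets a composite score, and devices above a threshold are routed to a reduced test flow while the rest get the full or extended flow. The field names, weights, and threshold below are hypothetical.

```python
# Hypothetical per-device "DNA profile": aggregate how much margin a device showed
# across test insertions, then route it to a reduced or extended test flow.

devices = [
    {"id": "D001", "sort_margin": 0.92, "final_margin": 0.95},  # comfortably passing
    {"id": "D002", "sort_margin": 0.55, "final_margin": 0.61},  # marginal
    {"id": "D003", "sort_margin": 0.88, "final_margin": 0.90},
]

REDUCED_FLOW_THRESHOLD = 0.80  # composite score above this skips optional tests

for dev in devices:
    score = 0.5 * dev["sort_margin"] + 0.5 * dev["final_margin"]  # simple weighted average
    flow = "reduced" if score >= REDUCED_FLOW_THRESHOLD else "extended"
    print(f"{dev['id']}: score={score:.2f} -> {flow} test flow")
```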

For example, if 90% of devices are good, they might only need 80% of the total testing. It can be shown mathematically, through simulation, that they would never have failed any downstream tests. Chipmakers might even skip a major test step such as burn-in, and those test-time savings can then be applied to the 10% of the population where the devices are marginal and where testing is much more critical.

“You can now apply 20% more tests to those devices to get a better idea of whether they are high enough quality to send to customers,” Park says. “And if you want to lower test costs, you don’t have to apply the full 20% extra test time to the 10% marginal population. The customer can choose to apply only 10% extra test time instead of the full 20% that was saved. So they get some additional testing done and higher quality, but lower overall costs as well.”
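Using the round numbers in this example, the trade-off can be checked with back-of-the-envelope arithmetic: if the 90% of devices that look rock-solid receive only 80% of the baseline test content, then even after giving the 10% marginal population 10% or 20% extra test time, total test time stays below the baseline of testing everything identically. The short calculation below is only a sketch of that arithmetic.

```python
# Back-of-the-envelope check of the "test more / test less" trade-off,
# using the round numbers from the example above.

good_fraction     = 0.90  # devices whose test data says they are rock-solid
marginal_fraction = 0.10  # devices that warrant extra scrutiny
good_test_share   = 0.80  # good devices receive 80% of the baseline test content

for extra_on_marginal in (0.10, 0.20):  # 10% or 20% additional test time
    total = (good_fraction * good_test_share
             + marginal_fraction * (1.0 + extra_on_marginal))
    print(f"extra on marginal = {extra_on_marginal:.0%}: "
          f"total test time = {total:.0%} of baseline")
# Both cases come in below 100%: more testing where it matters, at lower overall cost.
```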

Conclusion
The cost, time and quality of test are all considerations that need to be weighed for each device, each market, and each company. But no matter what approach is taken or how critical reliability is for a particular application, all of this needs to be done earlier than ever before.

Test is moving much closer to design as quality becomes a key metric. As it does, it is moving much further to the left of the design through manufacturing flow.
