How To Sleep Easier If You Test Auto ICs For A Living

Final testing can be up to 30% of an automotive IC’s total cost. Can you reduce it without risking quality?


Last month, I looked at the product definition process for automotive ICs, using the $7 billion microcontroller market as an illustration of design exploration to optimize performance, features, die size and product cost. Now I’d like to look at the back end of the process: the final IC testing that remains critical no matter how sound the upfront work of defining a feature set and applying software tools to create the physical chip.


(Image: From Mentor CEO Wally Rhines’ Sept. 20 IESF keynote in Plymouth, Mich.; complete video available here.)

Pretty much every vehicle you see on the road today (likely including your own, unless you drive a classic) has a microcontroller taking care of the braking system and another managing the complex motor-control algorithms in the electric power steering system. In fact, MCUs are sprinkled throughout a vehicle performing a host of safety-critical functions you don’t want to mess around with. Physical testing is critical in verifying that these chips and other electronic components are automotive-grade, meaning zero-defect, long-lived performance.

Zero-defect, as any auto engineer will tell you, is an ongoing journey, not a final destination. Automotive IC companies have to show where they are on the path and what they are doing to continually improve quality and speed up failure analysis and diagnosis. And they must keep pace with acceptable failure rates that are tightening from parts-per-million to parts-per-billion levels. In an article for EDN last year, David Park gave a good explanation of the move to Defective Parts Per Billion (DPPB):

“Consider a premium vehicle that has more than 7,000 semiconductor devices across its various electronic systems. If you assume a DPPM rate of 1 for all the semiconductor devices in that vehicle, it equates to seven failures for every 1,000 cars. This may not seem like a large number, but for a car manufacturer that sells two million premium vehicles a year, it represents a failure rate of more than one per hour, every day of the year.”
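
For readers who like to check the math, here’s that calculation in a few lines of Python, using only the figures quoted above:

```python
# Sanity check of the quoted DPPM arithmetic; all inputs come from the quote.
devices_per_vehicle = 7_000      # semiconductor devices in a premium vehicle
dppm = 1                         # defective parts per million, per device
vehicles_per_year = 2_000_000    # annual sales in the example

failures_per_vehicle = devices_per_vehicle * dppm / 1_000_000
print(failures_per_vehicle * 1_000)       # 7.0 failures per 1,000 cars

failures_per_year = failures_per_vehicle * vehicles_per_year
print(failures_per_year / (365 * 24))     # ~1.6 failures per hour, year-round
```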

All of the major automotive IC houses have genuinely impressive quality-improvement and defect-reduction strategies, having refined their capabilities over decades of design and test experience. If you ever utter the common lament “my role only gets noticed when something goes wrong,” spend some time with automotive IC test engineers, or with automakers’ crisis-communication teams, to feel better.

For automotive ICs, final test (excluding the wafer-level probe test) accounts for a high percentage of overall product cost. Depending on the chip configuration and die size, this testing can run up to 30% of the total product cost. Any product manager working in this area can tell you stories of optimizing some awesome new chip and having a rock-solid business plan, only to see it sunk by the struggle to get test costs under control.

Why is this? The answer boils down to a combination of automotive quality demands and legacy test software, and the tug-of-war to increase quality and yield while simultaneously reducing time on that expensive tester.

These look like conflicting challenges. In fact, with traditional test methodology, they are. ICs get more complex with every generation and technology node. These chips often control safety-critical functions, and test software generally evolves unidirectionally — that is, to always add more tests with every new product release but never take any tests out. So how do you make this cost competitive?

One way is to use statistical data and correlation to remove tests, an approach generally seen as high-risk and low-reward. Sure, you could remove a few legacy tests using statistical data, for a negligible overall test-time reduction. But the potential downside is that you become the most in-demand person in the organization after some German carmaker suddenly starts parking new cars off the production line on a football field somewhere until they (you, working nights and weekends) can figure out why something is failing.

In my experience, IC test engineers are super innovative in optimizing test cost by spreading the fixed cost. Examples include increasing parallelism (testing more ICs in parallel, therefore spreading the fixed cost overhead over more units) or some kind of yield enhancement program where the benefits of some fractional yield improvement are realized over a period of years.
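
To make the economics concrete, here’s a deliberately simplified cost model in Python. Every number and name in it is invented for illustration; real test floors juggle far more variables, but the shape of the curve is the point:

```python
# Toy model of how multi-site (parallel) testing spreads fixed tester cost.
# All figures below are hypothetical, chosen only to illustrate the trend.

TESTER_COST_PER_SEC = 0.05   # assumed fully loaded tester cost, $/second
HANDLER_OVERHEAD_S = 1.5     # assumed index/insertion overhead per touchdown
TEST_TIME_S = 4.0            # assumed test time for a single device

def cost_per_device(sites: int, mse: float = 0.95) -> float:
    """Cost to test one device with `sites` devices per touchdown.

    `mse` is a crude multi-site efficiency factor: shared tester
    resources mean N sites take a little longer than one site.
    """
    touchdown = HANDLER_OVERHEAD_S + TEST_TIME_S * (1 + (sites - 1) * (1 - mse))
    return TESTER_COST_PER_SEC * touchdown / sites

for sites in (1, 2, 4, 8):
    print(f"{sites} site(s): ${cost_per_device(sites):.3f} per device")
```

The fixed overhead per touchdown doesn’t shrink; it just gets divided across more devices, which is exactly the spreading strategy described above.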

But what about reducing that fixed-cost overhead, rather than just spreading it around? Just as the last blog weighed on-chip IP against what can be implemented in software, here the IP tested on-chip using in-line self-test has to be balanced against what has to run on the tester, burning expensive seconds of tester time.

Getting the balance right means using a fault coverage simulator, which analyzes all the tests you are running, sees what kind of test coverage you have, and, importantly, identifies all the useless legacy test patterns. This kind of simulation at an abstracted level can lead to significant reductions in test time (and reported cost savings in the tens of millions of dollars per year).
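
To illustrate the principle (and only the principle; a production fault simulator is vastly more sophisticated), suppose the simulator has already worked out, for each pattern, the set of modeled faults it detects. Flagging useless legacy patterns then amounts to asking which patterns detect nothing that earlier patterns haven’t already caught:

```python
# Minimal sketch of redundant-pattern identification from
# fault-simulation results. The fault/pattern data here is made up.

fault_universe = {f"f{i}" for i in range(1, 11)}   # 10 modeled faults

# pattern -> faults it detects (hypothetical fault-simulation output)
detects = {
    "p1": {"f1", "f2", "f3", "f4"},
    "p2": {"f3", "f4"},        # everything here already caught by p1
    "p3": {"f5", "f6", "f7"},
    "p4": {"f6"},              # already caught by p3
    "p5": {"f8", "f9"},
}

covered, keep, redundant = set(), [], []
for pattern, faults in detects.items():   # walk patterns in production order
    if faults - covered:                  # detects at least one new fault?
        keep.append(pattern)
        covered |= faults
    else:
        redundant.append(pattern)

print(f"fault coverage: {100 * len(covered) / len(fault_universe):.0f}%")  # 90%
print("keep:", keep)                              # ['p1', 'p3', 'p5']
print("redundant legacy patterns:", redundant)    # ['p2', 'p4']
```

Note the 90% result also exposes a hole: fault f10 is detected by nothing, which is exactly the kind of weakness the next paragraph gets at.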

A fault coverage simulator also identifies potential weaknesses in the test program, allowing even greater optimization of reliability metrics. And the benefits of this type of fault simulation are especially apparent in the mixed-signal domain; in automotive, analog is where the majority of field failures occur.

Digital fault simulation has been commercially available for some time, and now we finally have the same kind of simulation for analog circuits, driving automotive reliability even higher while simultaneously optimizing the test software. We can apply this kind of design automation at the chip level, offloading test resources onto the chip using BIST for logic and memories and further reducing the time spent running on the tester.

Customers benefit from higher quality and from the ability to access these on-chip resources for in-system self-checking. For example, at power up when you start your car, memory BISTs can be running in-system checks even while the memory blocks are being accessed.
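
For a feel of what a memory BIST engine actually executes, here’s a behavioral Python model of the widely used March C- algorithm, with one stuck-at-0 cell injected to show detection. The real engine is a hardware state machine, of course; this sketch only shows the read/write sequence:

```python
# Behavioral model of the March C- memory test:
# up(w0); up(r0,w1); up(r1,w0); down(r0,w1); down(r1,w0); up(r0)
# A stuck-at-0 defect is injected at one address to show detection.

SIZE = 16
STUCK_AT_0 = 5                 # injected defect: this cell always stores 0
mem = [0] * SIZE
ok = True

def write(addr, val):
    mem[addr] = 0 if addr == STUCK_AT_0 else val   # fault model

def read(addr, expect):
    global ok
    if mem[addr] != expect:
        print(f"FAIL at address {addr}: read {mem[addr]}, expected {expect}")
        ok = False

for a in range(SIZE): write(a, 0)                        # up(w0)
for a in range(SIZE): read(a, 0); write(a, 1)            # up(r0,w1)
for a in range(SIZE): read(a, 1); write(a, 0)            # up(r1,w0)
for a in reversed(range(SIZE)): read(a, 0); write(a, 1)  # down(r0,w1)
for a in reversed(range(SIZE)): read(a, 1); write(a, 0)  # down(r1,w0)
for a in range(SIZE): read(a, 0)                         # up(r0)

print("memory passes" if ok else "memory fails")         # fails at address 5
```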

So it’s entirely possible to reduce the size of the test software through fault coverage simulation. And once the test software is optimized for coverage, it can then be digitally compressed by 10x or even 100x for further test time reduction.
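
As a first-order estimate of what that compression buys (assuming compression translates roughly one-for-one into fewer shift cycles from the tester, which glosses over pattern-count inflation and scan architecture), consider:

```python
# Rough, illustrative scan-shift-time estimate under test-data compression.
# All numbers are invented for the example.

patterns = 20_000        # hypothetical scan pattern count
scan_length = 5_000      # hypothetical longest scan chain, in bits
shift_mhz = 50           # hypothetical scan shift clock frequency

def shift_time_s(compression: float) -> float:
    # With on-chip decompression, each pattern needs roughly
    # scan_length / compression shift cycles from the tester.
    cycles = patterns * scan_length / compression
    return cycles / (shift_mhz * 1e6)

for ratio in (1, 10, 100):
    print(f"{ratio:>3}x compression: {shift_time_s(ratio):6.2f} s of shift time")
```

Even in this toy case, 100x compression turns two seconds of shifting into a few hundredths of a second per device, and those seconds are the expensive ones.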

Now, what about options to increase yield?

The key here is to get deeper into the chip circuitry and use technology built for structural, cell-level testing, not just testing of cell inputs and outputs. This becomes more significant as we move to smaller geometries, where it is estimated that up to half of all circuit defects occur within cells. Memories can be repaired, and test fail data can be analyzed to determine which defects are systematic and what needs to go back to the design team for design-rule analysis.

The result is to speed up the ramp to volume on new processes and improve yields on existing processes. This type of diagnosis-driven approach uncovers yield-limiting defects that would traditionally take time to find (if found at all) using typical scan diagnosis.

Benefits from improved IC testing abound throughout the design team and hierarchy, including for the product manager and business-unit executive management, who can show cost competitiveness that helps their market position as well as their P&L. More importantly, customers benefit from higher test coverage and more robust test methodologies that support increased reliability, in-system checking and faster fault diagnosis. That lets the IC house show its customers (the tier 1s and carmakers, who are taking an increasing interest in IC design and test methodologies) that the zero-defect journey continues.

So rest easy. If you are involved in developing automotive ICs and the back end of the process is giving you sleepless nights, there’s a range of technologies available to help you get more zzzs. And these technologies are evolving constantly to address the next generation of devices: the chips powering sensor fusion boxes for autonomous vehicles, the IGBTs driving inverters in electric vehicles and the ICs for every other vehicle application.

Tell me what I got right (or wrong) about auto IC testing on Twitter (@AndyMacleod_MG) or LinkedIn.

Related whitepaper: Inline DFT Tech for Self-Correcting Automotive Architectures

Related article: Recipe For Automotive IC Design Success


