Lab-To-Fab Testing

Test vendors are trying to build a bridge between verification and automated test in manufacturing.

Test equipment vendors are working on integrating testing and simulation in the lab with testing done later in the fab, setting the stage for what potentially could be the most significant change in semiconductor test in years.

If they are successful, this could greatly simplify design for test, which has become increasingly difficult as chips get more complex, denser, and as more heterogeneous processing is added into SoCs. But this isn’t a trivial integration project for a number of reasons. Among them:

  • Test is a discrete and somewhat fixed step in the manufacturing process, but it’s used much more flexibly in the lab. In the fab, testing needs to happen quickly enough to keep wafers moving along, while in the lab it can occur over a period of time as new features are added.
  • Lab testing is often done in conjunction with design steps such as simulation, where different features or approaches are added or removed from a design. But while this is an effective way of figuring out where there are problems in the lab, it’s a sizeable leap to tie that into what’s done in the fab. Verification engineers look for functional errors in code, while test engineers physically test a chip to make sure it functions within electrical and thermal limits.
  • Test, particularly ATE, historically has been limited to 2% to 3% of total development cost. Changing this formula will be difficult because it requires readjusting the cost across business silos, so it’s not clear yet what the tradeoffs will be to make a compelling business case for this approach.

The move to shift various tasks further left in workflows has been underway for some time, particularly in areas such as verification and virtual prototyping, which allows software to be developed even before the hardware is ready. But combining test with what happens in the lab represents a different kind of challenge, because these two worlds have grown up separately with few attempts to bridge them.

Still, as rising complexity and reliability concerns begin to filter into designs, particularly in automotive and industrial applications, there is growing pressure to make everything work together so problems can be discovered earlier, when it is less expensive to fix them. At the same time, there is pressure to increase test coverage in larger chips used for AI and machine learning, and to improve the reliability across a growing spectrum of safety-related applications.

“The typical development cycle is design, test and production,” said Shu Li, business development manager at Advantest. “But if you can move the test strategy up earlier in the flow you can optimize coverage, decrease cost and increase throughput. There is a strong linkage with design and test, but you have to deploy it the right way. The problem is that in the design stage you try a lot of things in the simulation stage and not all of them go to production. What happens in verification and what goes to test is a balance.”

This becomes particularly difficult with analog and mixed signal, which is a growing part of test in areas such as automotive sensors.

“With the typical digital chip you focus on pattern bring-up,” said Li. “There are patterns for functional tasks. With RF and mixed signal testing, you use digital resources to control a device, but a lot of the RF and analog testing is actually done on instruments like an oscilloscope or a spectral analyzer. Technology is advancing to be able to combine all of this, but the test programs are getting bigger. The cost of development is based upon the quality of the test program and the test time in production, so if you need higher coverage you have to spend more time. And that’s the big tradeoff—how to develop a high quality test program in a short period of time.”
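
Li’s tradeoff lends itself to back-of-the-envelope arithmetic. The sketch below is a minimal cost model, and every figure in it is assumed purely for illustration: program development is a one-time cost, while test time is paid on every unit shipped.

```python
# Illustrative only: all figures below are assumptions, not industry data.
# Total cost of test = one-time program development + per-unit test time in volume.

def total_test_cost(dev_hours, rate_per_hour, test_seconds,
                    tester_cost_per_second, volume):
    """Back-of-the-envelope total cost of test for one product."""
    development = dev_hours * rate_per_hour
    production = test_seconds * tester_cost_per_second * volume
    return development + production

# Two hypothetical programs for the same device: a quick one with lower
# coverage, and a longer one with higher coverage.
quick = total_test_cost(dev_hours=400, rate_per_hour=150, test_seconds=2.0,
                        tester_cost_per_second=0.03, volume=5_000_000)
thorough = total_test_cost(dev_hours=900, rate_per_hour=150, test_seconds=3.5,
                           tester_cost_per_second=0.03, volume=5_000_000)

print(f"quick program:    ${quick:,.0f}")     # lower cost, lower coverage
print(f"thorough program: ${thorough:,.0f}")  # higher cost, higher coverage
# Whether the extra coverage pays off depends on the cost of the escapes
# (field failures, recalls) it prevents, which is exactly Li's balance.
```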

Follow the data
Behind this shift is an increasing focus on data throughout the flow. In the past, most of the data in semiconductor manufacturing was controlled by the fabs, which were reluctant to share that data because it was considered a competitive weapon.

That has changed significantly over the past few years for a couple of reasons. First, foundry processes are so unique and so numerous that it’s much more difficult to compare data from one foundry with another, and sometimes even within the same foundry. Second, there is so much pressure for foundries to differentiate that they need to get those unique features to market faster, which requires the help of IP and EDA vendors, chip design houses and equipment companies.

“Customers are starting to adopt a different approach for throughput, decreasing the amount of time it takes to characterize parts and setting up a common architecture for production so that you can reduce correlation issues,” said John Bongaarts, principal solution manager for semiconductor test software at National Instruments. “The challenge is to take the data in a system at the lab and understand deviations in the interface, the test methodology and the test equipment.”

This sounds straightforward enough, except that the amount of data is exploding from multiple sources.

“The data is increasing because parts are more integrated,” said Bongaarts. “With RF ICs, there are more cellular bands, different WiFi standards, 5G, and you need to test more. The amount of test data needed to validate parts is increasing. There is still a lot of talk about balancing test time and throughput, but there also is pressure to reduce the number of defects.”

In addition, not all of that data is in the same format, a problem that has become very apparent on the manufacturing side where different equipment collects different kinds of data.

“The first class of problem is a big data problem—getting it all into a structure and format that’s usable because we are dealing with a massive amount of data from a massive number of sources with different formats,” said David Fried, CTO at Coventor. “Solving the format problem doesn’t sound that difficult, but think about a temperature sensor in a deposition tool versus a slurry pH monitor in a slurry tank feeding a CMP tool. It’s a different type of data sampled in a different way using a different set of units. Just putting that into a format where you can operate on the data set is a massive big data problem.”
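
The normalization step Fried describes can be sketched in a few lines. In this hypothetical example, readings from a deposition tool’s temperature sensor and a CMP tool’s slurry pH monitor, with their different units and sample rates, are folded into one common record schema. The tool IDs, field names, and conversions are assumptions for illustration, not any fab’s actual data model.

```python
# A minimal sketch of normalizing heterogeneous equipment data into one schema.
# All tool names, parameters, and units here are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Sample:
    """One measurement in a common schema, regardless of source."""
    tool_id: str         # which piece of equipment produced it
    parameter: str       # what was measured
    value: float         # numeric reading after unit conversion
    unit: str            # unit after conversion
    timestamp: datetime  # when it was sampled (UTC)

def from_deposition_temp(tool_id: str, temp_f: float, ts: datetime) -> Sample:
    """Deposition-tool sensor reporting Fahrenheit; store as Celsius."""
    return Sample(tool_id, "chamber_temperature", (temp_f - 32) * 5 / 9, "degC", ts)

def from_slurry_ph(tool_id: str, ph: float, ts: datetime) -> Sample:
    """Slurry pH monitor reporting dimensionless pH; store as-is."""
    return Sample(tool_id, "slurry_ph", ph, "pH", ts)

now = datetime.now(timezone.utc)
records = [
    from_deposition_temp("DEP-07", 392.0, now),
    from_slurry_ph("CMP-02", 10.4, now),
]
for r in records:
    print(r)  # both sources now share one structure and can be queried together
```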

Market adjustments
Bridging lab-to-fab testing, if successful, also sets the stage for a battle across what formerly were separate market segments. ATE vendors see this as an opportunity to shift left in the design-through-manufacturing flow, while lab test vendors see it as an opportunity to shift right.

“The goal here is to have the same instruments on test,” said Anil Bhalla, senior manager at Astronics. “People want system-level test because of the value it adds, but what else can you change and where else can this be applied? So can you do system-level test across a probe? And can you move other tests and scans into system-level test so that you can have massively parallel test earlier?”

Test equipment vendors certainly would like that to happen, and there are some overlaps in what is done today in the lab and again in the fab.

“If you’re testing a device in a socket, you’re looking at thermal, tri-temp and functional test,” said Bhalla. “But the cost of test can’t go up. If anything, it has to go down. And you have to test more complexity and more stuff. So you’ve got multiple temperature insertions in automotive. There are more dies to test in advanced packaging and more clock coverage.”

And that’s where problems begin to creep into this whole concept. There are a lot of moving parts on the design side, and processes need to be put into place to account for all changes that affect the testing process.

“Several sources of error can cause results not to correlate, and you need to understand why that is happening,” said David Hall, head of semiconductor marketing at National Instruments. “That could involve the interface to the device. In a wireless device, it could be a result of how you interpret wireless standards. You also may be testing close to the limits of what an instrument is capable of doing, so there may be differences between the instrument and the device.”
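
One way to quantify Hall’s point is a correlation check over paired measurements of the same units on both setups. The sketch below uses made-up power readings; it separates a systematic offset, which often traces to a fixture or calibration difference and can be corrected, from poor tracking, which points to the methodology or instrument-limit problems Hall describes.

```python
# A minimal lab-to-ATE correlation check over paired measurements of the
# same ten units. All readings are invented for illustration.
from statistics import mean, stdev, correlation  # correlation needs Python 3.10+

# Hypothetical output-power readings (dBm) for the same units on each setup.
lab = [10.1, 10.3, 9.8, 10.0, 10.2, 9.9, 10.4, 10.1, 9.7, 10.0]
ate = [10.4, 10.6, 10.1, 10.2, 10.5, 10.3, 10.7, 10.4, 10.0, 10.3]

offsets = [a - b for a, b in zip(ate, lab)]

print(f"mean offset:  {mean(offsets):+.2f} dB")  # systematic shift between setups
print(f"offset sigma: {stdev(offsets):.3f} dB")  # unit-to-unit spread of the shift
print(f"Pearson r:    {correlation(lab, ate):.3f}")  # do the setups track?

# A large but stable offset can be calibrated out. A low Pearson r cannot:
# it means the two setups disagree in ways a simple correction won't fix.
```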


Fig. 1: Bridging the gap between lab and fab. Source: NI

DFT strategy
All of this circles back to the design for test strategy, which is developed at the very beginning of the design flow. So if all of the pieces are not tested, or tests don’t correlate, then problems crop up at the manufacturing stage that can affect yield. Or worse, semiconductors can ship into the market and reliability problems can show up at a later date in the field.

In automotive, a recall can be very expensive. Toyota just recalled 2.43 million Prius models due to stalling issues, and another 188,000 pickup trucks and Sequoia SUVs, where faults in the airbag ECU could disable one or more sensors used to detect crashes.

These types of issues are extremely troublesome in safety-critical markets, such as automotive, aerospace, medical and industrial, where a failure can cause injury or death.

“We’re seeing a need to inject faults into actual silicon,” said Kurt Shuler, vice president of marketing at Arteris IP. “This isn’t fault injection into a design. You need to do it on the die, not into the netlist or RTL. It can be done on-chip, or at runtime if you can isolate it, but you want an external observation point for functional safety. That’s more sophisticated than a traditional boundary scan. Chip vendors have to prove safety in the chip and that it will work as intended.”
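
Shuler’s distinction between injecting faults into a netlist and into working silicon is hard to show without hardware, but the intent can be modeled in software. The toy sketch below is a conceptual illustration only: it flips a bit in a parity-protected register and checks that an observation point catches the corruption. The class names and the single parity bit are assumptions; real silicon uses mechanisms such as ECC and lockstep, along with dedicated on-die fault-injection hooks.

```python
# Conceptual software model of on-die fault injection. Everything here
# (names, the parity scheme) is illustrative, not a real safety mechanism.

def parity(word: int) -> int:
    """Even-parity bit over a 32-bit word."""
    return bin(word & 0xFFFFFFFF).count("1") % 2

class ProtectedRegister:
    """A register that stores a parity bit alongside its value."""
    def __init__(self, value: int):
        self.value = value
        self.parity = parity(value)

    def inject_fault(self, bit: int) -> None:
        """Flip one bit of the stored value without updating parity,
        emulating a transient fault in the flop itself."""
        self.value ^= 1 << bit

    def check(self) -> bool:
        """The observation point: does stored parity still match the value?"""
        return parity(self.value) == self.parity

reg = ProtectedRegister(0xDEADBEEF)
assert reg.check()                 # a healthy register passes the check
reg.inject_fault(bit=5)            # emulate a single-event upset
print("fault detected:", not reg.check())  # the safety mechanism flags it
```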

In the automotive world, this is addressed by the AEC-Q100 standard, which is a “failure mechanism-based stress test qualification for integrated circuits.” The standard proposes a series of tests, such as human body model electrostatic discharge, IC latch-up and solder ball shear test, among others.

But the same rigor applies in other markets, as well. A design-for-test (DFT) strategy used to be something of an afterthought. It’s now a critical element in all complex designs, especially when it comes to packaging.

“Complexity is growing even with packaging,” said Mike Gianfagna, vice president of marketing at eSilicon. “With 2.5D, you have to think about the silicon substrate, thermal and mechanical stress, and more analysis. So the packaging and DFT teams are involved much earlier and all the way through the development process. DFT can impact the entire schedule.”

Conclusion
What are the chances that lab-to-fab test will work? No one is quite sure, but there are efforts underway among all of the major players to make this happen. And for test vendors, there is a clear economic incentive to make this work, because it can greatly expand their reach and revenue sources. Nevertheless, it’s not so simple on a number of fronts.

“These are very different worlds,” said Advantest’s Li. “On the digital side, it is motivating designers and DFT engineers. For the bench side, there is a lot of RF to run tests on. But how to correlate everything in test and production is very challenging. The ideal is to have continuity from the lab to ATE. Right now, this is a process of exploration and discovery.”
