Five Disruptive Test Technologies

More complexity will require more test coverage, longer test times, and possibly higher test costs. Changes are on the way.


For years, test has been a critical part of the IC manufacturing flow. Chipmakers, OSATs and test houses buy the latest testers and design-for-test (DFT) software tools on the market, and for good reason: a plethora of unwanted field returns is not acceptable in today’s market.

The next wave of complex chips may require more test coverage and longer test times. That could translate into higher costs. To address these and other issues, chipmakers must look to the latest breakthroughs in test technology.

So what are some of the current and future technologies that will change the landscape in test and give chipmakers a leg up in the arena? Semiconductor Engineering and various test experts have compiled a list of five disruptive or breakthrough technologies.

In alphabetical order, they are adaptive test, advanced DFT, faster mixed-signal testers, fine-pitch probe cards, and standards. These technologies already exist in the market today, but most, if not all, of them are still evolving and morphing into new forms with higher levels of functionality.

Adaptive test
Not long ago, the International Technology Roadmap for Semiconductors (ITRS) added an obscure technology to its list in the test category—adaptive test. Adaptive test is not an ATE platform or a traditional DFT tool.

Yet Nvidia, Qualcomm and others have inserted adaptive test into their flows. Adaptive test makes use of software tools that monitor the test flow. It enables a chipmaker to change, or adapt, how a device is tested “on the fly” in order to meet yield and cost goals.

“You can look at a large amount of parametric test data from multiple steps before you have to make a final decision,” said David Park, vice president of marketing at Optimal+, a supplier of adaptive test tools. “The way we look at it is simple: Is good really good?”

Adaptive test consists of several components, such as data feed-forward. With feed-forward, data collected at a previous test step can be used to change how the same parts are tested at later steps.

“What we call adaptive testing today will blossom into elaborate on-the-fly procedures. The techniques are aimed at simultaneously meeting two goals—the desired test coverage and the targeted cost-of-test,” said Dave Armstrong, director of business development at Advantest. “I’ve used the term, on occasion, ‘statistically-based testing.’ At some point, we will need to accept that we can’t afford to do all the testing that we feel we should be doing. Some would probably say that we’re already past this point today. I just know that test content, test order, test limits, test temperatures, and even test insertion points will likely be adjusted dynamically in the future.”
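As a rough illustration of how data feed-forward might work, the sketch below picks a final-test limit for a part based on parametric data captured earlier at wafer sort, so that a die that was merely marginal at sort gets screened more tightly later. The parameter, limit values and tightening rule are invented for illustration; they do not come from any vendor's flow.

```python
# Hedged sketch of data feed-forward in an adaptive test flow.
# Wafer-sort parametric data is used to tighten the limits applied to the
# same parts at final test. All values are illustrative assumptions.

DEFAULT_IDDQ_LIMIT_UA = 50.0   # default final-test leakage limit (microamps)

def final_test_limit(wafer_sort_iddq_ua: float) -> float:
    """Pick a final-test IDDQ limit from the part's wafer-sort reading.

    Parts that were marginal at wafer sort get a tighter limit, on the
    theory that "good" is not always good enough (outlier screening).
    """
    if wafer_sort_iddq_ua > 0.8 * DEFAULT_IDDQ_LIMIT_UA:
        return 0.6 * DEFAULT_IDDQ_LIMIT_UA   # marginal at sort: tighten
    return DEFAULT_IDDQ_LIMIT_UA             # comfortably good: default limit

def final_test(part_id: str, wafer_sort_iddq_ua: float, final_iddq_ua: float) -> bool:
    limit = final_test_limit(wafer_sort_iddq_ua)
    passed = final_iddq_ua <= limit
    print(f"{part_id}: sort={wafer_sort_iddq_ua:.1f}uA, final={final_iddq_ua:.1f}uA, "
          f"limit={limit:.1f}uA -> {'PASS' if passed else 'FAIL'}")
    return passed

final_test("die_A", wafer_sort_iddq_ua=12.0, final_iddq_ua=30.0)  # default limit, passes
final_test("die_B", wafer_sort_iddq_ua=45.0, final_iddq_ua=35.0)  # tightened limit, fails
```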

Advanced DFT
Over the years, there has been a shift from functional to structural test in the industry. In functional test, much of the testing occurred in the final test steps with expensive automatic test equipment (ATE).

Those expensive testers are increasingly a thing of the past. Instead, chipmakers use less expensive ATE with structural test capabilities, and structural DFT software tools handle a larger percentage of the test coverage. Using DFT technologies like fault models and test compression, structural test looks for manufacturing defects and ensures the device has been fabricated correctly.

The two most common structural test methods are scan and built-in self-test (BIST). Both scan and BIST make use of on-chip logic to diagnose and test a design. In structural test, the chip is also partitioned into smaller sub-blocks, and test is conducted at the core or block level. This “divide-and-conquer” strategy is sometimes called hierarchical test.
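To make the scan idea concrete, here is a toy model of a single scan chain: a stimulus vector is shifted in, the circuit response is captured, shifted back out and compared against the fault-free expectation. Real scan insertion is done by DFT tools in the netlist; this sketch only mirrors the concept, and the "circuits" below are stand-ins.

```python
# Toy model of scan-based structural test: shift a stimulus into a scan
# chain, capture the circuit response, shift it out, and compare it against
# the fault-free expected response. Purely conceptual.

from typing import Callable, List

def scan_test(stimulus: List[int],
              expected: List[int],
              capture: Callable[[List[int]], List[int]]) -> bool:
    """Apply one scan pattern and report pass/fail."""
    chain = list(stimulus)        # shift-in: load the scan chain
    response = capture(chain)     # capture: one functional clock pulse
    return response == expected   # shift-out and compare

def fault_free(bits: List[int]) -> List[int]:
    """Stand-in 'circuit': inverts every flop value on capture."""
    return [b ^ 1 for b in bits]

def defective(bits: List[int]) -> List[int]:
    """Same circuit with a stuck-at-0 defect on bit position 2."""
    return [b ^ 1 if i != 2 else 0 for i, b in enumerate(bits)]

pattern = [0, 1, 0, 1]
expected = fault_free(pattern)
print("fault-free die:", "PASS" if scan_test(pattern, expected, fault_free) else "FAIL")
print("defective die: ", "PASS" if scan_test(pattern, expected, defective) else "FAIL")
```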

“We are using hierarchical test,” said Robert Ruiz, senior product marketing manager at Synopsys. “Longer term, I see an even greater interaction between design technologies, such as timing analysis and synthesis, and testability. I see putting more of the burden on testability, not so much on DFT technologies, but further upstream and into the design techniques.”

Steve Pateras, product marketing director for Silicon Test Solutions at Mentor Graphics, also said that DFT will move into new frontiers. “We will see better intelligent bandwidth management. In other words, I have a fixed test resource, test bandwidth and test time. How do I allocate that as efficiently as possible on the various components on my SoC? It is really optimizing how I use my limited resources to get the most out of test. In other words, do I spend more time on one given core or less time on this core? Optimizing things and doing it individually, instead of blindly, is going to be critical,” he said.
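Pateras' bandwidth-management point can be pictured with a toy scheduler: given a fixed test-time budget, hand each extra millisecond to whichever core still gains the most coverage from it. The core names, coverage weights and budget below are invented purely for illustration, not drawn from any real SoC or tool.

```python
# Toy illustration of test-bandwidth management: split a fixed test-time
# budget across SoC cores, giving each extra millisecond to the core that
# currently gains the most coverage from it. All numbers are invented.

cores = {
    "cpu_cluster": 0.9,   # relative coverage gain per ms (arbitrary units)
    "gpu":         0.6,
    "modem":       0.4,
    "always_on":   0.1,
}

budget_ms = 20
allocated = {name: 0 for name in cores}

for _ in range(budget_ms):
    # Gain shrinks as a core accumulates test time (diminishing returns),
    # so the budget naturally spreads across cores instead of piling up.
    best = max(cores, key=lambda c: cores[c] / (1 + allocated[c]))
    allocated[best] += 1

print(allocated)   # {'cpu_cluster': 9, 'gpu': 6, 'modem': 4, 'always_on': 1}
```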

Another key DFT technology is BIST. Embedded memories, for example, are tested using BIST, which applies patterns to the memory and reads back the results to log any defects. Memory BIST also includes a repair and redundancy capability. In this scheme, each die has spare circuits. If a circuit is bad, the defective circuit is disconnected and replaced with a good one.
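The sketch below walks through that repair loop at a high level: run the pattern pass, log the failing rows, swap in spare rows, and rerun to confirm. A real memory BIST engine executes march algorithms in dedicated hardware and blows repair fuses; the class, row counts and defect locations here are simplified stand-ins.

```python
# Conceptual sketch of memory BIST with repair: test every row, log the
# failures, remap them to spare rows, then retest. Simplified stand-in for
# the hardware march engines and fuse-based repair used in real devices.

class RepairableMemory:
    def __init__(self, rows: int, spares: int, bad_rows=()):
        self.rows, self.spares = rows, spares
        self.bad_rows = set(bad_rows)   # physical defects (for simulation only)
        self.remap = {}                 # logical row -> spare row index

    def write_read_ok(self, row: int) -> bool:
        # A remapped row uses a good spare; otherwise a defect causes a miscompare.
        return row in self.remap or row not in self.bad_rows

def bist_and_repair(mem: RepairableMemory) -> bool:
    """Run the pattern pass, repair failing rows, and rerun to confirm."""
    failing = [r for r in range(mem.rows) if not mem.write_read_ok(r)]
    if len(failing) > mem.spares:
        return False                    # not enough redundancy: scrap the die
    for spare, row in enumerate(failing):
        mem.remap[row] = spare          # "blow fuses": swap in the spare row
    return all(mem.write_read_ok(r) for r in range(mem.rows))

mem = RepairableMemory(rows=1024, spares=4, bad_rows=[17, 600])
print("repaired and passing:", bist_and_repair(mem))   # True: 2 bad rows, 4 spares
```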

Memory BIST is also used to obtain known good memory stacks for 2.5D/3D devices. But even with BIST, there are still test challenges at the die level. “It’s virtually impossible to do at-speed test, because of the parasitics of the test head,” said Robert Patti, chief technology officer and vice president of design engineering at 3D memory supplier Tezzaron. “If you look at our case, we have thousands of pins. Doing it at any speed is a problem, much less at-speed.”

In response, Tezzaron is refining its memory BIST solution, dubbed Bistar. “In 3D, we do BIST extensively,” Patti said. “In fact, we’ve augmented Bistar. We put drivers on chip and have smaller buffers. So our test processor loads up those buffers and they burst at full speed into the device as if it was coming from an external tester. By taking this next leap down the road in design-for-test, we’re now able to do testing that is more effective.”

What’s next for BIST? The answer may be repair and redundancy for non-memory devices. “As we get more and more transistors, people will start to think about using them to improve yield,” said Advantest’s Armstrong. “Just like what we have done with memory devices, SoCs will start to deploy this in the next few years. For example, processors with multiple CPU cores are becoming commonplace. What about adding a couple of extra CPU cores and swapping them if needed? This will significantly improve our yields.”
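Armstrong's core-sparing idea can be sanity-checked with simple binomial arithmetic: if a die needs a certain number of working cores and each core yields independently, building a couple of extras and swapping them in lifts the effective die yield substantially. The core counts and per-core yield below are illustrative assumptions, not figures from any product.

```python
# Back-of-the-envelope yield math for core redundancy: a die is good if at
# least `needed` of its `built` cores work. Core yield and counts are
# illustrative assumptions only.

from math import comb

def die_yield(built: int, needed: int, core_yield: float) -> float:
    """P(at least `needed` of `built` independent cores are functional)."""
    return sum(comb(built, k) * core_yield**k * (1 - core_yield)**(built - k)
               for k in range(needed, built + 1))

print(f"8 of 8 cores:  {die_yield(8, 8, 0.95):.1%}")    # ~66% with no spares
print(f"8 of 10 cores: {die_yield(10, 8, 0.95):.1%}")   # ~99% with two spare cores
```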

Faster mixed-signal testers
For some time, digital and mixed-signal circuits have been integrated on the same chip. This presents more challenges in the test flow.

Take the power management IC (PMIC) for smartphones and tablets, for example. PMICs are mixed-signal SoCs, which are designed to control the power in a system. Going forward, the PMIC will see an increase in digital complexity as well as features like dynamic voltage scaling (DVS).

“The test techniques (for PMICs) have not changed,” said Anthony Lum, business development manager at Advantest. “You still need to measure a current and a voltage. What has changed is the ability to do parallelism and the speed at which we are acquiring data.”

Today’s mixed-signal ATE for PMICs requires high-speed digital pins and a way to test audio functions. The tester also requires voltage/current (VI) resources. To meet the demands for future devices, vendors will need to upgrade the analog/mixed-signal tester.

“You probably will need more digital functionality and digital pins,” said Mark Kahwati, senior product development manager at Teradyne. “(DVS for PMICs) has been talked about for a while. That will require some tight integration within the tester between the digital pins and VI resources. This is so they can dynamically scale the voltage up and down throughout the test.”
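Kahwati's point about tight coupling between digital pins and VI resources can be sketched as a test loop: the digital channel writes a new output setpoint to the PMIC, and the VI resource immediately measures the rail to confirm it regulates within tolerance. The register address, encoding and instrument classes below are hypothetical stand-ins, not the API of any real ATE platform.

```python
# Hedged sketch of a dynamic-voltage-scaling test on a PMIC. The register
# map, voltage encoding and instrument classes are stand-ins for a real
# tester's digital and VI resources.

class FakeDigitalChannel:
    """Stand-in for the tester's digital pins driving the PMIC's control port."""
    def __init__(self, pmic):
        self.pmic = pmic
    def write_register(self, addr: int, value: int):
        if addr == 0x21:                               # hypothetical VOUT register
            self.pmic["vout"] = 0.5 + 0.010 * value    # 10mV per LSB above 0.5V

class FakeVIResource:
    """Stand-in for the tester's voltage/current measurement resource."""
    def __init__(self, pmic):
        self.pmic = pmic
    def measure_voltage(self) -> float:
        return self.pmic["vout"] * 1.001               # small simulated error

def test_dvs(pins, vi, setpoints_v, tol=0.03):
    """Step the PMIC through each setpoint and verify the rail regulates."""
    results = {}
    for target in setpoints_v:
        pins.write_register(0x21, int(round((target - 0.5) / 0.010)))
        measured = vi.measure_voltage()   # measurement must closely follow the write
        results[target] = abs(measured - target) <= tol * target
    return results

pmic = {"vout": 0.0}
print(test_dvs(FakeDigitalChannel(pmic), FakeVIResource(pmic), [0.6, 0.9, 1.2]))
```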

Fine-pitch probe cards
One of the bigger, and sometimes forgotten, challenges is the ability to obtain known good die (KGD). KGD are used in 2D stacked-die packages and are also critical for advanced 2.5D/3D chips.

A bare die is tested using a wafer prober. The prober includes a custom probe card with thousands of probing needles that make contact with the bond pads on a die. The prober detects defective die, which are then eliminated from the flow.

In wafer probe, the overall test costs are sometimes higher. The big challenge for the industry is to develop fine-pitch probe cards that can handle more than 1,000 contacts at pitches of 50µm and below. For example, testing 3D DRAMs will stretch the limits of today’s probe cards. The existing Wide I/O standard calls for 25µm/15µm microbumps at pitches of 40µm to 50µm.

So, after years of being in the shadows, the probe card is entering the limelight. “The perfect probe card is essentially electrically invisible. That’s not easy to do when you are spreading things over larger and larger areas and having more and more signals in a given area,” said Mike Slessor, president of FormFactor.

“The number of contacts per chip is going up. But to drive down the cost of test, people are trying to test more chips at once. This has already played itself out in memory, where we are already at single touchdowns. But we are in the first inning of it for SoCs,” Slessor said.

Going forward, the industry will require more breakthroughs in probe cards. “The number of probe points we need for these 3D devices is getting significant,” said Advantest’s Armstrong. “If you multiply that by the cost per probe, it’s becoming a significant challenge.”
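The arithmetic behind Armstrong's concern is straightforward: multiply the contacts per die by the number of die probed per touchdown and by a per-probe cost. The figures below are placeholder assumptions used only to show how quickly the total climbs; they are not actual probe-card pricing.

```python
# Rough probe-card cost arithmetic: total probes scale with contacts per die
# times the number of die tested in parallel. All figures are placeholder
# assumptions, not real pricing.

contacts_per_die   = 1200    # assumed fine-pitch 2.5D/3D die
die_per_touchdown  = 16      # assumed parallel test sites
cost_per_probe_usd = 3.00    # assumed fully loaded cost per probe

total_probes = contacts_per_die * die_per_touchdown
probe_card_cost = total_probes * cost_per_probe_usd

print(f"{total_probes:,} probes -> ~${probe_card_cost:,.0f} per probe card")
# 19,200 probes -> ~$57,600 per probe card
```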

Standards
Keeping track of standards can be a dull undertaking. Yet standards are an important part of the semiconductor industry. They enable chipmakers to design products around a given spec.

Standards are essential to enable 2.5D/3D stacked die, especially in the testing process. To develop these products, a chipmaker requires KGD. Besides KGD, 3D devices will also require the following test insertion steps—a pre-bond test before stacking; a mid-bond test in the partial stacking phase; a post-bond test after final stacking; and a final test.

“If all chips/die in a 3D package are manufactured by the same company, then there is no need for an industry standard,” said Bassilios Petrakis, product marketing director at Cadence. “Standards become necessary when you attempt to place multiple die from different vendors into a single package and expect to be able to structurally test them in some meaningful way. With true 3D chip stacks, you can only communicate with dies on the top or middle by sending signals up through the die or dies below them. If a die being placed on top has a specific test interface, but the die below it doesn’t meet it, then how well can we test the package? It becomes very inefficient to design a new test interface for each new package design, so it makes sense to define a standard test interface and include it in all die in the package.”

To test 2.5D/3D chips, there are existing IEEE standards, such as 1149.1 (boundary scan test), 1500 (embedded core test) and P1687 (embedded instrument access). For some time, the 3D-Test Working Group has been hammering out a new standard, dubbed P1838. The goal is to have a new and standard test access architecture. P1838 is taking a long time to get through the standards bodies and has yet to gain approval.
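The access problem Petrakis describes can be pictured with a toy model: each die in the stack exposes a test wrapper, and any transaction aimed at an upper die has to travel through every die beneath it. This is only a conceptual sketch of the die-wrapper idea behind efforts like P1838, not the actual architecture defined by the working group.

```python
# Toy model of test access in a 3D stack: to reach die N, test data passes
# through the wrappers of the dies below it. A conceptual sketch of the
# die-level wrapper idea, not the P1838 specification.

class DieWrapper:
    def __init__(self, name: str, below=None):
        self.name = name
        self.below = below   # the die this one is stacked on (None = bottom die)

    def access_path(self):
        """List every die the test signals traverse to reach this die."""
        path = self.below.access_path() if self.below else []
        return path + [self.name]

bottom = DieWrapper("logic_die")
middle = DieWrapper("dram_die_0", below=bottom)
top    = DieWrapper("dram_die_1", below=middle)

print(" -> ".join(top.access_path()))
# logic_die -> dram_die_0 -> dram_die_1
```

If the middle die in such a stack lacks a compatible wrapper, there is simply no path to the die above it, which is Petrakis' argument for putting a standard test interface in every die in the package.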

Standards are just one piece of the overall puzzle. “Standards like P1838 will help. That will help with the productivity parts. But it’s really a matter of methodology and the cost of test that are the biggest drivers or concerns for the implementation of 3D ICs,” Synopsys’ Ruiz said.


