Testing ICs Faster, Sooner, And Better

Why test cells could become the critical information hub of the fab.

The infrastructure around semiconductor testing is changing as companies build systems capable of managing big data, utilizing real-time data streams and analysis to reduce escape rates on complex IC devices.

At the heart of these tooling and operational changes is the need to solve infant mortality issues faster, and to catch latent failures before they become reliability problems in the field.

“People are looking at how can they attain the highest device quality, and we need to start thinking about solving this problem in a different way,” said Eli Roth, engineering director at Teradyne. “One approach certainly gets into data analytics, AI and machine learning, because when you’re at one part-per-billion defect rates, you’re looking way beyond Six Sigma. You’re really looking at tails of the distribution and trying to find infant mortality issues. It’s a question of piecing together data from inspection steps and the electrical characterization, and all the data our products can provide to reach the quality requirements.”

Engineers in the test community face numerous challenges today. “Device scaling and transistor density increases continue, creating some interesting defect modes and process variability that we have to deal with in the test world,” said Matthew Knowles, product management director for hardware analytics and test at Synopsys, in a recent presentation at ITC. “The packaging revolution is happening and we need 3D integration and chiplets. And when we put these systems of systems into larger systems, they get more complex. There’s an enormous amount of data that needs to be understood and aggregated to understand the reliability and performance of these systems. So there’s defect and yield optimization that needs to be done reliably to reach extremely high test coverages for hyperscaler and automotive applications, for example.”

Knowles pointed to recent innovations introduced by test companies (see figure 1). “Over the past couple of years we’ve seen a number of developments happen in the industry,” he said. “These include test fabric and very advanced compression techniques to handle high pattern densities. RTL DFT has been around to help us shift left, enabling time-to-results by making improvements upstream. And power-aware ATPG has been introduced for safe, controlled testing.”

Fig. 1: AI-driven automation, monitoring and analytics help to address escalating test requirements. Source: Synopsys

One way chipmakers are improving quality is by making better use of design and verification data downstream. “There’s an entirely new effort to take all of the perspective that you’ve gathered during the validation and verification period and ensure that it’s well tied to how you measure the device as it goes into production,” said Robert Manion, vice president and general manager of the Semiconductor and Electronics Business Unit at NI Test & Measurement, an Emerson company. “In production, you’re significantly reducing the overall number of tests that you might be running on the part, but you still need to validate that it’s going to maintain the performance you observed through the validation and characterization cycles.”

Nearly every equipment supplier in the semiconductor industry is employing ML or AI to help perform new analyses of the data or to conduct operations more quickly and cost effectively. Nowhere in manufacturing is the triangle of yield, quality, and cost more critical than in semiconductor test.

Data analytics and AI clearly are being embraced by the industry as enablers. However, much data processing must happen before the first program is run.

“We talk about a lot of these challenges — data security, data access, who owns the data, data IP, secure transfer, and data corruption,” said Nitza Basoco, technology and market strategist at Teradyne, in an ITC presentation. “Identifying the right data and its context (including metadata) is critical for good outcomes with AI, but so are the models. Imperfections in the model can lead to unexpected results. You also need to ensure the data is properly structured for the system to interpret, and that it’s delivered securely without any opportunity for modification by an unauthorized system. What I see most often overlooked is the importance of a clear objective for the AI. It is key that the system is set up to optimize for the criteria that matter to you.”

It’s also important for companies to collaborate in order to share data, which often is kept in silos that are too narrow and outdated. “Sometimes you realize you need information you don’t have, and that’s an opportunity to collaborate,” added Basoco. “Companies are collaborating at every level to aggregate all the data and apply machine learning models to gain insights.”

Insights involving performance typically are gained through data collected from sensors integrated into devices and equipment. “The next evolution that’s really going on involves gaining device insights, and that’s where the customers are finding competitive advantage based on what they infer from their own devices,” said Teradyne’s Roth. “The nirvana step is when the customer can do feedback and feed-forward for real-time responses in production. That’s why we’re trying to establish an open environment, where the data is available and secure, to enable customers to develop and provide that competitive advantage.”

An open architecture can utilize analytics from third parties or other applications best suited to the devices being tested (see figure 2).

Fig. 2: Open analytics solution can provide local test optimization and rapid data analysis in a secure, bi-directional data stream. Source: Teradyne

A big part of today’s test environment involves prompt data access. “Getting real time data is probably one of the most important pieces that we all ignore but we know we need — how fast you get that data,” said David Vergara, senior director of sales at Advantest. “What’s important when you test is, when a device fails or doesn’t fail, you can figure out the cause. You have the data behind it to go and solve the problem.”

Advantest Cloud Solutions is a real-time data infrastructure that typically combines on-die sensor readings or monitors, an edge computer to execute complex analytics workloads, and a unified data server. The edge computer sends inference outcomes back to a unified server on the test floor, which securely downloads analytics during test sessions. The data processing needs span from on-chip monitors to wafer probe, package test, and system-level test (see figure 3).

Fig. 3: Testing needs now include greater test content from wafer probe through SLT. Source: Advantest
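For illustration, the sketch below shows how such an edge-analytics loop might be wired together. The server URL, the sensor-stream interface, and the model object are hypothetical placeholders for this sketch, not Advantest APIs.

```python
# Hypothetical sketch of an edge-analytics loop between a tester, an edge compute
# node, and a test-floor data server. Names and interfaces are illustrative only.
import json
import queue
import urllib.request

RESULT_SERVER = "https://test-floor-server.example/api/inference"  # placeholder URL

def run_edge_loop(model, sensor_stream: queue.Queue, timeout_s: float = 1.0):
    """Consume per-device sensor readings, run inference locally, and
    push each outcome back to the unified data server."""
    while True:
        try:
            reading = sensor_stream.get(timeout=timeout_s)  # dict of on-die monitor values
        except queue.Empty:
            break
        # Local inference keeps latency low enough to influence the current test session.
        outcome = {
            "device_id": reading["device_id"],
            "risk_score": float(model.predict([reading["features"]])[0]),
        }
        req = urllib.request.Request(
            RESULT_SERVER,
            data=json.dumps(outcome).encode(),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        urllib.request.urlopen(req)  # in practice this channel would be authenticated and encrypted
```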

At the same time, companies are streamlining automatic test pattern generation (ATPG) and improving design for test (DFT) methodologies. With each new device generation, testing requirements escalate.

Device needs change
Testing requirements change depending on the expected performance and expected lifespan of the device or system under test.

“If we look at the RF and wireless space, for example, our customers are trying to prepare for new standards from bodies like 3GPP, which adds new measurement requirements,” said NI’s Manion. “As we look at things like FR1, FR2, FR3, companies are regularly having to adapt their test approach to ensure they’re validating their product against parts of those standards. This means new frequency ranges, new bandwidth, and new waveforms. It changes the number of tests they have to run, the types of interference they might be looking for, or corner cases that they may have to address. This is what’s ultimately driving a lot of test development activities early in the cycle — to try to prepare for proving those things out on new designs.”

Manion points to dynamic changes in the MEMS space, as well. “We’re seeing an explosion of new and increasingly digital and smart sensor devices, which require high-performance analog measurements that were not required previously,” he said. “This is driving our customers to increase the amount of analog testing that they’re doing with those devices.”

ATPG
The function of automatic test pattern generation (ATPG) is to enable the test equipment to differentiate between correct circuit behavior and faulty circuit behavior caused by defects. Many companies are using AI/ML algorithms to settle on fewer test patterns, and to generate them faster.

“Automatic test pattern generation is both an art and a science,” said Synopsys’ Knowles. “You have an ATPG engine that’s going to be doing the simulations, and an engineer needs to come up with an optimal set of parameters that gives the coverage they want while minimizing that pattern count and test cost. But it doesn’t always work perfectly in the first round, and so the engineer has to iterate. Each one of these iterations can be very, very long — days even — and the quality of results is probably not optimal, even after all that time. And you can’t even predict how long it will take, which is a problem. That’s where test-based optimization comes in. By taking user targets and settings, the AI can explore the parameter space automatically, learning as it goes along. It tells the user up-front how long the analysis will take. We’ve run many types of designs with many different patterns, and we’re seeing the test optimization provide up to an 80% reduction in pattern count and a 20% average decrease in test costs.”
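To make the idea concrete, the sketch below shows a simplified, budgeted search over ATPG settings. A production AI engine learns from each trial rather than sampling blindly, and the run_atpg callable and the parameter names are placeholders for this sketch, not Synopsys interfaces.

```python
# Illustrative sketch of automated ATPG parameter exploration: evaluate candidate
# parameter sets against a coverage target and keep the cheapest one that meets it.
import itertools
import random

def explore(run_atpg, param_grid: dict, coverage_target: float, budget: int = 50, seed: int = 0):
    """run_atpg(params) -> (coverage, pattern_count); a stand-in for a real ATPG engine call."""
    rng = random.Random(seed)
    candidates = [dict(zip(param_grid, vals)) for vals in itertools.product(*param_grid.values())]
    rng.shuffle(candidates)
    best = None
    for params in candidates[:budget]:           # fixed trial budget keeps runtime predictable
        coverage, patterns = run_atpg(params)
        if coverage >= coverage_target and (best is None or patterns < best[1]):
            best = (params, patterns)
    return best   # parameter set that met coverage with the fewest patterns, or None

# Hypothetical usage with made-up knobs:
# explore(run_atpg, {"abort_limit": [10, 50, 100], "merge_effort": ["low", "med", "high"]}, 0.99)
```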

Predictability is a big win for increasingly complex scheduling. “It ensures the efficiency of the engineering tasks with expert-level productivity without the expert,” Knowles said. “We’ve taken this concept and infrastructure and pushed it upstream into the DFT configuration. We’ve applied that AI engine by including a synthesis step, and that DFT configuration and initial results are showing very promising outcomes.”

Synopsys recently introduced its AI-driven workflow optimization and data analytics platform, which employs AI across the whole EDA stack into manufacturing, test, and out to in-field monitoring of chips. “This data continuum allows us to connect all these different phases, from design to ramp to production to in-field, leveraging all these data sources,” he said. “And once you have unified data, you can build semantic models that also can be leveraged across these different domains, and that’s when we get the true power of the AI.”

Fig. 4: Increasing the efficiency of ATPG by automatically minimizing the pattern count for the targeted test coverage within a scheduled period of time. Source: Synopsys

Basoco gave the example of using assistive intelligence for test co-generation. “Rather than building every test program from scratch, a more powerful method is to combine a test plan with a set of library codes,” she said. “This can be done using an AI program generator, but it still needs engineering insights.”
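A minimal sketch of that pattern is shown below: a test plan expressed as data is bound to a library of reusable test routines. The routine names, parameters, and the dut object are hypothetical, and an engineer would still review the generated program.

```python
# Minimal sketch of assembling a test program from a test plan plus a library of
# reusable test routines. An AI program generator would propose the plan entries;
# engineering review of limits and ordering is still needed.
from typing import Callable

TEST_LIBRARY: dict[str, Callable[..., bool]] = {
    "continuity": lambda dut, **kw: dut.measure("continuity") < kw.get("max_ohms", 5.0),
    "idd_static": lambda dut, **kw: dut.measure("idd") < kw["limit_ma"],
}

TEST_PLAN = [  # could be generated from a spec, then reviewed by an engineer
    {"name": "continuity", "params": {"max_ohms": 2.0}},
    {"name": "idd_static", "params": {"limit_ma": 15.0}},
]

def build_program(plan):
    """Bind each plan entry to its library routine; unknown test names raise KeyError."""
    return [(step["name"], TEST_LIBRARY[step["name"]], step["params"]) for step in plan]

def run_program(program, dut):
    """Execute each bound test against the device under test and collect pass/fail results."""
    return {name: fn(dut, **params) for name, fn, params in program}
```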

Adaptive test and real-time computations
Adaptive test targets test content where it is needed most. In other words, each device receives the right test content to validate its performance, using the minimum number of tests to get there. Adaptive test uses data generated by the tester, along with relevant data from previous measurements, to predict testing needs, either adding tests for risky parts to increase quality (reduce DPPM) or eliminating tests that capture no failures.

“In adaptive test, you remove tests from or add them to your test flow to improve quality or improve throughput,” said Roth. “The automotive space trying to move to a one part-per-billion quality rate likely means more tests, because you’re trying to flush out any issues prior to going to market. But more tests add cost, so that’s where we think an analytic model using AI or ML can provide an advantage.”

Adaptive test is all about testing smarter. It begins at wafer probe and ends at system-level test. As IC products become more complex, the analytics that govern adaptive test strategies move from being relatively simple to utilizing more complex statistical models and machine learning. Design and testing companies are building real-time data infrastructures to enable adaptive test among other capabilities.

For example, at wafer sort an engineer might examine a stacked wafer map of 25 wafers to identify clusters of failures in one zone of the map. An algorithm rates failure severity within the cluster and among the surrounding chips. Additional tests are then applied at final test to the parts deemed risky, with minimal impact on test time. [1]
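The sketch below illustrates the general idea, not the implementation from [1]: stack fail maps across a lot, locate high-failure zones, and flag passing dies in or adjacent to those zones for extra screening at final test. The array shapes and threshold are assumptions for illustration.

```python
# Illustrative sketch: stack fail counts over a lot of wafers, find high-failure
# zones, and flag passing dies in or next to those zones for extra final-test content.
import numpy as np

def stacked_fail_map(fail_maps: np.ndarray) -> np.ndarray:
    """fail_maps: (n_wafers, rows, cols) boolean array of failing dies."""
    return fail_maps.sum(axis=0)

def risky_dies(stacked: np.ndarray, current_pass: np.ndarray, threshold: int, radius: int = 1):
    """Mark dies that passed on the current wafer but sit in or next to a zone
    whose stacked fail count meets or exceeds `threshold`."""
    hot = stacked >= threshold
    # Dilate hot zones so immediate neighbors are also flagged.
    padded = np.pad(hot, radius)
    dilated = np.zeros_like(hot)
    for dr in range(-radius, radius + 1):
        for dc in range(-radius, radius + 1):
            dilated |= padded[radius + dr: radius + dr + hot.shape[0],
                              radius + dc: radius + dc + hot.shape[1]]
    return current_pass & dilated   # these parts receive extra content at final test

# Example: flags = risky_dies(stacked_fail_map(lot_maps), wafer_pass_map, threshold=5)
```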

In a second example, adaptive test permits the adjustment of test limits and outperforms dynamic part average testing (DPAT) methods. Sensors embedded in a chip can monitor operational metrics like power and performance. Here, a sensor-aware method identified a correlation between sensor data and the results of a specific VDD consumption test. The sensor-aware bivariate model enabled more accurate limits on the speed/power consumption test, which resulted in improved quality through lower DPPM.
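As an illustration of the concept, the sketch below contrasts a univariate DPAT-style limit with a sensor-aware limit built from a simple linear regression. The regression model and the k-sigma thresholds are assumptions for illustration, not the specific method described above.

```python
# Illustrative sketch of a sensor-aware bivariate limit versus a univariate DPAT limit.
# Regressing the VDD-consumption measurement on an on-die sensor reading lets the
# outlier limit follow the expected value for each individual part.
import numpy as np

def dpat_outliers(values: np.ndarray, k: float = 6.0) -> np.ndarray:
    """Univariate dynamic part average testing: flag parts beyond mean +/- k*sigma."""
    mu, sigma = values.mean(), values.std()
    return np.abs(values - mu) > k * sigma

def bivariate_outliers(sensor: np.ndarray, values: np.ndarray, k: float = 6.0) -> np.ndarray:
    """Sensor-aware limit: fit value ~ a*sensor + b, then flag large residuals."""
    a, b = np.polyfit(sensor, values, deg=1)
    residuals = values - (a * sensor + b)
    return np.abs(residuals) > k * residuals.std()

# A part whose VDD consumption is normal for the population, but abnormal given its
# own sensor reading, is caught by the bivariate check and missed by DPAT.
```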

Data security
Data security is essential, and the industry is adopting zero-trust methodologies for handling data between testers, servers, and other systems. “A zero-trust model of security provides some advantages and changes the way you have to think about architecting and deploying your services and equipment. We’re trying to protect IP, and we’re doing that by authenticating and encrypting every node connection along the way,” said Brian Buras, production analytics solution architect at Advantest America. “And you need to share data from one facility to another, and from one insertion to another insertion in the process. Some of our customers have spent a lot of resources and time developing complex analytics, and they want to protect their IP, so they want to know when they deploy into our systems that it is secure.”
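As a minimal illustration of the mutual-authentication idea behind such zero-trust connections, the sketch below opens a TLS channel in which both endpoints present certificates. The certificate paths and hostname are placeholders, and a production deployment would involve far more than a single socket.

```python
# Minimal sketch of a mutually authenticated, encrypted node connection.
# File paths and the hostname are placeholders.
import socket
import ssl

def open_secure_channel(host: str = "analytics-server.example", port: int = 8443):
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.load_verify_locations("ca.pem")                         # trust only the fab's own CA
    ctx.load_cert_chain("tester-node.pem", "tester-node.key")   # this node authenticates itself too
    ctx.minimum_version = ssl.TLSVersion.TLSv1_3
    raw = socket.create_connection((host, port))
    return ctx.wrap_socket(raw, server_hostname=host)           # verified, encrypted channel
```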

Conclusion
Tester companies are setting the stage for real-time data access, data analytics, and closed-loop feedback to testers, which enable better ATPG, DFT, and adaptive test. But the larger task of securely managing data between tools, and among equipment suppliers, customers, and third parties, is still being worked out.

The ties between design, test and manufacturing are becoming tighter out of necessity. “First silicon bring-up is a very, very busy time for companies in this space,” said Manion. “After first silicon comes back, their test solutions have to be ready to validate all of the original design requirements. And increasingly, our customers are spread out in sites around the world that are all trying to work in coordination. We’re helping those teams gather data in a similar way and with similar algorithms with similar measurement science, so that ultimately they can compare results across the various sites.”

Reference
G. Cortez and K. Butler, “Deploying Cutting-Edge Adaptive Test Analytics Apps Based on a Closed-Loop Real-Time Edge Analytics and Control Process Flow into the Test Cell,” IEEE International Test Conference, 2023, P5.


