Reducing Chip Test Costs With AI-Based Pattern Optimization

Artificial intelligence (AI) is an innovative way to meet the requirements for a modern test pattern generation flow.

The old adage “time is money” is highly applicable to the production testing of semiconductor devices. Every second that a wafer or chip is under test means that the next part cannot yet be tested. The slower the test throughput, the more automatic test equipment (ATE) is needed to meet production throughput demands. This is a huge issue for chip producers, since high pin counts, blazingly fast interfaces, and deep pattern memory have all caused the price of ATE hardware to increase dramatically with recent generations of devices at advanced process nodes.

At the same time, the ever-increasing functionality of today’s chips means that there is more logic to be tested, requiring more patterns, more tester memory, and as a result, more cost. More patterns also require longer tester runtimes, increasing the number of testers needed to maintain throughput. Automatic test pattern generation (ATPG) is universally used to generate the programs that run on production testers. The complexity of modern chips puts pressure on pattern generation as well, often requiring long ATPG runtimes that can delay the start of production test.

Especially for high-volume products where millions of chips will pass through the test floor, every second of test time that can be saved pays enormous dividends. However, any reduction in patterns must maintain high test coverage, and with it the quality of parts shipped to customers. An effective and efficient ATPG solution therefore places stringent requirements on both the generated test programs and the generation process itself.

The traditional pattern generation flow is an iterative manual loop. A user starts by setting up typical ATPG tool parameters: providing fault models, defining design constraints, specifying the ATPG metric goals for the generated tests, and so on. They then run pattern generation with their best estimates of the tool settings required to meet the target quality of results (QoR). It is highly unlikely that the first attempt will achieve the ATPG goals; it usually takes considerable expertise and many tries to fine-tune the settings iteratively and converge on acceptable results. Multiple ATPG tool parameters are interdependent in their impact on QoR, which makes the process highly complex to manage manually, and even test experts can take significantly longer than expected to reach optimal results. Even when the desired results are achieved, repeatability from design to design is not guaranteed, which makes turnaround time and the test pattern sign-off schedule unpredictable. Test patterns may not be ready by the time silicon comes back from the fab, putting ATPG on the critical path and the design schedule at risk.
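
To make the pain of this loop concrete, the sketch below caricatures the manual flow in Python. The run_atpg() stub and its abort_limit parameter are hypothetical stand-ins for real tool settings and reports, not the API of any actual ATPG tool.

```python
# A caricature of the manual ATPG tuning loop. run_atpg() is a hypothetical
# stub: real runs take hours and report far more than two numbers.

COVERAGE_GOAL = 0.99

def run_atpg(abort_limit):
    # Stub behavior: coverage rises with effort but saturates, while the
    # pattern count keeps growing -- the classic tension the engineer manages.
    coverage = min(0.995, 0.90 + 0.01 * abort_limit)
    return {"coverage": coverage, "patterns": 5000 + 300 * abort_limit}

# Start from a best-guess setting, then iterate by hand.
abort_limit = 1
result = run_atpg(abort_limit)
while result["coverage"] < COVERAGE_GOAL:
    # Each retry is a judgment call: raising abort_limit may lift coverage
    # but inflates pattern count, so every change forces a fresh evaluation.
    abort_limit *= 2
    result = run_atpg(abort_limit)

print(abort_limit, result)  # e.g. 16 {'coverage': 0.995, 'patterns': 9800}
```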

Introducing artificial intelligence is an innovative way to meet these requirements. An AI-based ATPG solution can use parallel runs to learn about the design characteristics, the ATPG engine’s behavior, the user’s constraints and targets, and the available settings. Correlating results across runs to learn what works and what doesn’t, then refining the settings accordingly, is exactly the kind of task at which AI excels. Convergence to the test coverage goals happens within the tool, without manual iterations or manipulation of settings, delivering first-time-right results.
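
Synopsys has not published how its engine works internally, so the sketch below only illustrates the general idea: launch a batch of trial runs in parallel, correlate settings with outcomes, and recenter the search for the next batch. It reuses the same hypothetical run_atpg() stub as above, and the crude random search stands in for a far more sophisticated learning engine.

```python
# Illustrative only: parallel trial runs plus learn-and-recenter search.
# run_atpg() is the same hypothetical stub; nothing here is TSO.ai's algorithm.

import random
from concurrent.futures import ThreadPoolExecutor

COVERAGE_GOAL = 0.99

def run_atpg(abort_limit):
    # Stub: coverage saturates with effort while pattern count keeps growing.
    coverage = min(0.995, 0.90 + 0.01 * abort_limit)
    return {"coverage": coverage, "patterns": 5000 + 300 * abort_limit}

def score(result):
    # Runs that miss the coverage goal are unusable; among the rest,
    # fewer patterns is better.
    return result["patterns"] if result["coverage"] >= COVERAGE_GOAL else float("inf")

center, best = 20, None
for _ in range(3):
    # Launch a batch of trials spread around the current best guess ...
    candidates = [max(1, int(random.gauss(center, center / 3))) for _ in range(8)]
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(run_atpg, candidates))
    # ... then correlate settings with outcomes and recenter the search.
    center, best = min(zip(candidates, results), key=lambda cr: score(cr[1]))

print(center, best)  # best setting found and its coverage/pattern trade-off
```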

The recommended flow is to use standard ATPG for the initial runs to get the design DRC clean, followed by fast distributed ATPG runs to analyze, optimize, and validate the target test coverage as netlist drops and/or ATPG changes arrive. Once the desired test coverage is achieved, AI can be used to minimize the test patterns before chips are available for production test. This flow delivers fast turnaround and the highest-quality, lowest-cost tester-ready patterns while protecting the design schedule.
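
In pipeline form, that staging might look like the sketch below. The helper names and file names are hypothetical placeholders, not actual tool commands; the point is the ordering: DRC-clean first, coverage validation on each netlist drop, AI pattern minimization last.

```python
# Hypothetical orchestration of the recommended flow; each function stands in
# for a real tool invocation that this article does not specify.

def drc_clean_run(netlist):
    """Stage 1: standard ATPG run used to get the design DRC clean."""
    print(f"baseline ATPG on {netlist}: fix scan/DRC violations first")

def validate_coverage(netlist, goal):
    """Stage 2: fast distributed runs to re-check coverage after each drop."""
    print(f"distributed ATPG on {netlist}: target coverage {goal:.1%}")

def ai_minimize_patterns(netlist):
    """Stage 3: AI-driven runs to shrink the pattern set before production."""
    print(f"AI pattern optimization on {netlist}")

netlist_drops = ["drop1.v", "drop2.v", "final_tapeout.v"]

drc_clean_run(netlist_drops[0])
for drop in netlist_drops:
    validate_coverage(drop, goal=0.99)   # iterate as netlist/ATPG changes land
ai_minimize_patterns(netlist_drops[-1]) # minimize once coverage goals are met
```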

Synopsys TSO.ai (Test Space Optimization) is an AI-driven ATPG solution that learns and tunes the settings, consistently producing the smallest number of test patterns while eliminating unnecessary iterations and accelerating time-to-results for any design. In several cases it has also achieved higher test coverage for a fixed pattern count when tester memory is limited. The technology can be used to minimize patterns for the final tape-out netlist, or for a design already in production, to cut test costs quickly; it can also learn throughout the design process across netlist drops, reducing the turnaround time of the final pattern reduction.

This approach has been shown to deliver consistent test cost reductions across all application segments, with pattern counts typically cut by 20% to 25%, and by more than 50% in some cases. This accelerates production test, saving time and money while reducing the number of testers required for a given production volume.
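
A back-of-the-envelope calculation shows why those percentages matter at volume. Every number below (unit volume, per-unit test time, tester utilization) is an illustrative assumption rather than a figure from this article, and the math assumes scan patterns dominate the per-unit test time.

```python
# Illustrative economics of a 25% pattern cut. All inputs are assumptions
# chosen for the sake of the arithmetic, not data from the article.

volume = 50_000_000      # units tested per year (high-volume product)
test_time_s = 5.0        # seconds of scan test per unit, before reduction
reduction = 0.25         # 25% pattern cut, assumed ~proportional to test time

hours_before = volume * test_time_s / 3600
hours_saved = hours_before * reduction
print(f"tester-hours saved per year: {hours_saved:,.0f}")   # ~17,361

# If one ATE delivers ~8,000 productive hours a year, the savings translate
# directly into testers that do not have to be bought or leased.
ate_hours_per_year = 8_000
print(f"testers freed: {hours_saved / ate_hours_per_year:.1f}")  # ~2.2
```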


