How shifting left can improve the effectiveness of AI in optimizing pattern generation.
Artificial intelligence has become a pervasive technology for solving today’s complex problems, especially those involving exponentially large amounts of data, their analysis, and the corresponding decision-making that would otherwise be limited by human abilities. Complex challenges in semiconductor design, test, and manufacturing are therefore a natural match for AI.
The adoption of advanced node technologies and heterogeneous integration has caused manufacturing test costs to skyrocket, driven primarily by the need to screen for reliable parts across applications including consumer, high performance computing (HPC), and automotive. These costs span production testing at multiple stages, including wafer-sort test, package test (ATE), burn-in test, and system-level test (SLT). As seen in figure 1, the industry spends billions of dollars per year on test: the total cost of test in 2019 was $10.4B, and it is expected to rise to nearly $15B by 2025.
Manufacturing test cost is directly proportional to the test time spent at each stage, and the majority of test time is consumed by structural tests such as scan (ATPG) test patterns. Pattern counts, and the resulting pattern volume, have been growing exponentially to meet the demand for lower DPPM (defective parts per million) levels. Traditional stuck-at and transition-delay fault models are no longer sufficient to achieve the desired DPPM goals; testing now requires additional patterns targeting advanced fault models such as cell-aware, power-aware, and slack-based. Every second counts when it comes to test cost savings in high-volume manufacturing (HVM), so design-for-test/diagnosis/yield teams and product and test engineering teams are constantly looking for new and innovative ideas to minimize test costs.
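To make that cost sensitivity concrete, the short sketch below estimates scan test time from pattern count and chain length. The shift frequency, chain length, capture cycles, and pattern counts are purely illustrative assumptions, not figures from this article.

```python
# Back-of-the-envelope scan test time estimate. All numbers are illustrative
# assumptions, not data from the article.

SHIFT_FREQ_HZ = 100e6     # assumed scan shift clock: 100 MHz
CHAIN_LENGTH = 2_000      # assumed flops in the longest scan chain
CAPTURE_CYCLES = 2        # e.g., launch + capture for transition-delay tests

def scan_test_seconds(patterns: int) -> float:
    """Test time ~ patterns x (shift cycles per pattern + capture cycles)."""
    cycles = patterns * (CHAIN_LENGTH + CAPTURE_CYCLES)
    return cycles / SHIFT_FREQ_HZ

baseline = scan_test_seconds(50_000)   # hypothetical baseline pattern set
optimized = scan_test_seconds(37_500)  # the same set reduced by 25%
print(f"baseline: {baseline:.2f} s, optimized: {optimized:.2f} s per part")
# Under these assumptions, ~0.25 s saved per part becomes roughly 70 hours
# of tester time per million parts in HVM.
```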
Fig. 1: Test cost predictions by VLSI Research. (Source: VLSI PP master (swtest.org))
Achieving the desired test coverage and test quality with fewer test patterns is a constant challenge for DFT teams, and it often takes expert involvement and a long, tedious iterative process of fine-tuning parameters to generate an optimal ATPG pattern set. To further reduce pattern count and volume, the test configuration itself (i.e., the number of scan inputs, scan outputs, and scan chains) must also be optimized. Finding an optimal test configuration customized for each design core can be a laborious and challenging task.
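As a hypothetical illustration of that iterative tuning loop, the sketch below sweeps a few ATPG knobs and keeps the smallest pattern set that still meets coverage. The knob names, the run_atpg stand-in, and all returned numbers are invented for illustration and do not represent any real tool’s interface.

```python
# Hypothetical sketch of the manual ATPG tuning loop: exhaustively sweep a
# few knobs and keep the smallest pattern set that still meets coverage.
import itertools

def run_atpg(abort_limit: int, merge_effort: str, fill: str) -> tuple[float, int]:
    """Stand-in for a real ATPG run; returns simulated (coverage %, patterns)."""
    effort = {"low": 0, "medium": 1, "high": 2}[merge_effort]
    coverage = 98.5 + 0.2 * effort + (0.1 if fill == "adjacent" else 0.0)
    patterns = 50_000 - 4_000 * effort - abort_limit // 100
    return coverage, patterns

TARGET_COVERAGE = 99.0
best = None
for abort_limit, merge_effort, fill in itertools.product(
        (10, 100, 1000), ("low", "medium", "high"), ("random", "adjacent")):
    coverage, patterns = run_atpg(abort_limit, merge_effort, fill)
    if coverage >= TARGET_COVERAGE and (best is None or patterns < best[0]):
        best = (patterns, abort_limit, merge_effort, fill)
print("best (patterns, knobs):", best)
# Just three knobs already mean 3 x 3 x 2 = 18 full ATPG runs; the search
# space grows multiplicatively, which is why expert-guided iteration is slow.
```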
Fig. 2: Synopsys TSO.ai advanced DFT and ATPG with AI/ML.
Synopsys TSO.ai is an AI-driven test space optimization solution. It leverages AI to optimize ATPG pattern generation on the gate-level netlist, resulting in fewer production test patterns and therefore lower test costs; an average pattern reduction of 25% is observed with this capability. Recent enhancements let engineers further accelerate pattern generation by using the Synopsys TestMAX distributed ATPG feature to intelligently distribute and manage test runs across multiple machines and threads. However, because the ATPG feature targets a gate-level netlist, where the DFT test configuration has already been fixed, additional pattern reduction can only be achieved by optimizing the test configuration at the DFT planning stage itself.
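The distribution idea can be sketched generically: independent ATPG jobs, for example one per core or fault-model partition, are farmed out to parallel workers. The following is a conceptual Python sketch of that pattern, not the Synopsys TestMAX interface; generate_patterns and its simulated results are placeholders.

```python
# Conceptual sketch of distributing independent ATPG jobs across workers.
# This illustrates the general idea, not the Synopsys TestMAX API.
from concurrent.futures import ProcessPoolExecutor

def generate_patterns(partition: str) -> tuple[str, int]:
    """Placeholder for one ATPG job; returns a simulated pattern count."""
    return partition, 10_000 + 100 * len(partition)

partitions = ["core0_stuck_at", "core0_transition", "core1_cell_aware"]

if __name__ == "__main__":
    # In a real compute farm these would be separate machines; here we use
    # local worker processes to show the fan-out/collect structure.
    with ProcessPoolExecutor(max_workers=4) as pool:
        for name, count in pool.map(generate_patterns, partitions):
            print(f"{name}: {count} patterns")
```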
Fig. 3: Synopsys TSO.ai for DFT planning and optimization.
Synopsys TSO.ai now offers enhanced capabilities at the early DFT planning stage to optimize the test configuration across parameters including the number of scan chains, scan inputs, and scan outputs. Users can perform quick what-if analysis at the RTL stage against their test coverage and test time targets and produce an optimal test configuration to implement at the gate-level netlist in a single-pass synthesis flow. In a traditional flow, users must wait for a synthesized gate-level netlist to assess coverage with the chosen test configuration; to then optimize that configuration, they must repeat the scan-stitching and synthesis flow to assess its impact on test coverage and test patterns, a very long iterative process. Furthermore, using the same test configuration for all design blocks is rarely optimal, because the best configuration depends on each block’s functional design; repeating these trials for every block is labor-intensive and could push out the design schedule by weeks to months.
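The kind of what-if trade-off explored at DFT planning can be illustrated with simple arithmetic: for a fixed flop count, more internal scan chains mean shorter chains and fewer shift cycles per pattern, at the cost of higher compression-ratio and routing pressure. All numbers below are illustrative assumptions, not benchmark data.

```python
# Illustrative what-if analysis of scan configuration trade-offs.
# All figures are assumptions, not benchmark data.

TOTAL_FLOPS = 400_000   # assumed scan flops in the block
PATTERNS = 40_000       # assumed pattern count at the target coverage

for chains in (500, 1000, 2000, 4000):
    chain_length = -(-TOTAL_FLOPS // chains)   # ceiling division
    shift_cycles = PATTERNS * chain_length     # shift-dominated estimate
    print(f"{chains:5d} chains -> length {chain_length:4d}, "
          f"~{shift_cycles / 1e6:.0f}M test cycles")
# More chains cut test cycles but demand a higher compression ratio for the
# same scan pins; this per-block trade-off is what gets evaluated at RTL.
```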
Fig. 4: TSO.ai benchmark results – Correlation between DFT planner and gate level ATPG.
Figure 4 highlights results from a benchmark design in which a 33% pattern reduction was achieved compared to the baseline test configuration. For the same test configuration and test coverage, the number of ATPG test cycles estimated by Synopsys TSO.ai at the RTL stage correlates very tightly with actual Synopsys TestMAX ATPG runs on the synthesized gate-level netlist.
An average pattern reduction of 20% is observed with the Synopsys TSO.ai DFT planning capabilities, on top of the average 25% reduction from the Synopsys TSO.ai ATPG feature, resulting in a significant reduction in test time and hence test costs. And because the Synopsys TSO.ai DFT capabilities can optimize the test configuration for the target test coverage and pattern count at the RTL stage itself, users avoid multiple long iterations through gate-level synthesis, significantly cutting design cycles from months to days.
Synopsys has been at the forefront of applying AI/ML to semiconductor design. Synopsys.ai is a full-stack, AI-driven EDA suite that optimizes the design, verification, test, and manufacturing of digital and analog devices. Synopsys TSO.ai is a key component of the Synopsys.ai solution, providing optimal test configurations and test patterns to address tighter design schedules and rising test costs. For Synopsys, this is just the beginning of AI in test, and the company plans to continue developing solutions to address next-generation test challenges.