As chips get ever bigger and more complex, the electronic design automation (EDA) industry must innovate constantly to keep up. Engineers expect every new generation of silicon to be modeled, simulated, laid out, and checked in about the same time with the same effort, despite the growth in die size and density. One area of particular focus is manufacturing test. Any effort expended to reduce the number of test patterns or runtime on the tester pays dividends many times over, literally every time that a new wafer or packaged device is tested.
Chip developers realized long ago that hand-crafting manufacturing test patterns was simply not feasible for deep submicron designs. They adopted a design for test (DFT) philosophy, in which structures were designed into the chip to facilitate test. With such structures in place, automatic test pattern generation (ATPG) tools could produce denser test patterns faster, and with better fault coverage, than manual methods could ever achieve. This flow became the standard for virtually all chip designs, and it worked very well.
However, in recent years, ATPG has been at risk of falling short. Some of the challenges are due to the very nature of contemporary chips. Transistor counts have grown much faster than pin counts, making it harder both to provide input patterns with good coverage and to extract results with enough debug information to diagnose silicon failures. Using embedded processors for built-in self-test (BIST) can help on the pattern side, but ultimately, the results still must be sent off-chip for analysis. Serialized result extraction is common, but this increases test time.
Power consumption is also a big concern. Many of today’s large chips will suffer thermal breakdown if every function is running at full speed simultaneously. During mission mode, there are hardware and software mechanisms to prevent this from happening. On the tester, the patterns must be produced by a power-aware ATPG tool so that consumption limits are not exceeded. Finally, defect densities are higher for fine geometries and there is more process variation across large chips, putting more pressure on the quality of the test patterns to detect failures.
EDA vendors have been quick to respond to these challenges, with constant evolution in their DFT and ATPG solutions. One approach that has proven valuable is fine-grained multithreading across multiple cores. Many types of EDA tools have been modified to exploit parallelism whenever possible. This has proven very effective for ATPG, reducing runtime while also overcoming memory bottlenecks. Performance varies by design, but measurements have shown that running on an eight-core CPU will generally yield a speedup of at least 6X.
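To make the fault-parallel idea concrete, here is a toy Python sketch in which a hypothetical fault list is split into disjoint chunks and simulated on worker threads. The fault list, the even-number "detection rule," and the `simulate_chunk` helper are all invented for illustration; a real ATPG engine does vastly more work per fault and manages shared data structures carefully.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for one fault-simulation task: returns the subset
# of faults "detected" in its chunk. The even-number rule is a toy model.
def simulate_chunk(fault_chunk):
    return {f for f in fault_chunk if f % 2 == 0}

faults = list(range(100))     # hypothetical fault list
n_workers = 8                 # e.g., one thread per core
chunks = [faults[i::n_workers] for i in range(n_workers)]

# Fine-grained parallelism: each thread simulates a disjoint fault chunk,
# and the results are merged into one detected-fault set.
with ThreadPoolExecutor(max_workers=n_workers) as pool:
    detected = set().union(*pool.map(simulate_chunk, chunks))

print(len(detected))  # 50 faults detected under the toy rule
```

Because the chunks are disjoint, the merge step is a simple set union; in a production tool, the shared fault dictionary is the main synchronization point and a common source of memory-bandwidth pressure.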
Another approach widely used today is hierarchical DFT, in which the chip design is broken into smaller subchips for DFT and ATPG runs, each of which may take advantage of multithreading. When the subchips are run in parallel on multiple processors, runtime is reduced significantly compared to running on the entire flattened chip design. In addition, separate DFT structures in the subchips may enable more parallel test within the chip during manufacturing, reducing test time and cost. Since the wafer or chip test has access only to the chip pins, the subchip-level ATPG patterns must be integrated into the full-chip test program.
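The final integration step can be sketched as follows. In this toy model (all subchip names, patterns, and the `pin_map` are hypothetical), each subchip's patterns are retargeted to the top-level pins feeding its scan chains, and shorter pattern sets are padded so all subchips shift in parallel:

```python
# Hypothetical per-subchip ATPG results: one scan vector per tester cycle.
subchip_patterns = {
    "cpu": ["1010", "0110"],
    "gpu": ["0011"],
    "noc": ["1100", "1111", "0001"],
}

# Hypothetical mapping from each subchip's scan input to a top-level pin.
pin_map = {"cpu": "scan_in0", "gpu": "scan_in1", "noc": "scan_in2"}

# Integration: pad shorter pattern sets so every subchip can be tested in
# parallel through its own chain, then interleave one vector per cycle.
depth = max(len(p) for p in subchip_patterns.values())
program = []
for cycle in range(depth):
    vector = {pin_map[s]: (pats[cycle] if cycle < len(pats) else "0000")
              for s, pats in subchip_patterns.items()}
    program.append(vector)

print(len(program))  # tester cycles are set by the deepest subchip
```

Note how total tester time is governed by the subchip with the most patterns, which is one reason DFT planning tries to balance pattern counts across partitions.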
The latest innovation is distributed ATPG, in which the overall problem is divided among a group of processors on a network. One machine runs the manager process, which spawns worker processes and establishes inter-process communication. Each worker process targets different faults, updates global fault status changes to all other processes, and transfers test patterns to the manager process after fault simulation. Runtime is much faster than on a full chip, although the number of test patterns may grow because of less visibility within each worker process.
Any manufacturing test EDA solution must address the challenges discussed above by providing robust, proven implementations of power-aware ATPG, multithreading, hierarchical DFT, and distributed ATPG. The Synopsys TestMAX family of products is such a solution, offering innovative, next-level test and diagnosis capabilities for today's largest and most complex chips. Capabilities include early testability analysis and planning, test pattern compression, logic BIST, and memory self-test and repair.
Recent studies have shown impressive results for its distributed ATPG mode. For equivalent fault coverage, the average speedup is 2.6X for 7 workers and 4X for 15 workers. This brings a week-long ATPG run down to less than two days, a significant reduction in project schedule. Distributed mode often results in 1-2% higher fault coverage than non-distributed mode. As noted earlier, some increase in the number of patterns has been observed, typically 1.2X for 7 workers and 1.4X for 15 workers. Memory usage for manager and workers is about the same as for a single machine in non-distributed mode.
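The schedule claim follows directly from the reported numbers, as this quick back-of-the-envelope check shows (the week-long baseline is taken from the text; everything else is arithmetic):

```python
week_hours = 7 * 24              # 168-hour baseline ATPG run
speedup_15_workers = 4.0         # reported average speedup at 15 workers

distributed_hours = week_hours / speedup_15_workers
print(distributed_hours, distributed_hours / 24)  # 42.0 hours = 1.75 days
```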
The table below shows impressive results from a leading semiconductor company running multiple designs using a manager and only three workers, with each of the four machines running 16 threads.
Manufacturing test certainly isn't getting any easier, and Synopsys continues to drive innovative solutions. Areas of active work include running DFT/ATPG on graphics processing units (GPUs) and applying AI technologies. Watch this space for more information going forward.