Highly Efficient Scan Diagnosis With Dynamic Partitioning

A clever technique increases volume scan diagnosis throughput by 10X.


Charged with improving yield, product engineers need to find the location of defects in manufactured ICs quickly and efficiently. Typically, they use volume scan diagnosis to generate large amounts of data from failing test cycles, which is then analyzed to reveal the location of defects. Scan failure data provides the basis for many decisions in failure analysis and yield improvement: it helps select failed devices for physical failure analysis, finds hidden systematic failures, and helps direct design and manufacturing decisions that will ultimately improve yield.

However, design sizes have been growing, a trend that is unlikely to stop. In addition, newer processes and transistor structures present new defect modes inside the cell that require cell-aware diagnosis. Diagnosing these cell-internal defect modes requires more analysis and processing by the diagnosis tool to pinpoint the location of the defect inside the cell. Performing volume scan diagnosis on today’s large, advanced-node designs therefore puts outsized demands on turnaround time and compute resources.

Diagnosis is performed on input failure log files from the ATE (automatic test equipment), along with a design netlist and scan patterns. The memory required to perform diagnosis is proportional to the size of the design netlist. The fail logs and supporting design files are analyzed (diagnosed) in parallel by distributing them across a compute grid, and the efficiency of that distribution depends on the grid’s processing resources. For example, imagine you had to run diagnosis on a design that requires 100GB of RAM and takes about an hour per diagnosis result. If you have 11 processors available on the grid, but only one has enough RAM to handle that job, then all the diagnosis runs happen sequentially on that one machine while the other 10 sit idle. In terms of diagnosis throughput, which is a function of the time needed to create a diagnosis report and the amount of memory needed for the analysis, this situation is far from ideal. The higher the volume of diagnosis you can complete, the higher the value of diagnosis (Fig. 1).
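To make the arithmetic concrete, here is a small back-of-the-envelope model in Python (the machine sizes, job counts, and function names are hypothetical, not part of any Tessent tool):

# Rough model of diagnosis throughput on a grid with a per-job memory requirement.
def wall_clock_hours(num_fail_logs, hours_per_job, mem_per_job_gb, machine_ram_gb):
    """Hours to finish all fail logs when only machines with enough RAM can run them."""
    usable = [ram for ram in machine_ram_gb if ram >= mem_per_job_gb]
    if not usable:
        raise ValueError("no machine on the grid can fit this job")
    jobs_per_machine = -(-num_fail_logs // len(usable))  # ceiling division
    return jobs_per_machine * hours_per_job

grid = [128] + [32] * 10  # 11 machines; only one can hold a 100GB job

# Full-netlist diagnosis: every job needs 100GB, so one machine runs them all.
print(wall_clock_hours(100, 1.0, 100, grid))  # 100 hours

# If every machine could take the job, the same work would spread across the grid.
print(wall_clock_hours(100, 1.0, 20, grid))   # 10 hours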


Fig. 1. Higher diagnosis volume drives higher value.

How could you improve the throughput of diagnosis? By reducing the amount of memory needed, by reducing the diagnosis time, or both. A new technique called dynamic partitioning does both by reducing the input file sizes.

It works by first analyzing a fail log and then creating a partition that contains only the parts of the design relevant to that fail log. This partition serves as a new, smaller netlist that is used to perform the diagnosis. The approach works because a defect sits in a very specific part of the design, so typically only a small portion of the design is actually needed to diagnose a fail log.

For example, say a defect in the design causes the observed values in a couple of scan cells to change. These changes translate to failing cycles on the tester. In other scan cells the measured values match what was expected, which translates to passing cycles on the tester. Together, the passing and failing cycles determine which portions of the design are relevant to the defect captured by the fail log.
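Conceptually, a partitioner can start from the scan cells that captured unexpected values and keep only the logic that can drive them. The short Python sketch below illustrates that idea on a toy netlist; it is not Tessent’s actual algorithm, and the data structures are hypothetical.

from collections import deque

def fanin_cone(netlist, failing_cells):
    """Return the set of gates that can influence the failing scan cells.
    `netlist` maps each gate name to the list of gate names driving it."""
    relevant = set()
    frontier = deque(failing_cells)
    while frontier:
        gate = frontier.popleft()
        if gate in relevant:
            continue
        relevant.add(gate)
        frontier.extend(netlist.get(gate, []))
    return relevant

# Toy netlist: scan cell sc2 captured a failing value, so only its fan-in is kept.
netlist = {"sc1": ["g1"], "sc2": ["g2"], "g1": ["in1"], "g2": ["g3"], "g3": ["in2"]}
print(fanin_cone(netlist, ["sc2"]))  # e.g. {'sc2', 'g2', 'g3', 'in2'}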

The analysis and partitioning of fail logs, and the distribution of diagnosis processes, is performed by the Tessent Diagnosis Server, which runs on your compute grid. The server includes partitioners that create the smaller input files from the fail logs, analyzers that perform the actual diagnosis on the partitions, and a monitor that coordinates the partitioners and analyzers and intelligently distributes the diagnosis processes (Fig. 2).
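The coordination pattern looks roughly like the sketch below, written with Python’s standard library; the function names and placeholder bodies are hypothetical, and the real server handles this distribution (and the grid submission) for you.

from concurrent.futures import ProcessPoolExecutor

def partition(fail_log):
    # Partitioner: build a small netlist containing only the logic
    # relevant to this fail log (placeholder implementation).
    return {"fail_log": fail_log, "netlist": f"partition_for_{fail_log}"}

def diagnose(part):
    # Analyzer: run diagnosis on a partition and return a report
    # (placeholder implementation).
    return f"report for {part['fail_log']}"

def monitor(fail_logs, max_workers=4):
    # Monitor: dispatch partitioning and diagnosis jobs across worker processes.
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        partitions = pool.map(partition, fail_logs)
        return list(pool.map(diagnose, partitions))

if __name__ == "__main__":
    print(monitor([f"device_{i}.flog" for i in range(8)]))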


Fig. 2. Automating the dynamic partitioning flow.

Using multiple smaller input files that contain just the design data relevant to each fail log means that more processes (an effectively unlimited number) can run in parallel across a wider range of CPUs, delivering results faster and more efficiently than ever.

Dynamic partitioning was created to address hardware resource limitations by greatly reducing the memory footprint of the diagnosis process performed by the analyzers. With dynamic partitioning enabled, smaller machines can be used for diagnosis, which in turn means a greater number of fail logs are diagnosed in a given day. Typically, dynamic partitioning yields a 5X reduction in memory and a 50% reduction in runtime per diagnosis report.
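A quick back-of-the-envelope calculation shows how those per-report gains compound into the overall throughput improvement (illustrative numbers only):

baseline_mem_gb  = 100   # full-netlist diagnosis
baseline_hours   = 1.0

partition_mem_gb = baseline_mem_gb / 5    # ~5X memory reduction
partition_hours  = baseline_hours * 0.5   # ~50% runtime reduction

# With a fixed pool of RAM, jobs that are 5X smaller allow ~5X more analyzers
# to run in parallel, and each report also finishes in half the time.
parallelism_gain = baseline_mem_gb / partition_mem_gb   # 5X
runtime_gain     = baseline_hours / partition_hours     # 2X
print(parallelism_gain * runtime_gain)                  # ~10X throughput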

This dynamic partitioning technology makes a larger volume of scan diagnosis results available much faster, increasing the overall throughput of failure diagnosis by 10X. For existing Tessent Diagnosis users, setting up the dynamic partitioning flow is extremely easy. The benefits of dynamic partitioning include:

  • Greatly improved diagnosis throughput
  • Reduced hardware resource requirements
  • An enabler for volume diagnosis and yield analysis

Saving time and compute resources during volume scan diagnosis can confer a competitive advantage: lower turnaround time, lower equipment costs, and faster yield improvement.

Download our new whitepaper, “Improve Volume Scan Diagnosis Throughput 10X with Dynamic Partitioning,” to learn more.


