Scan compression is a critical technology used in nearly every design, but it doesn’t come without costs.
Scan compression was introduced in the year 2000 and has seen rapid adoption. Nearly every design’s test methodology today implements this technology, which inserts compression logic in the scan path between the scan I/Os and the internal chains. In this article, we take a critical look at the technology to understand how scan compression has matured.
The road to scan compression
Since the 1960s, digital IC testing has transitioned from functional verification tests to structural tests, which rely on configuring the design’s flip-flops (FFs) into shift registers, or scan chains. Scan chains make every FF controllable and observable, which in turn allows the application of a stimulus and the measurement of a response that test for faults in the combinational logic of the design. Scan design brought predictability to test quality, making it much more attractive than functional tests. Even so, its adoption was gradual, with arguments over the area overhead and the impact on design timing. Scan design became prevalent for three reasons:
First, scan design blended very well with design flows. Second, its simplicity led it to be widely used in other scenarios that required access to the internal state of the design. Third, and most importantly, because the design libraries included scan-FFs, the area and timing impact of converting an FF into a scannable one was not visible. Figure 1 shows a design with 24 FFs implemented as two scan chains, each of length 12 (left side of the figure).
Figure 1: A design with two scan chains being modified to achieve 4X scan compression
As design sizes increased, the I/O interface did not scale with the growing number of FFs, and the scan chains became too long. The test time and test data volume of scan-based tests, also annotated in Figure 1, are both linearly dependent on chain length: the longer the scan chain, the larger the problem. Influential semiconductor companies started presenting charts showing the test cost of future devices approaching the cost of manufacturing them when evaluated on a per-transistor basis.
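To make the linear dependence concrete, the following back-of-the-envelope sketch (not from the article; the flop count, pattern count, and shift frequency are hypothetical, and the load and unload of consecutive patterns are assumed to overlap) models scan test time and data volume as functions of chain length:

```python
# Rough model of scan test cost; all numbers are illustrative assumptions.
def scan_test_cost(num_flops, num_chains, num_patterns, shift_mhz=10):
    chain_length = num_flops // num_chains            # shift cycles per pattern
    total_shifts = num_patterns * chain_length        # load/unload assumed to overlap
    test_time_s = total_shifts / (shift_mhz * 1e6)    # time spent shifting, in seconds
    data_bits = num_patterns * chain_length * num_chains * 2  # stimulus + response
    return chain_length, test_time_s, data_bits

# Hypothetical 1M-flop design limited by the package to 8 scan channels:
# chain length is 125,000; both test time and data volume scale with it.
print(scan_test_cost(num_flops=1_000_000, num_chains=8, num_patterns=10_000))
```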
The solution to this problem is scan compression. Because of the intense focus on the cost of test, the adoption curve of scan compression was steeper than that of scan design. Today, nearly every design in volume production uses scan compression.
Scan compression in use today
Scan compression relies on breaking the link between the scan I/Os and the scan chains so that many more, and therefore much shorter, internal scan chains can be constructed. This concept is shown on the right-hand side of Figure 1. There are 4X as many internal scan chains as in the scan design, so each internal chain is 4X shorter. The test time is therefore targeted to be 4X less than that of the scan design.
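Continuing the simplified model above, the effect of a codec on chain length is straightforward arithmetic; this sketch assumes the pattern count is unchanged by compression, which is an idealization:

```python
# Effect of scan compression on chain length (illustrative only).
def chain_lengths(num_flops, scan_channels, compression_ratio):
    scan_length = num_flops // scan_channels                            # plain scan design
    internal_length = num_flops // (scan_channels * compression_ratio)  # with codec
    return scan_length, internal_length

# Figure 1 example: 24 FFs, 2 scan channels, 4X compression.
print(chain_lengths(num_flops=24, scan_channels=2, compression_ratio=4))
# -> (12, 3): chains shrink from 12 FFs to 3, so shift time per pattern drops 4X.
```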
Once the link between the scan I/Os and the internal scan chains is broken, the remaining problem for the scan compression technology is to determine the interfacing logic between the few scan I/Os and the many internal scan chains. A simple example of a scan compression architecture fans out the scan inputs to the internal scan chain inputs and XORs the internal scan chain outputs to connect them to the design scan outputs. The logic on the scan input side is called the decompressor, the logic on the scan output side is called the compressor, and together they form the codec. In this example, the fanout of a scan input to multiple internal scan chains forces groups of FFs to take on the same values at all times, so FF values are dependent on each other during test. These dependencies limit the amount of compression one can target in a design. While the simple example codec would give reasonable compression, more complexity is needed in the codec to achieve higher compression with good fault coverage for modern digital designs. Combinational and sequential codecs exist to reach the industry median compression range of 50X to 100X.
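As a toy illustration of the simple fanout/XOR codec described above (not a model of any particular commercial codec; the chain counts and groupings are invented), the sketch below drives eight internal chains from two scan inputs and compacts eight chain outputs into two scan outputs:

```python
# Toy fanout decompressor and XOR compressor; widths and groupings are invented.

def decompress(scan_in_bits, chains_per_input):
    """Fan each scan-in bit out to a group of internal scan chains."""
    return [bit for bit in scan_in_bits for _ in range(chains_per_input)]

def compress(chain_out_bits, chains_per_output):
    """XOR groups of internal chain outputs down to a few scan outputs."""
    scan_outs = []
    for i in range(0, len(chain_out_bits), chains_per_output):
        bit = 0
        for b in chain_out_bits[i:i + chains_per_output]:
            bit ^= b
        scan_outs.append(bit)
    return scan_outs

# Two scan inputs drive eight internal chains (4X). Chains sharing a scan input
# are forced to identical values on every shift cycle: the dependency noted above.
print(decompress([1, 0], chains_per_input=4))                    # [1, 1, 1, 1, 0, 0, 0, 0]
# Eight internal chain outputs are XORed down to two scan outputs.
print(compress([1, 0, 1, 1, 0, 0, 1, 0], chains_per_output=4))   # [1, 1]
```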
Regardless of the codec being used, the dependencies created by supplying test stimulus from a small interface to many internal scan chains are the same. Similarly, the problem of observing the test data from many scan chains at a few scan outputs is common to all implementations.
New problems introduced by scan compression
While scan compression solved the test data application time and test data volume issues of scan design, it introduced new problems to test flows.
Scan compression added complexity to the scan flow and changed the hierarchical nature of the test architecture. Where the scan chains of one block could previously be concatenated with other scan chains hierarchically, a block that already contains a codec cannot be compressed again to gain another level of compression, much as you cannot “zip” a .zip file to further reduce its size.
Codec logic and interconnect are significant enough that they need to blend in with the physical design flow.
The one-to-many and many-to-one connections can degrade the quality of results of a test solution. With the increasing demand for compression, the dependencies may not only raise the pattern count of a design above the equivalent scan result; there can sometimes be a coverage loss when the introduced dependencies interfere with fault-detection requirements. On the observation side, when a design captures too many unknown values in a test pattern, masking logic is needed to protect the fault detections, resulting in increased pattern counts.
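To see why unknowns force masking, consider a toy XOR compactor in which one chain captures an unknown (X) value; the three-valued XOR and the mask control below are purely illustrative:

```python
# Toy illustration of X-masking at an XOR compactor; the masking scheme is invented.

def xor3(a, b):
    """Three-valued XOR: any unknown input makes the result unknown."""
    return 'X' if 'X' in (a, b) else a ^ b

def compact(chain_outputs, masked_chains=frozenset()):
    """XOR all chain outputs into one scan-out bit, masking selected chains."""
    result = 0
    for i, bit in enumerate(chain_outputs):
        if i in masked_chains:
            bit = 0                      # a masked chain contributes a known constant
        result = xor3(result, bit)
    return result

captured = [1, 'X', 0, 1]                    # chain 1 captured an unknown value
print(compact(captured))                     # 'X': the unknown hides detections on chains 0, 2, and 3
print(compact(captured, masked_chains={1}))  # 0: masking chain 1 restores observability
```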
Because scan compression loses information, debugging and diagnosing test results become much harder than with the original scan design.
Focus of scan compression going forward
While the industry has accepted the complexities introduced by scan compression, it has not been willing to give up the biggest benefit that scan design gave to test, namely the predictability of implementing a good test solution. Thus, engineers have consistently used less aggressive compression implementations to ensure that they do not encounter the negative aspects that scan compression introduced into the design flow. The future of scan compression will focus on aspects of the technology that allow higher-compression implementations without impacting the predictability of the test solution.
With the center of gravity of the IC design flow moving to layout, the most important requirement for a scan compression flow is that it not impact the predictability of this step. However, scan compression inherently creates high-fanout connections from a few scan inputs and funnels widely dispersed scan chains into a few scan outputs.
Table 1: Wire length of scan compression as a percentage of the total wire length
Table 1 shows the increase in wire length due to scan compression relative to the amount of compression being targeted. Better compression numbers come with a significant increase in layout impact. This is the single most important problem that codec technology needs to solve, and the ability to predict the successful layout of a design with scan compression is what will define the future of scan compression itself.
Summary
The chip design industry has adopted scan compression as a default flow. Because of its significant impact on the routing of the netlist, the industry has settled for a quality of results (QoR) that is much lower than the achievable compression in a design. To achieve better numbers, scan compression needs to manage its impact on congestion.