How To Manage DFT For AI Chips

AI-specific processors call for design-for-test techniques that boost time-to-market.


Semiconductor companies are racing to develop AI-specific chips to meet the rapidly growing compute requirements of artificial intelligence (AI) systems. AI chips from companies like Graphcore and Mythic are ASICs based on novel, massively parallel architectures that maximize data processing capability for AI workloads. Others, like Intel, Nvidia, and AMD, are optimizing existing architectures like GPU, CPU, and FPGA to keep up with the performance requirements of AI systems. All of these AI chip designs are very large (billions of gates), contain a large number of replicated processing cores, and use distributed memories.

Whatever the chip architecture, success in this rapidly growing market depends on getting working chips into customers' hands as soon as possible. Every part of the design flow, including everything needed for IC test and silicon bring-up, needs to work toward faster time-to-market.

What are the key features of a design-for-test (DFT) strategy for AI chips? These three stand out:

  • Exploit AI chip regularity
  • Insert and verify DFT at the RTL level
  • Improve the silicon bring-up flow

Let’s take a brief look into each of these three DFT strategies for AI chips.

Exploit AI chip regularity
Traditional DFT methodologies insert DFT logic and perform ATPG at the chip level. That approach is impractical for large designs such as Graphcore's Colossus chip, which contains many cores. A hierarchical DFT and pattern generation methodology, by contrast, aligns perfectly with AI chip architectures, which contain many copies of the same processing core.


Figure 1. Save time in DFT by performing all the work on one core, then replicating that complete, signed-off core for the top-level DFT.

Hierarchical DFT allows the designer to do all that DFT work just once for the core, and then replicate the complete, signed-off core to complete the chip-level DFT implementation. This approach takes DFT out of the critical path to tapeout, avoiding any impact on the project schedule.

Hierarchical DFT also enables core-level diagnosis, which, like core-level DFT, is faster than performing diagnosis and failure analysis at the chip level. Hierarchical DFT is already in use at many leading semiconductor companies; it can speed ATPG by 10x and radically accelerate bring-up, debug, and characterization of AI chips.
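The scale argument behind a speedup like that can be illustrated with a toy model. Assuming flat ATPG effort grows roughly as n log n with gate count (a simplification invented here; real ATPG scaling varies by design), generating patterns for one core and retargeting them to every instance beats running ATPG flat on the whole chip:

```python
import math

# Toy model of why hierarchical DFT scales: generate patterns once for
# one unique core, then retarget them to every instance, instead of
# running flat ATPG on the entire netlist. The n*log2(n) effort model
# and all numbers are illustrative assumptions, not measured behavior.

def flat_atpg_effort(gates_per_core: int, num_cores: int) -> float:
    """Flat ATPG sees the entire chip-level netlist at once."""
    n = gates_per_core * num_cores
    return n * math.log2(n)

def hierarchical_atpg_effort(gates_per_core: int, num_cores: int) -> float:
    """Generate for one core, then do cheap retargeting per instance."""
    core_effort = gates_per_core * math.log2(gates_per_core)
    retarget_effort = num_cores  # roughly constant work per core instance
    return core_effort + retarget_effort

flat = flat_atpg_effort(1_000_000, 64)   # 64 identical million-gate cores
hier = hierarchical_atpg_effort(1_000_000, 64)
print(f"modeled speedup: ~{flat / hier:.0f}x")
```

The point is structural rather than numerical: flat effort grows with total chip size, while hierarchical effort is dominated by a single core.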

When planning the DFT work for an AI chip, you may want to use these other techniques that are designed to share resources and exploit the AI chip architecture further:

  • Broadcast the same test data to all the identical cores with channel broadcasting
  • Share a single memory BIST controller between multiple memories in multiple cores
  • Test more cores together without increasing the test power by using an embedded test compression (EDT) low power controller
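The first of these resource-sharing techniques can be sketched in a few lines. Because the cores are identical, one stimulus stream serves all of them, and only the responses need per-core comparison. This is a deliberately simplified model; real EDT hardware compresses stimulus and compacts responses on-chip:

```python
# Toy model of channel broadcasting: one tester stimulus stream feeds
# N identical cores, so test data volume per pattern stays constant no
# matter how many cores are added. The "core" is a stand-in function,
# not real scan logic.

def simulate_core(stimulus, defect=False):
    """Pretend core: inverts its scan-in bits; a defect corrupts one bit."""
    response = [bit ^ 1 for bit in stimulus]
    if defect:
        response[0] ^= 1
    return response

stimulus = [1, 0, 1, 1, 0, 0, 1, 0]   # one stream, broadcast to all cores
golden = simulate_core(stimulus)       # expected response of a good core

# Eight identical cores share the broadcast stimulus; core 3 is defective.
responses = [simulate_core(stimulus, defect=(i == 3)) for i in range(8)]
failing = [i for i, resp in enumerate(responses) if resp != golden]

print("tester bits per pattern:", len(stimulus))  # independent of core count
print("failing cores:", failing)
```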

Insert and verify DFT at the RTL level
A second key feature of a DFT solution for AI chips is to insert and verify DFT logic in RTL rather than at the gate level (during or after synthesis). If you insert IJTAG, memory BIST, boundary scan, EDT, logic BIST, and on-chip clock controller logic in RTL, simulation and debug run much faster: RTL compile is about 4x faster than gate-level compile, and debug is about 20x faster.

RTL-level insertion also means that if the DFT logic changes during the design phase, you do not need to repeat synthesis. If the DFT is inserted at the gate level, you would have to go through synthesis again after every change. For a big AI chip design, repeating simulation, debug, and synthesis after every change to the DFT logic can ruin the design schedule.

How do you check and tune test coverage without performing synthesis and pattern generation? Traditionally, designers iterate between defining DFT configurations and generating test patterns through ATPG to check coverage. With DFT inserted in RTL, you can skip these big, time-consuming iterations, performing testability checks and making most fixes directly at the RTL level.
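The kind of static check this implies can run on an RTL design description long before synthesis. The rule, the register list, and all net names below are invented for illustration; real DFT rule checkers apply hundreds of such rules:

```python
# Toy RTL testability lint: flag registers whose clock or async reset
# is not controllable from a top-level test pin. This sketches the idea
# of RTL-level DFT rule checking only; it is not a real tool's rule set.

TEST_CONTROLLABLE = {"test_clk", "scan_reset"}  # hypothetical test pins

registers = [
    {"name": "core0.pipe_reg", "clock": "test_clk", "reset": "scan_reset"},
    {"name": "core0.cdc_sync", "clock": "pll_out",  "reset": "scan_reset"},
    {"name": "core1.acc_reg",  "clock": "test_clk", "reset": "por_n"},
]

def check_testability(regs):
    """Return (register, pin, net) for every uncontrollable clock/reset."""
    violations = []
    for r in regs:
        for pin in ("clock", "reset"):
            if r[pin] not in TEST_CONTROLLABLE:
                violations.append((r["name"], pin, r[pin]))
    return violations

for name, pin, net in check_testability(registers):
    print(f"RULE: {name}: {pin} driven by uncontrollable net '{net}'")
```

A check like this surfaces fixable testability problems while the design is still in RTL, before any ATPG run exists to measure coverage.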

Another significant benefit of RTL-level DFT logic insertion is that it allows the design team to do early I/O and floor planning of the chip, which further shortens the design development cycle.

Improve the silicon bring-up flow
The third key feature of a test methodology for AI chips involves fixing the silicon bring-up flow. After design and manufacturing, the silicon has to be tested and analyzed, and this flow is ripe for improvement. It involves multiple iterations between groups scattered across the globe, using different tools and formats and possessing different domain knowledge. Pattern debug, characterization, test optimization, and test scheduling require cooperation between two distinct tribes: the DFT domain and the test/ATE domain.


Figure 2: Iterating between the DFT domain and the ATE domain is error-prone and increases the time required for IP evaluation and silicon bring-up.

The fix is to establish a direct connection between the DFT and ATE domains, so that DFT engineers can perform silicon bring-up themselves and test engineers can run diagnosis without the help of DFT engineers. Connecting the DFT and ATE domains reduces silicon bring-up time from weeks to days.

There are a couple of ways to bridge the divide between DFT and ATE:

  • Connect a desktop computer running DFT software directly to a bring-up/validation board that includes the device under test. This approach has the added benefit of avoiding conflicts over tester scheduling. Graphcore used this solution both for silicon bring-up and for complete testing of their AI chip.
  • Connect a desktop running DFT software remotely to the ATE. This allows the DFT engineers to directly observe and control IJTAG-based IP in the design.
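Either option amounts to putting a transport layer between the DFT tool's IJTAG operations and whatever hardware sits on the other end. Here is a minimal sketch of that idea; every class, method, and register name is invented for illustration and does not reflect any actual tool API:

```python
# Hypothetical DFT-to-hardware bridge: IJTAG read/write calls go through
# a swappable transport, so the same bring-up script could target a
# simulator, a bench board, or a remote ATE. Names are illustrative only.

class LoopbackTransport:
    """Stand-in for a JTAG cable or ATE link: just stores register values."""
    def __init__(self):
        self._regs = {}

    def write(self, reg, value):
        self._regs[reg] = value

    def read(self, reg):
        return self._regs.get(reg, 0)

class IJTAGSession:
    """Drives network-accessible instruments through a transport."""
    def __init__(self, transport):
        self.t = transport

    def run_bist(self, controller):
        self.t.write(f"{controller}.GO", 1)     # start the BIST controller
        return self.t.read(f"{controller}.GO")  # read back its status bit

session = IJTAGSession(LoopbackTransport())
status = session.run_bist("core0.mbist")
print("BIST status bit:", status)
```

Swapping `LoopbackTransport` for a real cable or ATE link is what lets the same script move from simulation to bench to tester without pattern translation.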

The explosion in AI chips, with more than 50 startups and 25 established semiconductor companies racing to capture parts of the emerging AI segment, presents an opportunity to rethink and redesign DFT flows using tools and methods that are better suited for the needs of AI chips. The key features of a DFT solution for AI include:

  • Exploit AI chip regularity
  • Insert and verify DFT at the RTL level
  • Improve the silicon bring-up flow

Simply applying traditional DFT and test methods to AI chips is not an option. To have a chance of success in the AI segment, invest in DFT solutions that cut time-to-market without lowering test quality.

For more information, download our whitepaper AI Chip DFT Techniques for Aggressive Time-to-Market.


