The Third Generation Of FPGA Prototyping

The evolution of FPGA prototyping from build-your-own efforts to highly automated solutions.

Bench setups with physical prototypes lie at the very heart of electrical and electronic engineering. With all due respect to the many powerful forms of modeling and simulation, at some point the engineering team wants to work with hardware. When a system is built entirely from existing components, it is possible to build a prototype of the product as soon as it has been designed. When the design includes chips that must be fabricated, waiting for that process to be complete is much too late in the project to start hardware debug. Software developers also rely on bench setups to do much of their development and integration. Problems discovered in the lab may require another chip turn, delaying time to market (TTM) by months and ballooning project cost.

When field programmable gate arrays (FPGAs) became available, they offered the chance to develop prototypes and debug the design much earlier. There have been three generations of FPGA prototyping solutions, each offering greater capacity, performance, and features. Before commercial offerings were available, essentially Generation 0, some teams developed “build your own” (BYO) prototypes. Partitioning a large chip design into multiple FPGAs could be a significant effort, but the portability provided by register transfer level (RTL) languages and logic synthesis made it possible to map the design into programmable technology. BYO teams found that building and maintaining prototypes consumed precious resources, as shown in figure 1.


Fig. 1: Cost breakdown of a BYO FPGA prototype. (Source: Synopsys industry research)

In Generation 1, many development teams moved from BYO FPGA prototypes to commercial solutions. In addition to the cost savings, a key motivator for this move was software that partitioned the design under guidance from the designers. Users no longer had to split the design into individual blocks, each suitable for a single FPGA. These tools introduced time-division multiplexing (TDM) on FPGA pins to minimize the number of interconnect signals required between multiple FPGAs. Debug techniques also improved so that users did not have to go back to simulation to diagnose every test failure. Generation 2 expanded these capabilities, with large, multi-board systems providing much greater capacity and high-speed TDM techniques, plus deep trace buffers and other advanced debug features.
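The pin time-division multiplexing idea can be illustrated with a toy sketch (plain Python, not vendor tooling): many logical inter-FPGA signals share a few physical wires by taking turns across time slots.

```python
# Toy illustration of pin time-division multiplexing (TDM) -- a concept
# sketch, not vendor tooling. N logical inter-FPGA signals share W physical
# wires by taking turns across ceil(N / W) time slots per system cycle.

def tdm_send(signals, num_wires):
    """Serialize len(signals) bits onto num_wires wires over several slots."""
    slots = []
    for i in range(0, len(signals), num_wires):
        chunk = signals[i:i + num_wires]
        # Pad the final slot so every slot drives all wires.
        chunk = chunk + [0] * (num_wires - len(chunk))
        slots.append(chunk)
    return slots  # one list of wire values per time slot

def tdm_receive(slots, num_signals):
    """Reassemble the original signal values from the time slots."""
    flat = [bit for slot in slots for bit in slot]
    return flat[:num_signals]

signals = [1, 0, 1, 1, 0, 0, 1, 0, 1]     # 9 logical signals
slots = tdm_send(signals, num_wires=4)    # only 4 physical wires
assert len(slots) == 3                    # 3 time slots needed
assert tdm_receive(slots, len(signals)) == signals
```

The ratio of slots to wires is the multiplexing factor; a higher factor saves pins but divides the effective rate of those connections, which is why the high-speed TDM techniques of later generations matter.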

The second generation also focused more on system validation of system on chip (SoC) designs, including long test runs with real software workloads. The hardware became modular, enabling the construction of SoC platforms from FPGA prototypes of major intellectual property (IP) blocks. An ecosystem of off-the-shelf accessory cards arose to connect the prototypes to other systems, often via standard interfaces such as Ethernet. As low power design became increasingly important, the prototypes added support for power validation in the context of full system workloads. Perhaps the most important evolution was support for enterprise prototyping farms with multiple designs and multiple users active at the same time.

Generation 3 of FPGA prototyping is now underway, driven by the tremendous demands of designs for graphics processing units (GPUs), artificial intelligence (AI), machine learning (ML), servers, storage, networking, and 5G. Capacity has increased dramatically in this generation, with support for a billion gates or more. The FPGA partitioning software has had to scale to meet this level of demand. The requirements on debug support are also much higher, including:

  • Multi-FPGA global state visibility, which provides the values of all registers in the design without any manual instrumentation
  • Support for SystemVerilog Assertions (SVA), which help with self-checking tests as well as debug
  • Deeper sample/trace queues to minimize the need to go to the simulator for debug
  • Ability to view signal waveforms for a simulator-like debug experience
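The "deeper sample/trace queues" requirement above can be sketched as a ring buffer (a hypothetical software model, not the actual hardware implementation): probed signal values are captured every cycle, and only the most recent N cycles are retained for inspection after a trigger fires.

```python
from collections import deque

# Hypothetical software model of a debug trace buffer: capture probed
# signal values every cycle and keep only the most recent `depth` samples.
# Real prototyping hardware does this in FPGA memory; the depth determines
# how far back in time debug can look without rerunning the test.

class TraceBuffer:
    def __init__(self, depth):
        self.samples = deque(maxlen=depth)  # oldest samples fall off the front

    def capture(self, cycle, probes):
        self.samples.append((cycle, dict(probes)))

    def dump(self):
        return list(self.samples)

trace = TraceBuffer(depth=4)
for cycle in range(10):
    trace.capture(cycle, {"valid": cycle % 2, "count": cycle})

history = trace.dump()
assert len(history) == 4                      # only the deepest 4 cycles kept
assert history[0][0] == 6 and history[-1][0] == 9
```

A deeper buffer means a failure can be diagnosed directly from the captured waveform history instead of reproducing the run in a simulator, which is exactly the trade-off the list above describes.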

As shown in figure 2, both desktop and rack configurations are required. The desktop is ideal for bench setups, especially when using accessory cards to interface to real-world systems and when reconfiguring the hardware frequently. The server rack configuration provides scalability and makes it possible to co-locate FPGA prototypes with compute farms for easy access across multiple teams and multiple geographies. A centralized enterprise prototype farm enables higher utilization and lowers the total prototyping cost. An enterprise deployment management system must be available to keep all users synchronized on the latest version of the design and manage the prototypes for efficient utilization.


Fig. 2: Two FPGA prototype configurations. (Source: Synopsys)

After providing products in the first two generations of FPGA prototyping, Synopsys has pioneered Generation 3 with the HAPS-100 solution. It satisfies all the requirements listed above and includes some unique features. It uses a unified incremental compiler that spans formal verification, simulation, emulation and prototyping. The mapping to FPGAs includes constraint-driven partitioning, high-speed time-division multiplexing of I/O pins, and system-level routing. Connection to real-world I/O interfaces and other devices is facilitated by a partner ecosystem of compatible accessory cards and by speed adapters for popular protocols such as PCIe Gen5, USB 3.1, CXL, DDR5 and 100/400G Ethernet. Management of prototyping resources can be done interactively or with a Python application programming interface (API).
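The article does not show the actual HAPS Python API, so the snippet below is purely illustrative: every class and method name is invented to show the shape of the scheduling problem an enterprise farm solves, namely many users sharing a pool of prototype systems with utilization tracked.

```python
# Generic illustration of scripted prototype-farm management -- NOT the
# Synopsys API, whose actual calls are not shown in the article. All names
# here (PrototypeFarm, reserve, release, utilization) are hypothetical.

class PrototypeFarm:
    def __init__(self, systems):
        self.free = set(systems)
        self.in_use = {}              # system -> (user, design_version)

    def reserve(self, user, design_version):
        """Hand a free system to a user, or None if the pool is exhausted."""
        if not self.free:
            return None               # caller must queue or retry
        system = self.free.pop()
        self.in_use[system] = (user, design_version)
        return system

    def release(self, system):
        self.in_use.pop(system)
        self.free.add(system)

    def utilization(self):
        total = len(self.free) + len(self.in_use)
        return len(self.in_use) / total

farm = PrototypeFarm(["system-a", "system-b", "system-c"])
s1 = farm.reserve("gpu-team", "rtl_v42")
s2 = farm.reserve("ai-team", "rtl_v42")
assert farm.utilization() == 2 / 3
farm.release(s1)
assert farm.utilization() == 1 / 3
```

Centralizing this bookkeeping is what lets a farm keep every user on the latest design version and keep utilization high, as described for the rack configuration above.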

Once a challenging BYO effort that distracted from the main project, FPGA prototyping has become highly automated and widely adopted. With each of its three generations, this approach to system verification and validation has become more powerful and easier to use. The value of FPGA prototypes for software development is clear: programmers can run on a hardware platform much earlier in the project, long before fabricated chips are available. This also benefits the hardware team, since lingering corner-case design bugs are found during hardware-software validation rather than during chip bring-up in the lab. With Generation 3 now available, every significant chip project can benefit from FPGA prototyping. A white paper is available to learn more.
