An increasingly popular design methodology enables more sophisticated hardware/software verification before first silicon becomes available.
FPGA technology for design prototypes is making new inroads as demands increase for better integration between hardware and software.
FPGA prototyping, also known as physical prototyping, has been supported by all of the major EDA players for some time, and it has long been considered an essential tool for the largest chipmakers, alongside emulation and simulation. But its reach is growing, spurred by the Internet of Things (IoT), a variety of new markets, and most importantly the tighter connection between hardware and software in sophisticated chips, which can have a significant impact on power and performance.
Still, FPGA prototyping isn’t always user-friendly. It can be difficult to work with, particularly when it comes to partitioning. And while the technology is fast, it has its limits.
There has always been plenty of praise for this technology. “It’s a unique combination of speed and accuracy before your first silicon comes back,” Doug Amos, ASIC prototyping and FPGA activist at Mentor Graphics, says of FPGA prototyping.
Designers can turn to a SystemC virtual model to get close to what a device will do very early in the design flow, but such models differ from the real hardware, especially in timing, and those differences can cause software to execute differently. Many prefer to wait until RTL is available.
“With emulators and simulators, you can look at everything—you’ve got all the visibility you need,” Amos says. “But I can’t run enough of the software stack, so I’m down at the bottom of the stack, at the hardware-dependent end of the software. I believe that prototyping is unique in its ability to take the whole stack of software and run it at the fastest speed—probably not anywhere near what the final silicon’s going to be, but it’s the best you’re going to get before that silicon is available. And then, off you go. You’re really exercising the SoC’s functions to the full. You’ve got the chance to integrate your hardware and software together, and run them within real-world data streams. That’s why people go to all these lengths to create these FPGA prototypes.”
So where does FPGA prototyping fit into the tools lineup?
“Emulation runs much faster than simulation,” says Frank Schirrmeister, group director for product marketing of Cadence’s System Development Suite, which includes FPGA prototyping, emulation, and simulation with a common front end. “Emulation is typically in the megahertz range. It’s available fairly early in the flow because over the years we have developed complex compile front ends to bring your RTL up fast. FPGA-based prototyping, where you’re taking your design and mapping it into a set of FPGAs, is fairly automated once you have manually optimized the design for FPGA. It can be in the tens of megahertz range, an order of magnitude faster than emulation. As such, it is very applicable to software development. The main use for FPGA-based prototyping is software development, and that’s how design teams bring up their operating systems. They’ll bring up bare-metal software running on the design well before the silicon is available.”
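To put those speed differences in perspective, here is a rough back-of-the-envelope calculation. The cycle count and clock rates below are illustrative assumptions, not vendor figures, but they show why an OS boot that takes hours on an emulator, and would be hopeless in RTL simulation, finishes in minutes on an FPGA prototype:

```python
# Rough wall-clock time to run an OS-boot workload on each engine.
# All numbers below are illustrative assumptions, not vendor figures.

BOOT_CYCLES = 20e9  # assumed cycles to reach an OS login prompt

engines = {
    "RTL simulation":   1e3,   # ~kHz effective throughput (assumed)
    "Emulation":        1e6,   # "typically in the megahertz range"
    "FPGA prototyping": 3e7,   # "tens of megahertz"; 30 MHz assumed
}

for name, hz in engines.items():
    hours = BOOT_CYCLES / hz / 3600
    print(f"{name:>18}: {hours:10.2f} hours")
# RTL simulation ends up in the thousands of hours, emulation at a
# handful of hours, and the prototype at roughly ten minutes.
```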
Tom De Schutter, director of product marketing for physical prototyping at Synopsys, points to the sheer volume of software that now must be validated. “When designing a high-performance and low-power device, there’s a huge amount of software that needs to run on that device,” he says. “You need a high-speed development system to verify all that.”
Zibi Zalewski, general manager of the hardware division at Aldec, agrees. “Selection between FPGA prototyping and emulation is a tradeoff between speed and debugging capabilities. Prototyping provides higher speeds, since clocks are driven directly from clock oscillators or PLLs. But they run at constant frequency and cannot be stopped, which is necessary for advanced debugging. In emulation, the emulation controller generates and controls all design clocks and can stop them at any time for the sake of debugging. The result is that prototyping speed is faster than emulation, but emulation provides debugging capabilities closer to simulation and different options of integration with other tools like simulators (acceleration) or virtual platform (hybrid). That’s why prototyping is used for software projects, where developers need the highest possible speed of OS bootup or application execution, for example, while emulation can be used by hardware teams for verification and debugging, or system-level teams in hybrid modes with virtual platforms connected with an emulator.”
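Zalewski’s point about clock control is the crux of the debug tradeoff. The toy model below is purely conceptual—the class and signal names are invented, and real emulators implement this in hardware and firmware, not Python—but it captures the difference: a prototype clock free-runs from an oscillator, while an emulation controller owns the design clocks and can halt them to expose the full state of the design:

```python
# Toy model of the clock-control difference Zalewski describes.
# Conceptual only: real emulators do this in hardware/firmware.

class ToyDesign:
    """Stand-in for a design under test; raises an IRQ on cycle 5."""
    def __init__(self):
        self.cycle = 0
        self.irq = 0
    def tick(self):
        self.cycle += 1
        self.irq = 1 if self.cycle == 5 else 0
    def snapshot(self):
        return {"cycle": self.cycle, "irq": self.irq}

class PrototypeClock:
    """Free-running oscillator/PLL: fixed rate, no way to pause mid-run."""
    def advance(self, cycles, design):
        for _ in range(cycles):
            design.tick()              # runs fast, but state flies by unseen

class EmulationClockController:
    """Emulator-generated clock: can stop on a condition for debugging."""
    def __init__(self, breakpoint=None):
        self.breakpoint = breakpoint   # predicate over design state
    def advance(self, cycles, design):
        for _ in range(cycles):
            design.tick()
            if self.breakpoint and self.breakpoint(design):
                return design.snapshot()  # clocks halted; full visibility
        return None

emu = EmulationClockController(breakpoint=lambda d: d.irq == 1)
print(emu.advance(10, ToyDesign()))    # {'cycle': 5, 'irq': 1}
```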
Partitioning
One of the key challenges with prototyping has always been partitioning. And while that has improved over time, it’s still not perfect.
Partitioning is done using a classic divide-and-conquer approach because systems are so complex that it’s difficult to do everything at once. Synopsys’ De Schutter notes that partitioning is critical in FPGA prototyping, and it’s a difficult task. “We make sure that our partitioning software is aware of the exact details of the hardware,” he says. “We’ve been able to mostly automate the partitioning. With some of our most advanced technology, because the system can explore so many different options, it can actually do better than manual partitioning in a lot of cases.”
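To make the idea concrete, here is a deliberately simplified sketch of the kind of problem such tools automate: treat the design as a graph of blocks, pack blocks into FPGAs without exceeding capacity, and keep heavily connected blocks on the same device so fewer signals have to cross the slow inter-FPGA links. All block names, sizes, and capacities below are invented, and production tools also weigh clocks, pin counts, and timing:

```python
# Greedy bin-packing sketch of FPGA partitioning (heavily simplified).
# All sizes and wire counts are invented for illustration.

blocks = {"cpu": 40, "gpu": 55, "ddr_ctl": 20, "usb": 10, "dsp": 30}  # Mgates
edges = {("cpu", "ddr_ctl"): 900, ("cpu", "gpu"): 400,                # wires
         ("gpu", "ddr_ctl"): 700, ("cpu", "usb"): 50, ("dsp", "cpu"): 120}

FPGA_CAPACITY = 80  # usable Mgates per FPGA (invented)

def affinity(block, contents):
    """Wires this block shares with blocks already placed on an FPGA."""
    return sum(w for (a, b), w in edges.items()
               if (a == block and b in contents) or (b == block and a in contents))

fpgas = []  # list of (contents set, used capacity)
for block in sorted(blocks, key=blocks.get, reverse=True):  # biggest first
    # Prefer the FPGA where the block is most connected and still fits.
    fits = [i for i, (c, used) in enumerate(fpgas)
            if used + blocks[block] <= FPGA_CAPACITY]
    if fits:
        best = max(fits, key=lambda i: affinity(block, fpgas[i][0]))
        fpgas[best] = (fpgas[best][0] | {block}, fpgas[best][1] + blocks[block])
    else:
        fpgas.append(({block}, blocks[block]))  # open a new FPGA

for i, (contents, used) in enumerate(fpgas):
    print(f"FPGA {i}: {sorted(contents)} ({used} Mgates)")
```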
Cadence’s Schirrmeister likewise says that partitioning is getting easier. “The problem has become a bit more manageable. It’s all about smart partitioning between the different FPGAs.” He says that’s true even as the designs being prototyped have scaled up to 2 billion gate-equivalents. Edge-node designs for Internet of Things devices are typically smaller and may not have to be partitioned at all.
With smaller designs, it helps that FPGAs themselves are getting denser, which in turn increases capacity. But that has barely kept pace for more complex designs. “FPGAs keep growing, but the designs keep growing, too,” Amos says. “So, roughly speaking, you’re still chopping it into the same number of lumps. Probably half to two-thirds of designs still need multiple FPGAs.”
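The arithmetic behind the “same number of lumps” observation is simple: the number of FPGAs grows with design size no matter how dense the devices get. With invented but plausible numbers for a mid-size SoC:

```python
import math

# Back-of-envelope FPGA count for a prototype (all numbers invented).
design_mgates = 600   # ASIC gate-equivalents of the SoC, in millions
fpga_mgates = 250     # raw capacity of one prototyping FPGA
utilization = 0.6     # usable fraction after routing/partitioning overhead

fpgas_needed = math.ceil(design_mgates / (fpga_mgates * utilization))
print(fpgas_needed)   # 4 -- and it scales up again as the design grows
```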
While some teams start out designing for FPGAs from the beginning, that’s not really prototyping, Amos says. “What you’re doing there is preparing the FPGA designs from scratch, and you’re therefore creating RTL that is nice and friendly to the FPGAs. You’re understanding the FPGA’s limitations in terms of scale and clocks and memories and so on. And people have always done that since the beginning of programmable logic. But that’s not quite what I would call prototyping. Prototyping to me is the other way around, where you’re starting with the RTL of the chip. Even if it’s one FPGA, you’ve still got all the work to get it in there before you start to think of the partitioning.” And with that, he adds, “there’s a lot of what I call FPGA hostility in that [RTL] code.”
Remotely located software developers have become a fact of life for the industry, De Schutter says. “More and more software developers are sitting across the globe,” he says. Many are in India, for example, and China is increasingly becoming a source of software code, with those engineers working alongside teams in Japan, Korea, and the U.S. Some developers prefer to work from home. “That type of flexibility is becoming more important,” he adds, and it is much easier and cheaper to ship a prototype to them than an emulator.
Other issues
Amos notes that several debugging startups turned up at DAC five years ago, and most of those have since disappeared or been acquired. Tektronix acquired Veridae Systems in 2011, and Mentor Graphics bought that debugging technology from Tektronix two years later. “Debugging remains an issue for prototyping,” he says. “It is an extension of design verification, and not meant as a substitute for other verification platforms. Some developers also turn to prototypes for regression testing.”
This is why companies with deep pockets have utilized every engine available. “People will, for example, bring up the OS using a combination of RTL and emulation, and then things like the ARM fast model in the hybrid environment,” says Schirrmeister. “Instead of the boot-up taking an hour or two, you can get to it much faster in FPGA. The downside of getting it faster is you don’t have as much debug insight into the hardware. The things design teams naturally do on the FPGA-based prototyping side are much more software-related. Software bring-up comes in the later cycles, and you will focus more on the software debug aspect. It certainly helps with hardware/software integration and overall system validation, because at speeds of tens of megahertz it’s much easier to connect external environments like Ethernet or USB or PCI Express. You don’t have to slow it down quite as much as you do for emulation. Hardware/software is definitely a big driver for FPGA-based prototyping.”
The sheer complexity of what software is being called on to do raises a number of questions that need to be considered up front. “How do you drive the different power states, how do you make sure that you minimize the power consumption, but also make sure you get to the high performance?” De Schutter says. “When it comes to designing advanced driver-assistance systems (ADAS), with their cameras and radars, software needs to make decisions. How many test cases, how many corner cases can you cover?” Prototyping enables many more to be considered than would be possible using only emulation.
And all of that has to be done in sync with the hardware. “FPGA-based prototyping brings together hardware and software early,” says Schirrmeister. “It allows you to accelerate your schedule, just as emulation does, by running more cycles. But it’s preferred by software developers because of its higher speed.”
Dotted lines
Where one tool gets used versus another isn’t always clear, however. Some design teams will stretch the boundaries of each, particularly when resources are scarce. Beyond the differences in debugging capabilities, there is also a significant difference in turnaround time once a design change is made. Emulation can often absorb these changes incrementally, getting back to the task of verification in a short space of time. While prototyping systems also offer incremental compile capabilities, the time needed to go from a change in RTL to having the prototype up and running again is generally much longer. Prototyping thus makes more sense once the design is becoming stable, rather than when it still contains a lot of bugs.
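A crude model makes that tradeoff visible. The recompile times and run speeds below are invented for illustration, but the pattern holds: emulation wins the iteration loop while changes are frequent and runs are short, and the prototype wins once the RTL stabilizes and the workloads get long:

```python
# Crude crossover model for emulation vs. prototyping turnaround.
# All times and speeds are invented for illustration.

EMU_RECOMPILE_H,   EMU_SPEED_HZ   = 0.5, 1e6  # incremental compile, run speed
PROTO_RECOMPILE_H, PROTO_SPEED_HZ = 8.0, 3e7  # full place-and-route spin

def hours_per_iteration(recompile_h, speed_hz, workload_cycles):
    """One debug loop: rebuild the model, then run the workload."""
    return recompile_h + workload_cycles / speed_hz / 3600

for cycles in (1e9, 20e9, 500e9):  # short test .. OS boot .. long regression
    emu = hours_per_iteration(EMU_RECOMPILE_H, EMU_SPEED_HZ, cycles)
    proto = hours_per_iteration(PROTO_RECOMPILE_H, PROTO_SPEED_HZ, cycles)
    print(f"{cycles:.0e} cycles: emulation {emu:6.1f} h, prototype {proto:6.1f} h")
# The prototype's raw speed only pays off once runs are long enough
# to amortize its much slower rebuild.
```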
EDA vendors have responded with hybrid approaches that provide not only consistent interfaces, but some overlapping capabilities. Aldec, for instance, uses the same software tools for its FPGA-based prototyping platform and for emulation, acceleration, and different flavors of co-emulation. Similarly, Synopsys, Mentor, and Cadence all offer virtual prototyping, formal verification, simulation, emulation, and FPGA prototyping using some common infrastructure.
So while each has its primary application, use cases vary greatly, along with the advantages and limitations of using each of those approaches. FPGA prototyping has come a long way and is getting far more attention, and hybrid approaches and common infrastructure have smoothed out some of the rough edges for each of these platforms. But as with cars, mileage can vary from one design to the next, and from one application to the next.