Is Verification At A Crossroads?

As verification technologies have matured, more viable options have emerged to make sure an SoC will function correctly. This brings new challenges to engineering teams as they weigh the options.

As SoC verification methodologies and technologies have continued to mature, it’s an interesting time for engineering teams as they look to meet time-to-market goals and cut costs in an environment of cutthroat profit margins.

Whether it is hardware emulation, FPGA prototyping, virtual prototyping or traditional software simulation, each platform has its strengths and drawbacks, with overlap occurring in some cases. Choosing the right technology to verify an SoC is not as straightforward as it might have been in the past.

Kurt Shuler, vice president of marketing at Arteris, observed that most customers seem to have a tiered strategy based on cost and time. Traditional emulation [Mentor Graphics’ Veloce or Cadence’s Palladium] is the most expensive option but provides the fastest results. Synopsys’ EVE/multi-FPGA emulation hardware is in the midrange in terms of cost and speed. And traditional software simulators are the least expensive option but also the slowest.

While this seems straightforward, does it mean certain steps can be skipped? In other words, can emulation be used in place of FPGA prototyping now that new features have been added to emulation and UVM methodologies have become more sophisticated? The answer is no, according to Pranav Ashar, CTO at Real Intent.

“FPGA prototyping still has a role in the verification flow, but given that the FPGA-implementable RTL is many steps below the starting point, the prototyped version of the RTL had better be very clean,” said Ashar. “FPGA prototyping should be deployed to catch true system-level issues near the end of the verification process, rather than looking for implementation errors and basic conceptual bugs that appear earlier in verification. Also, certain kinds of bugs like CDC errors are hard to catch even in the lab with FPGA prototyping. One of Real Intent’s early customers had a CDC bug in an FPGA-based system that manifested only for a particular batch of FPGAs. FPGA prototyping has a role, but it needs to be preceded by a heavy dose of static verification (from tools that use structural and formal methods, often hidden under the hood) to find those items that are not easily found with simulation. The other aspect is that if a bug is found, how easy is it to find the relevant RTL code that caused the error, and to understand what needs to be done to fix the problem? Software solutions are suited to provide the traceback and debug data that is needed in your design environment.”

As such, are we at a crossroads with verification? Drew Wingard, CTO of Sonics, says yes. “FPGAs have gotten to be so capable, and while they are expensive if you need to buy many of them, for verification you usually don’t need to buy that many of them. As such, the FPGA model of ‘don’t verify it by simulating it, verify it by building it,’ is very, very attractive. It fights a bit with this approach the rest of the verification industry has of trying to get to higher and higher levels of abstraction, trying to increase productivity that way, but relying upon all these translation steps along the way that probably move us further away from being able to leverage the FPGAs.”

From his vantage point, the biggest semiconductor companies end up investing down both paths (FPGA prototyping plus UVM-based simulation), but there are not that many of those companies left spending that kind of money on digital chips. The question then becomes: what will the other companies trying to compete end up doing? Wingard believes they may have to pick one, which provides an opportunity to look at this in a different way.

But these techniques are not just for the large system companies, Ashar pointed out. “Every SoC company, big or small, must budget for an FPGA version (maybe a scaled-down version) of its chip. The other point here is that the number of system (SoC) companies that can afford FPGA prototyping is not small. In my experience, chip startups routinely budget for an FPGA prototype in their business plan. While prototyping is useful, a test plan and test suite that delivers high coverage is still needed to ensure the design is robust across a range of corner conditions.”

FPGA prototyping versus emulation
When it comes to deciding between FPGA prototyping and emulation, Harry Foster, principal engineer in the Design Verification Technology Division at Mentor Graphics, said the decision is not always black and white.

Historically, he noted, the distinction between an emulator and an FPGA prototype has been the rich debugging environment that is available in emulation, which is lacking in FPGA prototyping. Today, however, the decision to go with emulation, FPGA prototyping, or even both is often driven by other factors.

“For example, some companies (regardless of their size) might be required to deliver an FPGA prototype to their external partners that will be used as a software development platform by these partners,” Foster said. “Other companies (regardless of size) might have a significant amount of software being developed by a large internal software development team that will be delivered with the SoC to external customers (e.g., drivers, HAL, etc.). Emulation can offer a significant benefit to these teams because the emulator can act like a software development server, where various members of the software team submit their new code as a batch job to be run on the emulator, and then debugging can occur offline. This can be extremely productive when multiple software developers exist in an organization, versus the bottleneck that can occur when a software developer is both running and debugging software interactively on an FPGA prototype.”
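To make Foster’s batch-server point concrete, here is a minimal sketch in Python of the workflow he describes, with entirely hypothetical job names and no real emulator API: developers drop jobs into a queue, a single worker runs them one at a time on the shared emulator, and the resulting logs are debugged offline rather than interactively.

```python
import queue
import threading

job_queue = queue.Queue()

def run_on_emulator(job_name: str) -> str:
    # Placeholder for launching a batch run on the shared emulator;
    # a real flow would invoke the vendor's own job-submission tooling.
    return f"log for {job_name}"

def emulator_worker():
    # Serializes access to the single shared emulator resource.
    while True:
        job_name = job_queue.get()
        log = run_on_emulator(job_name)
        with open(f"{job_name}.log", "w") as f:
            f.write(log)  # developers debug these logs offline
        job_queue.task_done()

threading.Thread(target=emulator_worker, daemon=True).start()

# Several software developers enqueue work without tying up the emulator interactively.
for dev_job in ["driver_smoke_test", "hal_bringup", "boot_rom_regression"]:
    job_queue.put(dev_job)

job_queue.join()
```

The point of the sketch is simply that the scarce resource is occupied only for the run itself, not for each developer’s interactive debug session.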

Another factor that can influence the choice between emulation and FPGA prototyping is design size. “Although the capacity of FPGAs has grown significantly over the years, they have not kept up with non-FPGA-based target ICs (e.g., custom ICs and ASICs). There are many small startups creating extremely large SoC designs that will not fit into a single FPGA. Hence, to create an FPGA prototype the engineering team must partition the design across multiple FPGAs. This is a major engineering effort, which means the partitioned FPGA prototype might not be available until very late in a project’s schedule. In addition, partitioning the design across multiple FPGAs introduces extra latency into the design that will not exist in the final SoC. This means that any software developed on the partitioned FPGA prototype will need to be reworked and then debugged on the final target platform due to the difference in latency. These late changes in the software can introduce schedule risk into a project,” he added.

Virtual prototypes take their place
Girish Gaikwad, director of SoC Engineering at Open-Silicon, knows these tradeoffs very well. For design houses such as Open-Silicon and others in the SoC business, where margins are cut-throat, it is a daily task to determine, given all of the parameters, the best way to compete and keep costs under control while still making sure the silicon works. This means design teams cannot compromise on the technology, he said. To address these challenges, for the last year and a half Open-Silicon has explored virtual prototyping, where the SoC prototype is created in a virtual format, as opposed to an FPGA prototype, where the SoC is implemented in an FPGA.

Gaikwad explained that this approach allows a great deal of architecture analysis during the architecture specification phase. “All the architecture problems that might get opened up during the verification phase…you’re able to reveal all of them at the specification of architecture phase.” Second, a virtual prototype allows performance analysis to be performed at the same time as the architecture analysis. And third, the interconnect can be modeled on the prototype alongside all of the processor and IP models, which means software development can begin with those models from day one. The C-based test cases can then be reused across various verification platforms, including simulation, FPGA prototyping and emulation, which also enables more focused FPGA verification.

Design services company Synapse Design, on the other hand, has not seen much uptake of virtual platforms, according to CTO Charlie Kahle. “In terms of virtual platforms, we don’t see a lot of companies really embracing that as much as we might have expected they would a few years ago when SystemC and TLM first came out. We expected to see a lot more effort in that area. Certainly there are people doing things there, and I think we are going to see continued activity in that area because it seems to be the thing that’s going to tie all of this together in a very nice way. You can capture things early on, you can do some architectural tradeoffs and analysis at a very early point in the project, and then you can start using that as the golden reference model that you drive into simulation.”

More is better
Tom Borgstrom, director of marketing for the verification group at Synopsys, pointed out that today it’s a matter of adding more verification techniques to what’s been tried and true. “It’s a question of ‘and,’ not ‘either/or.’ What we don’t see is people giving up simulation and doing FPGA prototyping only. We see people do everything: they simulate both their IPs and SoCs, and that’s where they find the vast majority of the bugs in the design. But then there are certain categories of functionality that really need much, much higher performance to verify, and this typically involves a lot of software running on embedded processors. The sheer number of cycles needed to exercise those tests and that functionality calls for a different verification platform, and that’s when you really see the move toward hardware-based verification and FPGA prototypes. You get much, much higher execution speed that lets you execute much more software to verify your SoC as well as your system.”

At the end of the day, said Frank Schirrmeister, group director for product marketing of the System Development Suite at Cadence, the concept of ‘good enough’ may come into play when it comes to choosing. “The FPGA-based technologies may be good enough for some of the aspects, because the downside of taking longer to debug and being more complicated to bring up may be something a company can take into consideration. It’s cheaper than buying an emulator, but it may not get them to market as fast. For a company where every week counts, if it costs $10 million for a week of delay (as Qualcomm has said in the past), then an emulator makes perfect sense because it pulls your schedule in by a couple of weeks.”

Those are some of the considerations taken into account on a project-by-project basis, depending on the design size, time-to-market requirements, how much debug is needed on the hardware side, and how many software developers are being equipped. “If you have a smaller number of software developers, they can potentially use the emulator as well. Whereas if you have 200 software developers and a big design, then doing an FPGA-based prototype makes sense because it’s cheaper, and at that point you’re stable enough to give it to that many software developers,” he added.
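Schirrmeister’s cost-of-delay argument reduces to simple arithmetic. The sketch below uses the $10 million-per-week figure quoted above; the emulator price and the number of weeks pulled in are purely illustrative assumptions, not vendor figures.

```python
# Back-of-the-envelope schedule/cost trade-off for buying an emulator.
COST_PER_WEEK_OF_DELAY = 10_000_000  # dollars lost per week of slip (figure cited above)
EMULATOR_PRICE = 2_000_000           # assumed purchase price, not a vendor quote
WEEKS_PULLED_IN = 2                  # assumed schedule improvement over an FPGA prototype

schedule_value = WEEKS_PULLED_IN * COST_PER_WEEK_OF_DELAY
net_benefit = schedule_value - EMULATOR_PRICE

print(f"Value of pulling in the schedule: ${schedule_value:,}")
print(f"Net benefit after paying for the emulator: ${net_benefit:,}")
```

Under these assumptions the schedule value dwarfs the purchase price, which is why the calculus looks very different for a company where a week of delay costs far less.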


