Verification Engine Disconnects

Moving seamlessly from one verification engine to another is a good goal, but it’s harder than it looks.

Moving verification data seamlessly between emulation, simulation, FPGA prototyping and formal verification engines may be possible on paper, but it is proving more difficult to implement in the real world.

Verification still consumes the most time and money in the design process. And while the amount of time spent on verification in complex designs has held relatively steady over the years, mainly due to hardware acceleration and better education, it still requires a number of different tools because each has unique benefits and capabilities.

Emulation, for example, provides more cycles and capacity than simulation, while FPGA prototyping is faster for certain jobs. The downside of FPGA prototyping is that it takes longer to create benchmarks.

Formal verification excels at following one narrow thread, such as the integrity of a signal as it moves through a complex design, but it doesn’t work well for spotting problems across multiple functions on an SoC. Simulation, meanwhile, is much more limited in capability than emulation, but simulators provide fast turnaround and unparalleled debug capabilities. They also are so widespread that replacing every machine with an emulator would be price-prohibitive, and in many cases overkill.

That has given rise to the idea of a continuum across all of these engines, and some vendors have made a point of integrating their various engines. But for most chipmakers, this is harder than it looks. Some of their tools were developed by different vendors or were purchased at different times. A tool that is paid for and fully depreciated may be the go-to choice for some organizations. And even a tool that is new may be stretched beyond what it was designed to do.

“When you look at a continuum from the user perspective, it’s all about how can they save time as they progress through a flow,” said Steve Bailey, director of emerging technologies for Mentor Graphics’ design verification technology group. “What is more challenging is in the validation space, where you’re looking at system-level and chip-level types of activities. Even if you’re focused on just an IP block or a subsystem, but you’re trying to do it within the context of a full chip, the goal is to take all that effort that they’ve put into creating that verification environment and be able to re-use it between engines as appropriate. When they have to start over to adapt from one engine to the next, you still have to maintain all of that, and that becomes very painful. The less you have to do to create verification content that can be used across multiple engines, the better.”

Each tool used in verification has a unique value proposition. Some provide more granularity than others, while others provide faster throughput at a higher level of abstraction. And while some can be extended to move from more granular (slower or more limited in scope) to less granular (faster), each provides a different glimpse into a design.

“There is different visibility with each,” said Frank Schirrmeister, group director for product marketing of the System Development Suite at Cadence. “We’re just getting to the point where the same database can be used for emulation and FPGA prototyping. The challenge in prototyping is that the use-case switches from hardware enablement to software, so you don’t have the same need for visibility. We do have people going back to the earlier engines for debug purposes. If you want to optimize it to run at full speed, you have to do manual intervention, so your database is no longer the same between FPGA, emulation and simulation. So now the question is, ‘Did you introduce this bug, or was the bug there in the first place?’ That’s why switching between engines is still a problem.”

Art vs. science

But transitioning between one tool and another, and knowing when to use one versus another is more art than science. For one thing, engineers get comfortable with one tool and they tend to stick with it. As a result, they also get more out of it because they become more adept at using it. In addition, not all tools are available at all times to all engineers within an organization.

“One of the kernels in this verification continuum is FPGA prototyping, where debug is much harder,” said Krzysztof Szczur, technical support manager in Aldec’s Hardware Product Division. “FPGA prototyping is mostly used for software and firmware development. The software and firmware developers are quite different than hardware designers. In the big companies, they don’t talk to each other very often. When the software developers discover a bug, they report it to hardware designers. They just capture a sequence of transactions and report it back to the design team, who use a simulator or emulator to debug.”

That may sound like a straightforward exchange of data, but it isn’t always. And the choice of which tool to use, and when, isn’t always so clear-cut, either.

“The continuum of engines is pretty well established, but the workloads that run across it are the next layer,” said ARM’s director of models technology. “Our approach is that we have a set of workloads and migrate them to whatever engine is most cost-effective, given their state in the design process. Is it immature enough where you just need design visibility, or is it mature enough where you just want raw cycles? At ARM we can make a lot of investment to do a lot of that, but other customers don’t have the ability to progress their workload from engine to engine to engine. Sometimes you stay in the same EDA tool. Not everyone has the luxury or money to do that. The portable stimulus stuff will address some of the issues associated with migrating a test from one engine to another, but it’s not yet mature.”
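
To make that idea concrete, here is a minimal, hypothetical sketch of what retargeting a single test intent across engines could look like. It is not the Accellera Portable Stimulus standard or any vendor API; every class and function name below is invented for illustration.

```python
# Illustrative only: a toy "describe the test once, retarget per engine" layer.
# None of these names correspond to a real tool or standard.

from abc import ABC, abstractmethod


class EngineBackend(ABC):
    """One verification engine: simulator, emulator, FPGA prototype, etc."""

    @abstractmethod
    def run(self, transactions):
        """Apply a sequence of abstract transactions to the design."""


class SimulatorBackend(EngineBackend):
    def run(self, transactions):
        # Slow, but with full signal visibility for early debug.
        for op, addr, data in transactions:
            print(f"[sim] {op} addr={addr:#x} data={data:#x} (waveforms on)")


class EmulatorBackend(EngineBackend):
    def run(self, transactions):
        # Orders of magnitude more cycles, coarser visibility.
        print(f"[emu] streaming {len(transactions)} transactions at speed")


def dma_copy_test():
    """Test intent described once, independent of any engine."""
    return [("write", 0x1000, 0xDEAD),
            ("dma_copy", 0x1000, 0x2000),
            ("read", 0x2000, 0xDEAD)]


# The same intent is retargeted as the design matures:
SimulatorBackend().run(dma_copy_test())   # early: visibility matters most
EmulatorBackend().run(dma_copy_test())    # later: raw cycles matter most
```

The point is not the specific classes but the division of labor they suggest: the test intent stays constant while only the backend that executes it changes.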

There have been efforts by major EDA companies to combine some of these capabilities into a single tool, notably in emulation where tools are more expensive as well as more profitable—meaning there is more incentive to add more features and capabilities. But simulator sales continue to ramp even though emulation provides more cycles and capacity.

“Once you have a tool that does more than one thing, it impacts schedule, budget and there are questions about whether you have the skill set,” said Rajesh Ramanujam, product marketing manager at NetSpeed Systems. “Trying to solve everything in one place is difficult. So even if you have an emulation team, it’s hard to get engineers to change tools. The topic of re-use is very important to the industry. It has to start at the beginning. Every company can come to their own scheme, but it has to be able to run on multiple different tools. Simulation is used for low-hanging fruit. Once you do that, you move to emulation, which is faster but more expensive. But it’s hard to debug in emulation. You want to be able to run it cycle-accurate.”

Consistent views of data

One approach to solving this is to create common graphical user interfaces, which the major EDA companies have done for emulation and simulation.

“There are different angles here,” said Schirrmeister. “There are simulation, emulation, FPGA prototyping and formal. Those are the four pieces. On the GUI side, the question is what are you visualizing. Is it one tool for everything? Maybe not. But if you look at specific aspects, if you have different GUIs for everything, it becomes really difficult to compare things. You need a common database. But the software guys don’t do waveforms and the hardware guys don’t do software code. You need a way to bring them together, and if you have too many things going together it becomes very difficult. Then you need a way to serve the users in a way they can look at it. You need different viewpoints, but you move the commonalities forward.”

A second approach is with a common database. “The design database is what integrates different views,” said Aldec’s Szczur. “The engines should be able to use a single database that is standardized. The big three will create their own tools for the biggest ASIC companies, but there are a lot more design houses and developers.”

There are a lot of files in those databases and a number of different approaches to managing that data. “It’s data, but it’s managed by the tool in some way so the user doesn’t have to deal with it all,” said Mentor’s Bailey. “You get data in and out using an API rather than standardize the database. The key is to provide good enough debug strategies so that you don’t have to go back to a slower engine to recreate it.”
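
As a rough illustration of the API-over-native-databases idea Bailey describes, the hypothetical sketch below leaves each engine’s results in its own format and puts a thin, common query layer on top. The class names, file names and sample values are invented for illustration; a real adapter would parse the vendor’s actual waveform or trace formats.

```python
# Hypothetical sketch: access engine-specific debug data through a common API
# instead of standardizing the underlying database. All names are invented.

from abc import ABC, abstractmethod
from typing import Iterator, Optional, Tuple


class DebugDataSource(ABC):
    """Common read-only API over an engine's native debug database."""

    @abstractmethod
    def signal_values(self, signal: str) -> Iterator[Tuple[int, int]]:
        """Yield (timestamp, value) samples for one signal."""


class SimulatorWaveDB(DebugDataSource):
    def __init__(self, path: str):
        self.path = path  # e.g. the simulator's native waveform dump

    def signal_values(self, signal: str) -> Iterator[Tuple[int, int]]:
        # Placeholder: a real adapter would parse the waveform file here.
        yield from [(0, 0), (10, 1), (20, 1)]


class EmulatorTraceDB(DebugDataSource):
    def __init__(self, path: str):
        self.path = path  # e.g. a captured emulation trace

    def signal_values(self, signal: str) -> Iterator[Tuple[int, int]]:
        # Placeholder: a real adapter would decode the trace capture here.
        yield from [(0, 0), (10, 1), (20, 0)]


def first_mismatch(a: DebugDataSource, b: DebugDataSource,
                   signal: str) -> Optional[int]:
    """Compare one signal across two engines through the common API."""
    for (t_a, v_a), (_, v_b) in zip(a.signal_values(signal),
                                    b.signal_values(signal)):
        if v_a != v_b:
            return t_a
    return None


print(first_mismatch(SimulatorWaveDB("run1.wdb"),
                     EmulatorTraceDB("run1.trc"), "dma_done"))
```

The tradeoff is the one Bailey implies: each engine keeps its own storage and tooling, and interoperability lives in the adapters rather than in a shared format.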

While efforts continue to smooth out the interactions between these tools, this will remain a work in progress. New challenges are added at each new process node and as advanced packaging begins to creep into designs. How much extra time that will add to the verification process is unclear, as is how it ultimately will affect the mix of tools. Also unclear is what will happen as new fabless companies enter the market in the IoT space, and whether they will rent tools on EDA vendors’ servers as a shared resource in the cloud or buy their own tools.

But one thing is very clear: There is no shortage of upside for verification, regardless of how seamless the flow of data between the various engines becomes.


