Carving Up Verification

Cadence’s verification chief discusses current best practices for dealing with design complexity.

Anirudh Devgan, executive vice president and general manager of Cadence’s System & Verification Group, sat down with Semiconductor Engineering to discuss the evolution of verification. What follows are excerpts of that conversation.

SE: What’s changing in verification?

Devgan: Parallelism, greater capacity and multiple engines are certainly important verification elements. In addition, today’s verification has to be both fast and smart. I’ve been doing EDA commercially for about 10 years now, and speed has always been part of the equation, but previously it wasn’t a focus to the extent that it should have been. That’s definitely changing with the evolution of design complexity. The second critical thing in verification today is how you use the verification engines. You don’t just have one engine; you have the four main engines of verification: formal, simulation, emulation and prototyping.

SE: But with rising complexity, not to mention new application areas such as automotive, when are you done?

Devgan: It’s very difficult to know that. You do a bunch of simulations in verification, but did you cover one part of the design too much? Or did you miss a part entirely? There needs to be a smart fabric. There are opportunities for things like machine learning and big data, which are talked about in other fields. These techniques apply to verification as well, but they require speed. If you look at simulation, farms now have multiple CPUs, but designers haven’t traditionally used multiple CPUs. It’s one of the last areas to be addressed in EDA. About 60% to 70% of the compute is often simulation. That’s why we bought Rocketick last year. Parallelism is key.
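
As a rough illustration of the multi-core parallelism Devgan is describing, the Python sketch below farms independent simulation jobs across CPU cores and then merges their coverage results. The test names, the stand-in run_test function and the bin counts are hypothetical; this is not Cadence tooling, just the general dispatch-and-merge pattern that lets a regression scale with available CPUs.

```python
# Hypothetical sketch: dispatch independent simulation jobs across CPU cores
# and merge their coverage results. run_test is a stand-in for invoking a
# real simulator; nothing here represents actual Cadence tooling.
import random
from multiprocessing import Pool


def run_test(test_name: str) -> dict:
    """Pretend to simulate one testcase and return the coverage bins it hit."""
    random.seed(test_name)                       # deterministic per test
    hit_bins = {f"bin_{random.randrange(100)}" for _ in range(20)}
    return {"test": test_name, "bins": hit_bins}


def merge_coverage(results: list[dict]) -> set:
    """Union the coverage bins hit by every test in the regression."""
    covered = set()
    for r in results:
        covered |= r["bins"]
    return covered


if __name__ == "__main__":
    tests = [f"test_{i:03d}" for i in range(64)]
    # Each test is independent, so the regression scales with the core count.
    with Pool() as pool:
        results = pool.map(run_test, tests)
    covered = merge_coverage(results)
    print(f"{len(tests)} tests hit {len(covered)} of 100 coverage bins")
```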

SE: How about FPGAs?

Devgan: We’ve always focused on emulation, which is doing very well, but we historically didn’t focus enough on FPGA prototyping. One value of FPGA-based verification is that it’s fast compared to emulation. However, it can be limiting when it comes to debugging and capacity. Designers need both custom processor-based emulation and FPGA-based prototyping. We’ve launched a very serious effort into FPGA prototyping now, and we do it in a way that is congruent with emulation. That way you can go back and forth between the two. Being fast has been possible for a while, but some refinement was needed: first, in logic simulation generally, and second, in how emulation and prototyping fit together.

SE: What you are looking for here is a methodology that says, ‘I’m going to use this technology for this and this other technology for this.’ But changing the flows inside many of these companies is not so easy. They’ve already established what they do and allotted resources accordingly.

Devgan: One-third of my job is visiting customers, and they look to us for recommendations and change. They are looking for change because designs are increasingly more complicated. ‘Do I run formal, or do I run simulation or emulation? And when do I run it?’ The best way to establish credibility with our customers is to have good core engines: a good simulator, a good emulator, a good prototyping platform and a good formal engine. Then you can have a discussion about the environment and logistics.

SE: What about machine learning?

Devgan: We use the same verification process, but we are automating it more with machine learning. Today, if you run place and route one time, running it the next time uses very little information from the previous run. It’s the same thing in verification. This has been EDA’s weakness for years. If I use 80% of my compute cycles to run 1 million verification cycles, I get some amount of coverage. But if the designer wants to get to 95% to 99% coverage, or whatever the metric is, how do you use that information to decide which additional simulations, formal runs or emulations to do? That process is very manual and takes up to six months, even if you have best-in-class engines. The challenge is reducing that time and quantifying how much of your verification metrics you are covering.
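
To make the coverage-closure loop Devgan describes a bit more concrete, here is a minimal sketch that assumes per-test coverage data mined from earlier regression runs is available as simple sets of covered bins. It greedily picks whichever test closes the most remaining coverage holes instead of re-running everything blindly. The test names, bins and the pick_tests helper are illustrative, not part of any Cadence flow.

```python
# Hypothetical sketch of coverage-driven test selection: given coverage data
# from previous regression runs, greedily pick the tests most likely to close
# the remaining holes. Names and bin counts are illustrative only.

# Coverage bins hit by each test in an earlier regression (assumed data).
history = {
    "smoke_basic":  {"b0", "b1", "b2", "b3"},
    "random_long":  {"b2", "b3", "b4", "b5", "b6"},
    "corner_reset": {"b7", "b8"},
    "corner_power": {"b8", "b9"},
    "directed_bus": {"b1", "b5", "b9"},
}
goal = {f"b{i}" for i in range(12)}           # b10, b11 were never hit before


def pick_tests(history: dict[str, set], goal: set) -> tuple[list[str], set]:
    """Greedy set-cover: repeatedly choose the test adding the most new bins."""
    remaining, order = set(goal), []
    while remaining:
        best = max(history, key=lambda t: len(history[t] & remaining))
        gain = history[best] & remaining
        if not gain:                          # leftover bins need new tests
            break
        order.append(best)
        remaining -= gain
    return order, remaining


if __name__ == "__main__":
    order, holes = pick_tests(history, goal)
    print("Run next:", order)                 # highest-value tests first
    print("Bins no existing test covers:", sorted(holes))
```

The leftover bins at the end are exactly the holes that no existing test touches, which is where effort would shift to new directed tests, formal or emulation rather than more of the same simulation.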

SE: How much of your methodology now says, ‘Here’s what you need, here’s the best way to do it, and here are the available resources and the multiple ways to tackle it?’ Maybe that includes cloud-based verification, too.

Devgan: We support that. We want to enable the cloud, whether internal or external. We have the four key engines and the intelligent fabric, or smart environment, to make things better. Then you have four solutions on top of that. One of them is cloud. Another is application-specific, such as verification for automotive versus server. Each is different. Third is throughput. Fourth is metrics. There aren’t enough good metrics in verification. When it comes to the cloud, customers can host emulation in their own data centers, but we are also working on a cloud-centric architecture for all our engines as more and more customers supplement on-premise computing with off-premise computing. We need to support that.

SE: Is that changing? It’s getting expensive to own everything, and startups in particular might not want to own anything.

Devgan: It may be easier in verification than implementation. With implementation, you have DRCs, and customers are more sensitive about that data. They’re a bit more open on simulation.

SE: They’re also buying a lot more IP these days. That makes you an integral part of what they are developing already, right?

Devgan: There are some companies that have massive compute farms internally. They may now want to go to external clouds. We want to make sure that we enable cloud access from a hardware standpoint. The value of the cloud is with the amortization of hardware. Our business model is already set up for cloud support. One company told us its big transformation was going from perpetual to time-based licensing, which is similar to how companies are starting to think about cloud computing.

SE: We’re seeing more custom design and more hardware-software co-design and verification than in the past. How does that affect what you are doing here, because these are not the massive billion-unit chips? That’s pretty much limited to mobility, and then the numbers drop off.

Devgan: You’re right. What we see is more systems companies creating their own silicon. We’re working with car companies on this. And if you look at aerospace and defense, those markets are changing, too. Again, this is where the four verification engines come into play. Simulation continues to grow, and emulation, FPGA prototyping and formal have become much more important. Emulation is gaining more momentum, but if you have hundreds of software developers, you don’t necessarily want to give each of them an emulator. That’s not scalable. These companies want a reference platform to give to the software engineers, but a virtual platform is not accurate enough for software development. They want an RTL-based platform with processor fast models. Some of our customers, for example, will boot up Windows before they tape out the chip. The same thing applies to mobile phone companies. The volume may be lower with the systems companies, but they’re designing the chip because the value is pretty high.

SE: They can also amortize that across the sale of the entire system, right?

Devgan: Yes. A car company may only sell a few million cars, but the value is high enough to justify spending more money on designing the chips. With all the mission-critical requirements for safety, security and reliability, engineers have to optimize the design. This is good for our industry. The number of semiconductor companies is shrinking, but system companies think it’s cool to do silicon again. That balances the equation, fueling growth.


