Executive Insight: Raik Brinkmann

OneSpin’s CEO looks at why formal verification is suddenly a must-have technology.


Raik Brinkmann, president and CEO of OneSpin Solutions, sat down with Semiconductor Engineering to discuss where and why formal verification is gaining traction, and how it fits alongside other verification approaches. What follows are excerpts of that conversation.

SE: Formal has been around for a while, but adoption has always been slow because most engineers don’t understand assertions. What’s changed?

Brinkmann: What’s changed is that we’ve figured out how to actually make people productive with assertions. People think that writing assertions is the big problem. It is not that. It’s understanding what they mean, and making sure you can use the tool in an easy way to prove them, disprove them, or run them as random checks. What’s also changed is that we’ve invented something for formal coverage, so people get feedback on whether they’ve proven an assertion and on what they’ve actually achieved. Otherwise it is very difficult to comprehend, ‘I’ve proven it, but what does it actually mean?’
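For readers unfamiliar with what such an assertion looks like, here is a minimal sketch in SystemVerilog Assertions. The signal names (clk, rst_n, req, gnt) and the handshake timing are illustrative assumptions, not from any specific design; the point is that a formal tool tries to prove the property for every reachable state, and a cover target gives the kind of feedback about what was actually achieved that Brinkmann describes.

```systemverilog
// Hypothetical handshake block: every request must be granted
// within 1 to 3 cycles. A formal tool attempts to prove this for
// all reachable states, not just for simulated stimulus.
module req_gnt_check (
  input logic clk,
  input logic rst_n,
  input logic req,
  input logic gnt
);
  // The property under proof: once req is high, gnt follows within 3 cycles.
  property p_req_gets_gnt;
    @(posedge clk) disable iff (!rst_n)
      req |-> ##[1:3] gnt;
  endproperty

  a_req_gets_gnt: assert property (p_req_gets_gnt);

  // A cover target: ask the tool to show one trace where the handshake
  // actually completes, which is feedback on what the proof really covers.
  c_req_gets_gnt: cover property (
    @(posedge clk) disable iff (!rst_n) req ##[1:3] gnt
  );
endmodule
```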

SE: For a long time, it was almost an academic project. Right?

Brinkmann: Yes, and from the academic side you’d have to fiddle around with all the options for all these proof engines to make it work. Now, good tools have a very good out-of-the-box experience. For the majority of the assertions that you write, you don’t have to tune anything; you just run it and it will come up with a solution or answer in a reasonable amount of time. As a user, you don’t need to understand so deeply how different engines work, what goes together, or where to apply what. It’s all hidden behind the scenes, with the tool making the decision for you about which engine to choose. That’s made it a lot easier. People can now focus on writing assertions and understanding what they mean, but don’t need to dive deep into the technology in order to use it.

SE: Something else has changed on the other side, too. People now have a reason why they have to do this, whereas in the past they could probably figure it out some other way.

Brinkmann: On the productivity side, people realize now more than they did before that catching bugs early is really important; it is not just something that marketing people say. It’s just too expensive to find them later in the big simulation environment.

SE: Is that a function of the complexity of 10nm or 7nm, or 16/14nm?

Brinkmann: It is, to some extent, because you just fit more functionality onto the chip. You integrate much bigger systems. A single bug can kill the whole thing. You eventually find them, probably with simulation or emulation, but it is really painful. People realize it is much more effective, not just necessary, to find the bugs earlier.

SE: And when you do find an error, you know where it is, as opposed to finding an error and then having to go through all the code.

Brinkmann: Exactly. The debug is much easier if you go in smaller units and weed out the easy things early. Then you can concentrate on the hard-to-find interconnection issues with whatever testbench you have.

SE: What are the other drivers behind adoption?

Brinkmann: One thing is automotive. People have to get their chips certified for certain safety standards. That implies people use more verification planning techniques and break down their verification effort upfront into activities, and that’s very useful for formal. Formal is very effective if you can plan for it and know upfront where you can and should use it. If you make that decision late, you can still use formal very effectively for bug hunting, but it’s useful to think about formal as the only technique for certain parts of your design. And that’s what people do today. That has changed as well.

SE: Formal has potential uses when systems are connected to other systems, tracing the flow of signals and data, and whatever else you have to be doing. Is that happening, or will it happen in the future?

Brinkmann: I’m not sure, because formal is usually not good at tracing things over long periods of time. If you need to do that, then you also need to apply some special techniques, abstractions and things like that. So you can do it, but it is not yet at the stage where we are with other techniques. Formal is more applicable when you have a lot of things going on at the same time, a lot of things that can happen in different ways, and you want to figure out, in a short amount of time, whether it is always the good thing that happens in the end. If you are looking to trace things through a long sequence, it’s pretty difficult with formal and will stay so.

SE: Would it work for new architectures, such as packaging architectures like 2.5D and 3D?

Brinkmann: Yes, formal is still a functional verification technology. Right now we don’t see any big advantages to doing it at the physical level, or analyzing analog or other things that come later in the process.

SE: One of the big problems is that there is so much more that needs to be verified and tested. Now you need a whole suite of tools to solve the problem. It’s just that much more complex. How much is formal growing?

Brinkmann: There are three legs of verification. One is emulation, at the end. Before that you use simulation for integration questions, running certain subsections of the system. Before that is formal, which is a technique best suited to smaller blocks where you don’t have to write a huge testbench, where you can’t afford to create a UVM environment for a single block. It doesn’t pay off in terms of agility. People need freedom when designing things, and they turn around different microarchitectures and implementations in order to meet performance and other requirements. If you start to use a high-end UVM testbench for that while still developing the function, you’ll spend a lot of time doing it. Formal is much better at figuring things out on a smaller scale, providing you with traces for scenarios you want to test. It’s much more productive.

SE: Is it being used now way up front to see if something works?

Brinkmann: It is applied all across the flow. If you think about applications like connectivity checking or post-silicon debug or other things, you can apply formal across the flow. But if you go for functional verification, then formal has its place earlier in the process rather than later. On the functional verification side, it is clearly the dominant approach for early verification.
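As an aside, connectivity checking is one of those across-the-flow applications, and it is often expressed as simple assertions tying a top-level pin to a block-level pin. The following sketch is illustrative only; the signal names and the assumed one-cycle pipeline stage are hypothetical.

```systemverilog
// Illustrative connectivity check: when the top-level select is high,
// the pad enable must reach the core enable input, here assuming one
// register stage in between. A formal tool can prove this exhaustively
// without a testbench.
module conn_check (
  input logic clk,
  input logic top_sel,
  input logic pad_en,
  input logic core_en
);
  a_en_connectivity: assert property (
    @(posedge clk) top_sel |-> ##1 (core_en == $past(pad_en))
  );
endmodule
```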

SE: Who’s adopting it now? Has the company mix changed from what it was several years ago?

Brinkmann: It’s being adopted in larger companies. Medium-size companies adopt it as well, especially in automotive, and other mission-critical areas such as aerospace and defense.

SE: Aerospace and defense have been using it for a while, correct?

Brinkmann: Not to that extent. They’ve mostly relied on building prototypes and extensively testing them. That’s no longer scaling because it is too complex. We also see it in the FPGA space. Lots of companies don’t just use FPGAs as prototyping systems. They also use them as production systems. You can’t easily verify that by just probing it. You need to build the same methodology around it as you do in ASIC design, including the formal side, the simulation and the testbenches. Emulation you don’t need, since the FPGA system is a prototyping system itself. But in the end, it’s these three stages you go through. We see these companies adopting formal there as well.

SE: How does that play out in terms of the speed gain? Everyone looks at formal as being another thing you have to do along the way. Now, if it becomes part of the flow, where do you save time?

Brinkmann: You save time because it is much quicker to run formal than simulation. You don’t need to write the stimulus or the testbench. You have to constrain it to some extent, of course. There is some good verification IP with prepackaged checks that you can apply quickly, so you don’t need to spend time on that. It is really hard to come up with the UVM testbench. It’s really a lot of work. Formal is much easier to apply in the beginning. You need less effort. You find bugs quicker. That’s where you get your speed. You still want to run your UVM tests later on to find other types of bugs, but you don’t want to find the easy bugs late. It’s much more productive doing formal, meaning you save time.
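The constraining Brinkmann mentions is typically expressed as assumptions rather than a testbench. A minimal sketch, with an assumed opcode encoding purely for illustration:

```systemverilog
// Sketch of constraining a formal run: an assumption restricts the
// input space the tool explores, so it does not report failures on
// stimulus the real environment can never produce.
module input_constraints (
  input logic clk,
  input logic valid,
  input logic [7:0] opcode
);
  // Assume the environment never drives reserved opcodes above 8'h0F.
  m_legal_opcode: assume property (
    @(posedge clk) valid |-> (opcode <= 8'h0F)
  );
endmodule
```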

SE: Is it node-dependent or market-dependent when people are using formal?

Brinkmann: It is more market-dependent. The node doesn’t really matter, because when you apply formal you work at the RTL level, and it is the system and the scope of what you are trying to achieve that define what makes sense, rather than the process you use to implement it.

SE: Is formal being applied to other areas, such as software?

Brinkmann: It is happening to a certain extent. We are deploying formal for our SystemC/C++ verification. That is closer to software than anything else before. It’s one step up. People ask if we can embed a certain function from software into that system in some way, and analyze it too. We are getting there, step by step.

SE: What is the outlook for that?

Brinkmann: It’s a convergence. There are already software verification tools out there that do some form of formal. They usually approach it in a different way. The technology underneath is formal, but it’s still a different model than for hardware verification. These systems are moving more toward functional verification, but right now they don’t usually do functional verification. They verify security aspects of software or check for certain bad conditions.

SE: It’s almost like software test, right?

Brinkmann: Yes, but it is formal used under the hood. Formal property checking for software is a bit esoteric and not a major use. Moving into SystemC and system-level descriptions, by being more abstract about what the system should do, it is moving itself more toward software. I can’t make a prediction on when it is going to be there. It will take a while.

SE: It appears to make a lot of sense if you can go right from the software to the hardware and verify the interaction.

Brinkmann: There are still limitations on what you can do on the hardware. As long as things are synthesizable, to some extent you can build a formal model that is similar to what we do for RTL. Once you go for general software verification, you need to apply some special techniques for handling such things as memory that is not fixed in size—dynamic memory allocations, for example. That’s a software-specific issue.

SE: You can also look at adding efficiencies to performance and power, and pinpointing where it is going wrong. That’s a new way of looking at formal. What else can we do with this?

Brinkmann: Formal is a good technique to figure out scenarios and to use as a test generator. That is a very good use for it. It’s not just proving things, but also a way to peek into a certain situation. For power and performance, that’s a good thing to do. It’s much more productive to let the tool figure it out than to come up with that yourself. You don’t need to have the deeper understanding of how things are working, as the tool will do it for you. You can use formal very well for that. On the power verification side, that is the big thing. On the performance side, if you make changes to the design that impact your microarchitecture but shouldn’t change the function, that’s where formal is really good, and it’s one of the big uses of formal at the moment.
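One common way to peek into a certain situation, as described above, is a cover property: the tool is asked to construct a trace that reaches a scenario, and that trace serves as a generated test. The FIFO flags in this sketch are assumed names for illustration.

```systemverilog
// Sketch of formal as a test generator: instead of writing stimulus by
// hand, ask the tool to produce a witness trace reaching a corner case,
// here a FIFO going full and later draining to empty.
module fifo_scenarios (
  input logic clk,
  input logic rst_n,
  input logic full,
  input logic empty
);
  c_full_then_empty: cover property (
    @(posedge clk) disable iff (!rst_n)
      full ##[1:$] empty
  );
endmodule
```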

SE: Is that a level of granularity in the design that you can’t otherwise get?

Brinkmann: What formal can do well is focus on certain areas of interest. For simulation, you need the whole environment and the whole chip. In formal, you can cut away areas you are not interested in, that are not relevant, and focus on a specific part. For example, with a state machine that you re-encode, you don’t really need to think about things from the outside.

SE: Are you finding more use for formal as the world shifts from homogeneous to heterogeneous cores in a design?

Brinkmann: In that setting, we see customers using formal for verifying application-specific processors and the firmware on them, in order to verify the subsystem has a certain function. That’s been done, and that’s where the software is hardware-like, where you have a certain number of steps in the software that drive the processor to do something, and then you analyze the subsystem, including some hardware. That’s what formal is good at. With heterogeneous systems, you can’t really analyze a full-scale implementation and reason about the mechanisms of the system. You need to abstract, and have a model that is more abstract than that, in order to analyze it.

SE: Where are you finding the greatest adoption geographically?

Brinkmann: It really depends on what type of formal you are looking at. In Europe it’s automotive. There aren’t too many automotive companies in the U.S. doing chip design, but they are really trying to understand the technology behind it, because they have to understand more than in the past for self-driving cars. In the past, they didn’t care about what chip was in there. They bought a control box with a processor in it and they modified the software, but the hardware itself they never understood to that extent. Now that is changing. We see Audi collaborating with Nvidia. It is becoming an interesting market, with the layers above the chip-level supplier getting involved.

SE: OneSpin is finding opportunities there, right?

Brinkmann: Yes, we are strong in that area because we have been working with automotive companies and have long-term relationships around how formal can be applied to their needs. That’s where we excel at the moment. There’s also a lot of push toward general functional verification. That’s where we see a lot of adoption of formal. It’s not so much specific to automotive; it’s just general formal that is picking up quite a bit. FPGA use is also pretty high in the U.S. There is a lot of interest there. In Asia it’s more high-level synthesis and FPGA. Much of Asia is fond of high-level synthesis. Business is picking up everywhere, but it’s a little bit different in each region.

SE: How about mobile and other markets?

Brinkmann: It’s more infrastructure, aerospace/defense, and mission-critical applications of hardware. That’s where formal is really a must-have today.

SE: What about applications like smart metering, where little thought has gone into security? Can they trace the signal path?

Brinkmann: Safety and security go together to some extent. There’s a lot of room for formal in security. The problem is that security is not just a single-point question. It’s a whole stack. There is a whole methodology for how to make things secure. You have to plan for it. You don’t make something secure bottom-up. You have to design it to be secure top-down. It’s very hard to put security into something that already exists. You can apply formal to verify these things very well, but you need to go to the architectural level and break it down again.


