Experts at the Table: part two. Formal has to live in a simulation world and that can create difficulties. Finding the right talent can also be a challenge.
Semiconductor Engineering sat down to discuss the right and wrong ways to apply formal verification technology with Normando Montecillo, associate technical director at Broadcom; Ashish Darbari, principal engineer at Imagination Technologies; Roger Sabbagh, principal engineer at Huawei; and Stuart Hoad, lead engineer at PMC-Sierra/Microsemi. What follows are excerpts of a conversation that took place during Decoding Formal, an event organized by Oski Technology and sponsored by Synopsys.
SE: In simulation we have metrics. We all know how bad they are, and yet we trust them. Where are we with formal when it comes to coverage and closure?
Darbari: Consider lint. You run it without a testbench. It does find bugs and gives you a good feeling about hitting lines of the RTL, but not in terms of the feature coverage we are used to in simulation. If you are writing assertions, then you have thought about requirements, which is what I equate to completeness. Completeness asks: have we covered the space of the requirements? There is a missing part of coverage in simulation, and that is mutation, which checks whether you can actually make the checkers fail. We should unify it all and come up with one number.
Sabbagh: There is a feeling amongst formal guys that formal is perfect because it provides a proof, and so we want a coverage metric that is equivalent. In reality, we don't have that in simulation, and we have been living with imperfect metrics for years. What we have is good enough. We can check whether assertions cover all the functions of the design based on the structural coverage points, whether those points fan into the assertions, and whether they are included within the proof region. With bounded proofs, we can tell whether the structural coverage points are within the depth of the proof. We have the ability to prove whether things are reachable, which simulation cannot do. We have a slew of things, and we undersell the value of what we have. The ability to sign off with formal is much more rigorous than with simulation.
Hoad: I am not sure it is more rigorous. It is just different, and understanding how it is different is key.
Sabbagh: Unless you use mutation coverage in simulation, which is very expensive, you really have no observability coverage metric. All you have is a controllability metric, which says I was able to stimulate some part of the design and cause something to be exercised. Did it produce the right result? I don't know. It may have produced a bad result that did not propagate to a checker. Even functional coverage does not tell you that the testbench was checking any results at that time. It is very weak, and yet we rely on those metrics.
Montecillo: Management is comfortable with simulation coverage today. The holy grail of formal is to get to a point where our coverage can plug into their coverage, the two will merge, and we will get the 100% coverage that everyone wants to see. We are so tied to simulation coverage. I am not sure if we can get there, because this is a very difficult problem. For formal, we rely heavily on test planning. The test plan tells us which checkers we need to write and which constraints need to be written. Then we rely on sequential depth analysis to get there, and we also look at unreachability and proof depth.
Darbari: In the short term we can generate metrics that blend with simulation.
Hoad: We are getting there.
Sabbagh: But is that the right thing to chase? Just because simulation has been doing it that way, why can't we raise the bar a little? It is time to join hands and really sort it out.
SE: What are the biggest breakthroughs in the past year in terms of usage and deployment of formal?
Hoad: Usability. Increasingly, anyone can use a formal tool and get results quickly. If they are so motivated, they can then try harder to do different things with the tool. The GUIs that modern formal tools have are the biggest development.
Montecillo: In the past five years, the engines have also developed. Five years ago, I had to do abstraction in order to get deeper. With some of the engines I am using today, I don’t have to do that. It does it for me and allows me to get deeper into the design. Coverage is making progress even though it is not yet where we want it to be.
Sabbagh: Methodology is one of the things that makes or breaks the game. The invention of the IC3 algorithm has been a game changer, and it has been adopted by almost all of the vendors. Because of this, scalability has increased. Debug is a task that takes a lot of time, and better usability has helped with this. Not all EDA tools are quite there yet.
SE: What is missing?
Darbari: Engineers need to make chips, and those chips have to be designed and verified. Somewhere along the line, the debug story got lost, and the ability to get to the root cause has been a big problem. People continue to do what they have done in the past.
Sabbagh: It is easy to throw up a counterexample, but one of the big challenges for formal is how you debug an unreachable point. What do you do? The tools don't provide much support. Still, we have come such a long way: we have standard assertion languages, we have coverage metrics, and we have new engines. This has made it more practical to use formal.
Montecillo: But it still has a long way to go. We need answers to the scalability problem. We are still looking at small blocks when what we want to accomplish is something bigger. Today we either have to break the design up, do abstractions, or isolate things.
Sabbagh: It comes back to what you want to do with formal. If you want to do end-to-end formal, where you want to prove the absence of deadlock, then you will have to be smart about it. We need to keep evolving.
Hoad: We need to make formal tools more predictable as well. That is very hard and comes with experience, but if you can't get predictable results, then you cannot realistically apply formal to a new problem, because you can't deal with that level of unpredictability.
Sabbagh: Formal will improve predictability.
Hoad: Only for a well-defined methodology and well-defined problem.
Darbari: It is a matter of using the right thing for the right task.
Sabbagh: With simulation-based methods, we are getting to 90% or 95% coverage, and the last few percent is where the unpredictability comes in.
SE: What has been the biggest challenge with adoption?
Darbari: Education. It is about making certain that people understand what formal will bring, setting expectations correctly, and then enabling people to do it. It is about making sure they communicate the right things to upper-level management, handling the flow of information from engineer up to manager, and having them all see it the same way. It is hard work.
Sabbagh: There are various levels of acceptance. I am part of a central team that deploys formal for our internal customers, and across the different design groups there are various levels of resistance or acceptance. It depends on who you are working with. Some of the groups are adopting formal themselves and may want help. Others want us to do it for them, and for some it is like pushing on a rope.
Hoad: Yes, there is inertia, and that depends upon the problems people have had with formal in the past.
Montecillo: To overcome that inertia, there is a resource issue and a talent issue. Education is definitely up there. The next question is: how do we get the talent into the teams so that they can deploy formal? The biggest resistance I find is that teams cannot find the necessary talent. Should we start converting some dynamic verification guys or get new people? And if we do get additional resources, where do you find these people? When I tried to hire people last year, I sent out a request, and all the resumes I got back had UVM experience. Not one of them had formal experience. The ones we do find are kids just out of college who are willing to do this work. Some of them, with no background in simulation, are great. They have not been corrupted.
Hoad: Right. If you think of things in terms of directed test cases, it becomes difficult to think about how to construct a formal test.
Montecillo: When we take people from simulation, that is the first thing they try to do. They write a directed test in formal, and we have to reverse that thinking, which takes some time.
SE: Any advice for verification teams considering formal?
Sabbagh: Go for it.
Montecillo: Attack the low-hanging fruit, such as connectivity checking, bug hunting, and automatic assertion/property generation. Those will show some immediate success. Don't tackle something bigger than you can handle. Once you fail, your chances of going at it again are limited.
Hoad: Plan it so that you get good value.
Darbari: Go after things with a known value-add.
Related Stories
Formal Confusion Part 1
What is the best way to apply formal verification? Some of the industry’s top users have a difference of opinion.
Open Standards for Verification?
Pressure builds to provide common ways to use verification results for analysis and test purposes.
A Formal Transformation
What happens when you put formal leaders from all over the world in the same room and give them an hour to discuss deployment?