Formal Abstraction And Coverage

What experts in formal verification are saying behind closed doors.

For the past three years, Oski Technology has facilitated a gathering of formal verification experts over dinner to discuss the problems and issues that they face. They discuss techniques they have been attempting with formal verification technologies, along with the results they have been achieving.

Semiconductor Engineering was there to record that conversation and to condense it into the key points. Previous summits were reported in these stories: Can Formal Replace Simulation? and A Formal Transformation.

In order to promote free conversation, the participants, who are listed below, asked not to be quoted directly. So all comments have been paraphrased and made anonymous.

Kamal Sekhon, formal verification engineer for Oski, kicked off the evening. She talked about work Oski had been doing with Fungible, a startup in the networking space whose team members were more familiar with simulation than formal. Oski persuaded them to consider a block for formal verification. While initially reluctant, they agreed, but continued to use simulation after the formal verification had been completed.

Sekhon described the strategy used for the verification and concluded that formal found 23 bugs in the design. So far, simulation has found no additional bugs, apart from one that was accidentally introduced later and was caught by the formal regression suite. When the block was integrated, emulation did turn up one bug. This was a case where the verification of the block had been over-constrained, based on a software use case that was not fully understood.

The group had plenty of questions about the time savings from using formal. Sekhon reported that a direct comparison is not possible, given that a traditional approach had not been run in parallel, but noted that the formal verification had taken about five weeks. She estimated it would have taken three months with simulation.

Using formal for discovery
A key aspect of the verification approach was to create a formal reference model. Vigyan Singhal, CEO of Oski, explained that this is very similar to one created for simulation, just implemented differently. “With assertions all you can do is find local bugs. The goal was to find all of the bugs.”

One major difference with this approach is how bugs are found. With simulation, one bug is found at a time. It has to be debugged, fixed, and then the simulation repeated. Formal found many of the bugs at the same time, which was reported to be a much more efficient process.
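
To make the reference-model idea concrete, here is a minimal sketch in Python using the z3 SMT solver. The saturating adder, the signal names, and the wrapping-add "implementation" are all invented for illustration, and Oski's actual flow builds the reference model against the RTL inside a formal tool rather than an SMT script, but the end-to-end check is the same idea: compare implementation behavior against a reference model and ask the solver for any diverging input.

```python
# Minimal sketch (hypothetical block, not the Fungible design): comparing an
# "implementation" against a formal reference model with the z3 SMT solver,
# so that any divergence is reported as a concrete counterexample.
from z3 import BitVec, BitVecVal, ZeroExt, If, UGE, Solver, sat

a = BitVec('a', 8)
b = BitVec('b', 8)

# Reference model: 8-bit saturating add (what the spec says should happen).
exact = ZeroExt(1, a) + ZeroExt(1, b)                # 9-bit exact sum
ref = If(UGE(exact, BitVecVal(256, 9)), BitVecVal(255, 9), exact)

# "Implementation": a plain wrapping add that silently drops the carry.
impl = ZeroExt(1, a + b)

s = Solver()
s.add(ref != impl)                                   # ask for any diverging input
if s.check() == sat:
    m = s.model()
    print("bug: a=%s b=%s diverge from the reference model" % (m[a], m[b]))
else:
    print("implementation matches the reference model for all inputs")
```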

Fungible is not ready to consider signoff based on formal yet. Part of the problem is that downstream verification still has to be performed. “You cannot do signoff based on RTL simulation. You need to know what libraries are going to be used, synthesize it and back annotate, and then run simulation again to make sure that timing constraints are all correct. Given that will still be required for signoff, can you rely on formal for the functional verification?”

Another person commented on a secondary advantage related to the creation of the formal reference model. “Often, there are no good specs at that time or you may be trying to figure out the spec. There are vulnerabilities that we have discovered recently at the architectural level involving security that we may not even think to test for. Formal may help us discover those.”

Model continuity
Model continuity is important. “When you develop a model up front, you want the concept of the model to continue all the way through to implementation,” said one of the participants. “Many companies figure out a way to do that. They may develop different models but then look for ways to verify the correspondence. There are problems with that approach. The first is that these models are at different levels, so there is an abstraction gap. That is often a tough challenge because of transactional differences. Second, as you go down the chain, you may want to use the same models for RTL verification. If you end up creating models at a low level of abstraction, you lose the benefit of fast verification. Every company thinks about how to maintain the continuity. Some are successful, others give up. Others just do some analysis using the model and then expect the downstream folks to follow. Maybe there are ways, using formal, to abstract the core essence of the architectural model.”
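
As a toy illustration of “verifying the correspondence” between models at different levels, the Python sketch below compares an abstract FIFO model (an ordinary list) against a lower-level circular-buffer model by stepping both through every short push/pop sequence. The models, depth, and operation bound are all hypothetical, and brute-force enumeration stands in for a real formal correspondence proof; the point is only that two descriptions at different abstraction levels can still be checked against each other.

```python
# Minimal sketch (hypothetical models): an abstract FIFO (plain list) checked
# against a circular-buffer FIFO over every sequence of up to six operations.
from itertools import product

DEPTH = 2

class CircFifo:                      # low-level model: fixed storage, wrapping pointers
    def __init__(self):
        self.mem = [None] * DEPTH
        self.rd = self.wr = self.count = 0
    def push(self, x):
        if self.count < DEPTH:       # drop the push when full, like the abstract model
            self.mem[self.wr] = x
            self.wr = (self.wr + 1) % DEPTH
            self.count += 1
    def pop(self):
        if self.count:
            x = self.mem[self.rd]
            self.rd = (self.rd + 1) % DEPTH
            self.count -= 1
            return x
        return None

for seq in product(["push", "pop"], repeat=6):
    spec, impl, data = [], CircFifo(), 0
    for op in seq:
        if op == "push":
            data += 1
            if len(spec) < DEPTH:
                spec.append(data)
            impl.push(data)
        else:
            expect = spec.pop(0) if spec else None
            got = impl.pop()
            assert got == expect, (seq, expect, got)
print("abstract and low-level models agree on all sequences")
```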

The discussion turned to languages and questioned if any language or formalism is capable of describing everything. Among the attendee comments:

  • “To verify something, you have to specify it precisely first.”
  • “In C or C++ you can describe everything, but a more domain specific language would help a lot, just like SystemVerilog helps for verification. What is the better language for architectural modeling?”
  • “Different problems require different formalisms. If you are modeling memory behavior, then Alloy from MIT may be suitable. The problem is that as soon as the situation becomes more complex, you have to put more thought into it, but Alloy enables you to create operational specs.”
  • Different companies also have different priorities. “Our architecture team has a modeling team and they work together. Generally, they focus on performance and some of the tradeoffs, such as power, performance and area (PPA). How wide do I make the datapath? Architectural features, such as throughput and bandwidth, are traded off against power. Modeling teams are good at handling that, but they are not effectively modeling other system-level characteristics, such as absence of deadlocks, security issues, cache coherence and safety. Formal could really help. Today they write C models and run simulation, but this provides very low coverage with directed tests.”
  • “Performance models are often queue-based or stochastic models. We are not going down that path; we need to look at livelock and deadlock, and for that you need a different formalism.” (A sketch of such a deadlock check appears after this list.)
  • Another company had a very different scope. “Firmware authentication is a very important issue for us. How do you guarantee that a malicious device driver cannot load bad firmware? It involves many IPs in the SoC. We started modeling that a few years ago and discovered some serious bugs.”
  • While formalisms are nice, there is a larger problem. “The hardest part is getting access to solvers and associated tools. I don’t think the language really matters. What is interesting about architectural modeling is to figure out protocols. Don’t think about designing the protocol, then implementing it. Instead, turn formal on while you are designing it. It provides a lot of insights into where you may have performance problems. Using it to design the protocol is a lot more interesting, and verification comes for free.”
  • Another person recommended using Verilog. “Eventually you want to link it back to the implementation. Leverage the infrastructure that surrounds the industry standard tools including debug. Eventually you need to test all of the assumptions you made at the architectural level against the implementation.”
  • The abstraction gap remains. “You have to put some thought into it. There may be decision points in the RTL that cannot be captured at the higher level because of event timing, so you cheat by reaching inside to monitor certain signals to provide a hint. That often provides enough.”
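
Here is a minimal sketch of the deadlock-hunting idea in Python, applied while a protocol is still being designed. The protocol is a deliberately broken toy, two processes acquiring two locks in opposite orders, and the explicit-state search below is a stand-in for what a real model checker would do, but it finds the deadlock automatically.

```python
# Minimal sketch (toy protocol, not any participant's design): exhaustive
# exploration of a two-process, two-lock protocol looking for deadlocks.
# pc = 0/1: acquiring first/second lock, pc = 2: holds both, pc = 3: finished.
ORDER = [("A", "B"), ("B", "A")]          # lock acquisition order per process

def successors(state):
    pc, holder = list(state[:2]), list(state[2:])
    for p in (0, 1):
        if pc[p] < 2:                      # try to take the next lock in order
            lock = ORDER[p][pc[p]]
            idx = 0 if lock == "A" else 1
            if holder[idx] is None:
                h = holder[:]; h[idx] = p
                q = pc[:]; q[p] += 1
                yield (q[0], q[1], h[0], h[1])
        elif pc[p] == 2:                   # critical section done: release both locks
            h = [None if x == p else x for x in holder]
            q = pc[:]; q[p] = 3
            yield (q[0], q[1], h[0], h[1])

# Explicit-state search over every reachable state; a state with no successors
# in which some process has not finished is a deadlock.
init = (0, 0, None, None)
seen, stack = {init}, [init]
while stack:
    s = stack.pop()
    nexts = list(successors(s))
    if not nexts and (s[0] != 3 or s[1] != 3):
        print("deadlock found in state", s)
    for t in nexts:
        if t not in seen:
            seen.add(t)
            stack.append(t)
```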

Formalizing the behavior often flushes out many of the issues. Models can be refined over time.

Coverage
The emerging Portable Stimulus (PS) standard was introduced. “PS is being defined by dynamic verification people and thus the potential overlap with formal is not being maximized,” noted one participant. Most of the participants were not familiar with the details about this standard or the way in which formal may be affected. What is clear is that PS is based on paths from outputs to inputs, which is very similar to formal.

The subject quickly moved to one of coverage and the merging of results from dynamic and formal verification:

• “You write a testplan because it defines the what and not the how. Then you decide how to check each of the line items in the testplan and ask if you can check it formally. Until now, the testplan was driven by dynamic simulation. Are we missing something by not having formal drive some of that? Asking ‘can I write a checker for this?’ is part of the testplan. If I can write a checker formally, and if reachability is not a problem, then I can target formal.”

• “Outside of reachability, we have a plan for dynamic simulation, coverage-driven checkers, etc., but then we have some pieces specific to formal. There is a separation. Even though we attempt to run all of the assertions in simulation and formal, we make a differentiation when they are specific to formal. Different coding guidelines apply.”

• “You can have a plan that blends many strategies and technologies or engines. To get closure, you have to merge coverage from the different sources. If you target features of a design with different engines, how do you combine coverage?” (A rough sketch of one such merge appears after this list.)

• “Today, it is separate. You do simulation and check your functional coverage. Formal is separate. We can merge them only if we use tools from the same vendor. So, it is somewhat manual. You have functional coverage from simulation and you have formal coverage, but they come together in the plan.”

• One project was described. “Many assertions would require simulation to run for weeks and we did not use emulation on this project. Formal was a replacement for emulation. It is white box testing. The testplan for simulation is essentially black box. We dive into the microarchitecture and go really deep, talk to the designers and add the assertions – specifically for formal. When we do the reporting, there is some manual merging.”
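
The kind of manual merge several participants described might look like the following Python sketch. The plan items, the coverage format, and the rule that only a full proof closes an item are assumptions made for illustration; real coverage databases are vendor-specific.

```python
# Rough sketch (hypothetical plan items and format, not any vendor's database):
# merging functional-coverage results from simulation with property results
# from formal into one testplan report. A full proof closes an item; a bounded
# proof does not.
sim_coverage = {                # plan item -> fraction of bins hit in simulation
    "fifo.overflow": 1.0,
    "fifo.underflow": 0.4,
    "arb.starvation": 0.0,
}
formal_results = {              # plan item -> formal status of its checker
    "fifo.underflow": "proven",
    "arb.starvation": "bounded",     # only proven to a finite depth
    "csr.lock_bit": "proven",
}

def merge(sim, formal):
    """Treat an item as closed if simulation hit all bins or formal fully proved it."""
    report = {}
    for item in sorted(set(sim) | set(formal)):
        hit = sim.get(item, 0.0) >= 1.0
        proven = formal.get(item) == "proven"
        report[item] = "closed" if (hit or proven) else "open"
    return report

for item, status in merge(sim_coverage, formal_results).items():
    print(f"{item:20s} {status}")
```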

Participants talked about the ways in which assertion coverage from simulation and formal is different and the value of rerunning assertions in simulation that have already been proven in formal.

ProofCore
Coverage closure has been an issue for formal. Even if every property has been proven, have you defined enough properties? “The concept of ProofCore is that you have a design and a list of checkers,” noted one attendee. “What portion of the logic has to be included to prove the correctness of the checkers? If there is a portion of the logic that was not used in the proof, then you do not have enough checkers.”

The value in this metric, just like any metric, is in the holes that are uncovered. When it says something is uncovered, it means that it has not been tested. When it says something is covered, that gives you a good feeling, but it does not guarantee anything.

This compares favorably to cone of influence (COI) analysis. “COI is very simplistic in that it is structural,” said one participant. “It measures how much logic fans in to the assertions, and that is very optimistic. ProofCore goes a lot farther. As formal engines do their abstraction refinement, they pull in bits of logic one at a time and try to prove the property with a tiny cone of logic, without any constraints on the inputs of that cone. When that becomes possible, you are done. It no longer matters what logic is driving it. You know that the logic does not violate the property under any circumstance. If it does fail, then you need to pull in more logic to constrain the inputs. As you pull in more logic, eventually you will get a proof. So how much logic of the design did you use to get the proof? Anything outside of that is untested.”
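
The structural half of that comparison is easy to picture. The Python sketch below computes a cone of influence as plain backwards reachability over fan-in for a made-up netlist, which is exactly why COI is optimistic: it says nothing about which of those signals a proof actually needed. The proof core described above requires the formal engine's abstraction refinement and is not reproduced here.

```python
# Minimal sketch (toy netlist, hypothetical signal names): the structural cone
# of influence (COI) of an assertion is just backwards reachability over fan-in.
fanin = {                      # signal -> signals that drive it
    "assert_sig": ["grant", "req_q"],
    "grant":      ["req_q", "arb_state"],
    "req_q":      ["req"],
    "arb_state":  ["grant", "rst"],
    "dbg_bus":    ["arb_state", "cfg"],   # logic outside the assertion's COI
}

def cone_of_influence(root, fanin):
    """All signals that can structurally affect `root`."""
    coi, stack = set(), [root]
    while stack:
        sig = stack.pop()
        for drv in fanin.get(sig, []):
            if drv not in coi:
                coi.add(drv)
                stack.append(drv)
    return coi

print(sorted(cone_of_influence("assert_sig", fanin)))
# -> ['arb_state', 'grant', 'req', 'req_q', 'rst']; 'dbg_bus' and 'cfg' fall outside
```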

Participants included Da Chuang and Anatoli Sokhatski, Cisco; Ambar Sarkar, Nvidia; Chirag Dhruv and Jian Wang, AMD; Sava Krstic, Intel; Kamel Belhous, Teradyne; and Chak Kosaraju, Fungible. Oski participants included Rob van Blommestein, Vigyan Singhal, Dave Parry, Roger Sabbagh and Kamal Sekhon.


