Experts at the Table, part 2: The need for a formal specification, coverage and progress toward formal verification for analog.
Formal verification has come a long way in the past five years as it focused on narrow tasks within the verification flow. Semiconductor Engineering sat down to discuss that progress, and the future of formal technologies, with Raik Brinkmann, president and CEO of OneSpin Solutions; Harry Foster, chief verification scientist at Mentor Graphics; Pete Hardee, product management director at Cadence; Manish Pandey, chief architect for new technologies at Synopsys; Anupam Bakshi, CEO of Agnisys; and Vigyan Singhal, president and CEO at Oski Technology. What follows are excerpts of that conversation. Part one can be found here.
SE: Are we getting any closer to specifications that are more formal?
Bakshi: How do you represent the connectivity and how do you specify it formally? How do you specify what needs to be checked? You have to start somewhere, and someone has to create the assertions. But it all starts with a formal specification. Then you can bring in an engine to prove it.
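For illustration, the kind of connectivity check Bakshi describes usually reduces to a simple equivalence assertion once the specification names the endpoints. A minimal sketch in SystemVerilog, with all signal names hypothetical:

```systemverilog
// Hypothetical top-level connectivity check: the pad mux should route
// blk_out to pad_out whenever sel_blk is high. All names are assumed.
module connectivity_check (
  input logic clk,
  input logic sel_blk,
  input logic blk_out,
  input logic pad_out
);
  // If the spec says "pad_out is driven by blk_out when sel_blk is high,"
  // a formal engine either proves this on every cycle or returns a trace.
  assert property (@(posedge clk) sel_blk |-> (pad_out == blk_out));
endmodule
```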
Foster: Beyond assertions, we have a problem with specification. A lot of the analog/mixed-signal problems could be solved today if we had a formal specification between those two worlds. That does not require assertions. It is just an executable specification.
Pandey: In terms of specifications, things are a lot more mechanized in certain domains. Having formal specifications, where the semantics are very clearly defined — we are a long way from that.
Foster: I agree. One of the problems with a formal spec is that you narrow down the degrees of freedom immediately. In some domains, such as protocol specification, it is applicable, but across the board I am opposed to a formal spec. It is intractable, and there is nothing wrong with a natural language spec.
Bakshi: There are areas where it can be applied.
Brinkmann: It requires something that is clean, well defined and tractable. But it is not a general purpose solution.
Foster: At times it can get in the way.
Hardee: Connectivity and control/status registers can be specified in IP-XACT, and that is taking off.
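IP-XACT captures the register map itself in XML, but the checks derived from it typically materialize as assertions. A hedged sketch of what a generated reset-value check might look like, where the register name and the 8'h00 reset value are assumptions:

```systemverilog
// Sketch of a generated register check: the CTRL register must hold its
// documented reset value once reset deasserts. Names are hypothetical.
module ctrl_reg_check (
  input logic       clk,
  input logic       rst_n,
  input logic [7:0] ctrl_q   // current value of the register
);
  // $rose(rst_n) marks the cycle reset is released; the register should
  // carry its specified reset value (assumed 8'h00 here) at that point.
  assert property (@(posedge clk) $rose(rst_n) |-> ctrl_q == 8'h00);
endmodule
```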
Foster: I am an assertion bigot and I accept that there are perhaps only 10 people in the world who like them.
Hardee: But that has changed. We see a lot more people these days that are comfortable with assertions.
Brinkmann: I agree.
Bakshi: If you create assertions by hand, then you have the problem of making sure the assertion is correct. It is just two representations of the same thing.
Pandey: You are right, it is the same thing.
Foster: It doesn’t matter. It is a specification.
Singhal: Specification or not, the process is key. You have to decide if you want something done automatically or manually, but the goal is how it fits into the design cycle and schedule, and with the expertise that you have. You have to plan where you want to have automated specs, where you want to manually write assertions, what you want to do for system-level checks — and once you have a plan, be able to measure against it. Then you know when you can say you are done.
Foster: That is key because it is looking from the problem down versus saying, ‘I have a tool, what can I use it for?’
Brinkmann: That happens a lot with automotive users. People are adopting methodologies where they start from a verification plan, break down the system, define their verification targets, and then decide what they will use formal for and what will be done in simulation. It's not just because it is a good idea, but because they have to be able to prove they have done a good job of verification. They cannot just say, 'Here is my hot spot,' and then try to do some verification around it. That will not work anymore. It has to be broken down in a systematic way.
Hardee: You need both. One of the things that has been a big mindset change, and which has contributed to the adoption of formal, is that people are happy to just find bugs and generate counterexamples. Bug hunting is a big thing now. We are no longer fixated on formal being complete. That is okay, and we have a bunch of engines dedicated specifically to this that are not aimed at completing the proof, but at deep state exploration and the generation of counterexamples. This is contributing to formal's success. When you can take a block that is supposedly completely verified by simulation, give it to a formal expert or someone who is familiar enough with the tools, apply these bug-hunting methodologies and find more critical bugs, that is extremely powerful. Verification management is starting to wake up to the power of formal.
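In practice, the bug-hunting flow Hardee describes centers on safety assertions that bounded engines attack without ever completing a proof; any failure comes back as a concrete counterexample trace. A minimal sketch on a hypothetical FIFO wrapper:

```systemverilog
// Bug-hunting target: a hypothetical FIFO must never overflow. Engines
// tuned for deep state exploration attack this assertion and, even short
// of a full proof, can return a counterexample trace for debug.
module fifo_overflow_hunt #(parameter DEPTH = 16) (
  input logic                   clk, rst_n,
  input logic                   push, pop,
  input logic [$clog2(DEPTH):0] count   // occupancy tracked by the design
);
  assert property (@(posedge clk) disable iff (!rst_n)
                   (push && !pop) |-> count < DEPTH);
endmodule
```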
Brinkmann: It is all about coverage in the end. They have to provide it, and it doesn't matter how they do it, be it complete proofs, partial proofs or bug hunting. You have to be able to prove to someone that you have done a good job, independent of how you did it. We also see customers who take the properties and analyze them for completeness, checking whether they have covered all of the corner cases. So they are analyzing their input to find gaps. Gap analysis takes a list of properties and asks: if you chain this property with that one, is there an input that no property cares about? If there is, something must be missing: a property that says what should happen in that case.
Hardee: You have to provide that coverage data in a form that can be consumed by the verification managers, who are looking at coverage from all of the available verification engines. So you have to make sure that formal's coverage contribution shows up in the overall verification metrics against the verification plan. That is the only way for formal to be accepted and to show its full worth.
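One concrete way formal feeds those shared metrics is through cover properties: a reached cover contributes a witness trace and a covered point, while a proven-unreachable cover flags dead logic or an over-constrained environment instead of sitting as an unexplained hole. A small sketch, with assumed signal names:

```systemverilog
// Coverage sketch: cover properties whose results roll up into the same
// verification plan as simulation coverage. All names are hypothetical.
module bus_cover_check (
  input logic clk, rst_n,
  input logic req, grant, retry
);
  // Reached: the tool produces a witness trace and the scenario counts
  // as covered in the merged metrics.
  cover property (@(posedge clk) disable iff (!rst_n)
                  req ##1 retry ##1 req ##1 grant);

  // Proven unreachable: the metric flags this for review rather than
  // leaving it as an unexplained coverage hole.
  cover property (@(posedge clk) disable iff (!rst_n) grant && retry);
endmodule
```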
Bakshi: Why isn't formal being used as a first step, rather than after simulation?
Singhal: It is. Different companies use it in different ways.
Hardee: Absolutely. One of the problems with getting it used as a first step is that while a lot of verification people are comfortable with assertions, the same is not so true of designers. So techniques such as automatic formal and generating properties directly from the RTL help. So does being able to use assertion-based verification IP, so that if I have an AMBA protocol, I have a bunch of assertions already. Those things can automate early formal, as opposed to the dream of having designers put assertions all over the code from the outset. These automatic verification techniques help enormously.
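The assertion-based verification IP Hardee mentions is essentially a library of pre-written protocol rules. For an AMBA AXI-style handshake, one of the standard checks is that a valid, once raised, must hold with stable payload until it is accepted. A simplified sketch (the signal names follow common AXI convention, but this is an illustration rather than the VIP itself):

```systemverilog
// Simplified AXI-style handshake rule of the kind shipped in assertion-
// based VIP: once VALID is high it must stay high, with stable payload,
// until READY accepts the transfer.
module axi_handshake_check (
  input logic        clk, rst_n,
  input logic        valid, ready,
  input logic [31:0] data
);
  assert property (@(posedge clk) disable iff (!rst_n)
                   valid && !ready |=> valid && $stable(data));
endmodule
```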
Singhal: Design bring-up with formal is such an exciting area. What they used to do in sandbox verification they are now doing with formal.
Brinkmann: They are a lot more productive. People simply use the verification IP to do bring-up and verify that the chip is doing something useful, or at least the block they are working on. They don't want to be embarrassed when it comes to integration.
Foster: One of the problems is that we have moved to UVM, and so much verification work goes into building the infrastructure that we have missed the unit test. We are doing too much debug at the higher level, and companies are starting to recognize that. Going back to unit testing, be it with formal or sandbox testing, is starting to happen again, and this is a good thing.
Brinkmann: Unit testing is one ideal application for formal. You don't have to think about vectors; you just say what you want to see.
Foster: And it doesn’t have to be exhaustive.
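As a formal unit test, the property really is just 'what you want to see.' For a small arbiter, for example, the entire testbench can collapse into a single mutual-exclusion check, and it does not need to be exhaustive to be useful. A sketch with hypothetical names:

```systemverilog
// Formal unit test for a hypothetical arbiter: no vectors, just intent.
// $onehot0 holds when at most one grant bit is high in a given cycle.
module arb_unit_check #(parameter N = 4) (
  input logic         clk, rst_n,
  input logic [N-1:0] grant
);
  assert property (@(posedge clk) disable iff (!rst_n) $onehot0(grant));
endmodule
```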
SE: What is the status of formal being used for analog/mixed-signal?
Hardee: It is an area of development. It is fairly straightforward to apply connectivity checking, because you can do real-to-integer conversions. Register verification also applies: fundamentally, all analog blocks communicate with the digital system via registers, and you need to be able to verify that. A lot depends upon how the design is partitioned and whether the register is visible at a level of hierarchy you can get at. So there are improvements that can be made. Full formal verification of analog blocks is possible to the degree that you can capture the behavior in transfer functions and express it in a different way, but we are some way off from generalized analog formal.
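The real-to-integer conversion Hardee refers to is a thin quantization shim: the analog node's real value is snapped to a logic level so that the standard digital connectivity assertion applies. A hedged sketch, where the 0.5 threshold and all names are assumptions, and tool support for real-valued ports varies:

```systemverilog
// Sketch of real-to-logic conversion for an analog connectivity check.
// The 0.5 threshold and all signal names are assumptions for illustration.
module ams_conn_check (
  input logic clk,
  input real  ana_pad,   // real-valued analog node from the AMS model
  input logic dig_src    // digital source expected to drive the pad
);
  // Quantize the analog node to a logic level...
  logic pad_level;
  assign pad_level = (ana_pad > 0.5);

  // ...so the connectivity check becomes an ordinary digital assertion.
  assert property (@(posedge clk) pad_level == dig_src);
endmodule
```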
Foster: The analog world depends upon superstars, and that is a problem. They have their way of doing things. They have their SPICE decks and they don't want to change. They have to create a model that works in the digital world, but they don't even run it. They pass this model on, and they make stupid mistakes such as reversed polarity, so it often doesn't work. That is a simple problem to solve just with an executable spec between the two groups. You don't need formal for that.
Pandey: Maybe there will be transfer functions out there, but if people want to get into the analog domain, there is still a lot of work on the structural side. Basic structural checks can get you a long way. The best we can expect in the next five to ten years is that, if you have models of the systems, you can show through non-formal techniques that they correspond to what the analog blocks actually do.
Bakshi: Some people are defining the register interface to the analog blocks, and there the only option is to use simulation. Maybe there are some things that can be done at the periphery.
Brinkmann: There is also an economic question here. There is always one guy who does the analog connectivity, and he would love to have a formal tool. But one guy in a large company means you won't sell many copies, so it is not an important goal for vendors.
Pandey: If you look at industrial control systems, a lot of it is analog. You have this digital interface to the actuators, but I haven't seen this work scale up, so it requires some work. There are maybe half a dozen researchers in the world looking at this area. With more systems containing sensors, it may become a more popular area and a more pressing market need. For the foreseeable future it will remain a research area.
Hardee: The key to success is good partitioning and separation, and the ability to gracefully ignore the things that formal cannot handle, rather than having big issues with them.
Singhal: If you have models, you can verify models of analog designs. You may want to ensure that X propagates correctly and that the models are faithful to the analog behavior. If there is a bug in the model, you may get a system-level bug. But true analog verification is a long way off.
Brinkmann: You have to ensure that the models have the basic functionality of what is needed.