The New Face of Formal

Experts at the table, part 2: Which use models will see increasing adoption and how formal relates to security and securing the supply chain.


Semiconductor Engineering sat down to discuss the recent growth in adoption of formal technologies and tools with Lawrence Loh, product engineering group director at Cadence; Praveen Tiwari, senior manager of R&D in the verification group at Synopsys; Harry Foster, chief scientist at Mentor Graphics; Normando Montecillo, associate technical director at Broadcom; and Pranav Ashar, chief technology officer at Real Intent. In part one, the panelists discussed the changes that have driven the increase in adoption and the likelihood of that continuing. What follows are excerpts from that conversation. To view part one of this discussion, click here.


SE: How is usage of formal going to grow?

Ashar: People have traditionally thought of formal as being one thing, but it has two important and distinct parts. The first is the specification side; the other is the analysis. People lump them together, but they are distinct, and just using formal for specification has a benefit. There are aids for writing assertions that make this easier, and the additional layers in an SoC create their own collateral, such as the environment, reset, DFT and UPF. There are also companies helping you capture information about the registers in the design. This collateral is being written to help with the design of the chip, and you can extract a lot of assertions from it. This means a substantial part of the information you need is implicit in the design description itself, and other parts are implicit in the RTL. Aids for formal assertion writing have progressed in terms of making it easier, and there are a lot more implicit assertions. These have made the specification part easier. The analysis part can be considered a more sophisticated form of simulation. Those algorithms have been worked on for a number of years, and combined they provide a complete solution.
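That extraction is often mechanical. For instance, if a register specification marks a control bit as write-once, that single line of collateral implies a property a tool could generate without anyone writing it by hand. A minimal SystemVerilog assertion sketch, with hypothetical signal names:

    // Hypothetical: the register spec marks CTRL.LOCK as write-once.
    // That one line of collateral implies this checker, which a formal
    // tool could emit automatically from the register description.
    module ctrl_lock_check (input logic clk, rst_n, lock_q);
      // Once LOCK is set, it must stay set until the next reset.
      assert property (@(posedge clk) disable iff (!rst_n)
                       lock_q |=> lock_q)
        else $error("write-once LOCK bit was cleared");
    endmodule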

Loh: What I have seen is somewhat different. The most popular app is formal property verification (FPV), by far. Why do people say this is hard to use when we don't see that? I realized that FPV means more than a formal expert running a tough problem. That is not the majority usage. It is not about verifying that something is actually correct. It is about finding a counterexample and seeing the design in action. It gives me an idea of where I need to write checkers, and it provides a way for the verification team to pick the designer's brain, using formal to illustrate what the design does. Maybe it is the designer trying to communicate intent through a verification planning tool.

Foster: I don’t think any of us disagree. The point is that FPV has two use models: bug hunting and assurance. They are radically different, and assurance takes a significant amount of expertise in how abstractions are written and how properties are created. In bug hunting, there is a lot of value in designers writing assertions to see if they find bugs. The difficult thing is how you quantify this and what the return is. This is a management issue: showing that we have been productive by using formal property checking.

Loh: Yes, bug hunting is one of the prime use models. Formal, in my experience, can find bugs so much faster and earlier. Let me build on the assurance part. Traditionally, if you got a full proof, that was good, but if you did not get a full proof, you did not get any measurable assurance. I have been trying to steer away from this. A full proof is not the only way to get assurance, and we have worked hard to provide accurate visibility into what has been verified within a defined set of cycle bounds. We need a way that people can understand, and that is closely related to how simulation coverage is measured. Formal is not identical, but the two can sit side by side and refer back to the verification plan.

Foster: This is significant. A design engineer can do analysis and realize that 50 cycles would be sufficient and would cover everything. There may still be a risk, but there are techniques to figure out what that boundary is. The processor guys have typically been doing bounded model checking and reaching adequate confidence.
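For reference, bounded model checking frames the question "can the property fail within k cycles?" as a satisfiability check over the unrolled transition relation. In the standard formulation:

$$ I(s_0) \;\wedge\; \bigwedge_{i=0}^{k-1} T(s_i, s_{i+1}) \;\wedge\; \bigvee_{i=0}^{k} \neg P(s_i) $$

where $I$ is the initial-state predicate, $T$ the transition relation, and $P$ the property. A satisfying assignment is a counterexample of length at most $k$, while an unsatisfiable result proves the property out to that bound, which is the kind of bounded confidence Foster describes.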

Tiwari: I understand that designers are the best-placed people to write assertions, but where are the bugs? Most of the time they happen when two blocks interact with one another. You can have back pressure, you can have pipelining, you can have complicated scenarios that cannot be thought of very easily. The beauty of formal is that these scenarios can be extracted. Now, how do you get to that fast, without fully understanding the design? It cannot be a roll call where someone gets to declare that 50 cycles is enough. Maybe it is really 55 or 52. How do we come to a complete solution? There are many analysis techniques that help the user, but it still requires some design knowledge. It is not purely a verification activity. It requires designer-driven analysis to come up with the right bound. One of the benefits of formal is that it brings designers and verification engineers a lot closer. They have to interact. This is a good synergy.
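A concrete instance of the cross-block bug class Tiwari describes is back pressure on a valid/ready handshake. A minimal sketch of the kind of interface properties a team might hand to a formal tool, with signal names assumed for illustration:

    // Hypothetical valid/ready handshake checker at a block boundary.
    // Formal explores every stall pattern exhaustively, including the
    // back-pressure corners that directed simulation tends to miss.
    module handshake_check (input logic clk, rst_n, valid, ready,
                            input logic [31:0] data);
      // Once asserted, valid must hold until the transfer is accepted.
      assert property (@(posedge clk) disable iff (!rst_n)
                       valid && !ready |=> valid);
      // The payload must stay stable while the transfer is stalled.
      assert property (@(posedge clk) disable iff (!rst_n)
                       valid && !ready |=> $stable(data));
    endmodule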

Loh: I agree, except for one thing. The designers would have an educated guess about whether it is 50 or 60. It is the same with coverage. A good verification engineer should be able to think about all of the corner cases, or review the test plan and decide that this hole needs to be filled but that one is okay. The tool should be able to help them achieve that. In formal, if they say 50 is good, what if the tool can tell them what that covers logic-wise and what 51 would add? And if it does not add anything extra, that is good to know. At the end of the day, the designer still has to sign off on it.

Ashar: Formal has become actionable and provides something to do next.

Foster: And that was a problem with early technology and tools, which put the burden on the user to figure out what is actionable and what the next step is.

Tiwari: Think of a scientific calculator. You give it to a person and ask them to solve something, but you haven't told them whether it is a differential equation or calculus. You go figure it out. With the evolution of formal technology, there is now guidance that can help users figure out the approach to take. More of these analysis techniques and extracted properties are helping users think about their designs and which areas to validate.

SE: What can formal do to help with the increasing concerns about security? Can it be used to identify attacks?

Foster: We all provide a security app, but the reality is that if you have a well-defined problem, it is easy to come up with a formal solution. The problem with security is that it is a huge domain, and solving all known problems is not tractable. But there are many that can be solved. As an example, you can verify protection on certain registers so that only someone with the right credentials can access them.
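Foster's register example is exactly the kind of narrowly scoped property that maps directly onto formal. A sketch, with a hypothetical address and signal names:

    // Hypothetical: a protected register may only be written when the
    // requester is in privileged mode.
    module secure_reg_check (input logic clk, rst_n, wr_en, priv_mode,
                             input logic [11:0] addr);
      localparam logic [11:0] SECURE_ADDR = 12'h7F0;  // assumed address
      // Any write to the secure register implies privileged mode.
      assert property (@(posedge clk) disable iff (!rst_n)
                       (wr_en && (addr == SECURE_ADDR)) |-> priv_mode);
    endmodule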

Ashar: Just as in SoC design, there are two kinds of failures in the area of security. One is an implementation error, and semantic analysis of the code can help you find it. The other is at the protocol level. Expecting formal to deal with complex protocols and tell you whether they are implemented correctly is a hard problem, and it would be setting expectations too high to expect formal to solve that.

Foster: This is looking at it from an architectural perspective and an implementation perspective. You can use formal for both, but architectural often means theorem proving.

Tiwari: If the problem is very global in nature, such that no data should flow from point A to point B, that can be modeled and checked with formal analysis. But complications arise. When there is pipelining, you need to check that certain things cannot be changed within the system. This becomes complicated because there is a software angle to it. In low power, if the low-power scheme is implemented in software and the hardware is passive, which is the way most systems are going, it is very difficult to validate because the real intelligence is in the software and not the hardware. So what kind of formal analysis can you really do on the hardware? Doing this requires significant modeling of the software intent, and that in turn presents another challenge: how do you validate that the software and the intent are exactly the same? This is raising the level of abstraction and moving toward architecture validation rather than hardware validation.

Montecillo: A lot of our security is embedded in the hardware itself, so the problem statement is very simple. We have one or multiple bits that we do not want to leak out under any condition except for the defined scenarios. That is very hard for us to describe, even in simulation, today. We need a new language that would enable us to describe that kind of behavior. We do not have any good solutions today. Most of the work we do is based on simulation.

Loh: Security apps have been around for several years, but security is not a single problem. It is a class of problems. It is the same as saying, ‘I want my chip to work. Can formal do that?’ Asking, ‘Is my system secure?’ is not a defined problem. Everyone has to do their part, hardware and software. Handshakes have to be well defined so that one does not compromise the other, and the software needs a good encryption algorithm. There are many things that have to happen. Pick the part where you can contribute a solution. Make sure a secure register does not get accessed. Make it clear in the software spec that this is a software requirement that has to be verified in the software, because otherwise the system is not secure. Security is something people don’t like to discuss in an open forum, and companies don’t want to talk about how they are verifying security, because the more you say, the more you may expose your vulnerabilities. Customers often don’t want to tell us what they are doing.

Ashar: If you limit the scope of the security problem you are addressing, such as defining privileged and non-privileged parts of your chip, it is complex but narrowly scoped and looks like a number of the other apps. But there is a caveat if there is software involvement in the system.

Foster: That is what I mean by setting correct expectations. If you just say that formal can solve security, that will create problems. My background is equivalence checking, and back in the ’90s people would ask, ‘Is this a Pentium? Prove equivalence!’

SE: Securing the supply chain is another related area. How can you determine if a Trojan has been inserted in the design? Is equivalence checking a good tool for this?

Foster: There are a lot of techniques being explored. If I design a chip and send it to a fab, can I find out if something malicious has been inserted? One idea is looking for a voltage or power difference from what was expected. You are looking for differences between the design that was characterized and the chip that comes back, to see if it is behaving in an unexpected way. Equivalence checking is not that easy when what you get back is lots of transistors.

Tiwari: It depends upon where and how it got inserted. For anyone to insert it, they have to be intelligent, so it may not be easily observable.

Foster: DARPA is spending a lot of money looking into this and it should help.

Ashar: It can often be the confluence of many things that would trigger the Trojan, which is a tough problem to solve.

Loh: It is amazing how the hackers always seem to be ahead of the game.

Ashar: Mining the voltage and current data is one way. There is a company that analyzes the RF output of a chip. If there are emissions that can be taken advantage of, you can learn things about what is happening in the chip based on this profile. So you can characterize a known-good chip and then compare.

Foster: Really exotic techniques are being explored to solve this problem and many do not fall into the formal domain.


