Gaps In The Verification Flow

Experts at the table, Part Two: Panelists look towards new coverage models, safety and security, and the hardware software interface.


Semiconductor Engineering sat down to discuss the state of the functional verification flow with Stephen Bailey, director of emerging companies at Mentor Graphics; Anupam Bakshi, CEO of Agnisys; Mike Bartley, CEO of Test and Verification Solutions; Dave Kelf, vice president of marketing for OneSpin Solutions; and Mike Stellfox, Cadence fellow. Part one of that conversation can be found here. What follows are excerpts of that conversation.

SE: Are we in need of a new coverage model?

Bakshi: It should be automatically generated, and that is where tools can play a role. From design intent, the coverage model should be generated. That should help in solving the verification problem.

Bailey: Could you do that from a system-level specification? If we could, that would be great.

Bakshi: Shift-left is a question of how far left you can go. When you go to the extreme left, you get to the specification itself. That is where everything should start from.

SE: Unless you can extract semantics about what a design is meant to do, do you have anything better than code coverage?

Bakshi: Code coverage is a given. We need it. But it does not give you anything about functionality. That is embedded in the spec in some form.

Bailey: The challenge is that it is in a human language.

Bakshi: We have solved that for at least two areas, registers and sequences. Those are nicely transformed into code coverage, RTL code, register models, and a verification environment. This can be done because you have gone back and created the specification in a format that is easy to parse, which is a formal way of specifying it.
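The generation Bakshi describes can be sketched in miniature: from one machine-readable register spec, derive both a register model and the coverage points a testbench would sample. The spec format and register names below are invented for illustration, not any vendor's actual format.

```python
# A hypothetical machine-readable register spec. From this single source
# we derive a register model (honoring width and access type) and a
# coverage model, which is the kind of automation the panel describes.
REG_SPEC = [
    {"name": "CTRL",   "offset": 0x00, "width": 32, "access": "RW"},
    {"name": "STATUS", "offset": 0x04, "width": 32, "access": "RO"},
    {"name": "IRQ_EN", "offset": 0x08, "width": 16, "access": "RW"},
]

class RegisterModel:
    """Register model generated from the spec."""
    def __init__(self, spec):
        self._regs   = {r["offset"]: 0 for r in spec}
        self._access = {r["offset"]: r["access"] for r in spec}
        self._mask   = {r["offset"]: (1 << r["width"]) - 1 for r in spec}

    def write(self, offset, value):
        if self._access[offset] == "RW":   # read-only registers ignore writes
            self._regs[offset] = value & self._mask[offset]

    def read(self, offset):
        return self._regs[offset]

def coverage_points(spec):
    """Coverage model from the same spec: for every writable register,
    has it ever been written with all-zeros and with all-ones?"""
    pts = {}
    for r in spec:
        if r["access"] == "RW":
            pts[(r["name"], 0)] = False
            pts[(r["name"], (1 << r["width"]) - 1)] = False
    return pts
```

Because both artifacts come from the same parsed spec, they cannot drift apart, which is the practical argument for spec-based generation.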

Bailey: It is formal, but I would argue that with those, they are also easy for a human being to read, consume and understand. It is more concise to specify them in that manner. But when you get out of those areas and go to a full formal specification at the functional level, think of Zed.

Bartley: I looked at Zed back in 1992. It has still not happened. With registers, the solution is still limited. What you are doing is asking, 'Can I read and write to this register?' You are not verifying what that register is really used for, or its use case.

Bailey: No, it is a little bit more than that.

Bakshi: Yes, we can go to that level. The success we have had in the register space and in dealing with the behavior of the registers could be extended to other domains.

Bailey: Yes, it could be like formal with its applications, and bit by bit you address parts of the issue.

SE: The problem is getting broader. Now we have power, safety and security, performance. How are we going to attack those issues?

Bailey: All of those issues have always existed. Security has been an issue for a long time. Companies have wanted encrypted versions of firmware and kept tight control of these kinds of things. It is just being raised in priority. Power has always been an issue. If you just did a functional verification and never did a simulation of your power rails, you would get a chip back that you couldn't boot. People found this stuff out the hard way. When, in the case of power, it became necessary to affect functional behavior, we started looking at UPF and how to capture that information and provide better automation of it. Things rise up in priority, and that results in solutions in different areas.

Bartley: The markets that people are getting into mean that more people care about them. Not many people cared about safety in the past, but a lot more do today. The same with security. If you are not connected, then you are not going to be that bothered about security.

Bailey: For safety in the past, the biggest example would be medical. The complexity of those devices is a lot lower than that of ADAS.

Kelf: When you have an application such as a car, with all of the complexity it has, then the problem explodes. Security is driven by that, as well. You have the confidentiality type of security, such as a key that you want to make sure does not propagate, and then you have integrity, where you want to make sure that nobody can make that chip do something it is not supposed to do, and this is much harder to test. There are techniques, such as formal, where you can do a lot of testing that is not possible with simulation because of its exhaustive nature.
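The exhaustive flavor of the key-propagation check Kelf mentions can be illustrated with a toy model: treat the design as a signal-dependency graph and prove, by exhaustive reachability, that the key can never flow to any observable port except the intended one. The design and signal names are invented for the example; real formal tools work on the actual netlist or RTL.

```python
from collections import deque

# Toy signal-dependency graph: signal -> signals it drives.
# Signals that drive nothing are the observable output ports.
DESIGN = {
    "key":            ["aes_core"],
    "plaintext":      ["aes_core"],
    "aes_core":       ["ciphertext_out"],  # key should only influence the cipher output
    "debug_sel":      ["debug_out"],
    "ciphertext_out": [],
    "debug_out":      [],
}

def reachable(graph, source):
    """All signals a value at `source` can propagate to (exhaustive BFS)."""
    seen, queue = {source}, deque([source])
    while queue:
        for nxt in graph[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def key_is_confined(graph, allowed_sinks):
    """True iff the key reaches no observable output outside allowed_sinks."""
    sinks = {s for s in reachable(graph, "key") if not graph[s]}
    return sinks <= allowed_sinks
```

Unlike directed simulation, which samples a handful of paths, the reachability check covers every path in the model, which is why a leak such as a stray connection from the crypto core to a debug port cannot slip through.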

Bartley: The impact that safety will have has mostly been dealt with using methodologies and techniques. Safety is a process. That has a bigger impact than some of the others. With safety, you have to demonstrate that you followed the process and provided the documentation.

Bailey: That is the liability side of it, and it is an important business driver.

Kelf: The day that someone sent a car off the road by hacking it was the day everyone took notice.

Bartley: Security is part of safety. You have to do hazard analysis.

Bailey: At this point in time, there is still a long way to go. They plan to have self-driving cars on the road by 2020. I was just visiting Israel, and rental cars typically have Mobileye. It is extremely annoying, constantly telling you that you are too close to the car in front. The last one had lane departure warning as well, and when you go to an exit ramp it starts beeping at you. It is confused. People in India laugh at the thought of self-driving cars. The objective there is to maximize the effectiveness of every square inch of road.

Bakshi: If you really want to test your system you will do it in India.

SE: Another issue that is growing in importance is software and the verification of software in the presence of hardware.

Stellfox: What we are seeing is that people who are building SoCs say that in the past they designed a chip and delivered it. But today more than half of what they deliver is the associated firmware. This is especially true for automotive, and we see that the level of verification and traceability that you need to provide to show that you have done the verification of the firmware in operation on the hardware is extremely high. We are seeing that, and the notion of use-case verification, which the Portable Stimulus Working Group is attempting to attack, is one of the biggest areas. You can ask, 'How can you more systematically target use-case verification so that you can stress the system under the system use-cases before you put the piles of software on top of that?' That is the biggest area I see happening right now.

Bakshi: The hardware software interface is a prime candidate for spec-based development. We are seeing the benefits of that. I feel that not having the ability to describe the design in a formal way is the biggest gap in verification. If we had that, we wouldn’t need verification at all.

Bailey: You cannot be serious. You cannot even take a formal proof and say it is exactly what I wanted it to do.

Bakshi: I am not saying formal, I am saying formal specification.

Bartley: But then your verification problem has been moved to a validation problem.

Bakshi: I am saying, generate it correct by construction.

Bartley: I worked in software many years ago. They tried this 30 years ago and it didn’t work then.

Bailey: The market has shown over many years that the investment required to have that level of formal specification is not worth it. The cost is too high.

Bakshi: There is a definite success on the table, which is the hardware-software interface. We could extend that into other areas.

Bailey: There are many examples where formal has been useful, but those are in areas where the investment in the specification is worth it.

Kelf: That is right. It is a huge amount of work and a true formal spec isn’t there.

Bailey: And the automated applications are ways to expand that. You unburden the user from having to come up with all of the properties. In essence, the verification IP is packaged with the properties and the formal engines themselves for a complete solution, so the user just has to run it.

Kelf: It is interesting. There is software, and then there is software. You talk about the interface, the operating systems and drivers, and then there are the apps built on top of that in the case of a phone or PC. In the case of a communications device, you have the protocol stack, which is all layered on top of the hardware. If you look at those different layers you wonder if you can carefully and solidly verify each layer and carefully abstract away from the hardware to go to the layers above. Can you consider virtual platforms as being the way to go, or do you just put it all on the emulator and run it?

Bailey: At a verification conference last fall, Microsoft presented. They are a big user of formal for software. They started with the driver software probably because they were tired of having so many crashes that resulted from the drivers. It made a lot of sense to do that. There is still the issue that there is nothing that crosses the two domains – hardware and software. So you do it in both, but you still have the interface and the interaction between them.

Kelf: Could you take that layer immediately between the hardware and software and come up with a formal description of that?

Bailey: Maybe, but the question is who would do that?

Related Stories
Gaps In The Verification Flow (Part 1)
The verification task is changing and tools are struggling to keep up with them and the increases in complexity. More verification reuse is required.
Debug Becomes A Bigger Problem
EDA companies have been developing more integrated debug flows that bring execution engines and hardware and software closer together, but is that enough?
System-Level Verification Tackles New Role Part 3
Automotive reliability and coverage; the real value of portable stimulus.
Verification Engine Disconnects
Moving seamlessly from one verification engine to another is a good goal, but it’s harder than it looks.