Gaps In The Verification Flow

Experts at the Table, part 3: Panelists discuss software verification, SystemC and future technologies that will help verification keep up.

Semiconductor Engineering sat down to discuss the state of the functional verification flow with Stephen Bailey, director of emerging companies at Mentor Graphics; Anupam Bakshi, CEO of Agnisys; Mike Bartley, CEO of Test and Verification Solutions; Dave Kelf, vice president of marketing for OneSpin Solutions; and Mike Stellfox, Cadence Fellow. Parts one and two of that conversation can be found here and here. What follows are excerpts of that conversation.

SE: Will EDA take responsibility for verification of the lower levels of software, or will that remain an embedded software issue?

Bartley: I don’t think so. We provide platforms, and software people are good at abstracting software into different layers. As soon as you get above the hardware layer and know what the API is, they verify everything above that. I don’t see verification engineers having to get involved in the application layers.

Bakshi: But we have to grow EDA.

Bartley: Software developers will not pay the prices for EDA tools.

Stellfox: They will when you get into safety-critical applications. But not all software. The firmware and drivers, and anything that is that closely associated with hardware, I already see as being in the domain of the SoC. Above that, am I just building a pure application on some API? Yes. Could there be improvements? Yes. But will it be the traditional hardware guys doing that?

Kelf: It won’t be an emulator—more like a virtual platform or high-speed model of some sort, abstracting out the timing and the details and just providing enough of a hardware platform so that you can run it.

Stellfox: But that is just the engine. Take cars again. There is artificial intelligence there. How do you verify that? It is a big problem. They learn by driving. The algorithm is changing the more it learns. That is a bunch of software. You can abstract away the entire hardware platform. But still, how are you going to verify that? The level of verification that has to be done and shown to be valid is very significantly different from an app that I am using to play a game.

Bartley: We are seeing that problem with several applications, but they are not looking for traditional EDA solutions. They know there is a problem, and they know that the way they test that software is like the way we tested hardware 20 years ago: it is all directed test. But I am not sure they are looking to EDA for solutions.
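
As a rough sketch of that contrast (not any panelist's actual flow), the C++ below checks a hypothetical sat_add8() function two ways: a couple of handpicked directed cases, then a seeded constrained-random loop scored against an independent reference model. The function and names are invented for illustration.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <random>

// Unit under test: 8-bit saturating add (a stand-in for real software).
std::uint8_t sat_add8(std::uint8_t a, std::uint8_t b) {
    unsigned sum = unsigned(a) + unsigned(b);
    return sum > 0xFF ? 0xFF : std::uint8_t(sum);
}

int main() {
    // Directed style: a handful of handpicked cases.
    assert(sat_add8(1, 2) == 3);
    assert(sat_add8(0xFF, 1) == 0xFF);

    // Constrained-random style: thousands of legal inputs, each checked
    // against an independent reference model of the same behavior.
    std::mt19937 rng(1234);  // fixed seed so failures are reproducible
    std::uniform_int_distribution<unsigned> dist(0, 0xFF);
    for (int i = 0; i < 10000; ++i) {
        std::uint8_t a = std::uint8_t(dist(rng));
        std::uint8_t b = std::uint8_t(dist(rng));
        unsigned ref = std::min(unsigned(a) + unsigned(b), 0xFFu);
        assert(sat_add8(a, b) == ref);
    }
    return 0;
}
```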

Bailey: Here is the big challenge—I started my career as an embedded software developer in the avionics industry, and about four or five years ago I went to the embedded systems conference and walked around the exhibit hall, and in 20 years nothing had changed. IBM has bought up a lot of the tools, and when I asked them about randomization they were lost. That is the negative side. The other side is that we in EDA don’t really know how to talk to the software folks. But we have a lot that they could benefit from. The biggest challenge is the business model. One of the companies at the conference said that if you hire a truck driver, you are going to buy him a $100,000 truck. Otherwise he is useless to you. But we hire software engineers and all we want to give them is freeware to develop their software. And we expect them to be productive.

Kelf: We don’t often talk to the software guys, and we don’t often understand their space. I was involved with some guys from the virtual platform space, and they thought it could be a cheap tool because there are so many software guys. They leapt at the market with something that was really for the OS and driver guys, and found out that there weren’t many of them. Another interesting thing is that we have learned some things from the software guys. Formal verification, for example, was prevalent in the software space first. The idea of mutation coverage started there. So there has been some cross-pollination, and there probably are more things, such as random stimulus, that could be brought to the software world. That could be the way we start talking to each other.
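
A minimal sketch of the mutation-coverage idea Kelf credits to the software world. A real tool would generate the mutant automatically; here it is written by hand, and the function names are invented for the example.

```cpp
#include <cassert>

// Original: true when v lies in the half-open range [lo, hi).
bool in_range(int v, int lo, int hi)        { return v >= lo && v < hi; }

// Mutant: "<" flipped to "<=" at the upper bound.
bool in_range_mutant(int v, int lo, int hi) { return v >= lo && v <= hi; }

int main() {
    // A weak test that never probes the boundary passes for both
    // versions, so it leaves the mutant alive (poor mutation coverage).
    assert(in_range(5, 0, 10) && in_range_mutant(5, 0, 10));

    // A boundary test kills the mutant: the two versions disagree here,
    // which is exactly the evidence mutation coverage looks for.
    assert(!in_range(10, 0, 10));
    assert(in_range_mutant(10, 0, 10));
    return 0;
}
```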

SE: Where is SystemC in all of this? Why has it failed to provide the connection between hardware and software?

Bartley: We have some customers who are using SystemC.

Stellfox: The original idea was to model everything so you can build a virtual platform of the system, and then do it again in RTL. People don’t have the time. What I do see is that everyone needs to start software development earlier, and then you may only need a few key components, such as the compute subsystem. We have seen success with a hybrid approach, where we take a small number of virtual models to provide the ability to run software at a reasonable speed, connected to the real design. The other area where I see SystemC is high-level synthesis (HLS), where a lot of people are designing new IP.

Kelf: We are focused more on high-level synthesis. People are using HLS to take algorithms written in C++, and they need a more efficient way to get those into RTL than rewriting them. That is a niche, and people use it there. Because of the poor tools at this level, they continue to do most of their verification at the RT level, finding problems and going back to the SystemC to fix them. Formal verification is coming to SystemC. SystemC does have some language problems, such as the lack of an X state and race conditions between threads. The other problem is that you have HLS, and you have the people doing hardware platforms, and the hope is that you will have models in SystemC that can be used in both environments, and that will create a flow. That doesn’t work, because virtual platforms are abstract and high-performance, while HLS needs more timing to direct the flow. That creates a disconnect.
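
A minimal SystemC sketch of the thread-race problem Kelf mentions, assuming the open-source Accellera SystemC library is installed; the module and variable names are illustrative only. Two SC_THREADs update a plain C++ variable, so the result depends on unspecified process ordering, where an sc_signal would defer the update to the next delta cycle.

```cpp
#include <iostream>
#include <systemc.h>

SC_MODULE(Racy) {
    int shared = 0;  // plain C++ variable: no delta-cycle protection

    void writer_a() { shared = 1; }
    void writer_b() { shared = 2; }

    SC_CTOR(Racy) {
        SC_THREAD(writer_a);  // both threads run in the same delta cycle,
        SC_THREAD(writer_b);  // and their order is implementation-defined
    }
};

int sc_main(int, char*[]) {
    Racy r("r");
    sc_start();
    // Which writer "wins" is not defined by the language.
    std::cout << "shared = " << r.shared << std::endl;
    return 0;
}
```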

Bailey: I would not say SystemC specifically. It is one language, and it is not required to do the job that needs to be done. The second point is connected with why HLS has not really taken off. There is so much reused IP written in RTL that only new stuff is considered for HLS. If you are designing at the C level, you are so abstracted from the actual hardware implementation that people have a difficult time doing it, because they can’t relate what they are writing to what it implies in the hardware. This was also a problem initially when going from gates to RTL, where people who had been doing schematics for twenty years could not make the adjustment, but it was proven within one generation that new kids with the right training could do it. That is not the case today. One of the drivers for HLS is to raise the abstraction level so that you can simulate more cycles than is possible at RTL. That is OK because you are not creating another model. The reason models work for an ISS is that the effort put into them is amortized over a huge customer base. You can’t do that for every IP block. They will pick and choose the key ones. The other people using HLS are those who are accelerating algorithms into hardware alongside the servers. Search engine algorithms in FPGAs, for example. They are software people, and they write them in C and get them into an FPGA. But that is still not standard practice.
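
As a hedged illustration of that last point, the sketch below shows the restricted C++ style such algorithms are typically written in for HLS: fixed loop bounds, static storage, no dynamic memory. The FIR filter and its coefficients are invented for the example, and a real flow would add tool-specific pragmas for pipelining and unrolling, omitted here.

```cpp
#include <cstdint>
#include <iostream>

constexpr int TAPS = 8;

// 8-tap FIR filter in an HLS-friendly C++ subset.
std::int32_t fir(std::int32_t sample, const std::int32_t coeff[TAPS]) {
    static std::int32_t taps[TAPS] = {0};  // maps to a register chain

    for (int i = TAPS - 1; i > 0; --i)     // shift in the new sample
        taps[i] = taps[i - 1];             // (fixed trip count: synthesizable)
    taps[0] = sample;

    std::int32_t acc = 0;                  // multiply-accumulate; an HLS
    for (int i = 0; i < TAPS; ++i)         // tool can unroll this into
        acc += coeff[i] * taps[i];         // parallel MAC units
    return acc;
}

int main() {
    const std::int32_t coeff[TAPS] = {1, 2, 3, 4, 4, 3, 2, 1};
    for (int n = 0; n < 4; ++n)            // impulse response: 1, 2, 3, 4
        std::cout << fir(n == 0 ? 1 : 0, coeff) << "\n";
    return 0;
}
```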

Bartley: We have seen projects done in SystemC where things work fairly efficiently. The issue is that some of those algorithms are not fast enough at the RT level and have to be rewritten.

Kelf: It is not really a problem with the language. If you need to have timing, then it is a simulator problem and it will slow down.

Bartley: And another problem is that once you have the model, you can run it without having to pay for a license.

Bailey: Yes, and that is a problem. We have talked about what happened to the software industry. There is investment associated with synthesis, and nothing on the verification side, because you can’t make any money from it.

Stellfox: And there is no real advantage. People get so hung up on language, but a language doesn’t solve the problem by itself. Doing UVM-style verification in yet another language – what is the advantage? There is none. Will you bring a new way to solve constraints, or make it easier to define coverage? Just having a new language and repeating what has already been done in e or SystemVerilog adds nothing.

Bakshi: Some companies are promoting it because it is free. I see a lot of interest from Europe.

Kelf: I don’t really see that.

Stellfox: Yes – there are pockets, but the reason I don’t think it will be a big savior is that SystemVerilog already has everything necessary.

Bailey: One of the biggest challenges for us in verification is finding what will enable more verification cycles in a given period of time.

SE: What breakthroughs can we expect to see that will help us catch up?

Bailey: You have software simulation, you have emulation, which is 500 to 1,000X faster, and then you have prototyping, which is 10 to 50X faster than emulation. What comes next? I don’t know. I have some thoughts on the software simulation side, but that will not help the folks doing module- and block-level stuff that they could do in simulation. For that you may have to throw out Verilog and RTL simulation semantics, because they were written when everything was single-threaded. We should get semantics that are more reflective of the actual hardware, because they would match its parallel nature. I have heard of someone who looked at the feasibility of using an old fab to make silicon purely for verification and validation, so they could get more cycles done. They decided it was not economically feasible.
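
A toy sketch of what such parallel-friendly semantics could look like, invented here rather than proposed by the panel: every block computes next-state purely from current state, with one synchronized commit per cycle, so the evaluate phase could be distributed across threads without the event-ordering races baked into classic single-threaded RTL semantics.

```cpp
#include <iostream>

struct State { int a = 0, b = 0; };

int main() {
    State cur, nxt;
    for (int cycle = 0; cycle < 4; ++cycle) {
        // Evaluate phase: reads touch only `cur`, writes touch only `nxt`,
        // so these two "processes" could safely run on separate threads.
        nxt.a = cur.b + 1;
        nxt.b = cur.a + 2;

        cur = nxt;  // Commit phase: one synchronized update, like a clock edge.
        std::cout << "cycle " << cycle << ": a=" << cur.a
                  << " b=" << cur.b << "\n";
    }
    return 0;
}
```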

Kelf: You can’t just speed up simulation or emulation; that is the same approach, and it is not keeping up. Formal is a different way to look at it, and we can satisfy certain things with that. There is a lot more we can do there, but it is not the whole answer, either. There is a lot we have not yet tapped in formal, or in using simulation and formal together to effectively explore issues in the design.
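
To make the contrast concrete, here is a brute-force sketch of the exhaustive mindset behind formal: walk every reachable state of a toy FSM and check a safety property on all of them, rather than sampling stimulus. Real formal engines do this symbolically at scale; the FSM and property here are invented for illustration.

```cpp
#include <cassert>
#include <queue>
#include <set>

// Toy FSM: 3-bit state encoding, 1-bit input. States 6 and 7 exist in
// the encoding but should never be reachable from reset.
int next_state(int s, int in) { return (s + in) % 6; }

int main() {
    std::set<int> seen = {0};   // reset state
    std::queue<int> work;
    work.push(0);
    while (!work.empty()) {     // breadth-first walk of ALL reachable states
        int s = work.front();
        work.pop();
        assert(s < 6);          // safety property, checked on every state
        for (int in = 0; in <= 1; ++in) {
            int n = next_state(s, in);
            if (seen.insert(n).second)
                work.push(n);   // newly discovered state: explore it too
        }
    }
    return 0;                   // property holds over the whole state space
}
```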

Stellfox: Everyone has been looking for a silver bullet for as long as I have been in this space. There hasn’t been one, and I don’t think there ever will be. The state of the art is pretty good, and we do have a range of engines spanning simulation, emulation and formal. The big thing I see is being able to pull those together in an effective flow. That will have the biggest impact for customers. Take those technologies, leverage them in the right way, and allow people to do the work effectively. This is different depending on whether you are doing low-power or safety verification, SoC or IP.

Bartley: Going with a single model does tend to tie you into a single vendor. With high-level models, it is not always about how fast we can go; that is only one aspect. Formal is part of it for the same reason.

Bakshi: We don’t need verification if we focus on creating automatic correct-by-construction flows.
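
A hedged sketch of that correct-by-construction idea: a single declarative register spec (the names and offsets are invented here) generates the firmware-facing header, and in a real flow would also generate the RTL and verification models from the same source, so the artifacts cannot drift out of sync.

```cpp
#include <cstdio>
#include <string>
#include <vector>

// One declarative description of the register map...
struct RegSpec { std::string name; unsigned offset; };

int main() {
    const std::vector<RegSpec> spec = {
        {"CTRL",   0x00},
        {"STATUS", 0x04},
        {"DATA",   0x08},
    };
    // ...drives every generated artifact. A real flow would also emit RTL
    // and UVM register models from this same source of truth, removing the
    // need to cross-verify hand-written copies against each other.
    for (const RegSpec& r : spec)
        std::printf("#define REG_%s_OFFSET 0x%02Xu\n",
                    r.name.c_str(), r.offset);
    return 0;
}
```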


