Experts at the Table, Part Two: The industry has long considered verification to be a bottom-up process, but there is now a huge push to develop standards for top-down verification. Will they meet comfortably in the middle?
Semiconductor Engineering sat down to discuss these issues with Stan Sokorac, senior principal design engineer at ARM; Frank Schirrmeister, senior group director of product marketing for the system development suite at Cadence; Harry Foster, chief verification scientist at Mentor Graphics; Bernie DeLay, group director for verification IP R&D at Synopsys; and Anupam Bakshi, CEO of Agnisys. What follows are excerpts from that conversation. To view part one of this discussion, click here.
Bakshi: A model developed and verified at the top level shows that you meet the performance number, and that should be successfully broken down. It should become the verification blueprint.
DeLay: Ultimately, whether it is met or not is a verification task. Verification actually proves that you met the design intent.
Schirrmeister: Is it really verification or is that validation?
DeLay: That is not a bad way to look at it, especially at the SoC level.
Schirrmeister: I am trying to make sure that the intent as specified at the top level can actually be met by using this IP. I am not verifying the IP itself.
Foster: We are seeing more and more projects where they are just integrators. All they do is validation.
Schirrmeister: At the low end and in particular for IoT, people want it to be drag and drop.
DeLay: It goes back to the metrics you decide to use at the SoC level. Not only is there an IP sign-off criteria, but there is also an SoC criteria that is just as important. You need to define this up front and have the tool manage that. It is the intent of the SoC that you are capturing with those metrics.
Schirrmeister: When someone builds an IP sub-system, we can add assertions for the module to raise its hand when it is misused or used beyond the specified intent. Then there is also a reverse element, which is performance validation, where you need to present a view of the IP to the integrator that articulates the parameters within which the integrator has to stay for the IP to work correctly. That is a new set of challenges and, on the tools side, we can help with some of it, but it is not a totally figured-out process.
Sokorac: As an IP verification guy, it is really hard to define the potential performance problems that we will have when we get plugged into other IPs. We are spending increasing effort putting together combinations of IPs and running performance analysis so that we can learn more. Once we develop the experience to understand where the bottlenecks could happen, we can start thinking about how to spec out those kinds of things and catch them without even having to run them together.
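The kind of pre-integration check being described can be approximated statically: sum each IP's declared peak bandwidth demand on each shared link and flag links that are over budget, without running the IPs together. This is only a sketch of the idea; the IP names, link names, and bandwidth figures below are invented for illustration and do not come from any real datasheet or tool flow.

```python
# Hypothetical static bottleneck check. All names and numbers are
# illustrative assumptions, not real IP or interconnect values.

def find_bottlenecks(ip_demands, link_budget_gbps):
    """Sum each IP's peak bandwidth demand per shared link and
    report links whose total demand exceeds the stated budget."""
    totals = {}
    for ip, (link, gbps) in ip_demands.items():
        totals.setdefault(link, 0.0)
        totals[link] += gbps
    return {link: total for link, total in totals.items()
            if total > link_budget_gbps.get(link, float("inf"))}

# Illustrative integration: two masters share "noc0", which is over budget.
demands = {
    "cpu": ("noc0", 12.0),
    "gpu": ("noc0", 10.0),
    "dma": ("noc1", 4.0),
}
budgets = {"noc0": 16.0, "noc1": 8.0}
print(find_bottlenecks(demands, budgets))  # → {'noc0': 22.0}
```

A check like this is obviously far coarser than actual performance simulation, but it captures the panel's point: once the bottleneck patterns are understood, they can be specced and screened before the IPs are ever run together.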
SE: Integrators inherently trust your IP, but when a problem is suspected in the IP during integration, how do you find out where the problem is, and how do you convey that to the IP developer?
Foster: This is a problem particularly as we see more integration and a lack of knowledge of what is happening inside the IP. The lack of collateral that is delivered is an issue. They may not have configured it right.
Schirrmeister: The question is how formalized is it and how much tool support there is.
DeLay: Consider the guy doing the top-level system architecture who defines the performance constraints that are expected. In the ideal world, we are interested in how those constraints are passed down into the verification environment and to make sure you are actually still passing those initial performance characteristics. There is stuff happening in this area, and performance is just an example, where you are passing information down from the SoC level. There are other areas that could be worked on, but each area that is important has to be looked at individually.
Bakshi: There has to be something derived from the specification, and that becomes the metric for the IP at the bottom.
Schirrmeister: This makes it very difficult for IP to be developed. That is why meet-in-the-middle exists. I would augment it to state that it is not really a metric for the IP; it is a metric for the system. Then for the IP provider, were they able to predict…
Bakshi: You are thinking in backward terms. Forget about the IP and who is providing it. If you are the system designer, how do you think? Then you will figure out where the IP is and whether it can fit.
Foster: Part of that thought process is going through what is available and what is understood and asking if it will fit in. But will it really do that? Part of the problem is that we create these highly configurable IPs and we cannot verify all of them. When one does get integrated, we make the assumption that it will work within this range, but then we find out that it doesn’t quite get there.
SE: Where does High Level Synthesis fit in this meet-in-the-middle flow?
Schirrmeister: This is becoming mainstream, and there have been more than a thousand chips taped out, but you have to be very careful—and this is part of what meet-in-the-middle is all about. Pure top-down flow is no longer done. It is just not possible. High-level synthesis is very successful in the domain where you have a spec and you refine it and figure out which blocks fit in, such as the processing cores and the peripherals, and then for the three to five blocks remaining, I have to do a make versus buy decision. Within this environment, I can use high-level synthesis and the tools understand the interfaces and can synthesize the block to be compatible with the communications infrastructure around it. But this is not all of the design.
DeLay: Regardless of it being behavioral or not, you still have to perform a verification task to ensure that the initial behavioral description is met. You still end up with the same problem, which is how do you verify the design intent at the IP and SoC levels. It is nothing to do with the source of the design. It is about how you validate that it does what was intended and what you use to verify that.
Sokorac: The IP that was designed behaviorally might be a little easier to verify, but that is about it. The problem of putting them together still exists. The middle still exists.
Schirrmeister: IP reuse is a key driver today for high-level synthesis. A key benefit of high-level synthesis is that when you make changes at the higher level, to fit into a new technology, or changes in performance characteristics, these are easier to do top-down rather than bottom-up.
Bakshi: What we need is a formalized verification intent specification language.
Foster: This is something that we have discussed for years and I am a formal guy and love formal languages, but there is a problem with over-specifying a design when you are thinking about it. You are limiting the degrees of freedom in such a way that it becomes an implementation. The best formal language may be English. I am very skeptical that we could ever achieve anything better.
Sokorac: It may be in the achievable realm for the block level.
Foster: Yes, there are things that we can do.
Schirrmeister: There are formal techniques, and there are apps that deal with interconnect, and this is a subset of the overall problem. In the past we were under the impression that you could build an executable working specification of the complete thing and that everything derives from that. At a certain level you can, but we have never been able to build a formal spec – except in English.
DeLay: We can do certain things that make a lot of sense. Connectivity, power, reset circuits, clocks, etc.
Bakshi: These are all examples of design and not verification intent.
Schirrmeister: Is this the use-cases, the scenarios that need to be verified?
Bakshi: Yes, and these should be successively refined from the top to the lower levels. So a performance directive should translate into a coverage bin somewhere.
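The idea of a performance directive translating into a coverage bin can be sketched concretely. The following is a minimal, hedged illustration in Python rather than any real coverage tooling: an assumed directive of "read latency must stay under 100 cycles" is refined into named bins, and observed latencies are tallied against them.

```python
# Minimal sketch of refining a top-level performance directive into
# coverage bins. The directive, bin boundaries, and sample latencies
# are all invented for illustration.

def bin_latency(cycles):
    """Map one observed read latency to a named coverage bin."""
    if cycles < 50:
        return "fast"
    if cycles < 100:
        return "ok"          # still meets the 100-cycle directive
    return "violation"       # directive missed; should stay empty

def coverage(observations):
    """Tally observed latencies into the directive's bins."""
    hits = {"fast": 0, "ok": 0, "violation": 0}
    for cycles in observations:
        hits[bin_latency(cycles)] += 1
    return hits

hits = coverage([20, 45, 70, 95, 30])
print(hits)                    # {'fast': 3, 'ok': 2, 'violation': 0}
assert hits["violation"] == 0  # the directive held on this run
```

In a real flow the same refinement would land in a SystemVerilog covergroup or similar, but the mapping is the point: the top-level number becomes something a lower-level testbench can measure and close on.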
DeLay: You have to step back and look at what the original architects are using to specify and design. They are using a language that they are very familiar with and can describe the overall system. To try and change the overall system design is difficult.
Foster: I do believe that there are a lot of opportunities for better formal specification. For example, we lack a good formal specification between analog and digital and this causes so many problems such as reverse polarity.
Sokorac: Interconnect and protocols.
Schirrmeister: Register specs.
Bakshi: Yes – this is a very small area, but it has been successful. You specify things at a high level and then everything comes out of it.
Foster: When you can narrow the scope of the problem in such a way that you can define it formally, then it works great. The problem is when you try and apply it to the whole problem.
Sokorac: You will have goals such as "my mobile needs to achieve a certain benchmark," but we don’t need to look to formal tools to break that down into coverage bins. It is part of the system design task to figure things like that out and what each piece needs to do to achieve it. Then the role of verification is to run that benchmark. That will not come out of automated tools.
Bakshi: It might not apply equally, but can it provide the boundaries for the lower-level IP?
DeLay: There is a subset of the tasks for which that may be a good solution. We may be able to expand the scope of them, but the pie-in-the-sky goal is a large leap, and that is actually what we are trying to address here. We have to verify everything, and we are right in the middle of it.
Schirrmeister: For meet-in-the-middle to work, there are some characteristics that I have to define—metadata for the interface, metadata for performance, and then also for power intent, etc. This is where all of the IPs that are being integrated have to be exposed to the higher-level modeling tools.
DeLay: We provide IP-XACT descriptions of our IP, and they define the registers and other collateral that enable us to meet in the middle.
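For readers unfamiliar with IP-XACT (IEEE 1685), the register collateral being described is XML of roughly the following shape. This is a trimmed, hand-written fragment in the 1685-2014 namespace; the vendor, block, and register names are invented for illustration and do not describe any actual product.

```xml
<ipxact:component
    xmlns:ipxact="http://www.accellera.org/XMLSchema/IPXACT/1685-2014">
  <ipxact:vendor>example.com</ipxact:vendor>
  <ipxact:library>ip</ipxact:library>
  <ipxact:name>uart</ipxact:name>
  <ipxact:version>1.0</ipxact:version>
  <ipxact:memoryMaps>
    <ipxact:memoryMap>
      <ipxact:name>regs</ipxact:name>
      <ipxact:addressBlock>
        <ipxact:name>ctrl_block</ipxact:name>
        <ipxact:baseAddress>0x0</ipxact:baseAddress>
        <ipxact:range>0x100</ipxact:range>
        <ipxact:width>32</ipxact:width>
        <ipxact:register>
          <ipxact:name>CTRL</ipxact:name>
          <ipxact:addressOffset>0x0</ipxact:addressOffset>
          <ipxact:size>32</ipxact:size>
          <ipxact:access>read-write</ipxact:access>
        </ipxact:register>
      </ipxact:addressBlock>
    </ipxact:memoryMap>
  </ipxact:memoryMaps>
</ipxact:component>
```

Because the format is standardized, integration tools on the SoC side can consume the same description the IP vendor ships, which is what makes it usable as meet-in-the-middle collateral.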
Sokorac: An increasing number of our partners want to know how our IPs will perform when integrated. We try to show them information about what we tried and what we got.
Foster: I see about 20% of designs being nothing other than assembly, and that is growing. There is no custom development.
DeLay: Especially in the IoT space.
Foster: If the IPs were independent, then the system level would be easy, but once they have any shared state, you have a new set of problems.
DeLay: The verification task will continue to increase.