Experts at the Table, Part 1: The industry has long considered verification to be a bottom-up process, but there is now a huge push to develop standards for top-down verification. Will they meet comfortably in the middle?
Since the early days of the EDA industry, the classic V diagram has defined the primary design flow. On the left-hand side of the V, the design is progressively refined and partitioned into smaller pieces. At the bottom of the V, verification takes over, and as you travel up the right-hand side, verification and integration happen until the entire design has been assembled and validation can be performed.
But along the way, as designs became larger, the V started to break down. Increasing numbers of companies wanted to start verification on the left-hand side, while the design decisions were being made, so that problems could be found, and rectified, closer to the point where they were introduced and before detailed design had been done.
At the same time, logic simulation became unable to process the whole design, meaning that additional tools had to be brought in to solve pieces of the verification challenge. This created an opportunity to rethink the way in which the verification flow should be defined.
Semiconductor Engineering sat down to discuss these issues with Stan Sokorac, senior principal design engineer for ARM; Frank Schirrmeister, senior group director for product marketing for the System Development Suite at Cadence; Harry Foster, chief verification scientist at Mentor Graphics; Bernie DeLay, group director for verification IP R&D at Synopsys; and Anupam Bakshi, CEO of Agnisys. What follows are excerpts from that conversation.
SE: Block-level verification has become pretty mature in terms of tools and methodologies. Top-down verification is in its formative stages. Meet-in-the-middle is about making sure that the two verification flows actually stand a chance of working together. Will they succeed?
Sokorac: You have hit the nail on the head. When I talk to a lot of customers, it is amazing how many have problems when they go from the IP level to the SoC level, and the kinds of issues are quite fundamental. A lot of them go back to the notion of the IP sign-off criteria. They do not appear to have standard criteria for everything needed for handoff, such as design verification closure, what they did for lint, etc. What have they defined and handed off? More than that is needed. Have they embedded assertions?
Schirrmeister: If you look at the charter of the Accellera group, they have three items that they are looking at: horizontal reuse – how to make the stimulus portable; vertical reuse – by which they mean reuse from IP verification to the SoC level; and third, reuse between disciplines. Can a coherency expert talk to a power expert, and can they exchange data efficiently? Going back to vertical reuse, the key question they are starting to ask is: if you have IP that is verified already, then what can you reuse at the SoC level without exhaustively redoing everything that was done at the IP level? That is what meet-in-the-middle is all about.
Foster: Another piece of the problem is related to reuse of the collateral. For example, at the IP level, a lot of software is created just for configuration. The challenge is that as we move up to integration, that information has to be relearned because it is not packaged in a way that it can be re-used. That is just one of the struggles associated with meeting in the middle. What is the process in terms of the integration? You find people fail when they jump straight from putting things together to running use cases. They are not finding the problems efficiently. It needs to be done in a systematic manner. ‘Let’s check the connectivity, let’s check that the processor can talk to each IP.’ These first need to be done independently before jumping into the more complex use cases. In terms of coverage, we have different objectives at the system level.
Schirrmeister: We try to avoid the term coverage at the SoC level because it is different.
DeLay: It is a metric and not coverage. There is a lot of confusion when you say coverage.
Foster: Yes – people immediately think you are talking about code coverage.
DeLay: I like to call them verification metrics. It is a metric that can be looked at to tell you how far you have gotten. And I use plural because there is more than one. There is no one single metric. Connectivity, stuff associated with interrupt scenarios, etc.
Foster: I like the term that ARM uses – statistical coverage.
Sokorac: Right, it is not simply a case of have you done this or that. It is about what you are doing across the whole system. What is the profile of the stimulus, etc.
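As a concrete illustration of the systematic first-step checks Foster describes, here is a minimal sketch of SoC-level connectivity assertions in SystemVerilog. The design hierarchy and signal names (tb_top.dut, uart0, intc) are assumptions for illustration, not taken from the discussion.

module soc_connectivity_checks (input logic clk);
  // Each assertion verifies that an IP output actually reaches the
  // intended input at the SoC level, before any use case is run.
  a_uart_irq: assert property (@(posedge clk)
    tb_top.dut.uart0.irq == tb_top.dut.intc.irq_in[3]);

  a_timer_irq: assert property (@(posedge clk)
    tb_top.dut.timer0.irq == tb_top.dut.intc.irq_in[4]);
endmodule

Checks like these also can be handed to a formal connectivity tool, so integration errors are found without writing any use-case stimulus at all.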
Bakshi: I feel that there is a complete disconnect between the top and bottom level guys. SystemC and MATLAB are used at the top level where you have architecture and people are exploring the possible solutions. Then there is successive refinement and then a different set of people who are creating the IP. They are good at it, but they are disconnected. The solution is a common specification, which can be refined from the top to the bottom. There are metrics that can prove if they have both developed to the same specification. It has to be specification-driven. The people doing system-level modeling are not the same guys doing the IP development.
Schirrmeister: That is another layer to the onion. What we were talking about was chip-level verification. Use-cases put upon the IP. What you are talking about is a top-down flow with refinement, making the IP fit into it. And yes, there is a disconnect there. There is very little reuse of the system modeling going down into the flow apart from some of the testbench data generation.
Bakshi: I feel that the verification issues come in because there is a disconnect, and that forces a bigger focus on verification. In an ideal world, if we did not have the disconnect, then you would not need the verification.
DeLay: But we are living in an ecosystem where there are a lot of different IP providers and most of the IP is not even internal.
Foster: A lot of the IP may not have even been verified in the configuration that I am using.
DeLay: Yes, that is an issue. In the ideal world where everything is in-house and everything gets handed down, that would be great. That may be correct for some key functional parts. But when you get your IP from everywhere – now I have to figure out how I am going to maximize efficiency of my verification and everything associated with that.
Foster: When I start integrating IP, particularly third-party IP, or even IP that has been developed internally but you no longer have access to the original development team, I have lost all of the collateral, all of the knowledge about how to configure it, and that has to be relearned. The integration step is becoming a humongous problem.
Schirrmeister: Let’s take a specific example: UVM versus top-down, and consider what portable stimulus does with UML definitions. Do you define sequences in UVM and have people coming from an IP-centric view do that? There are some customers who are very conscious of the need to reuse those sequences at the higher level, so they build a layer, written in C. Then they know that it will run on an ARM sub-system or any other processor sub-system. They build a small abstraction layer on top of UVM with the intent that the sequences can be re-used. Then the key question becomes, ‘What can I avoid doing again that is a repeat of what was done for IP verification?’ If you don’t do this, you are just broadening the problem and you will use an infinite number of cycles. So this brings up language issues. Do I write it in C so that I can bring it up on a processor? What does coverage mean, and what do I consider as being done? If, at the SoC integration level, I find issues within the IP, then someone isn’t doing their job on the IP side, because I shouldn’t find issues there.
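One way such a layer could be structured is sketched below in SystemVerilog. All names here (bus_write_seq, reg_access_api, the register addresses) are hypothetical: scenarios are written against an abstract register-access API, which can be bound to an IP-level UVM sequence in simulation or, through a DPI/C implementation, to firmware running on the processor.

import uvm_pkg::*;
`include "uvm_macros.svh"

// Stand-in for an existing IP-level sequence that is being reused.
class bus_write_seq extends uvm_sequence #(uvm_sequence_item);
  `uvm_object_utils(bus_write_seq)
  bit [31:0] addr, data;
  function new(string name = "bus_write_seq");
    super.new(name);
  endfunction
  virtual task body();
    // The real IP-level sequence would create and drive a bus item here.
  endtask
endclass

// The thin abstraction layer: scenarios see only this API.
virtual class reg_access_api;
  pure virtual task write_reg(bit [31:0] addr, bit [31:0] data);
endclass

// Simulation binding: forwards each access to the IP-level UVM sequence.
class uvm_reg_access extends reg_access_api;
  uvm_sequencer_base sqr;  // sequencer of the existing IP bus agent
  function new(uvm_sequencer_base sqr);
    this.sqr = sqr;
  endfunction
  virtual task write_reg(bit [31:0] addr, bit [31:0] data);
    bus_write_seq seq = bus_write_seq::type_id::create("seq");
    seq.addr = addr;
    seq.data = data;
    seq.start(sqr);
  endtask
endclass

// A scenario written once against the API. A second implementation of
// reg_access_api (for example, DPI calls into C firmware) retargets the
// same scenario to the SoC level without rewriting it.
task automatic configure_block(reg_access_api bus);
  bus.write_reg(32'h4000_0000, 32'h1);   // hypothetical enable register
  bus.write_reg(32'h4000_0008, 32'hFF);  // hypothetical mask register
endtask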
Foster: You always have to ask the question, ‘Why didn’t I find this problem before?’ If you don’t answer this question, you will never get to optimize your flow.
Sokorac: That is really not figured out very well. As an IP verification guy, I know that we will not find every bug at the IP level. Something will be missed. So you can start to verify way too much because you are worried about what you missed. That can waste a huge amount of resources. What are the assumptions that you made at the unit level?
DeLay: Assertions do play a key part in that. If you are delivering a proper set of assertions, probably auto-generated from design intent, you can verify that the design has not gone outside of the criteria. It may not be nirvana, but at least it does capture design intent.
Sokorac: It is a start. It gives you some of the interface assumptions but you are putting a lot of assumptions into the model.
DeLay: Assertions have to go beyond the I/O; they have to go into the control logic. Today it is typically only done on the I/O.
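The distinction DeLay draws can be illustrated with a minimal SystemVerilog sketch. The module, signal, and port names below (arbiter, req, ack, grant_q) are assumptions; the point is the pairing of a traditional I/O handshake property with a property on internal control logic, bound into the design so internal signals are visible without touching the RTL.

module arbiter_checks (
  input logic       clk,
  input logic       rst_n,
  input logic       req,    // external handshake: request in
  input logic       ack,    // external handshake: acknowledge out
  input logic [3:0] grant   // internal one-hot grant vector
);
  // Traditional I/O assertion: every request is acknowledged in 1 to 8 cycles.
  a_req_ack: assert property (@(posedge clk) disable iff (!rst_n)
    req |-> ##[1:8] ack);

  // Control-logic assertion: the internal grant must stay one-hot or idle.
  a_grant_onehot: assert property (@(posedge clk) disable iff (!rst_n)
    $onehot0(grant));
endmodule

// Bind the checker into the design so the internal grant register is
// observable; the target module and its port names are assumptions.
bind arbiter arbiter_checks u_arbiter_checks (
  .clk(clk), .rst_n(rst_n), .req(req), .ack(ack), .grant(grant_q)
);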
Sokorac: A lot of the bugs tend to be hiding where you made an assumption in the bus functional model. This happens when a dependency isn’t described that exists in the real system. There is no real way to verify that the model behaves in the same way as the real system on the next level.
Foster: The issue is really about the environment which is where the assumptions are made. Then you see the interactions between them when you start the integration.
Bakshi: What you are talking about is verification, but we should also be talking about design. Verification is an afterthought.
Foster: Speaking as a verification bigot, the question is: what is between verification and validation? What is that middle ground? There are different objectives.
Schirrmeister: To me it is both. There is a big disconnect organizationally. In the extreme case, you have 300 blocks that are re-used, and some IP that is custom designed. So how do you even specify it? In reality there is no such thing as a spec that is frozen and from which you can do refinement. Yes, there may be good intentions, but the product manager will insert himself and make changes. Remember the platform triangles, where you have the design space at the top. You make decisions and refine your combination of hardware and software, and then you implement and verify against that. Then there is the validation aspect, where you need to ensure that what you are implementing does what you intended it to do. Lots of challenges.