Experts at the table, part 3: panelists discuss verification processes and the need for a system-level assembly process.
First-time success has been the ultimate goal for semiconductor companies due to escalating mask costs, as well as a guiding objective for the development of EDA tools, especially in the systems and verification space. These pressures are magnified for the Internet of Things (IoT), especially for edge devices. Have system-level tools been able to contribute to first-time success and control costs, or are they an additional investment whose impact is still being debated? Semiconductor Engineering sat down with Frank Schirrmeister, group director, product marketing for the System Development Suite at Cadence; Eshel Haritan, vice president of engineering at Synopsys; Jon McDonald, strategic program manager in the design creation business unit of Mentor Graphics; Grant Pierce, CEO of Sonics; and Alex Starr, AMD Fellow and pre-silicon solutions architect at AMD. In part one, the panelists discussed the impact that the IoT will have on tools and methodologies and the role of system-level tools. In part two, the discussion focused on ways to differentiate products in an IoT market and the need for tools that help speed decision making and assembly. What follows are excerpts of that conversation.
SE: How does system-level verification reduce the amount of verification I now have to do at RTL?
Haritan: The first consideration is tools. You have to have efficient tools to create the models. There is a significant amount of time when you don’t have RTL. If you could have verified RTL on day one and be running on an emulator or FPGA prototype, then it would be a different discussion. But there is a period at the beginning when all you have are specs, and RTL is probably missing for some of the blocks. You need tools that can create the models efficiently, in a standards-based environment, and after that comes assembly. Then we can look at sub-system-based design and derivative design, and at getting that to be as fast as possible. It is an investment. With the companies we work with, we can get software working within three days to three weeks after silicon is available. This used to take six months, and that is a huge return on investment. How many companies can invest to get there? Not everyone, but the leading semiconductor companies are doing it. Most of the other companies are just making small modifications.
Pierce: We have spent a lot of time as a company focusing on the fact that we are involved as soon as a chip is conceived. Until you add interconnect, you just have a bunch of IP cores lying on the table. Different parts of the design process care about a different level of accuracy. The earliest guys want to be able to identify which IP cores are needed. They do not care about the micro-architectural tradeoffs that get you into cycle accuracy. What is wrong is the idea that one person works on it, says I am done, and hands it off to the next guy. It is thrown over a wall. We need more methodology for hand-off. Even if the IP is perfect, and the interconnect is perfect, and the tools are perfect, what is screwed up is the methodology that allows them to use everything together. The intent often gets lost.
Starr: When we are enabling software teams using emulation or other types of environments, they will have a list of prerequisites about what they want working. You give that to the design and verification guys and they say, I don’t know what any of that means. At the hand-off they speak different languages.
SE: Does this mean that we have to make everything one tool?
McDonald: I don’t think so. At the system level you have languages, SystemC and TLM, so you can create a platform that can be used to verify function and some levels of performance. That model can become the predictor…
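To make the kind of platform McDonald is describing concrete, the sketch below is a minimal, hypothetical SystemC/TLM-2.0 fragment rather than any vendor’s actual flow: a stand-in initiator issues a memory-mapped transaction to a purely functional target model, so function can be checked, and timing roughly estimated, long before RTL exists. The module names Cpu and Memory and the 10 ns latency are illustrative assumptions only.

```cpp
#include <cstdint>
#include <cstring>
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_initiator_socket.h>
#include <tlm_utils/simple_target_socket.h>

// Purely functional memory model: the kind of TLM target that stands in
// for RTL that does not exist yet.
struct Memory : sc_core::sc_module {
  tlm_utils::simple_target_socket<Memory> socket;
  unsigned char mem[4096] = {};

  SC_CTOR(Memory) : socket("socket") {
    socket.register_b_transport(this, &Memory::b_transport);
  }

  void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
    unsigned char* data = trans.get_data_ptr();
    uint64_t addr = trans.get_address();
    unsigned len = trans.get_data_length();
    if (trans.is_write()) std::memcpy(&mem[addr], data, len);
    else                  std::memcpy(data, &mem[addr], len);
    delay += sc_core::sc_time(10, sc_core::SC_NS);   // crude timing annotation
    trans.set_response_status(tlm::TLM_OK_RESPONSE);
  }
};

// Stand-in for the software side: issues transactions against the model.
struct Cpu : sc_core::sc_module {
  tlm_utils::simple_initiator_socket<Cpu> socket;

  SC_CTOR(Cpu) : socket("socket") { SC_THREAD(run); }

  void run() {
    int value = 42;
    tlm::tlm_generic_payload trans;
    sc_core::sc_time delay = sc_core::SC_ZERO_TIME;
    trans.set_command(tlm::TLM_WRITE_COMMAND);
    trans.set_address(0x100);
    trans.set_data_ptr(reinterpret_cast<unsigned char*>(&value));
    trans.set_data_length(sizeof(value));
    socket->b_transport(trans, delay);   // functional check plus rough timing
    wait(delay);
  }
};

int sc_main(int, char* []) {
  Cpu cpu("cpu");
  Memory mem("mem");
  cpu.socket.bind(mem.socket);
  sc_core::sc_start();
  return 0;
}
```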
SE: But Gary Smith says there is no connection point. He says that when you come down from the top level, you have lost all ability to back-annotate to see the high-level impacts.
Schirrmeister: The flows have to come closer together. When you add two things, you have to make sure that the sum requires less effort than the parts. The interconnect guys have figured this out quite well. From a single definition, they spew out every model they can: cycle-accurate, high-level… It is automated in the flow. That is one piece of the puzzle. The IP, in a perfect world, would provide a transaction-level model, and it would be plug and play instead of plug and pray. IP providers that really care do provide these high-level models. For some IP you don’t get the transaction-level model, and it really breaks down when you plug it all together at the RT level. We have it for some pieces, and high-level synthesis is getting there, so that you have both a high-level description and the RTL.
Haritan: On the verification side and the software development side, to address risk and ROI, companies need to build from a good starting point. Nobody starts from a blank slate. They take the previous program and build on it. We did the same for virtual prototyping. We started building kits. For each architecture where we think we can succeed in the market, we build the “motherboard” and allow blocks to be added. This gives you a good starting point, and out of the box you can boot Linux or Android. Then you can change the interrupt controller or the DMA to suit your needs, and you start making incremental changes. To add your own blocks, you need to learn how to do that, but it is not like the past, where we just gave them SystemC and TLM and said good luck. The core model has been written by people who have been doing this for many years, so it is efficient.
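The kit, or “motherboard,” approach Haritan describes can be sketched the same way. This is an invented illustration of the pattern, not the structure of any commercial kit: the pre-assembled platform programs against a fixed interface, the kit ships a generic block behind it, and a team drops in its own implementation without touching the rest of the assembly. The interface intc_if and the modules KitIntc, MyIntc and Platform are hypothetical names.

```cpp
#include <iostream>
#include <systemc>

// The fixed contract the pre-assembled platform programs against.
struct intc_if : virtual sc_core::sc_interface {
  virtual void raise_irq(int line) = 0;
};

// Generic interrupt controller shipped with the kit.
struct KitIntc : sc_core::sc_module, intc_if {
  SC_CTOR(KitIntc) {}
  void raise_irq(int line) override {
    std::cout << "kit INTC: irq " << line << " at "
              << sc_core::sc_time_stamp() << std::endl;
  }
};

// A customer-specific controller dropped in later; the rest of the
// platform does not change.
struct MyIntc : sc_core::sc_module, intc_if {
  SC_CTOR(MyIntc) {}
  void raise_irq(int) override { /* custom prioritization, routing, ... */ }
};

// The "motherboard": its blocks talk to whichever controller is bound.
struct Platform : sc_core::sc_module {
  sc_core::sc_port<intc_if> intc;

  SC_CTOR(Platform) : intc("intc") { SC_THREAD(timer); }

  void timer() {
    wait(sc_core::sc_time(1, sc_core::SC_US));
    intc->raise_irq(5);   // e.g. a timer block signalling an interrupt
  }
};

int sc_main(int, char* []) {
  Platform platform("platform");
  KitIntc intc("intc");          // swap in MyIntc here without touching Platform
  platform.intc.bind(intc);
  sc_core::sc_start();
  return 0;
}
```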
Schirrmeister: I think there has been progress in reducing costs. There is still more than enough left to do that it will probably take the next 10 or 20 years. We need to bring this up to the level where we have a silicon virtual platform that is fully consistent with the RTL and from which everything can be derived. I think we will get there. Interconnect is there. At the next level, some of the IP blocks are there; topology interconnect isn’t there yet. I think we can get there, and it may take another five years, but once we are there the impact will be significant. The results will be measurable.
Pierce: What we need is rapid system-level assembly of a design, using a successive-refinement process, to get to cycle-accurate RTL with a true tie-in to the IP so that it can be used without change. We also have to take into account that we don’t want to change the software, because there are ten times more software developers than hardware people. We see it being done, and the real issue is how we get this into the mainstream. That is where the rubber will meet the road. If you don’t use these techniques, you will lose a lot of money.
Haritan: The design of complex chips is moving to be sub-system based. This is because the designs are so complex that you don’t want to make big changes, and you don’t want everything to be tightly coupled. You want things to be as disjoint as possible so that when you do a revision, you don’t have to change all of the IP. I see system-level design becoming more mainstream, and the market will provide some of the models for IP. There is no alternative, because otherwise there is no ROI. If the architecture is not right, and you don’t squeeze the design cycle by doing software early, you cannot succeed.
McDonald: The original premise for ESL was that if you get it wrong you are dead. We are seeing customers that understand that you cannot afford to build the wrong thing and the only way to verify that you are building the right thing is to model it up front. Model it before you build it.
Starr: There are two things. First, the architecture of the SoC and the platform has to be amenable to a plug and play approach. That means clean bus interfaces. Second, the tools and the capabilities that enable plug and play of different models, at different levels of abstraction, need to exist so that it becomes a non-work item. You want to spend the time working on the differentiation, not waste it on SoC construction, which adds no value.
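The clean bus interfaces Starr mentions are, in transaction-level terms, standard memory-mapped sockets, so that wiring a new block into the SoC model becomes mechanical. A final hypothetical sketch along the same SystemC/TLM-2.0 lines, again not tied to any particular tool: a trivial bus performs the address decode, and any block that exposes a standard target socket plugs in behind it without changes to the rest of the construction.

```cpp
#include <cstdint>
#include <cstring>
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_initiator_socket.h>
#include <tlm_utils/simple_target_socket.h>

// Any block exposing a standard target socket plugs into the bus unchanged.
struct Reg : sc_core::sc_module {
  tlm_utils::simple_target_socket<Reg> socket;
  uint32_t value = 0;

  SC_CTOR(Reg) : socket("socket") {
    socket.register_b_transport(this, &Reg::b_transport);
  }

  void b_transport(tlm::tlm_generic_payload& t, sc_core::sc_time&) {
    if (t.is_write()) std::memcpy(&value, t.get_data_ptr(), sizeof(value));
    else              std::memcpy(t.get_data_ptr(), &value, sizeof(value));
    t.set_response_status(tlm::TLM_OK_RESPONSE);
  }
};

// The memory-mapped decode that makes SoC construction mechanical.
struct Bus : sc_core::sc_module {
  tlm_utils::simple_target_socket<Bus>    in;
  tlm_utils::simple_initiator_socket<Bus> out0;   // 0x0000 - 0x0FFF
  tlm_utils::simple_initiator_socket<Bus> out1;   // 0x1000 - 0x1FFF

  SC_CTOR(Bus) : in("in"), out0("out0"), out1("out1") {
    in.register_b_transport(this, &Bus::b_transport);
  }

  void b_transport(tlm::tlm_generic_payload& t, sc_core::sc_time& d) {
    if (t.get_address() < 0x1000) {
      out0->b_transport(t, d);
    } else {
      t.set_address(t.get_address() - 0x1000);    // local offset for the slave
      out1->b_transport(t, d);
    }
  }
};

// Minimal initiator exercising the memory map.
struct Tester : sc_core::sc_module {
  tlm_utils::simple_initiator_socket<Tester> socket;

  SC_CTOR(Tester) : socket("socket") { SC_THREAD(run); }

  void run() {
    uint32_t v = 7;
    tlm::tlm_generic_payload t;
    sc_core::sc_time d = sc_core::SC_ZERO_TIME;
    t.set_command(tlm::TLM_WRITE_COMMAND);
    t.set_address(0x1000);                        // lands in the second slave
    t.set_data_ptr(reinterpret_cast<unsigned char*>(&v));
    t.set_data_length(sizeof(v));
    socket->b_transport(t, d);
  }
};

int sc_main(int, char* []) {
  Tester tester("tester");
  Bus bus("bus");
  Reg reg0("reg0"), reg1("reg1");
  tester.socket.bind(bus.in);
  bus.out0.bind(reg0.socket);
  bus.out1.bind(reg1.socket);
  sc_core::sc_start();
  return 0;
}
```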