First Time Success And Cost Control

Experts at the table, part 2: Panelists discuss ways to differentiate products in an IoT market and the need for tools that help speed decision making and assembly.


First-time success has been the ultimate goal for semiconductor companies due to escalating mask costs, as well as a guiding objective for the development of EDA tools, especially in the systems and verification space. These pressures are magnified for the Internet of Things (IoT), especially for edge devices. Have system-level tools been able to contribute to first-time success and control costs, or are they an additional investment for which the impact is still being debated? Semiconductor Engineering sat down with Frank Schirrmeister, group director, product marketing for System Development Suite at Cadence; Eshel Haritan, vice president of engineering at Synopsys; Jon McDonald, strategic program manager in the design creation business unit of Mentor Graphics; Grant Pierce, chief executive officer at Sonics; and Alex Starr, AMD Fellow and pre-silicon solutions architect at AMD. In part one the panelists discussed the impact that IoT will have on tools and methodologies and the role of system-level tools. What follows are excerpts of that conversation.


SE: How will design methodologies change?

Pierce: Perfection is the enemy of good enough. We need to change the methodologies in order to create things that are correct by construction because we should not be trying to over-optimize. We need to achieve success and manage time to market. It is when we try to over-optimize that we break what we could trust by construction. This is a big risk for many companies.

McDonald: Why do people think they need to over-engineer things? Why do they do it? They do it because they don’t know what the goal is. If you don’t have clear goals you will continue to make it better so that it is as good as it can be.

Pierce: You may over-design simply because there is a range of things to focus on, so I have to throw it all in there.

Schirrmeister: It is the unknown unknowns that scare us the most: anticipating what the user will do with the device. With cost control, it is being able to know what I am not doing by design and being precise about it. For example, I may know that I am not verifying this, that I don't want 100% coverage because I can't run enough simulation cycles, but I am precise about what functionality needs to work, and that means I know the functionality that I don't have to check.

Haritan: We are moving from what is the risk to how to mitigate the risk. The number one thing is to reuse as much as possible. The processor comes from someone else; you don't invent a new form of connectivity, so you can invest your time in the mixed-signal parts. If we don't limit ourselves to IoT and think more generally, we need to make it easier to perfect the architecture. We need to reduce the investment necessary.

SE: We have heard that using IP saves time and money. Why not use IP for everything?

Schirrmeister: Then you lose differentiation. In any processor-based design, the big architectural task is choosing the right IP but integrating it is a nightmare. There are tools that can help. This is where things like IP-XACT come in, but it is more than that. If I want to assemble all of this by hand at the RT level, I am dead in the water. We need automation to assemble systems and that is a big component. The other side of this is that you have moved most of the differentiation into software.
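Schirrmeister's reference to IP-XACT points at the standard (IEEE 1685) for describing components and their connections in machine-readable XML, so an assembly tool can generate the top-level wiring instead of an engineer hand-editing RTL. The following is an illustrative, heavily abbreviated sketch of what such a design description looks like; the vendor/library/name/version identifiers and bus names are invented, and the element layout is simplified rather than a validated document.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Simplified, illustrative IP-XACT (IEEE 1685-2014) design description:
     two component instances and one bus interconnection. All identifiers
     (example.com, cpu, interconnect, m_axi, s_axi0) are made up. -->
<ipxact:design xmlns:ipxact="http://www.accellera.org/XMLSchema/IPXACT/1685-2014">
  <ipxact:vendor>example.com</ipxact:vendor>
  <ipxact:library>demo</ipxact:library>
  <ipxact:name>soc_top</ipxact:name>
  <ipxact:version>1.0</ipxact:version>
  <ipxact:componentInstances>
    <ipxact:componentInstance>
      <ipxact:instanceName>cpu0</ipxact:instanceName>
      <ipxact:componentRef vendor="example.com" library="demo"
                           name="cpu" version="1.0"/>
    </ipxact:componentInstance>
    <ipxact:componentInstance>
      <ipxact:instanceName>noc0</ipxact:instanceName>
      <ipxact:componentRef vendor="example.com" library="demo"
                           name="interconnect" version="1.0"/>
    </ipxact:componentInstance>
  </ipxact:componentInstances>
  <ipxact:interconnections>
    <ipxact:interconnection>
      <ipxact:name>cpu_to_noc</ipxact:name>
      <ipxact:activeInterface componentInstanceRef="cpu0" busRef="m_axi"/>
      <ipxact:activeInterface componentInstanceRef="noc0" busRef="s_axi0"/>
    </ipxact:interconnection>
  </ipxact:interconnections>
</ipxact:design>
```

An assembly tool reads this metadata together with each component's own interface description and emits the top-level netlist, which is what makes stitching a large SoC together by hand at the RT level unnecessary.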

Pierce: Architecture can differentiate, but understanding what IPs are available and making good use of them from a cost perspective is also important. Is it worth buying a core? With small devices you need to be able to make fast decisions about where it is worthwhile going with a custom solution. Custom sensors, processors, filters: where do I need the differentiation? If I spec these things up front and pick correctly, that can be a huge difference in my product. Differentiation comes from having the pieces needed to get things done in the market. If it can be done with standard IP, then it can be done much faster. But in a lot of cases there will be a key piece of IP that will be a differentiator, and if you can do that quickly, then the returns can be tremendous.

Starr: I think you really have to try and focus your efforts on differentiating IP. That is where you want to spend the engineering cost. You don’t want it on SoC construction. The problem is that today, IP is not plug and play.

Haritan: Look at a system with the same architecture and the same IP blocks. Each one of the blocks may have hundreds or thousands of parameters. PCI Express has something like 8,000 different parameters that can be controlled. The interconnect probably has hundreds of parameters.

Pierce: No, thousands, and we often say that we have created the most configurable IP on earth.

Haritan: People who are creating IP for a living create it in a configurable way, so getting to the right combination of parameters is what the architectural tool has to help with. The other thing it has to do is help you partition your functionality between the different cores. The architecture is the same, but you can put the software on it in different places, and that will affect performance. Designing the memory controller doesn't mean that you have to change the RTL. You may be changing parameters and configuring it. So there is a lot to do on the architectural side. I know customers today whose designs are really the same. They get to RTL integration a few weeks after they start the process, but getting the architecture right and porting the software to the new configuration of parameters may take time. So making it easy at that level, and creating the platform on which they can develop the software early, is the key to getting a good return on investment.
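Haritan's point that "designing" a block often means choosing parameter values rather than editing RTL can be made concrete with a toy sketch. The parameter names and ranges below are invented for illustration; real IP exposes hundreds or thousands of such knobs, which is why the configuration space explodes and an architectural tool must guide the search rather than enumerate it.

```python
from itertools import product

# Invented parameter space for a hypothetical memory controller.
# Real IP exposes far more knobs than these four.
PARAMS = {
    "data_width":  [32, 64, 128],
    "burst_len":   [4, 8, 16],
    "num_banks":   [4, 8],
    "ecc_enabled": [False, True],
}

def all_configs(params):
    """Enumerate every combination of parameter values as a dict."""
    names = list(params)
    for values in product(*(params[n] for n in names)):
        yield dict(zip(names, values))

configs = list(all_configs(PARAMS))
print(len(configs))  # 3 * 3 * 2 * 2 = 36 combinations for just 4 knobs
```

With four small knobs there are already 36 configurations; at thousands of parameters, exhaustive exploration is hopeless, and the tool's job becomes pruning to the combinations that meet the performance target.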

Schirrmeister: The question is: what do I need to do from a tools perspective to enable integration? Many of the things we do on the system-level side are an add-on to what users have to do to get the chip done. They also need to keep things in sync. Where do I make my decisions? Interconnect decisions are made at the cycle-accurate level. I take my 1,000 parameters, and that is very hard to model. What is the cost of building a cycle-accurate model at a higher level of abstraction? Is it worth it, or should I just use RTL? If I invest this time in a system-level tool, how do I measure the return I get for it?
