Experts At The Table: Nice To Have Vs. Need To Have

First of three parts: Complexity, cost of development, business models and what it really takes to get a chip out the door.


Low-Power Engineering sat down to discuss what’s essential and what isn’t in EDA with Brani Buric, executive vice president at Virage Logic; Kalar Rajendiran, senior director of marketing at eSilicon; Mike Gianfagna, vice president of marketing at Atrenta, and Oz Levia, vice president of marketing and business development at Springsoft. What follows are excerpts of that conversation.

LPE: What are companies trying to do now that they didn’t do in the past?
Gianfagna: The whole mantra that you can’t close a design without better tools and without better accuracy, and that complexity is an imperative, isn’t new. We’ve been saying that for 20 years. But what has changed in the past two years is that it’s gone from marketing hype to reality. Complexity is so high these days that no one would dream of handing off a design without routing estimation, power estimation, architectural focus and paranoia about meeting timing budgets. Complexity is driving the need for better up-front planning, better IP re-use, and more things like verification IP and standard interfaces. That’s all continuity. Discontinuity will come from 3D. I defy anyone to say they can iterate on a 3D stack by implementing four chips, deciding it’s not right and then re-implementing the four chips until they get the partitioning right. You can’t get there from here.
Buric: A single failure on advanced process nodes will cost six months of re-doing the design. And then it will cost another $3 million to $4 million. You spend much less money purchasing all the EDA that’s needed, including the training and implementation, whether you’re doing it yourself or going to a service partner. Nobody who is doing complex chips has a question anymore about whether EDA is needed or not. The only question is whether you have the tool you need that is fully supported so that you can use it. But it’s not a question of purchasing a tool. It’s too expensive to fail.
Gianfagna: That’s been a battle cry for EDA. If you’re late to the market it will cost you this much money. It’s intuitively obvious, but a lot of companies sat there patiently and said it’s a self-serving message. Now if you don’t get it right it’s game over because you don’t have enough money to do it over again.
Buric: As a point of reference, at 40nm the average cost of a mask set is higher than the cost of the IP on a design.
Levia: For a product or methodology to be a must-have it has to satisfy two criteria. One is that it enables a step—transformation, verification, or whatever the step is—that is essential. Not all steps are essential. Getting to GDSII is essential, however you get there. The second criterion is that the tool has to automate or enable something in a way that cannot be done manually—or at least not practically. So if you have a one- or two-person startup and they have a complicated problem, and the choice for them is an expensive tool or a bunch of pizzas and people working 16 hours a day, the tool is not a must-have. But if you have a large verification organization of 30 to 50 people, an investment in customer orders, and the goodwill you have with many suppliers and consumers of your product, then it is a must-have. What is a luxury to someone else is something you cannot afford to go without. You can still incentivize people to work 16 hours a day, but the economics don’t work anymore. If you can have a team of 50 people instead of 80 people, it’s a no-brainer. You buy the tool. In addition, not all customers are the same. Reliability might be paramount for military and automotive, while time to market is more essential for a consumer electronics design that has to do with Bluetooth.

LPE: What’s the user perspective?
Rajendiran: We look at it from a business practicality perspective. What’s needed to get a chip out the door? If you go to a market where labor costs are lower, you need more people to get something done because the expertise level is lower there. The efficiency isn’t there. In the U.S., because labor is more expensive, we’ve always focused on using tools to get the product out. Not all tools have been adopted at the same rate, though. High-level synthesis has been talked about for 20 years. Functional verification and high-level synthesis are now taking root because things are so complex, but how do you close the gap and do the correlation? You can have great tools but still not get the chip out the door. The reason the IP industry exists is to ease that problem. We have been talking more about multi-chip modules, which never really caught on in the past, but maybe that is coming to a head. Do you really have to migrate a chip to 28nm, or do you just leave it alone because you know it’s functioning, put a smaller chip at 28nm, and tie them together? What we need isn’t just the newest and best tool. We have to combine that with a business perspective. You may need point tools to get over a hurdle, but don’t just change everything over. Why not leverage more IP or MCMs?

LPE: Are all companies moving forward to the next node?
Levia: We see a lot of demand for 40nm, 28nm and a road map into 22nm. Are there many people using 28nm? No. Are your tools viable if you don’t have a road map down to 22nm? No. I don’t think people are sitting back anymore and saying 65nm is enough. Silicon is cheap, but it’s not free. People are still looking for ways to improve the cost and they’re looking for ways to integrate, both in custom design and in digital.
Gianfagna: As an EDA supplier, by definition your biggest demand and your biggest customers will always come from the leading edge. They’ll always be the ones to push the limits. From our point of view, our tools are very front-end loaded. If you’re doing a simple chip, you don’t need them. But in a design where a back-end synthesis/place-and-route iteration takes two or three weeks, you can’t iterate there. That’s where we’re seeing a pronounced change. There’s a growing demand for beating on the RTL from every direction, because otherwise you don’t trust it. You’re afraid of those back-end loops. The cost and the time to get it closed are changing. What used to be ‘nice to have’ is now ‘need to have.’
Buric: We see a lot of unexpected activity at 40nm at this early stage of the process. We already have about 40 customers. 65nm was not adopted that fast. We are seeing a lot of activity at 28nm, but 40nm will be a node where you will see a decline in the number of customers moving to a new process node. People aren’t moving for performance purposes as much anymore. They’re moving because they can put more functionality in a single chip. But in the future there will be too many applications at this process node. At the same time, we are seeing customers and foundries investing a lot in mature process nodes. Starting from 180nm, we are seeing new generations of processes, including low leakage. There is a trend to actively use a wide spectrum of nodes. From a user perspective, it will be less expensive to do older designs in-house. We are seeing a big disconnect at 65nm, 45nm and below. Owning the design at those nodes is too expensive for most companies. It’s not a question of working through the night. It’s a question of whether you can afford the design or not.
