The End Of Closed EDA

At a time when many pieces of an EDA flow are being fused together, pressure is mounting to make it a lot more open and amenable to external extension and enhancement.

In a previous life, I was a technologist for a large EDA company. One of my primary responsibilities was talking to a lot of customers to identify their pain points and the new tools we could develop to ease them. You would think that would be an easy task, but it certainly was not.

For example, if you ask a developer what their biggest frustration is, or what is consuming their time, the answer invariably involves a bug or a limitation in a tool, triggered by some very particular construct or situation for which they could not find a workaround. Their focus is on the immediate future, and that is entirely understandable. Nothing is more important than the problem that is making you work long hours and weekends.

Many semiconductor companies did have internal groups that looked after methodologies and flows, and when you asked them the same question, you would get incredibly fanciful requests that would take hundreds of man-years of effort, except that they needed it tomorrow. And no, they could not really provide any more help in defining what this thing would be. Invariably, when you asked the design groups about the methodology groups, they would tell you that nothing good ever came from them.

In most cases, there was no one in between who could talk about, or was prepared to talk about, the new challenges they would face on their next design. Yes, they knew that if they had X of some function on this generation, they would have X + Y on the next one, and each X needed to be faster and smaller. This was almost always a linear progression of the architecture, and that was usually the logical path to follow in the era of Moore’s Law.

It was thus easy to fulfill the technologist’s role for existing tools by just saying capacity had to increase, speed needed to increase, and bugs had to be fixed. In retrospect, I can’t believe I actually got paid for this.

A new tool is almost always a discontinuity, and poses a significant risk, because it involves a methodology change. It was always said that you had to be able to show a 10X gain over what they were doing today, and in many cases that was not too difficult. The challenge was in proving it.

We would create prototypes to demonstrate how a tool would work on one or two simple examples, but it was often difficult for customers to see how that would translate to their specific design or situation. We would then add examples closer to what they said they wanted. I can think of only two cases where this resulted in a tool being developed, and only one of those was successful.

If I had to take one thing away from all of this, it would be that it is exceptionally difficult for EDA companies to anticipate the needs of their customers. Being proactive was just too hard. Perhaps the most expensive example of this was high-level synthesis, a central piece of the electronic system level (ESL) flow. Large amounts of time and money were invested because it was believed that whoever solved this problem would become the next-generation EDA company and dominate that portion of the flow, just as Synopsys did in the early days of RTL.

Things are somewhat different today because that linear progression has been broken by the slowing of Moore’s Law. You cannot put X + Y on your next chip because you don’t have enough extra transistors, or because there is some other limiter such as power. Smaller, faster, and lower power now have to come from re-architecting, or from rethinking some of the fundamentals.

This now creates the opposite dilemma. There are so many potential new tools that could be created, each focused on a specific application, a portion of the design, or a domain in which the device will be used. But which ones will achieve enough acceptance to be economically viable? For this to work, you need a framework into which pieces can easily be plugged, but this is not something that EDA has really been good at.

It is a similar problem to the one in design itself, where teams are unwilling, in most cases, to accept a 5% overhead cost that would deliver a 50% reduction in verification time by providing strong encapsulation. In software, there is a similar cost associated with inserting well-defined interfaces that insulate parts of the code and control what information can flow where, and in what form.
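As a rough sketch of what such an interface discipline in a pluggable flow might look like, consider the following. All of the names here (NetlistView, AnalysisPlugin, and so on) are hypothetical and do not correspond to any real EDA tool's API; the point is only to show a narrow, well-defined contract between a host tool and an external piece.

    # Hypothetical sketch of a plug-in framework with a narrow interface.
    # Names are illustrative only; no real EDA tool's API is implied.
    from abc import ABC, abstractmethod
    from dataclasses import dataclass


    @dataclass(frozen=True)
    class NetlistView:
        """The only data a plug-in is allowed to see: a read-only snapshot."""
        cell_count: int
        net_count: int
        technology: str


    class AnalysisPlugin(ABC):
        """Contract every externally developed piece must satisfy."""

        @abstractmethod
        def name(self) -> str:
            ...

        @abstractmethod
        def run(self, view: NetlistView) -> dict:
            """Return results as plain data; no access to host internals."""


    class ToyCongestionEstimator(AnalysisPlugin):
        def name(self) -> str:
            return "toy-congestion-estimator"

        def run(self, view: NetlistView) -> dict:
            # Crude placeholder heuristic: nets per cell as a congestion proxy.
            ratio = view.net_count / max(view.cell_count, 1)
            return {"congestion_score": round(ratio, 2)}


    def run_flow(plugins: list, view: NetlistView) -> None:
        # The host drives the flow and decides what each plug-in may see.
        for plugin in plugins:
            print(plugin.name(), plugin.run(view))


    if __name__ == "__main__":
        snapshot = NetlistView(cell_count=120_000, net_count=150_000, technology="N5")
        run_flow([ToyCongestionEstimator()], snapshot)

The overhead in this sketch is exactly the kind of cost described above: the plug-in only ever sees the read-only view it is handed and returns plain data, so the host retains control over what information goes where, and in what form.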

Something has to change in order to accelerate new tool development, and that is a problem the new generation of technologists has to solve. No single EDA company has the bandwidth to do everything, and designers are beginning to demand the ability to create pieces of the flow themselves because they see it as a competitive advantage. Put simply, whoever solves this problem may become the next-generation EDA company.


