Constrained Innovation

What’s really holding back fundamental changes to the way we design chips?


The semiconductor industry has long been seen as risk averse, and that is probably to be expected. The rapid migration between technology nodes (where plenty of innovation was happening) produced an equally rapid expansion in transistor counts that stretched development teams to their limits. Every design had to contain more functionality while dealing with a plethora of new concerns, and it had to be developed by the same number of people in the same amount of time.

The new concerns were things like power and thermal impact, which suddenly meant that just adding more transistors didn’t help as much and, in some cases, actually hindered. More recently, safety and security were added to the mix, and we still do not fully understand the long-term impact they might have. The growing list of concerns meant that with each new generation of a design, changing as little as possible was the safest way forward.

Design became incremental. That allowed teams to carry forward knowledge from previous designs and introduce as little perturbation as possible. The IP reuse methodology also helped, allowing those incremental improvements to be distributed. Each IP vendor is expected to make small improvements to its cores for each design, you add a little, the foundry adds a little, and when you look at all of those little bits from the outside, it looks like tremendous progress.

There have been many points along that path where I am sure someone said, ‘The direction we are going in is not good for the future.’ But unless you had just suffered a chip failure, or were staring at a hard physical barrier, it is unlikely that anyone took notice. Changes in direction added too much risk.

The same is true for the process itself. When was the last time we had real change in the EDA flow, such as a change in abstraction? EDA is incremental, improving on what is there and solving each problem as it appears, often as a patch on a patch on a patch. You can’t fully blame the EDA companies for this. It’s what their customers are demanding, and EDA companies do spend more on engineering than many semiconductor companies.

Some of you may not know that I used to work at an EDA company, developing functional verification tools. I started working on the first commercial RTL simulator, before Verilog existed. It seems ridiculous today to think that this was a methodology sell. The semiconductor industry at that time used gate level for everything and scoffed at RTL (we didn’t even call it RTL back then; it was just event-driven simulation). Those simulators were actually targeted at the board-simulation folks and testers.

The semiconductor industry at that time was constrained by wisdom. Gate-level designers had a decade or more of experience under their belts and had no need for a new modeling language and methodology that would just add more work for them. Even when RTL synthesis came along, a good gate designer could outdo those early tools. It was only when grads straight out of school managed to produce designs that were almost as good as those of the experienced designers, in a fraction of the time, that the industry listened.

But design has become more complicated, languages are more complicated, and flows are more complicated, making it unlikely that any new technology will allow a college grad to outshine an experienced developer. That also makes it far more difficult to develop a new EDA tool that would allow for wholesale replacement of existing flows. Such a tool will only get looked at when a chip fails, and only if it was the sole way the failure could have been avoided. Even then, one of those incremental patches, if available, will be seen as the preferred path.

When new methodologies or languages are suggested, the voice of wisdom seems very quick to dismiss them these days. I see that as a journalist who often tries to delve into new areas and explore possible advancements. I have heard responses along the lines of, ‘I tried that 20 years ago and it didn’t work,’ more times than I would like recently. If the same reasoning had been applied to AI, we would not have AI in any form today, because that, too, was tried 20 years ago and failed.

The industry is changing in different ways today. Technology node development is slowing, and for many companies it has slowed even further for economic reasons. But the desire to create better chips, with faster performance and greater functionality, has not stopped. There appear to be more areas into which technology can be pushed than ever before, and while some of those markets may call for huge volumes at very low cost, many more will want highly specialized devices.

An incremental approach may no longer be the lowest-risk path forward. Today, we see venture capital money coming back into semiconductors, but the funding isn’t going into the same old chips using the same old flows or the same old architectures. That would not provide the kind of ROI the investors are seeking.

They want disruptive changes, and if those show success, the traditional companies will either have to follow or exit that business. That disruption will filter through the system. EDA likes to tell the financial community that it lags semiconductors, so when semiconductors go into a downturn, EDA revenues will not feel the impact for at least six months. The same is true on the upturn, of course, although they can talk about improvements coming in the future. And the same lag applies to their technology. If semiconductors start to change, it will force change on the EDA companies, and if they don’t respond, startups will come in and disrupt.



6 comments

Theodore Wilson says:

Thanks for another great article. My sense of this is that teams are missing data-driven CI/CD internal to the chip teams, or more accurately within the functional teams of an SoC. Whenever I did something in this direction, we found an internal customer for the existing work, and that customer then drove disruption. Without this mindset or workflow you have to stick to the POR, which is all that is funded in terms of resources and time. I do not feel these things are incremental changes, though. An apparent step change in productivity or performance is about goal setting, sure, but also about many incremental steps at a rapid pace. CI/CD is an ecosystem that engenders this, I think. All the best, Ted

Steve Hoover says:

Always a great deal of insight in your articles, Brian, not only about how the industry behaves but about why it behaves the way it does.

Kevin Cameron says:

The degree of change depends somewhat on the gap between where you are and where you could be. For years, Intel drove down the size of transistors to keep X86 on top of the CPU market (Moore’s Law).

That approach ran out of steam, and the gap between the realized and the possible has been growing steadily for a decade or more. What’s next is a big pile of AI stuff that will sweep away the old methodologies and replace them.

Fairly predictable other than the incept date.

Brian Bailey says:

Thanks for your thoughts, Ted. I am not familiar with POR. What is that? Also, for others who may not be aware, CI/CD is continuous integration, continuous delivery/continuous deployment.

Steve Hoover says:

POR == Plan Of Record, I assume.

Karl Stevens says:

quote: “When new methodologies or languages are suggested, the voice of wisdom seems very quick to dismiss them these days. I see that as a journalist who often tries to delve into new areas and explore possible advancements. I have heard responses along the lines of, ‘I tried that 20 years ago and it didn’t work,’ more times than I would like recently.”
It is more like 50 years ago that designing a computer that actually executed a high-level language was tried and failed. Well, those languages have been replaced, and I am using the Roslyn (C#) compiler abstract syntax tree syntax walker to design a new kind of computer.

The prototype used a few hundred LUTs and 4 dual-port embedded memory blocks. It takes one cycle to start an expression evaluation and one cycle for each operator in the expression.

It is like an FPGA accelerator actually programmed in C. And I only have to use that god-awful Verilog for the initial design, and then just re-compile the C code to change the function without creating a new bit stream.
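For readers unfamiliar with Roslyn, here is a minimal, hypothetical sketch of the syntax-walker idea described above: parse an expression, visit each binary operator in the abstract syntax tree, and apply the stated cost model of one cycle to start an evaluation plus one cycle per operator. The class names and the toy expression are illustrative only, not the actual design being discussed.

// A minimal, hypothetical sketch of a Roslyn syntax walker applying the cost
// model above (one cycle to start an expression evaluation, plus one cycle
// per operator). Requires the Microsoft.CodeAnalysis.CSharp NuGet package.
using System;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;

// Walks the abstract syntax tree and counts binary operators (+, -, *, ...).
class OperatorCounter : CSharpSyntaxWalker
{
    public int Operators { get; private set; }

    public override void VisitBinaryExpression(BinaryExpressionSyntax node)
    {
        Operators++;                      // each operator costs one cycle
        base.VisitBinaryExpression(node); // keep walking the sub-expressions
    }
}

class Program
{
    static void Main()
    {
        var source = "a * b + c * d - e;";
        var tree = CSharpSyntaxTree.ParseText(
            source,
            new CSharpParseOptions(kind: SourceCodeKind.Script));

        var counter = new OperatorCounter();
        counter.Visit(tree.GetRoot());

        // One cycle to start the evaluation, plus one per operator.
        Console.WriteLine($"{source} -> {1 + counter.Operators} cycles");
    }
}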
