Looking back on past productivity improvements to see what can be learned for the path forward.
The keynotes at the recent Design Automation Conference (DAC) gave some great insights into the direction of semiconductor technology and chip and system design. For the first time in a long while, my family and friends have become aware of the importance of semiconductors and electronic design automation. I think this means it is also time to look back on where productivity improvements historically came from and see what we can learn for the path forward. Productivity is at the core of it all.
If you do not have time to re-watch the four keynotes from this year’s DAC, Brian Bailey did a fantastic job of doing precisely what the title claims in “Distilling the Essence of Four DAC Keynotes.” They are well worth watching on the DAC YouTube channel, though, as they were amongst the best I have seen in a while. Take it from the guy who keeps a top-five list (see the end of this post). Mark Papermaster’s quote sums it up and is simply too good not to repeat:
“There has never been a more exciting time in technology and computation. We are facing a massive inflection point. The combination of exploding amounts of data and more effective analysis techniques that we see in new AI algorithms means that to put all that data to work has created an insatiable demand for computation.”
I returned to the IBS design start data provided by Handel Jones and plotted it by technology node from 2000 through the predictions for 2030 below.
Source: IBS 2014, 2018, and 2022 reports “Design Start Activities & Strategic Implications.”
At first sight, the growth of design starts does not look like an inflection point in the traditional “hockey stick” sense. It does, however, once one multiplies the numbers by the development effort per technology node and realizes that most of the predicted growth comes from 7nm and beyond. Delving further into the data reveals that the expected development cost almost triples from 7nm to 5nm alone. At 3nm and 2nm, the cost of prototypes and their validation alone is as much as the development cost of an entire 16nm design. We do not have enough engineers and cannot educate enough of them in time to satisfy the design demand, so without productivity improvements, we will stall.
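To make that multiplication concrete, here is a minimal Python sketch. The design-start counts and per-node cost factors are invented placeholders, not the actual IBS figures; the point is only to show how total engineering demand turns into a hockey stick once per-design effort is factored in.

```python
# Illustrative sketch only: the design-start counts and cost factors below are
# placeholder values, not the actual IBS data.

# Hypothetical design starts per year by technology node (arbitrary units)
design_starts = {"16nm": 120, "7nm": 90, "5nm": 70, "3nm": 50, "2nm": 35}

# Hypothetical relative development cost per design, normalized to 16nm = 1.0
relative_cost = {"16nm": 1.0, "7nm": 3.0, "5nm": 8.5, "3nm": 15.0, "2nm": 25.0}

# Total engineering demand = design starts x effort per design
total_effort = {node: design_starts[node] * relative_cost[node] for node in design_starts}

for node, effort in total_effort.items():
    print(f"{node}: {effort:.0f} effort units")

print("Total demand:", sum(total_effort.values()))
```

Even with flat or declining design starts at the advanced nodes, the effort-weighted totals grow steeply, which is where the inflection point hides.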
Historically, the International Technology Roadmap for Semiconductors (ITRS) published yearly reports. In 2001, it famously stated that the “cost of design is the greatest threat to the continuation of the semiconductor roadmap.” The reports tracked the estimated cost and kept a table outlining the productivity improvements provided by EDA, as shown on the left side of the diagram below. As Andrew Kahng outlined in “The ITRS Design Technology and System Drivers Roadmap: Process and Status,” the cost to design an SoC consumer portable chip in 2011 was about $40M. Without the EDA technology advances made from 1993 to 2009, the same chip would have cost $7.7B to design.
Besides tool improvements, the other quintessential ingredient is abstraction, the ongoing quest to define ever-higher levels of modeling. Alberto Sangiovanni-Vincentelli used a version of the right side of the diagram above in his 2003 IEEE Micro article “The Tides of EDA.” The diagram above expands on the abstractions I described in my first-ever “blog” post in 2008, to which I added the aspects of platform-based design and the “multicore crisis” that had kicked in at the time.
Historically, design productivity improvements came from improvements in the tools combined with raising the abstraction of design entry and automation from a higher level.
About 25 minutes into his keynote this year, Anirudh Devgan charted a simplified version of the above with some quantification. Starting from manual design, transistor-level design gave us ~10X productivity, cell-based design methods another ~10X, and design reuse another ~10X. He called the next level “AI-based EDA” and attributed ~100X productivity to it. The fundamental change is a switch from non-numerical methods, which rely on designer intuition and exhaustive searches through millions of combinations, to numerical, gradient-based methods that can optimize complex systems whose exploration would otherwise not be feasible. These methods augment existing flows with AI-based approaches such as reinforcement learning.
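To illustrate the distinction, here is a deliberately simplified Python sketch, not anything resembling a real EDA flow: an exhaustive sweep over discrete parameter combinations versus a gradient-based search on a toy, differentiable “design cost” function. The cost function and parameter names are invented for illustration.

```python
# Toy contrast: exhaustive search vs. gradient-based optimization.
# The cost function and parameters are illustrative stand-ins, not a real PPA model.
import itertools

def design_cost(width, spacing):
    # Toy convex stand-in for a power/performance/area metric
    return (width - 3.2) ** 2 + (spacing - 1.7) ** 2 + 5.0

# 1) Exhaustive search: enumerate every combination on a coarse grid
grid = [round(0.1 * i, 1) for i in range(51)]
best = min(itertools.product(grid, grid), key=lambda p: design_cost(*p))
print("exhaustive best:", best, design_cost(*best))

# 2) Gradient-based search: follow numerical derivatives toward the minimum
w, s, lr, eps = 0.0, 0.0, 0.1, 1e-4
for _ in range(200):
    dw = (design_cost(w + eps, s) - design_cost(w - eps, s)) / (2 * eps)
    ds = (design_cost(w, s + eps) - design_cost(w, s - eps)) / (2 * eps)
    w, s = w - lr * dw, s - lr * ds
print("gradient-based best:", (round(w, 2), round(s, 2)), round(design_cost(w, s), 3))
```

The exhaustive loop scales with the number of combinations, while the gradient-based search converges in a handful of steps, which is the essence of why numerical methods open up design spaces that intuition and enumeration cannot cover.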
The current techniques focus on the “high-effort” aspects of design tasks. As described in “Autonomous Design Automation: How Far Are We?”, they naturally target chip development itself but extend to system design as well. We are just at the beginning, which makes me optimistic that we will indeed reach never-before-seen levels of productivity that satisfy consumer demand for more and more electronics.
Exciting times ahead!
P.S. As promised, here are the top five keynotes I have seen in my career: