Chasing The Next Level Of Productivity

Looking back on past productivity improvements to see what can be learned for the path forward.


The keynotes at the recent Design Automation Conference (DAC) gave some great insights into the direction of semiconductor technology and chip and system design. For the first time in a long time, my family members and friends have become aware of the importance of semiconductors and electronic design automation. I think this means it is also time to look back on where productivity improvements historically came from to see what we can learn for the path forward. Productivity is at the core of it all.

If you do not have time to re-watch the four keynotes from this year’s DAC, Brian Bailey did a fantastic job of doing precisely what the title claims in “Distilling the Essence of Four DAC Keynotes.” The keynotes are still well worth watching on the DAC YouTube channel, though, as they were among the best I have seen in a while. Take it from the guy who keeps a top-five list (see the end of this post). Mark Papermaster’s quote sums it up, and it is simply too good not to quote again:

“There has never been a more exciting time in technology and computation. We are facing a massive inflection point. The combination of exploding amounts of data and more effective analysis techniques that we see in new AI algorithms means that to put all that data to work has created an insatiable demand for computation.”

What inflection point? And what role does productivity play?

I returned to the IBS design start data provided by Handel Jones and plotted them by technology node, from 2000 through the predictions for 2030, below.

Source: IBS 2014, 2018, and 2022 reports “Design Start Activities & Strategic Implications.”

At first sight, the growth of design starts does not look like an inflection point in the traditional “hockey stick” sense. It does, however, once one multiplies the data by the effort required per technology node and realizes that most of the predicted growth comes from 7nm and beyond. Delving further into the data reveals that the expected development cost almost triples from 7nm to 5nm alone. At 3nm and 2nm, the cost of prototypes and their validation alone matches the development cost of an entire 16nm design. We do not have enough engineers and cannot educate enough of them in time to satisfy the design demand, so without productivity improvements, we are stalling.
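To make the “multiply it out” step concrete, here is a minimal sketch in Python. The design-start counts and per-node effort factors below are illustrative placeholders, not the IBS figures:

```python
# Illustrative only: weight design starts by the relative effort per node.
# The counts and effort factors are made-up placeholders, NOT IBS data.
design_starts = {"16nm": 120, "7nm": 90, "5nm": 70, "3nm": 50}
relative_effort = {"16nm": 1.0, "7nm": 3.0, "5nm": 8.5, "3nm": 18.0}

weighted = {node: count * relative_effort[node]
            for node, count in design_starts.items()}
total = sum(weighted.values())

for node, effort in weighted.items():
    print(f"{node}: {effort:6.0f} effort units ({effort / total:.0%} of total)")
```

Even with flat or only modestly growing design starts, the effort-weighted demand is dominated by the most advanced nodes, which is where the “hockey stick” hides.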

Traditional sources of productivity improvements

Historically, the International Technology Roadmap for Semiconductors (ITRS) published yearly reports. In 2001, it famously stated that the “cost of design is the greatest threat to the continuation of the semiconductor roadmap.” It tracked the estimated cost and kept a table outlining the productivity improvements provided by EDA, as shown on the left side of the diagram below. As Andrew Kahng outlined in “The ITRS Design Technology and System Drivers Roadmap: Process and Status,” designing an SoC consumer portable chip cost about $40M in 2011. Without the EDA technology advances made between 1993 and 2009, the same chip would have cost $7.7B to design, almost 200X more.

Besides tool improvements, the other quintessential source of productivity is abstraction, the quest to model and enter designs at ever-higher levels. Alberto Sangiovanni-Vincentelli used a version of the right side of the diagram above in his 2003 IEEE Design & Test of Computers article “The Tides of EDA.” The version above expands on the abstractions from my first-ever “blog” post in 2008, to which I added the aspects of platform-based design and the “multicore crisis” that had kicked in at the time.

The next level of productivity

Historically, design productivity improvements came from better tools combined with raising the abstraction level of design entry and automating from that higher level.

About 25 minutes into his keynote this year, Anirudh Devgan charted a simplified version of the above with some quantification. Starting from manual design, we got ~10X productivity from transistor-level design, another ~10X from cell-based design methods, and another ~10X from design reuse. He called the next level “AI-based EDA” and attributed another ~100X to it; compounded, those multipliers amount to roughly a 100,000X gain over fully manual design. The fundamental change is a switch from non-numerical methods, which rely on designer intuition and exhaustive searches through millions of combinations, to numerical, gradient-based methods that can optimize complex systems whose derivatives were otherwise not feasible to compute. They augment existing flows with AI-based approaches such as reinforcement learning.
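To illustrate the idea of augmenting an existing flow with reinforcement learning, here is a deliberately simplified sketch: an epsilon-greedy bandit that learns which tool configuration yields the best quality of results (QoR). Everything in it, from the configuration names to the stubbed run_flow() scoring, is hypothetical; it is not a description of any vendor’s actual AI-based EDA implementation:

```python
import random

CONFIGS = ["effort_low", "effort_medium", "effort_high", "congestion_opt"]

def run_flow(config: str) -> float:
    """Stand-in for a real place-and-route run that returns a QoR score.
    Here it is just a noisy, made-up function of the chosen config."""
    base = {"effort_low": 0.60, "effort_medium": 0.75,
            "effort_high": 0.80, "congestion_opt": 0.90}[config]
    return base + random.gauss(0, 0.05)

def epsilon_greedy(trials: int = 200, epsilon: float = 0.1) -> str:
    totals = {c: 0.0 for c in CONFIGS}   # cumulative reward per config
    counts = {c: 0 for c in CONFIGS}     # number of runs per config

    def estimate(c: str) -> float:
        # Untried configs get +inf so every option is explored at least once.
        return totals[c] / counts[c] if counts[c] else float("inf")

    for _ in range(trials):
        if random.random() < epsilon:
            config = random.choice(CONFIGS)      # explore
        else:
            config = max(CONFIGS, key=estimate)  # exploit the best estimate
        totals[config] += run_flow(config)
        counts[config] += 1

    return max(CONFIGS, key=lambda c: totals[c] / counts[c])

print("Best configuration found:", epsilon_greedy())
```

A production system would use far richer state and learned models, but the loop is the essence of the approach: try a configuration, score the outcome, and bias future choices toward what worked.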

The current techniques focus on the “high-effort” aspects of design tasks. As described in “Autonomous Design Automation: How Far Are We?”, they, of course, focus on chip development itself but extend to system design as well. We are just at the beginning, which makes me optimistic that we will indeed get to never-before-seen levels of productivity that will satisfy consumer demands for more and more electronics.

Exciting times ahead!

 

P.S. As promised, here are the five top keynotes I have seen in my career:

  • Joe Costello’s “negative target fixation” CadenceLIVE keynote in 1997. It motivated me to switch from chip development to EDA. Thanks, Joe. If anybody has a recording, I’d LOVE a copy.
  • Aart de Geus’ “Design Gap” keynote at DATE in Paris, 2002. I still remember how he compared design complexity to the decoding of DNA and told us all to tell our families when coming home from work that “we have a design gap to close.”
  • Alberto Sangiovanni-Vincentelli’s “The Waves of EDA.” Slightly self-serving, as it was my honor to help Alberto develop the abstraction graph a couple of years earlier, showing the “inevitability” of system design.
  • John Hennessy’s keynote at the 2018 ERI workshop. John gave a modified version of his lecture with Dave Patterson for their Turing Award. It’s the defining lecture for the 2020s.
  • Lisa Su’s keynote at HotChips #31 in 2019. That is when the Turing lecture “clicked” for me, and it somehow became real.

