
Revolutionizing Product Development And User Experience: The Transformative Power Of Generative AI

Generative AI in EDA tools could boost design space exploration.


Generative AI has become a prominent and versatile technology across many domains, including chip and system development, and its progress and impact have outpaced many other technological advances. In the semiconductor industry, electronic design automation (EDA) tools with generative AI have already established their place by offering powerful optimization capabilities. These tools empower chip and system development teams to achieve remarkable improvements in power, performance, and area, while also increasing engineering productivity and accelerating design closure.

During the CadenceLIVE Silicon Valley 2023 conference, the sessions on AI and big data analytics garnered significant attention. Among the popular sessions, the Generative AI panel discussion stood out as a highlight for many attendees. The panel, moderated by Bob O’Donnell of TECHnalysis Research, featured the following panelists:

  • Rob Christy, Arm
  • Prabal Dutta, University of California, Berkeley
  • Paul Cunningham, Cadence
  • Chris Rowen, Cisco
  • Igor Markov, Meta

The panel considered the impact of integrating generative AI into EDA tools and how it is shaping the landscape of chip design.

With a brief introduction from each participant setting the stage, Bob initiated the panel discussion with his first question:

“Is the addition of generative AI capabilities into these tools going to change how we design chips, and how?”

Rob was the first to respond, stating that it was definitely going to change things. He and his team are already applying generative AI in the conventional place-and-route (PnR) domain to enhance existing designs, and they plan to extend it into design technology co-optimization (DTCO) strategies.

They see significant potential in using generative AI to augment design, although they have not yet explored its use in the design exploration space. Their plan is to establish a baseline and then pinpoint areas that engineers may have overlooked, while recognizing that engineering input remains indispensable at the outset. Rob concluded by saying they look forward to applying AI to co-optimize design technology strategies, particularly where human input is constrained.

Prabal commented that today, there is less focus on design space exploration and architectural exploration because the downstream processes are so time-consuming. However, he stated, “If we could accelerate these tasks with the help of tools, we could have more creative exploration on the front end.” The inherent challenge lies in the embedded synthesis problem, which requires reasoning and representations that encompass not only RTO (requirements to objective) but also timing closure, power, and wire delay.

Encoding these representations makes it possible to expedite system design on the front end, saving considerable time for system designers who would otherwise spend it searching catalogs for components and figuring out how to reuse previous designs. Prabal and his team are pushing this concept further by automating the design space exploration and synthesis processes while still allowing for interactive adjustments. He concluded his comments with, “Remarkable advancements are taking place in this field, offering numerous intriguing possibilities.”
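The panel did not describe a specific implementation, but the kind of front-end exploration Prabal alluded to can be pictured with a toy sketch: sweep a handful of architectural knobs, score each candidate with rough power/performance/area estimates, and hand the short list back to an engineer for interactive refinement. The knob names, cost weights, and the exhaustive sweep below are purely hypothetical; real flows use far richer models and guided search rather than brute-force enumeration.

    # Toy design-space exploration sketch (illustrative only; all names and
    # weights are hypothetical, not from any EDA tool or from the panel).
    from itertools import product

    DESIGN_SPACE = {
        "core_count": [2, 4, 8],
        "l2_cache_kb": [256, 512, 1024],
        "target_freq_mhz": [800, 1000, 1200],
    }

    def estimate_cost(cfg):
        """Stand-in for downstream power/timing/area estimates."""
        power = cfg["core_count"] * cfg["target_freq_mhz"] * 0.01
        area = cfg["core_count"] * 1.5 + cfg["l2_cache_kb"] * 0.002
        perf = cfg["core_count"] * cfg["target_freq_mhz"]
        return power + area - 0.001 * perf  # lower is better

    def explore(space):
        keys = list(space)
        candidates = (dict(zip(keys, vals)) for vals in product(*space.values()))
        return min(candidates, key=estimate_cost)

    print("Best configuration under the toy cost model:", explore(DESIGN_SPACE))

The point of the sketch is only that once the representations and cost estimates exist in machine-readable form, the sweep itself is cheap; the hard part is producing good estimates, which is exactly where the panelists expect generative AI to help.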

Paul said, “The answer to the question is both yes and no.” He thinks that AI cannot eliminate the building blocks and ingredients of the design process, such as place and route, logic simulation, and SPICE simulation. But Paul is confident that AI can address the substantial human interaction that accompanies these building blocks. In reality, these fundamental components require significant human involvement, whether in debugging or in running multiple iterations for fine-tuning.

Paul quoted Cadence President and CEO Anirudh Devgan’s keynote point that AI allows us to redefine EDA. Cadence’s goal is to raise the level of abstraction at which designers and engineers engage with our software, enhancing productivity and efficiency by orders of magnitude. Paul feels that, after the industry has invested many more years in this direction, it will look back and acknowledge how profoundly the ways we interact with, build, and verify chips have been transformed.

Chris believes that the impact of generative AI will be substantial, albeit uneven, because of its probabilistic nature. He foresees a distinct category emerging around design space exploration and the generation of verification suites, one that involves making abstractions and generalizations about the design process rather than demanding the definitive answers expected from design rule checks, connectivity checks, or logic evaluations. Chris anticipates that these techniques will be adopted more quickly in areas where their probabilistic nature is an advantage, and more slowly where it poses challenges.

Igor joined the yes-and-no camp and made two points to support his reasoning. First, the current generative AI techniques for text and images are not entirely reliable and need verification. He highlighted the technical limitations of this generation of generative AI, giving the example of how ChatGPT performs arithmetic by representing numbers as high-dimensional floating-point vectors and passing them through network layers. Igor stated that PnR applications, which rely heavily on floating-point operations, may face difficulties with these models in the next few years because of such limitations.
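Igor did not walk through the numerical details on stage, but the underlying precision issue is easy to illustrate outside of any AI model: single-precision floating point cannot even represent all integers exactly, which hints at why approximate, embedding-based arithmetic struggles to deliver the exactness physical design demands. A minimal example (illustrative only, not from the panel):

    # Illustrative only: float32 has a 24-bit significand, so not every
    # integer above 2**24 can be represented exactly.
    import numpy as np

    exact = 16_777_217            # 2**24 + 1
    approx = np.float32(exact)    # silently rounds to the nearest representable value
    print(int(approx))            # prints 16777216 -- off by one, with no error raised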

Additionally, he expressed concern about the lack of working memory in these models, which have only a small I/O buffer, on the order of 64 kilobytes, that is flushed every time new inputs are sent. This results in minimal learning and skill acquisition, making it difficult for them to compete with optimization tools. However, Igor acknowledged that these models excel at higher levels of abstraction and are adept at language processing, making them valuable in domains where humans are the competition.

I am leaving you with a thought that Bob shared:

“If ChatGPT hallucinates a fact somewhere, you can double-check it, and you might be okay. But if a ChatGPT equivalent hallucinates different elements of a circuit, that could be a much bigger issue.”

If you missed the chance to attend the AI panel discussion at CadenceLIVE Silicon Valley 2023, don’t worry. You can register on the CadenceLIVE On-Demand site to watch it and all the other presentations.


