Iteration And Hallucination


Iteration loops have been a vital aspect of EDA flows for decades. Ever since gate delays and wire delays became comparable, it has been necessary to find out whether the result of a given logic synthesis run will yield acceptable timing. Over the years the problem has grown worse because one decision can affect many others. The ramifications of a decision may not have been obvious to an individual too... » read more
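
The loop itself is simple in shape, even if each step is expensive. Below is a minimal, self-contained sketch of a synthesis/timing iteration loop; the "synthesis" and "timing analysis" functions are toy stand-ins rather than real tool calls, and the effort knob and slack model are assumptions made purely for illustration.

```python
# Toy sketch of an iterate-until-timing-closure loop.
# Real flows invoke vendor synthesis and STA tools; the loop structure is the point.

def run_synthesis(effort):
    """Toy synthesis: higher effort yields a (pretend) faster netlist."""
    return {"effort": effort}

def run_timing_analysis(netlist, target_ns=2.0):
    """Toy timing check: returns slack in ns (positive means timing is met)."""
    achieved_ns = 3.0 / (1 + netlist["effort"])   # pretend critical-path delay
    return target_ns - achieved_ns

def iterate_to_timing_closure(max_iterations=10):
    effort = 0
    for iteration in range(1, max_iterations + 1):
        netlist = run_synthesis(effort)
        slack = run_timing_analysis(netlist)
        print(f"iteration {iteration}: slack = {slack:.2f} ns")
        if slack >= 0.0:
            return netlist                        # timing closed
        effort += 1                               # adjust one knob and try again
    raise RuntimeError("timing not closed within the iteration budget")

if __name__ == "__main__":
    iterate_to_timing_closure()
```

In a real flow the "adjust and try again" step is the hard part, because changing one constraint or one piece of the netlist ripples into many other decisions.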

Can Models Created With AI Be Trusted?


EDA models created using AI need to pass more stringent quality and cost-benefit analysis than many AI applications in the broader industry. Money is on the line if AI gets it wrong, and all the associated costs must be factored into the equation. Models are some of the most expensive things a development team can create, and it is important to understand the value th... » read more
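
As a back-of-the-envelope sketch of the kind of cost/benefit check implied above: expected savings have to outweigh both the development cost and the expected cost of the model's errors. All numbers and names below are hypothetical, chosen only to show that error costs sit on the same side of the ledger as development costs.

```python
# Hypothetical break-even check for an AI-created model.

def model_is_worth_it(dev_cost, runs_per_year, saving_per_run,
                      error_rate, cost_per_error, years=3):
    """True if expected savings exceed development cost plus expected error cost."""
    expected_savings = runs_per_year * saving_per_run * years
    expected_error_cost = runs_per_year * error_rate * cost_per_error * years
    return expected_savings > dev_cost + expected_error_cost

if __name__ == "__main__":
    # Hypothetical figures: a $2M model that saves $5k per run but
    # occasionally lets a costly error through.
    print(model_is_worth_it(dev_cost=2_000_000, runs_per_year=400,
                            saving_per_run=5_000, error_rate=0.01,
                            cost_per_error=250_000))
```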

Dealing With AI/ML Uncertainty


Despite their widespread popularity, large language models (LLMs) have several well-known design issues, the most notorious being hallucinations, in which an LLM tries to pass off its statistics-based concoctions as real-world facts. Hallucinations are examples of a fundamental, underlying issue with LLMs. The inner workings of LLMs, as well as other deep neural nets (DNNs), are only partly kno... » read more
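
Because a single answer carries no visible measure of how sure the model is, one common mitigation is to sample the same question several times and treat disagreement as a warning sign. The sketch below shows that self-consistency idea under loose assumptions; query_llm() is a hypothetical stand-in that returns canned answers so the example runs, not a real model API.

```python
# Self-consistency sketch: low agreement across samples flags a possibly
# hallucinated answer. Replace query_llm() with a real model invocation.

import random
from collections import Counter

def query_llm(prompt):
    """Hypothetical LLM call; returns canned, slightly noisy answers."""
    return random.choice(["7 nm", "7 nm", "7 nm", "10 nm"])

def answer_with_agreement(prompt, samples=5):
    """Return the most common answer and the fraction of samples that agree with it."""
    answers = [query_llm(prompt) for _ in range(samples)]
    best, count = Counter(answers).most_common(1)[0]
    return best, count / samples

if __name__ == "__main__":
    answer, agreement = answer_with_agreement("Which process node does design X target?")
    if agreement < 0.8:                 # low agreement -> treat the answer as suspect
        print(f"Low confidence ({agreement:.0%}): '{answer}' may be a hallucination")
    else:
        print(f"Answer: {answer} (agreement {agreement:.0%})")
```

Agreement is only a proxy, of course: a model can be consistently wrong, which is why the opacity of the underlying networks remains the deeper issue.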