Capturing Knowledge Within LLMs


At DAC this year, there was a lot of talk about AI and the impact it is likely to have. While EDA companies have been using it for optimization and improving iteration loops within the flow, the end users have been concentrating on how to use it to improve the user interface between engineers and tools. The feedback is very positive. Large language models (LLMs) have been trained on a huge a... » read more

224Gbps PHY For The Next Generation Of High Performance Computing


Large language models (LLMs) are experiencing explosive growth in parameter count. Training these ever-larger models requires multiple accelerators to work together, and the bandwidth between these accelerators directly limits the size of trainable LLMs in High Performance Computing (HPC) environments. The correlation between LLM size and the data rates of interconnect technology heralds a... » read more
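
As a rough illustration of that correlation, the sketch below estimates how long a single gradient all-reduce would take at different model sizes over one 224 Gb/s lane. The model sizes, encoding efficiency, and ring all-reduce traffic formula are illustrative assumptions for back-of-envelope purposes, not figures from the article.

```python
# Back-of-envelope sketch: how interconnect bandwidth bounds the per-step
# gradient all-reduce time in data-parallel LLM training. All figures below
# are illustrative assumptions, not measured values.

def allreduce_seconds(params_billion, bytes_per_param=2,
                      link_gbps=224, encoding_efficiency=0.85,
                      num_accelerators=8):
    """Rough time for one ring all-reduce of the full gradient over one lane."""
    grad_bytes = params_billion * 1e9 * bytes_per_param
    # A ring all-reduce moves roughly 2*(N-1)/N of the payload over each link.
    traffic_bytes = 2 * (num_accelerators - 1) / num_accelerators * grad_bytes
    usable_bytes_per_s = link_gbps * 1e9 / 8 * encoding_efficiency
    return traffic_bytes / usable_bytes_per_s

for params in (7, 70, 400):  # model sizes in billions of parameters
    print(f"{params}B params: ~{allreduce_seconds(params):.1f} s per all-reduce "
          f"on a single 224 Gb/s lane")
```

The point of the exercise is simply that larger models push the communication time per step up linearly, which is why per-lane data rates and lane counts both have to keep climbing.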

PCIe 7.0: Speed, Flexibility & Efficiency For The AI Era


As the industry came together for PCI-SIG DevCon last month, one thing took center stage: PCI Express 7.0. While the specification is still in the final stages of development, the world is certainly ready for this significant new PCIe milestone. Let’s look at how PCIe 7.0 is poised to address the escalating demands of AI, high-performance computing, and emerging data-intensive ap... » read more
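
For context, the raw per-lane and x16 link numbers across recent generations are easy to check with a few lines of arithmetic. Protocol and encoding overheads are ignored here, so treat the results as upper bounds rather than delivered throughput.

```python
# Raw signaling rates per lane (GT/s) for recent PCIe generations; each
# generation roughly doubles the previous one, with 7.0 targeting 128 GT/s.
GT_PER_LANE = {"PCIe 5.0": 32, "PCIe 6.0": 64, "PCIe 7.0": 128}

for gen, gts in GT_PER_LANE.items():
    x16_gb_per_s = gts * 16 / 8  # raw GB/s in one direction for a x16 link
    print(f"{gen}: x16 ~{x16_gb_per_s:.0f} GB/s each way, "
          f"~{2 * x16_gb_per_s:.0f} GB/s bidirectional")
```

That works out to roughly 512 GB/s bidirectionally for a x16 PCIe 7.0 link, which is the headline figure PCI-SIG has been quoting.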

Lower Energy, High Performance LLM on FPGA Without Matrix Multiplication


A new technical paper titled "Scalable MatMul-free Language Modeling" was published by UC Santa Cruz, Soochow University, UC Davis, and LuxiTech. Abstract: "Matrix multiplication (MatMul) typically dominates the overall computational cost of large language models (LLMs). This cost only grows as LLMs scale to larger embedding dimensions and context lengths. In this work, we show that MatMul... » read more
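
One building block MatMul-free approaches lean on is ternary weights in {-1, 0, +1}, which turn a dense layer's matrix product into additions and subtractions. The numpy sketch below illustrates that idea only; it is not the paper's code and makes no claim about the authors' architecture.

```python
import numpy as np

# Illustrative sketch: with ternary weights, y = x @ W needs no multiplications,
# because each output column is just a sum of selected input columns minus others.

def ternary_linear(x, w_ternary):
    """Compute x @ w_ternary using only additions and subtractions."""
    out = np.zeros((x.shape[0], w_ternary.shape[1]), dtype=x.dtype)
    for j in range(w_ternary.shape[1]):
        plus = w_ternary[:, j] == 1    # input columns to add
        minus = w_ternary[:, j] == -1  # input columns to subtract
        out[:, j] = x[:, plus].sum(axis=1) - x[:, minus].sum(axis=1)
    return out

x = np.random.randn(4, 8).astype(np.float32)
w = np.random.choice([-1, 0, 1], size=(8, 16)).astype(np.float32)
# Sanity check: the add/subtract version matches a real matrix multiplication.
assert np.allclose(ternary_linear(x, w), x @ w, atol=1e-5)
```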

An LLM Approach For Large-Scale SoC Security Verification And Policy Generation (U. of Florida)


A technical paper titled “SoCureLLM: An LLM-driven Approach for Large-Scale System-on-Chip Security Verification and Policy Generation” was published by researchers at the University of Florida. Abstract: "Contemporary methods for hardware security verification struggle with adaptability, scalability, and availability due to the increasing complexity of modern systems-on-chip (SoCs). ... » read more

Vision Is Why LLMs Matter On The Edge


Large Language Models (LLMs) have taken the world by storm since the 2017 Transformers paper, but pushing them to the edge has proved problematic. Just this year, Google had to revise its plans to roll out Gemini Nano on all new Pixel models; the down-spec’d hardware proved unable to host the model while delivering a positive user experience. But the implementation of language-focused mo... » read more

How To Successfully Deploy GenAI On Edge Devices


Generative AI (GenAI) burst onto the scene and into the public’s imagination with the launch of ChatGPT in late 2022. Users were amazed at the natural language processing chatbot’s ability to turn a short text prompt into coherent, humanlike text, including essays, language translations, and code examples. Technology companies – impressed with ChatGPT’s abilities – have started looking ... » read more

Leveraging LLMs To Explain EDA Synthesis Errors And Help Train New Engineers 


A technical paper titled “Explaining EDA synthesis errors with LLMs” was published by researchers at the University of New South Wales and the University of Calgary. Abstract: "Training new engineers in digital design is a challenge, particularly when it comes to teaching the complex electronic design automation (EDA) tooling used in this domain. Learners will typically deploy designs in the Veri... » read more
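
The recipe the title suggests can be sketched as a prompt that pairs the offending HDL with the raw tool message and asks a chat model for a learner-friendly explanation. The model name, prompt wording, and example error below are illustrative assumptions, not the authors' setup.

```python
from openai import OpenAI  # assumes the OpenAI Python client; any chat-style LLM API would do

# Hypothetical inputs: a small Verilog snippet and the raw synthesis error it triggers.
verilog_snippet = """
module counter(input clk, output reg [3:0] q);
  always @(posedge clk) q <= q + 1'b1
endmodule
"""
tool_error = "ERROR: syntax error near 'endmodule' (missing ';'?)"

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any capable chat model
    messages=[
        {"role": "system",
         "content": "You explain EDA synthesis errors to first-year digital design students."},
        {"role": "user",
         "content": f"Code:\n{verilog_snippet}\nTool output:\n{tool_error}\n"
                    "Explain the error in plain language and suggest a fix."},
    ],
)
print(resp.choices[0].message.content)
```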

Can Models Created With AI Be Trusted?


EDA models that are created using AI need to pass more stringent quality and cost-benefit analysis than many AI applications in the broader industry. Money is on the line if AI gets it wrong, and all the associated costs must be factored into the equation. Models are some of the most expensive things a development team can create, and it is important to understand the value th... » read more

Fundamental Issues In Computer Vision Still Unresolved


Given computer vision’s place as the cornerstone of an increasing number of applications, from ADAS to medical diagnosis and robotics, it is critical that its weak points be mitigated, such as the difficulty of identifying corner cases or algorithms trained on shallow datasets. While well-known bloopers are often the result of human decisions, there are also fundamental technical issues that ... » read more
