Tools Needed To Track, Catalog Hardware Vulnerabilities


Monitoring for cyberattacks is a key component of hardware-based security, but what happens afterward is equally important. Logging and cataloging identified hardware vulnerabilities to ensure they are not repeated is essential for security. In fact, thousands of weak points have been identified as part of the chip design process, and even posted publicly online. Nevertheless, many companies...

LLMs Show Promise In Secure IC Design


The introduction of large language models into the EDA flow could significantly reduce the time, effort, and cost of designing secure chips and systems, but they could also open the door to more sophisticated attacks. It's still early days for the use of LLMs in chip and system design. The technology is just beginning to be implemented, and there are numerous technical challenges that must be...

Mass Customization For AI Inference


Rising complexity in AI models and an explosion in the number and variety of networks are leaving chipmakers torn between fixed-function acceleration and more programmable accelerators, and spurring some novel approaches that combine elements of both. By all accounts, a general-purpose approach to AI processing is not making the grade. General-purpose processors are exactly that. They're not designed...

Using AI To Glue Disparate IC Ecosystem Data


AI holds the potential to change how companies interact throughout the global semiconductor ecosystem, gluing together different data types and processes so they can be shared between companies that in the past had little or no direct connection. Chipmakers have always used abstraction layers to see the bigger picture of how the various components of a chip go together, allowing them to pinpoint...

Capturing Knowledge Within LLMs


At DAC this year, there was a lot of talk about AI and the impact it is likely to have. While EDA companies have been using it for optimization and for improving iteration loops within the flow, end users have been concentrating on how to use it to improve the interface between engineers and tools. The feedback is very positive. Large language models (LLMs) have been trained on a huge...

224Gbps PHY For The Next Generation Of High Performance Computing


Large language models (LLMs) are experiencing explosive growth in parameter count. Training these ever-larger models requires multiple accelerators to work together, and the bandwidth between those accelerators directly limits the size of trainable LLMs in high-performance computing (HPC) environments. The correlation between LLM size and the data rates of interconnect technology heralds...
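
The point about interconnect bandwidth gating trainable model size is easy to sanity-check with rough arithmetic. The Python sketch below is purely illustrative and not from the article: the model size, accelerator count, gradient precision, and ring all-reduce traffic model are all assumptions chosen only to show the shape of the calculation.

```python
# Back-of-envelope: gradient-sync traffic vs. link speed (illustrative assumptions only).
# Assumes FP16 gradients and a ring all-reduce, in which each accelerator moves roughly
# 2*(N-1)/N times the gradient volume per synchronization step.

def allreduce_time_s(params: float, num_accels: int, link_gbps: float,
                     bytes_per_grad: int = 2) -> float:
    """Seconds to move one gradient sync's traffic over a single link of `link_gbps`."""
    grad_bytes = params * bytes_per_grad
    traffic_bytes = 2 * (num_accels - 1) / num_accels * grad_bytes
    return traffic_bytes * 8 / (link_gbps * 1e9)

# Hypothetical 70B-parameter model split across 8 accelerators:
for gbps in (112, 224):
    print(f"{gbps} Gbps link: ~{allreduce_time_s(70e9, 8, gbps):.1f} s per full gradient sync")
```

In this simplified view, doubling the per-lane rate halves the synchronization time, which is the basic motivation for moving accelerator fabrics to 224 Gbps SerDes.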

PCIe 7.0: Speed, Flexibility & Efficiency For The AI Era


As the industry came together for PCI-SIG DevCon last month, one thing took center stage: PCI Express 7.0. While the specification is still in the final stages of development, the world is certainly ready for this significant new milestone. Let’s look at how PCIe 7.0 is poised to address the escalating demands of AI, high-performance computing, and emerging data-intensive applications...
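
The headline rates behind that milestone can be put in perspective with simple arithmetic. The sketch below uses the published per-lane transfer rates and ignores encoding, FLIT, and protocol overheads, so the figures are raw, illustrative throughput numbers rather than delivered bandwidth.

```python
# Raw throughput by PCIe generation for an x16 link (encoding/FLIT overheads ignored).

GT_PER_LANE = {"PCIe 5.0": 32, "PCIe 6.0": 64, "PCIe 7.0": 128}  # GT/s per lane

def raw_gb_per_s(gen: str, lanes: int = 16) -> float:
    # One transfer carries one bit per lane; divide by 8 to get bytes, per direction.
    return GT_PER_LANE[gen] * lanes / 8

for gen, rate in GT_PER_LANE.items():
    per_dir = raw_gb_per_s(gen)
    print(f"{gen} ({rate} GT/s/lane): ~{per_dir:.0f} GB/s per direction, "
          f"~{2 * per_dir:.0f} GB/s bidirectional (x16)")
```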

Language’s Role In Embodied Agents


Large Language Models (LLMs) and models cross-trained on natural language are a major growth area for edge applications of neural networks and Artificial Intelligence (AI). Within that spectrum of applications, embodied agents stand out as a major developing focal point. This article will address developments in this space and how the application of language-trained models improves...

Vision Is Why LLMs Matter On The Edge


Large Language Models (LLMs) have taken the world by storm since the 2017 transformer paper, but pushing them to the edge has proved problematic. Just this year, Google had to revise its plans to roll out Gemini Nano on all new Pixel models after the down-spec’d hardware options proved unable to host the model as part of a positive user experience. But the implementation of language-focused models...
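
Whether an edge device can host a given model often comes down to memory arithmetic before anything else. The sketch below is a generic illustration rather than Gemini Nano or Pixel data: the parameter counts and weight precisions are assumed values, and it counts only the weights, ignoring KV cache, activations, and the rest of the system.

```python
# Illustrative weight-memory footprint for edge LLM deployment (assumed sizes, not product figures).

def weight_footprint_gb(params: float, bits_per_weight: int) -> float:
    return params * bits_per_weight / 8 / 1e9

for params, label in [(3e9, "3B-class"), (7e9, "7B-class")]:
    for bits in (16, 8, 4):
        print(f"{label} model, {bits}-bit weights: ~{weight_footprint_gb(params, bits):.1f} GB "
              "(weights only, before KV cache and activations)")
```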

Scheduling Multi-Model AI Workloads On Heterogeneous MCM Accelerators (UC Irvine)


A technical paper titled “SCAR: Scheduling Multi-Model AI Workloads on Heterogeneous Multi-Chiplet Module Accelerators” was published by researchers at the University of California, Irvine.

Abstract: "Emerging multi-model workloads with heavy models like recent large language models significantly increased the compute and memory demands on hardware. To address such increasing demands, designing...
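
For readers unfamiliar with the problem the paper targets, a toy baseline helps illustrate what scheduling multi-model workloads onto heterogeneous chiplets means in practice. The sketch below is not the paper's SCAR scheduler; it is a naive greedy heuristic with made-up chiplet names and workload numbers, included only to show the shape of the assignment problem.

```python
# Naive greedy mapping of multiple model workloads onto heterogeneous chiplets.
# Hypothetical baseline for illustration only; this is NOT the SCAR scheduler from the paper.
from dataclasses import dataclass

@dataclass
class Chiplet:
    name: str
    tops: float        # relative compute throughput
    load: float = 0.0  # accumulated work assigned so far (normalized ops)

def greedy_schedule(workloads: dict[str, float], chiplets: list[Chiplet]) -> dict[str, str]:
    """Assign each workload (name -> op count) to the chiplet that would finish it soonest."""
    assignment = {}
    for name, ops in sorted(workloads.items(), key=lambda kv: -kv[1]):  # largest first
        best = min(chiplets, key=lambda c: (c.load + ops) / c.tops)
        best.load += ops
        assignment[name] = best.name
    return assignment

chiplets = [Chiplet("big-NPU", 8.0), Chiplet("mid-NPU", 4.0), Chiplet("DSP", 1.0)]
workloads = {"llm-decode": 120.0, "vision-backbone": 60.0, "asr": 20.0}
print(greedy_schedule(workloads, chiplets))
```

A real scheduler such as the one the paper proposes also has to account for inter-chiplet bandwidth, memory capacity, and interference between models, none of which this toy captures.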
