Fully Automated Hardware And Software Design Of Processor Chips (Chinese Academy Of Sciences)


A new technical paper titled "QiMeng: Fully Automated Hardware and Software Design for Processor Chip" was published by researchers at Chinese Academy of Sciences. Abstract "Processor chip design technology serves as a key frontier driving breakthroughs in computer science and related fields. With the rapid advancement of information technology, conventional design paradigms face three majo... » read more

Agentic AI In Chip Design


Large language models (LLMs) like ChatGPT are just the starting point for generating content with AI. The next phase will be about harnessing LLMs with agents, providing automated feedback and improvements in performance and accuracy. Mehir Arora, backend engineer at ChipAgents, talks about the impact this can have on EDA and chip design, allowing smaller teams to compete with larger teams, and... » read more

Evaluation of LLMs on HDL-Based Communication Protocol Generation (U. of Illinois Urbana, CISPA)


A new technical paper titled "ProtocolLLM: RTL Benchmark for SystemVerilog Generation of Communication Protocols" was published by researchers at University of Illinois Urbana Champaign and CISPA Helmholtz Center for Information Security. Abstract "Recent advances in Large Language Models (LLMs) have shown promising capabilities in generating code for general-purpose programming languages. ... » read more

Roadmap For AI HW Development And The Role Of Photonic Chips In Supporting Future LLMs (CUHK, NUS, UIUC, Berkeley)


A new technical paper titled "What Is Next for LLMs? Next-Generation AI Computing Hardware Using Photonic Chips" was published by researchers at The Chinese University of Hong Kong, National University of Singapore, University of Illinois Urbana-Champaign and UC Berkeley. Abstract "Large language models (LLMs) are rapidly pushing the limits of contemporary computing hardware. For example, t... » read more

Cache Side-Channel Attacks On LLMs (MITRE, WPI)


A new technical paper titled "Spill The Beans: Exploiting CPU Cache Side-Channels to Leak Tokens from Large Language Models" was published by researchers at MITRE and Worcester Polytechnic Institute. Abstract "Side-channel attacks on shared hardware resources increasingly threaten confidentiality, especially with the rise of Large Language Models (LLMs). In this work, we introduce Spill The... » read more

Customizing An LLM For VHDL Design of High-Performance MPUs (IBM)


A new technical paper titled "Customizing a Large Language Model for VHDL Design of High-Performance Microprocessors" was published by researchers at IBM. Abstract "The use of Large Language Models (LLMs) in hardware design has taken off in recent years, principally through its incorporation in tools that increase chip designer productivity. There has been considerable discussion about the ... » read more

Inference Framework For Deployment Challenges of Large Generative Models On GPUs (Google, Meta)


A new technical paper titled "Scaling On-Device GPU Inference for Large Generative Models" was published by researchers at Google and Meta Platforms. Abstract "Driven by the advancements in generative AI, large machine learning models have revolutionized domains such as image processing, audio synthesis, and speech recognition. While server-based deployments remain the locus of peak perform... » read more

GenAI for Analog IC Design (McMaster University)


A new technical paper titled "Generative AI for Analog Integrated Circuit Design: Methodologies and Applications" was published by researchers at McMaster University. Abstract "Electronic Design Automation (EDA) in analog Integrated Circuits (ICs) has been the focus of extensive research; however, unlike its digital counterpart, it has not achieved widespread adoption. In this systematic re... » read more

2030 Data Center AI Chip Winners: The Trillion Dollar Club


At the start of 2025, I believed AI was overhyped, ASICs were a niche, and a market pullback was inevitable. My long-term view has changed dramatically. AI technology and adoption are accelerating at an astonishing pace. One of the GenAI/LLM leaders, or Nvidia, will be the first $10 Trillion market cap company by 2030. Large language models (LLMs) are rapidly improving in both capability and ... » read more

GPU Analysis Identifying Performance Bottlenecks That Cause Throughput Plateaus In Large-Batch LLM Inference


A new technical paper titled "Mind the Memory Gap: Unveiling GPU Bottlenecks in Large-Batch LLM Inference" was published by researchers at Barcelona Supercomputing Center, Universitat Politecnica de Catalunya, and IBM Research. Abstract "Large language models have been widely adopted across different tasks, but their auto-regressive generation nature often leads to inefficient resource util... » read more
