AI Accelerator Architectures Poised For Big Changes


AI is driving a frenzy of activity in the chip world as companies across the semiconductor ecosystem race to include AI in their product lineups. The challenge now is how to make AI run faster, use less energy, and be deployable everywhere from the edge to the data center, particularly with the rollout of large language models. On the hardware side, there are two main approaches for accel... » read more

Generative AI Training With HBM3 Memory


One of the biggest, most talked-about application drivers of hardware requirements today is the rise of large language models (LLMs) and the generative AI they make possible. The best-known example of generative AI right now is, of course, ChatGPT, whose underlying GPT-3 large language model uses 175 billion parameters. The fourth-generation GPT-4 will reportedly boost the number of... » read more
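
As a rough illustration of why HBM3 capacity and bandwidth matter at this scale, the sketch below estimates the memory footprint of the parameters alone at different numeric precisions. The 175-billion figure comes from the article; the byte widths and the optimizer-state multiplier are generic assumptions added here for illustration, not numbers from the text.

```python
# Back-of-the-envelope footprint for a GPT-3-scale model's parameters.
# The 175B count is from the article; byte widths and the ~4x training-state
# multiplier (weights + gradients + optimizer moments) are generic assumptions.

PARAMS = 175e9

BYTES_PER_PARAM = {"fp32": 4, "fp16/bf16": 2, "fp8": 1}

for fmt, nbytes in BYTES_PER_PARAM.items():
    weights_gb = PARAMS * nbytes / 1e9
    train_gb = weights_gb * 4  # rough mixed-precision training-state estimate
    print(f"{fmt:>10}: weights ~{weights_gb:,.0f} GB, "
          f"training state ~{train_gb:,.0f} GB")
```

Even the most aggressive precision in this sketch leaves hundreds of gigabytes of training state, which is why such models are sharded across many accelerators and why per-device memory bandwidth and capacity, the strengths of HBM3, dominate the hardware conversation.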

AI Adoption Slow For Design Tools


A lot of excitement, and a fair amount of hype, surrounds what artificial intelligence (AI) can do for the EDA industry. But many challenges must be overcome before AI can start designing, verifying, and implementing chips for us. Should AI replace the algorithms in use today, or does it have a different role to play? At the end of the day, AI is a technique that has strengths and weaknesses... » read more

Where And Why AI Makes Sense In Cars


Experts at the Table: Semiconductor Engineering sat down to talk about where AI makes sense in automotive and what the main challenges are, with Geoff Tate, CEO of Flex Logix; Veerbhan Kheterpal, CEO of Quadric; Steve Teig, CEO of Perceive; and Kurt Busch, CEO of Syntiant. What follows are excerpts of that conversation, which was held in front of a live audience at DesignCon. Part two of this... » read more

Will Floating Point 8 Solve AI/ML Overhead?


While the media buzzes about the Turing Test-busting results of ChatGPT, engineers are focused on the hardware challenges of running large language models and other deep learning networks. High on the ML punch list is how to run models more efficiently using less power, especially in critical applications like self-driving vehicles where latency becomes a matter of life or death. AI already ... » read more
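
FP8 proposals are usually discussed in terms of two encodings, E4M3 and E5M2, which trade mantissa precision against exponent range. The decoder below is a minimal sketch added here to make that trade-off concrete; it ignores the NaN/Inf conventions, which differ between FP8 variants, and is not tied to any particular vendor's implementation.

```python
# Minimal decoder for generic "minifloat" formats, used to compare the two
# FP8 encodings most often discussed (E4M3 and E5M2). Special values are
# deliberately ignored; this only illustrates range vs. precision.

def decode_minifloat(bits: int, exp_bits: int, man_bits: int) -> float:
    bias = (1 << (exp_bits - 1)) - 1
    sign = -1.0 if (bits >> (exp_bits + man_bits)) & 1 else 1.0
    exp = (bits >> man_bits) & ((1 << exp_bits) - 1)
    man = bits & ((1 << man_bits) - 1)
    if exp == 0:  # subnormal numbers
        return sign * (man / (1 << man_bits)) * 2.0 ** (1 - bias)
    return sign * (1 + man / (1 << man_bits)) * 2.0 ** (exp - bias)

# Largest finite magnitudes: E4M3 favors precision, E5M2 favors range.
print("E4M3 max finite:", decode_minifloat(0b0_1111_110, 4, 3))   # 448.0
print("E5M2 max finite:", decode_minifloat(0b0_11110_11, 5, 2))   # 57344.0
```

The contrast between a maximum around 448 and one around 57,000 is the crux of the hardware question: which 8-bit format (or mix of formats) keeps gradients and activations representable without blowing the power and area budget.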

Memory and Energy-Efficient Batch Normalization Hardware


A new technical paper titled "LightNorm: Area and Energy-Efficient Batch Normalization Hardware for On-Device DNN Training" was published by researchers at DGIST (Daegu Gyeongbuk Institute of Science and Technology). The work was supported by Samsung Research Funding Incubation Center. Abstract: "When training early-stage deep neural networks (DNNs), generating intermediate features via con... » read more
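
For readers who want a reference point for what the hardware has to compute, the sketch below is the textbook software formulation of a batch-norm forward pass over intermediate features; it is included only to show which per-feature statistics are involved and is not the paper's LightNorm design.

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    """Plain batch-norm forward pass over a (batch, features) activation tensor.

    Textbook software version, shown to clarify which per-feature statistics
    (mean, variance) on-device training hardware must produce; not LightNorm.
    """
    mean = x.mean(axis=0)                    # per-feature mean over the batch
    var = x.var(axis=0)                      # per-feature variance over the batch
    x_hat = (x - mean) / np.sqrt(var + eps)  # normalize
    return gamma * x_hat + beta              # scale and shift

x = np.random.randn(32, 64).astype(np.float32)   # a batch of intermediate features
out = batchnorm_forward(x, gamma=np.ones(64), beta=np.zeros(64))
```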

New Method of Comparing Neural Networks (Los Alamos National Lab)


A new research paper titled "If You’ve Trained One You’ve Trained Them All: Inter-Architecture Similarity Increases With Robustness" from researchers at Los Alamos National Laboratory (LANL) was recently presented at the Conference on Uncertainty in Artificial Intelligence. The team developed a new approach for comparing neural networks and "applied their new metric of network simila... » read more
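
The excerpt does not spell out the metric itself. As a generic illustration of how network representations are often compared, the sketch below computes linear centered kernel alignment (CKA) between two sets of activations; CKA is a widely used, pre-existing technique and is not claimed to be the LANL team's metric.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment between two activation matrices.

    X, Y: (samples, features) activations from two networks on the same inputs.
    Shown purely as a familiar baseline for representation comparison; the
    LANL paper defines its own similarity metric, not reproduced here.
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    return hsic / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

a = np.random.randn(200, 128)                    # activations from "network A"
q, _ = np.linalg.qr(np.random.randn(128, 128))   # random orthogonal rotation
b = a @ q                                        # a rotated copy of A's representation
print(round(linear_cka(a, b), 3))                # ~1.0: CKA ignores rotations
```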

Techniques For Improving Energy Efficiency of Training/Inference for NLP Applications, Including Power Capping & Energy-Aware Scheduling


This new technical paper titled "Great Power, Great Responsibility: Recommendations for Reducing Energy for Training Language Models" is from researchers at MIT and Northeastern University. Abstract: "The energy requirements of current natural language processing models continue to grow at a rapid, unsustainable pace. Recent works highlighting this problem conclude there is an urgent need ... » read more
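
One of the recommendations named in the title, power capping, is something practitioners can already apply with stock tooling. The sketch below wraps the standard nvidia-smi power-limit option from Python; it requires administrator privileges, and the 200 W value is an arbitrary placeholder, not a setting recommended by the paper.

```python
import subprocess

def set_gpu_power_cap(gpu_index: int, watts: int) -> None:
    """Cap a GPU's power draw via nvidia-smi's power-limit option (-pl).

    Illustrates the power-capping knob discussed in the paper; requires
    root/admin privileges and a limit within the card's supported range.
    """
    subprocess.run(
        ["nvidia-smi", "-i", str(gpu_index), "-pl", str(watts)],
        check=True,
    )

if __name__ == "__main__":
    # Example only: 200 W is a placeholder, not a recommended value.
    set_gpu_power_cap(gpu_index=0, watts=200)
```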

ISA Extension For Low-Precision NN Training On RISC-V Cores


A new technical paper titled "MiniFloat-NN and ExSdotp: An ISA Extension and a Modular Open Hardware Unit for Low-Precision Training on RISC-V cores" is from researchers at IIS, ETH Zurich; DEI, University of Bologna; and Axelera AI. Abstract: "Low-precision formats have recently driven major breakthroughs in neural network (NN) training and inference by reducing the memory footprint of the N... » read more
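
The abstract is truncated here, but the unit's name, ExSdotp (an expanding sum-of-dot-products), points at a common low-precision training pattern: multiply narrow-format operands while accumulating in a wider format. The numpy sketch below emulates that idea in software using float16 inputs and a float32 accumulator; the formats are illustrative, and this is not the paper's hardware or ISA extension.

```python
import numpy as np

def expanding_sdotp(a, b, c):
    """Emulate an expanding sum-of-dot-products: low-precision (float16)
    operand pairs are multiplied, but partial products and the incoming
    addend are accumulated in a wider format (float32).

    A software illustration of the general expanding-accumulation idea,
    not the MiniFloat-NN ISA extension or the ExSdotp unit itself.
    """
    a = np.asarray(a, dtype=np.float16)
    b = np.asarray(b, dtype=np.float16)
    acc = np.float32(c)
    for x, y in zip(a, b):
        acc += np.float32(x) * np.float32(y)   # widen before multiply-accumulate
    return acc

print(expanding_sdotp([0.1, 0.2, 0.3], [1.0, 2.0, 3.0], 0.0))
```

Accumulating in a wider format is what lets these units keep the memory-footprint benefit of narrow operands without letting rounding error swamp the gradient signal during training.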

Can Analog Make A Comeback?


We live in an analog world dominated by digital processing, but that could change. Domain specificity, and the desire for greater levels of optimization, may provide analog compute with some significant advantages — and the possibility of a comeback. For the last four decades, the advantages of digital scaling and flexibility have pushed the dividing line between analog and digital closer ... » read more
