AI Accelerator Architectures Poised For Big Changes


AI is driving a frenzy of activity in the chip world as companies across the semiconductor ecosystem race to include AI in their product lineups. The challenge now is how to make AI run faster, use less energy, and scale from the edge to the data center, particularly with the rollout of large language models. On the hardware side, there are two main approaches for accel... » read more

Considerations For Accelerating On-Device Stable Diffusion Models


One of the more powerful – and visually stunning – advances in generative AI has been the development of Stable Diffusion models. These models are used for image generation, image denoising, inpainting (reconstructing missing regions in an image), outpainting (generating new pixels that seamlessly extend an image's existing bounds), and bit diffusion. Stable Diffusion uses a type of dif... » read more
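
The excerpt above lists inpainting among Stable Diffusion's uses. For a rough sense of the workload an on-device accelerator must handle, here is a minimal inpainting sketch using Hugging Face's diffusers library (an assumed toolchain the article itself does not prescribe; the checkpoint ID and file names are placeholders):

```python
# Minimal Stable Diffusion inpainting sketch (illustrative only).
# The checkpoint and file names below are assumptions, not from the article.
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

image = Image.open("photo.png").convert("RGB").resize((512, 512))
mask = Image.open("mask.png").convert("L").resize((512, 512))  # white = region to repaint

result = pipe(
    prompt="a stone bridge over a river",
    image=image,
    mask_image=mask,
).images[0]
result.save("inpainted.png")
```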

Continuous Energy Monte Carlo Particle Transport On AI HW Accelerators


A technical paper titled “Efficient Algorithms for Monte Carlo Particle Transport on AI Accelerator Hardware” was published by researchers at Argonne National Laboratory, University of Chicago, and Cerebras Systems. Abstract: "The recent trend toward deep learning has led to the development of a variety of highly innovative AI accelerator architectures. One such architecture, the Cerebras... » read more
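
For readers unfamiliar with the method being accelerated, the sketch below is a toy 1D Monte Carlo particle-transport kernel: slab transmission with made-up cross-sections. It is not the paper's algorithm, only an illustration of the history-based sampling loop such hardware must execute:

```python
# Toy 1D Monte Carlo particle transport (slab transmission).
# All cross-section values are illustrative assumptions, not from the paper.
import numpy as np

rng = np.random.default_rng(0)

SIGMA_T = 1.0      # total macroscopic cross-section (1/cm), assumed
SIGMA_A = 0.3      # absorption cross-section (1/cm), assumed
THICKNESS = 5.0    # slab thickness (cm), assumed
N = 100_000        # particle histories

transmitted = absorbed = reflected = 0
for _ in range(N):
    x, mu = 0.0, 1.0                               # position and direction cosine
    while True:
        x += mu * rng.exponential(1.0 / SIGMA_T)   # sample distance to next collision
        if x >= THICKNESS:
            transmitted += 1
            break
        if x < 0.0:
            reflected += 1
            break
        if rng.random() < SIGMA_A / SIGMA_T:       # absorbed at this collision?
            absorbed += 1
            break
        mu = 2.0 * rng.random() - 1.0              # isotropic scatter

print(f"transmission fraction: {transmitted / N:.4f}")
```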

Vision Transformers Change The AI Acceleration Rules


Transformers were first introduced by the team at Google Brain in 2017 in their paper, "Attention Is All You Need". Since their introduction, transformers have inspired a flurry of investment and research that has produced some of the most impactful model architectures and AI products to date, including ChatGPT, an acronym for Chat Generative Pre-trained Transformer. Transformers a... » read more
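
The mechanism at the heart of that paper is scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. Below is a minimal NumPy sketch (single head, no masking, illustrative only); the 1/sqrt(d_k) scaling keeps the pre-softmax scores from growing with key dimension:

```python
# Scaled dot-product attention from "Attention Is All You Need":
# Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V.
import numpy as np

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # (n_q, n_k) similarity scores
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                              # weighted sum of values

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
print(attention(Q, K, V).shape)  # (4, 8)
```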

A Study Of LLMs On Multiple AI Accelerators And GPUs With A Performance Evaluation


A technical paper titled “A Comprehensive Performance Study of Large Language Models on Novel AI Accelerators” was published by researchers at Argonne National Laboratory, State University of New York, and University of Illinois. Abstract: "Artificial intelligence (AI) methods have become critical in scientific applications to help accelerate scientific discovery. Large language models (L... » read more
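
As a flavor of the kind of measurement such studies make, here is a minimal tokens-per-second micro-benchmark using PyTorch and Hugging Face transformers. The tiny gpt2 checkpoint is a stand-in; the paper's actual harness, models, and metrics are far more extensive:

```python
# Minimal LLM throughput micro-benchmark (illustrative only; gpt2 is a
# stand-in for the much larger models the paper evaluates).
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").to(device).eval()

inputs = tok("Artificial intelligence accelerators", return_tensors="pt").to(device)
with torch.no_grad():
    model.generate(**inputs, max_new_tokens=8)      # warm-up pass
    start = time.perf_counter()
    out = model.generate(**inputs, max_new_tokens=128)
    elapsed = time.perf_counter() - start

new_tokens = out.shape[1] - inputs["input_ids"].shape[1]
print(f"{new_tokens / elapsed:.1f} tokens/sec on {device}")
```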

LLM-Aided AI Accelerator Design Automation (Georgia Tech)


A technical paper titled “GPT4AIGChip: Towards Next-Generation AI Accelerator Design Automation via Large Language Models” was published by researchers at Georgia Institute of Technology. Abstract: "The remarkable capabilities and intricate nature of Artificial Intelligence (AI) have dramatically escalated the imperative for specialized AI accelerators. Nonetheless, designing these accele... » read more
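
To make the general idea concrete, though this is not the GPT4AIGChip framework itself, here is a sketch of prompting an LLM for accelerator RTL via the OpenAI Python client. The model name and the spec are placeholder assumptions:

```python
# Sketch of LLM-prompted accelerator RTL generation (NOT the paper's
# framework; model name and spec below are illustrative assumptions).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

spec = """Write synthesizable Verilog for an 8x8 systolic array of
8-bit MAC units with a valid/ready input interface."""

resp = client.chat.completions.create(
    model="gpt-4o",  # model choice is an assumption
    messages=[
        {"role": "system", "content": "You are an RTL design assistant."},
        {"role": "user", "content": spec},
    ],
)
print(resp.choices[0].message.content)
```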

Developing Energy-Efficient AI Accelerators For Intelligent Edge Computing And Data Centers


Artificial intelligence (AI) accelerators are deployed in data centers and at the edge to overcome conventional von Neumann bottlenecks by rapidly processing petabytes of information. Even as Moore’s law slows, AI accelerators continue to efficiently enable key applications that many of us increasingly rely on, from ChatGPT and advanced driver assistance systems (ADAS) to smart edge device... » read more

New Neural Processors Address Emerging Neural Networks


It’s been ten years since AlexNet, a deep learning convolutional neural network (CNN) model running on GPUs, displaced more traditional vision processing algorithms to win the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). AlexNet and its successors provided significant improvements in object classification accuracy at the cost of intense computational complexity and large da... » read more
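
To ground the reference, the sketch below runs the original AlexNet architecture on one image via torchvision; the image path is a placeholder:

```python
# Classifying one image with the original AlexNet architecture via
# torchvision (the image path is a placeholder).
import torch
from torchvision import models
from PIL import Image

weights = models.AlexNet_Weights.DEFAULT
model = models.alexnet(weights=weights).eval()
preprocess = weights.transforms()            # resize, crop, normalize per ImageNet

img = Image.open("dog.jpg").convert("RGB")
with torch.no_grad():
    logits = model(preprocess(img).unsqueeze(0))

top = logits.softmax(dim=-1).topk(3)
labels = weights.meta["categories"]
for p, i in zip(top.values[0], top.indices[0]):
    print(f"{labels[int(i)]}: {p.item():.2%}")
```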

The Past Predicting The Future


It is often said that you cannot predict the future by looking at the past, but that isn't always true. Digging into change provides plenty of clues, and those changes are a prelude to what may happen next. One way we can do that here at Semiconductor Engineering is by looking at changes in reading habits. What types of articles are attracting the most attention? This is a sure ... » read more

Power/Performance Bits: Aug. 24


Low power AI
Engineers at the Swiss Center for Electronics and Microtechnology (CSEM) designed an SoC for edge AI applications that can run on solar power or a small battery. The SoC consists of an ASIC chip with a RISC-V processor developed at CSEM along with two tightly coupled machine-learning accelerators: one for face detection, for example, and one for classification. The first is a bin... » read more
