Difficult Memory Choices In AI Systems


The number of memory choices and architectures is exploding, driven by the rapid evolution in AI and machine learning chips being designed for a wide range of very different end markets and systems. Models for some of these systems can range in size from 10 billion to 100 billion parameters, and they can vary greatly from one chip or application to the next. Neural network training and inferencing...
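To put those parameter counts in memory terms, here is a back-of-the-envelope sketch. The byte-per-parameter figures are standard numeric precisions, not numbers from the article:

```python
# Back-of-the-envelope memory footprint for model weights alone
# (ignores activations, optimizer state, and intermediate buffers).

BYTES_PER_PARAM = {"fp32": 4, "fp16": 2, "int8": 1}

for params in (10e9, 100e9):                 # 10B and 100B parameters
    for prec, nbytes in BYTES_PER_PARAM.items():
        gib = params * nbytes / 2**30
        print(f"{params / 1e9:.0f}B params @ {prec}: {gib:,.0f} GiB")
```

Even at 8-bit precision, a 100-billion-parameter model needs on the order of 100 GB for weights alone, which is why capacity and bandwidth requirements differ so sharply across these systems.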

Big Changes In AI Design


Semiconductor Engineering sat down to discuss AI and its move to the edge with Steven Woo, vice president of enterprise solutions technology and distinguished inventor at Rambus; Kris Ardis, executive director at Maxim Integrated; Steve Roddy, vice president of products for Arm's Machine Learning Group; and Vinay Mehta, inference technical marketing manager at Flex Logix. What follows are excerpts of that conversation...

Are Better Machine Training Approaches Ahead?


We live in a time of unparalleled use of machine learning (ML), but it relies on one approach to training the models that are implemented in artificial neural networks (ANNs) — so named because they’re not neuromorphic. But other training approaches, some of which are more biomimetic than others, are being developed. The big question remains whether any of them will become commercially viable...
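The "one approach" in question is gradient-based training via backpropagation. Below is a minimal NumPy sketch of that style of training loop, reduced to a single linear layer purely for illustration:

```python
import numpy as np

# Minimal gradient-descent training loop: the backprop-style approach
# that virtually all ANN training relies on today, shrunk to one layer.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))            # toy inputs
y = X @ np.array([1.5, -2.0, 0.5])       # toy targets from known weights

w = np.zeros(3)                          # weights to learn
lr = 0.1                                 # learning rate
for _ in range(200):
    err = X @ w - y                      # forward pass + error
    grad = X.T @ err / len(X)            # gradient of mean-squared error
    w -= lr * grad                       # gradient-descent update

print(w)                                 # converges toward [1.5, -2.0, 0.5]
```

The alternative approaches the article surveys differ precisely in how they replace this loss-gradient feedback loop.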

ML Opening New Doors For FPGAs


FPGAs have long been used in the early stages of any new digital technology, given their utility for prototyping and rapid evolution. But with machine learning, FPGAs are showing benefits beyond those of more conventional solutions. This opens up a hot new market for FPGAs, which traditionally have been hard to sustain in high-volume production due to pricing, and hard to use for battery-driven...

The Murky World Of AI Benchmarks


AI startup companies have been emerging at breakneck speed for the past few years, all the while touting TOPS benchmark data. But what does it really mean, and does a TOPS number apply across every application? Answer: It depends on a variety of factors. Historically, every class of design has used some kind of standard benchmark for both product development and positioning. For example, SPEC...
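For context, TOPS is typically computed as 2 × (number of MAC units) × (clock frequency), since each multiply-accumulate counts as two operations, and delivered throughput then depends on how well those units stay busy. A quick sketch, with all hardware figures hypothetical:

```python
# Peak vs. delivered throughput: why a raw TOPS number can mislead.
# All hardware figures below are hypothetical, for illustration only.

mac_units = 16384        # parallel multiply-accumulate units (assumed)
clock_hz = 1.0e9         # 1 GHz clock (assumed)
utilization = 0.30       # fraction of cycles doing useful work (assumed)

peak_tops = 2 * mac_units * clock_hz / 1e12   # one MAC = 2 ops (mul + add)
effective_tops = peak_tops * utilization

print(f"peak: {peak_tops:.1f} TOPS, delivered: {effective_tops:.1f} TOPS")
```

Two chips with the same peak TOPS can deliver very different real throughput once utilization on a given workload is taken into account.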

Re-Imagining The GPU


John Rayfield, CTO at Imagination Technologies, sat down with Semiconductor Engineering to talk about RISC-V, AI, and computing architectures. What follows are excerpts of that conversation. SE: What are your plans for RISC-V? Rayfield: We're actively finalizing the integration of RISC-V cores into future-generation GPUs. That work has been going on for several months. Moving forward, we'...

What Machine Learning Can Do In Fabs


Semiconductor Engineering sat down to discuss the issues and challenges with machine learning in semiconductor manufacturing with Kurt Ronse, director of the advanced lithography program at Imec; Yudong Hao, senior director of marketing at Onto Innovation; Romain Roux, data scientist at Mycronic; and Aki Fujimura, chief executive of D2S. What follows are excerpts of that conversation...

How Much Power Will AI Chips Use?


AI and machine learning have voracious appetites when it comes to power. On the training side, they will fully utilize every available processing element in a highly parallelized array of processors and accelerators. And on the inferencing side, they will continue to optimize algorithms to maximize performance for whatever task a system is designed to do. But as with cars, mileage varies greatly...
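As a sketch of why mileage varies, power for the compute itself is roughly throughput multiplied by energy per operation, and that energy figure swings widely with process node and architecture. The pJ/op numbers below are illustrative assumptions, not measurements:

```python
# Rough compute-power estimate: throughput x energy per operation.
# The pJ/op figures are illustrative assumptions, not measured data.

ops_per_second = 100e12                  # 100 TOPS sustained (assumed)
energy_per_op = {
    "older node": 2e-12,                 # ~2 pJ/op (assumed)
    "newer node": 0.5e-12,               # ~0.5 pJ/op (assumed)
}

for node, joules in energy_per_op.items():
    watts = ops_per_second * joules
    print(f"{node}: {watts:.0f} W for the MACs alone (excludes memory I/O)")
```

Data movement to and from memory often adds as much again, which is why identical workloads draw very different power on different designs.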

HBM2E Memory: A Perfect Fit For AI/ML Training


Artificial Intelligence/Machine Learning (AI/ML) growth proceeds at a lightning pace. In the past eight years, AI training capabilities have jumped by a factor of 300,000 (roughly 10X annually), driving rapid improvements in every aspect of computing hardware and software. Memory bandwidth is one critical area of focus enabling the continued growth of AI. Introduced in 2013, High Bandwidth Memory (HBM)...
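The headline numbers behind HBM's fit for AI/ML come from a very wide interface running at a moderate pin rate. A quick per-stack bandwidth calculation (the 1024-bit width and the 3.2 Gb/s HBM2E pin rate are JEDEC figures; the other rates are representative):

```python
# Per-stack bandwidth of a 1024-bit HBM interface at several pin rates.
# 2.0 Gb/s reflects HBM2; 3.2 Gb/s is the JEDEC HBM2E rate; 3.6 Gb/s
# reflects the faster HBM2E parts vendors have announced.

interface_bits = 1024
for gbps_per_pin in (2.0, 3.2, 3.6):
    gbytes_per_s = interface_bits * gbps_per_pin / 8
    print(f"{gbps_per_pin} Gb/s/pin -> {gbytes_per_s:.0f} GB/s per stack")
```

At 3.2 Gb/s per pin that works out to roughly 410 GB/s from a single stack, which is why HBM2E suits bandwidth-hungry training accelerators despite its higher cost and packaging complexity.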

Thinking About AI Power In Parallel


Most AI chips being developed today run highly parallel arrays of multiply/accumulate (MAC) operations. More processors and accelerators equate to better performance. This is why it's not uncommon to see chipmakers stitching together multiple die into devices larger than a single reticle. It's also one of the reasons so much attention is being paid to moving to the next process node. It's not ne...
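The scaling argument is easy to see in the ideal case: for a fixed workload, cycle count falls linearly with the number of MAC units, which is what pushes designs toward ever more silicon. A toy model, assuming perfect utilization (which real designs never achieve):

```python
# Idealized scaling of one matrix multiply across parallel MAC units:
# cycles fall linearly with MAC count, assuming perfect utilization.

M = N = K = 1024
total_macs = M * N * K                    # MACs in an MxK @ KxN matmul

for macs_per_cycle in (256, 1024, 4096, 16384):
    cycles = total_macs / macs_per_cycle  # ideal; real designs fall short
    print(f"{macs_per_cycle:>6} MACs/cycle -> {cycles:>12,.0f} cycles")
```

In practice, keeping those MACs fed with data is the hard part, which ties the performance question back to memory bandwidth and power.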
