Are Better Machine Training Approaches Ahead?


We live in a time of unparalleled use of machine learning (ML), but it relies on one approach to training the models that are implemented in artificial neural networks (ANNs) — so named because they’re not neuromorphic. But other training approaches, some of which are more biomimetic than others, are being developed. The big question remains whether any of them will become commercially viab... » read more
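The single dominant approach the teaser alludes to is gradient-descent training with backpropagation. As a hedged illustration (not from the article; the function name and training data below are hypothetical), here is that approach reduced to a single linear neuron in Python:

```python
# A minimal sketch of today's dominant training approach: gradient descent
# with backpropagation, reduced here to one linear neuron (illustrative only).

def train_neuron(samples, lr=0.1, epochs=100):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, target in samples:
            y = w * x + b            # forward pass
            error = y - target       # gradient of 0.5*(y - target)^2 w.r.t. y
            w -= lr * error * x      # backpropagate: dL/dw = error * x
            b -= lr * error          # dL/db = error
    return w, b

# Learn y = 2x + 1 from a handful of points.
w, b = train_neuron([(0, 1), (1, 3), (2, 5), (3, 7)])
print(f"w={w:.2f}, b={b:.2f}")  # approaches w=2.00, b=1.00
```

The alternative approaches the article surveys would replace exactly this loop, which is why commercial viability hinges on matching its accuracy and tooling.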

ML Opening New Doors For FPGAs


FPGAs have long been used in the early stages of any new digital technology, given their utility for prototyping and rapid evolution. But with machine learning, FPGAs are showing benefits beyond those of more conventional solutions. This opens up a hot new market for FPGAs, which traditionally have been hard to sustain in high-volume production due to pricing, and hard to use for battery-dri... » read more

The Murky World Of AI Benchmarks


AI startup companies have been emerging at breakneck speed for the past few years, all the while touting TOPS benchmark data. But what do those numbers really mean, and does a TOPS number apply across every application? Answer: It depends on a variety of factors. Historically, every class of design has used some kind of standard benchmark for both product development and positioning. For example, SPEC... » read more
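As background for why a raw TOPS figure can mislead: peak TOPS is usually derived from MAC count and clock rate alone, while real workloads keep only a fraction of those MACs busy. A back-of-envelope sketch (all numbers below are hypothetical, not from the article):

```python
# Peak TOPS is typically quoted as: MAC units x 2 ops/MAC (multiply + add) x clock.
# Delivered TOPS on a real model is lower because utilization is never 100%.

mac_units   = 4096        # hypothetical accelerator
clock_hz    = 1.0e9       # 1 GHz
utilization = 0.30        # assumed fraction of MACs doing useful work

peak_tops      = mac_units * 2 * clock_hz / 1e12
delivered_tops = peak_tops * utilization

print(f"peak: {peak_tops:.1f} TOPS, delivered: {delivered_tops:.1f} TOPS")
# peak: 8.2 TOPS, delivered: 2.5 TOPS
```

Two chips with identical peak TOPS can therefore land far apart on an actual application, which is the murkiness the article digs into.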

Re-Imagining The GPU


John Rayfield, CTO at Imagination Technologies, sat down with Semiconductor Engineering to talk about RISC-V, AI, and computing architectures. What follows are excerpts of that conversation. SE: What are your plans for RISC-V? Rayfield: We're actively finalizing the integration of RISC-V cores into future-generation GPUs. That work has been going on for several months. Moving forward, we'... » read more

What Machine Learning Can Do In Fabs


Semiconductor Engineering sat down to discuss the issues and challenges with machine learning in semiconductor manufacturing with Kurt Ronse, director of the advanced lithography program at Imec; Yudong Hao, senior director of marketing at Onto Innovation; Romain Roux, data scientist at Mycronic; and Aki Fujimura, chief executive of D2S. What follows are excerpts of that conversation. » read more

How Much Power Will AI Chips Use?


AI and machine learning have voracious appetites when it comes to power. On the training side, they will fully utilize every available processing element in a highly parallelized array of processors and accelerators. And on the inferencing side, they will continue to optimize algorithms to maximize performance for whatever task a system is designed to do. But as with cars, mileage varies gre... » read more
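One reason mileage varies so much: dynamic power scales with switching activity, voltage, and frequency (P ≈ α·C·V²·f), so a fully utilized training array burns far more than a lightly loaded inference engine. A hedged sketch with hypothetical numbers (none from the article):

```python
# Dynamic power: P = activity * switched capacitance * V^2 * f.
# Training keeps nearly all logic toggling; inference often does not.

def dynamic_power(activity, cap_farads, volts, freq_hz):
    return activity * cap_farads * volts**2 * freq_hz

# Hypothetical chip: 100 nF effective switched capacitance, 0.8 V, 1.5 GHz.
training  = dynamic_power(activity=0.9, cap_farads=1e-7, volts=0.8, freq_hz=1.5e9)
inference = dynamic_power(activity=0.2, cap_farads=1e-7, volts=0.8, freq_hz=1.5e9)
print(f"training ~{training:.0f} W, inference ~{inference:.0f} W")
# training ~86 W, inference ~19 W
```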

HBM2E Memory: A Perfect Fit For AI/ML Training


Artificial Intelligence/Machine Learning (AI/ML) growth proceeds at a lightning pace. In the past eight years, AI training capabilities have jumped by a factor of 300,000, an average of nearly 5X annually, driving rapid improvements in every aspect of computing hardware and software. Memory bandwidth is one such critical area of focus enabling the continued growth of AI. Introduced in 2013, High Bandwidth Memo... » read more
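For scale, HBM2E's headline bandwidth follows directly from its wide interface: a 1024-bit stack signaling at 3.2 Gb/s per pin delivers about 410 GB/s (standard JEDEC HBM2E figures; the stack count below is a hypothetical example):

```python
# HBM2E bandwidth per stack = pins x data rate per pin / 8 bits per byte.
pins         = 1024      # HBM interface width per stack (JEDEC)
gbps_per_pin = 3.2       # HBM2E signaling rate
stacks       = 4         # hypothetical accelerator with four stacks

gb_per_s_stack = pins * gbps_per_pin / 8
total          = gb_per_s_stack * stacks
print(f"{gb_per_s_stack:.0f} GB/s per stack, {total:.0f} GB/s total")
# 410 GB/s per stack, 1638 GB/s total
```

That bandwidth per stack, multiplied across several stacks, is what makes the memory a good match for training workloads that stream enormous parameter and activation sets.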

Thinking About AI Power In Parallel


Most AI chips being developed today run massively parallel arrays of multiply/accumulate (MAC) operations. More processors and accelerators equate to better performance. This is why it's not uncommon to see chipmakers stitching together multiple die into devices larger than a single reticle. It's also one of the reasons so much attention is being paid to moving to the next process node. It's not ne... » read more
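To make the workload concrete, the core operation being parallelized is just a multiply/accumulate repeated across huge arrays. A minimal sketch of that MAC pattern (illustrative only, not any vendor's kernel):

```python
# The inner loop of most neural-network layers: multiply/accumulate (MAC).
# Accelerators win by running thousands of these lanes simultaneously.

def mac_dot(weights, activations):
    acc = 0.0
    for w, a in zip(weights, activations):
        acc += w * a          # one MAC: multiply, then accumulate
    return acc

# A layer is many independent dot products, which is why more MAC units
# (and bigger, even multi-reticle, devices) translate directly into throughput.
print(mac_dot([0.5, -1.0, 2.0], [1.0, 2.0, 3.0]))  # 4.5
```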

Defining And Improving AI Performance


Many companies are developing AI chips, both for training and for inference. Although getting the required functionality is important, many solutions will be judged by their performance characteristics. Performance can be measured in different ways, such as the number of inferences per second or per watt. These figures depend on many factors, not just the hardware architecture. The optim... » read more
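A worked example of the two metrics the teaser names, throughput and efficiency, showing how they can rank the same chips differently (the chip names and datapoints below are hypothetical):

```python
# Two common AI performance figures: throughput (inferences/s) and
# efficiency (inferences/s per watt). A faster chip can still lose on efficiency.

chips = {
    "chip_a": {"inf_per_s": 2000, "watts": 50},   # hypothetical datapoints
    "chip_b": {"inf_per_s": 1200, "watts": 15},
}
for name, c in chips.items():
    print(f"{name}: {c['inf_per_s']} inf/s, {c['inf_per_s'] / c['watts']:.0f} inf/s/W")
# chip_a: 2000 inf/s, 40 inf/s/W
# chip_b: 1200 inf/s, 80 inf/s/W
```

Which metric matters depends on the deployment: a data center may buy throughput, while a battery-powered device buys efficiency.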

Die-To-Die Connectivity


Manmeet Walia, senior product marketing manager at Synopsys, talks with Semiconductor Engineering about how die-to-die communication is changing as Moore’s Law slows down; new use cases such as high-performance computing, AI SoCs, and optical modules; and where the tradeoffs are for different applications. » read more
