Big Changes In AI Design


Semiconductor Engineering sat down to discuss AI and its move to the edge with Steven Woo, vice president of enterprise solutions technology and distinguished inventor at Rambus; Kris Ardis, executive director at Maxim Integrated; Steve Roddy, vice president of products for the Machine Learning Group at Arm; and Vinay Mehta, inference technical marketing manager at Flex Logix. What follows are excerpts of that ... » read more

Are Better Machine Training Approaches Ahead?


We live in a time of unparalleled use of machine learning (ML), but nearly all of it relies on a single approach to training the models implemented in artificial neural networks (ANNs), so named because they are not neuromorphic. Other training approaches, some of which are more biomimetic than others, are being developed. The big question remains whether any of them will become commercially viab... » read more

AI Inference Memory System Tradeoffs


When companies describe their AI inference chips, they typically cite TOPS but don't talk about their memory system, which is equally important. What is TOPS? It stands for tera (trillions of) operations per second. It is primarily a measure of the maximum achievable throughput, not a measure of actual throughput. Most operations are MACs (multiply/accumulates), so TOPS = (number of MAC units) x... » read more
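The formula in the excerpt is cut off, but the common industry definition it appears to follow multiplies the MAC count by the clock rate and by two operations per MAC (one multiply plus one accumulate). A minimal sketch of that arithmetic, with illustrative hardware numbers that are not from the article:

```python
def peak_tops(mac_units: int, clock_hz: float, ops_per_mac: int = 2) -> float:
    """Peak (not actual) throughput in tera-operations per second.

    Assumes the common definition: each MAC counts as two operations,
    and every MAC unit fires on every clock cycle. Real workloads
    achieve only a fraction of this, which is the article's point.
    """
    return mac_units * clock_hz * ops_per_mac / 1e12

# Hypothetical accelerator: 4,096 MAC units at a 1 GHz clock.
print(peak_tops(4096, 1e9))  # 8.192 peak TOPS
```

Note that this is a ceiling: actual throughput depends on utilization, which in turn depends heavily on the memory system's ability to keep the MAC array fed.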

Inferencing In Hardware


Cheng Wang, senior vice president of engineering at Flex Logix, examines shifting neural network models, how many multiply-accumulates are needed for different applications, and why programmable neural inferencing will be required for years to come. https://youtu.be/jb7qYU2nhoo » read more

Energy-Efficient AI


Carlos Maciàn, senior director of innovation for eSilicon EMEA, talks about how to improve the efficiency of AI operations by focusing on the individual operations, including data transport, computation and memory. https://youtu.be/A3p_w7ENefs » read more