Developers Turn To Analog For Neural Nets


Machine-learning (ML) solutions are proliferating across a wide variety of industries, but the overwhelming majority of commercial implementations still rely on digital logic. With the exception of in-memory computing, analog solutions have mostly been restricted to universities and attempts at neuromorphic computing. However, that’s starting to change. “Everyon... » read more

Power/Performance Bits: May 4


Speculative execution vulnerable again Computer scientists from the University of Virginia and the University of California, San Diego warn of a processor architecture vulnerability that gets around the techniques used to secure processors in the wake of Spectre. In 2018, Spectre and the similar Meltdown vulnerability were disclosed. These types of attacks could allow malicious agents to exploit... » read more

Applications, Challenges For Using AI In Fabs


Experts at the Table: Semiconductor Engineering sat down to discuss chip scaling, transistors, new architectures, and packaging with Jerry Chen, head of global business development for manufacturing & industrials at Nvidia; David Fried, vice president of computational products at Lam Research; Mark Shirey, vice president of marketing and applications at KLA; and Aki Fujimura, CEO of D2S. Wh... » read more

How Do Machines Learn?


We depend, or hope to depend, on machines, especially computers, to do many things, from organizing our photos to parking our cars. Machines are becoming less and less "mechanical" and more and more "intelligent." Machine learning has become a familiar phrase to many people in advanced manufacturing. The next natural question people may ask is: How do machines learn? Recognizing diverse obje... » read more
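To make "How do machines learn?" concrete, here is a minimal sketch of the loop at the heart of most machine learning: a model parameter is repeatedly nudged against the gradient of its error on training data. The data, model, and learning rate below are illustrative assumptions, not taken from the article.

```python
# Minimal sketch: "learning" as gradient descent on a one-parameter
# linear model. All data and hyperparameters here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 3.0 * x + rng.normal(0, 0.1, size=100)   # ground-truth slope is 3.0

w = 0.0        # initial guess for the slope
lr = 0.1       # learning rate
for step in range(200):
    pred = w * x
    grad = 2 * np.mean((pred - y) * x)       # d/dw of mean squared error
    w -= lr * grad                           # move against the gradient

print(f"learned slope: {w:.3f}")             # converges toward ~3.0
```

The machine is never told the rule "slope = 3"; it recovers it from examples, which is the essence of learning from data.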

Maximizing Edge AI Performance


Inference of convolutional neural network models is algorithmically straightforward, but getting the fastest performance for your application requires keeping a few pitfalls in mind when deploying. A number of factors make efficient inference difficult; we will first step through them before diving into specific solutions to each. By the end of this article, you will be arm... » read more
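One concrete example of such a pitfall: throughput quoted at large batch sizes can hide the batch-1 latency an edge application actually experiences. The toy model and timing harness below are assumptions for illustration, not code from the article.

```python
# Hedged sketch: per-image time often looks far better at batch 32 than
# at batch 1, but edge inference usually processes one frame at a time.
import time
import numpy as np

W = np.random.rand(3 * 224 * 224, 10).astype(np.float32)

def toy_model(batch):
    # stand-in for a real CNN: one dense layer over flattened inputs
    return np.maximum(batch.reshape(len(batch), -1) @ W, 0)

def seconds_per_image(batch_size, iters=20):
    x = np.random.rand(batch_size, 3, 224, 224).astype(np.float32)
    start = time.perf_counter()
    for _ in range(iters):
        toy_model(x)
    return (time.perf_counter() - start) / (iters * batch_size)

print("per-image time, batch 1 :", seconds_per_image(1))
print("per-image time, batch 32:", seconds_per_image(32))
```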

Power/Performance Bits: March 16


Adaptable neural nets Neural networks go through two phases: training, when weights are set based on a dataset, and inference, when new information is assessed based on those weights. But researchers at MIT, Institute of Science and Technology Austria, and Vienna University of Technology propose a new type of neural network that can learn during inference and adjust its underlying equations to... » read more
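One way to picture a network that "adjusts its underlying equations" is a continuous-time neuron whose time constant depends on its current input, so the dynamics keep changing as data streams in. The sketch below is only illustrative; the equations and parameters are simplified assumptions, not the researchers' actual formulation.

```python
# Illustrative sketch: a hidden state that follows an ODE whose time
# constant is modulated by the input it is currently seeing.
import numpy as np

def adaptive_step(h, x, W_h, W_x, tau=1.0, dt=0.1):
    """One Euler step of dh/dt = -h / tau(x) + f(W_h h + W_x x)."""
    f = np.tanh(W_h @ h + W_x @ x)
    tau_x = tau / (1.0 + np.abs(f))    # input-dependent time constant
    return h + dt * (-h / tau_x + f)

rng = np.random.default_rng(1)
h = np.zeros(4)
W_h, W_x = rng.normal(size=(4, 4)), rng.normal(size=(4, 2))
for _ in range(50):                    # streaming inputs at inference time
    h = adaptive_step(h, rng.normal(size=2), W_h, W_x)
print(h)
```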

The Best AI Edge Inference Benchmark


When evaluating the performance of an AI accelerator, there’s a range of methodologies available to you. In this article, we’ll discuss some of the different ways to structure your benchmark research before moving forward with an evaluation that directly runs your own model. Just like when buying a car, research will only get you so far before you need to get behind the wheel and give your ... » read more

Taming Non-Predictable Systems


How predictable are semiconductor systems? The industry aims to create predictable systems, and yet when a carrot is dangled, offering the possibility of faster, cheaper, or some other gain, decision makers invariably decide that some degree of uncertainty is warranted. Understanding uncertainty is at least the first step toward making informed decisions, but new tooling is required to assess the im... » read more

Power/Performance Bits: Jan. 26


Neural networks on MCUs Researchers at MIT are working to bring neural networks to Internet of Things devices. The team's MCUNet is a system that designs compact neural networks for deep learning on microcontrollers with limited memory and processing power. MCUNet is made up of two components. One is TinyEngine, an inference engine that directs resource management. TinyEngine is optimized t... » read more
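A key constraint behind such systems is memory: a microcontroller's SRAM cannot hold float32 weights for even a modest layer. The sketch below uses a deliberately simplified single-scale int8 quantization (an assumption, not MCUNet's actual scheme) to show the 4x weight-memory saving that engines of this kind build on.

```python
# Simplified sketch: int8 weights take a quarter of the memory of float32,
# at the cost of a small quantization error. One shared scale is assumed.
import numpy as np

w_fp32 = np.random.randn(128, 128).astype(np.float32)
scale = np.abs(w_fp32).max() / 127.0
w_int8 = np.round(w_fp32 / scale).astype(np.int8)        # quantize weights

x = np.random.randn(128).astype(np.float32)
x_int = np.round(x / scale).astype(np.int32)             # quantize input
y_ref = w_fp32 @ x
y_q = (w_int8.astype(np.int32) @ x_int) * scale * scale  # dequantize result

print("float32 weights:", w_fp32.nbytes, "bytes")        # 65536
print("int8 weights:   ", w_int8.nbytes, "bytes")        # 16384
print("max abs error:  ", np.abs(y_ref - y_q).max())
```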

Improving The Performance Of Deep Neural Networks


Source: North Carolina State University. Authors: Xilai Li, Wei Sun, and Tianfu Wu Abstract: "In state-of-the-art deep neural networks, both feature normalization and feature attention have become ubiquitous. They are usually studied as separate modules, however. In this paper, we propose a light-weight integration between the two schema and present Attentive Normalization (AN). Instead of l... » read more
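A hedged PyTorch sketch of the idea described in the abstract: normalize without the usual single affine step, then combine K learned affine transforms using instance-specific attention weights. The value of K and the pooling-plus-linear attention head are assumptions for illustration, not the authors' exact implementation.

```python
# Illustrative Attentive Normalization-style module: a learned mixture of
# K affine transforms replaces BatchNorm's single affine transformation.
import torch
import torch.nn as nn

class AttentiveNorm2d(nn.Module):
    def __init__(self, channels, k=5):
        super().__init__()
        self.norm = nn.BatchNorm2d(channels, affine=False)  # normalize only
        self.gammas = nn.Parameter(torch.ones(k, channels)) # K scale vectors
        self.betas = nn.Parameter(torch.zeros(k, channels)) # K shift vectors
        self.attn = nn.Linear(channels, k)                  # attention head

    def forward(self, x):
        z = self.norm(x)
        # instance-specific mixture weights from globally pooled features
        w = torch.softmax(self.attn(x.mean(dim=(2, 3))), dim=1)  # (N, K)
        gamma = w @ self.gammas                                  # (N, C)
        beta = w @ self.betas                                    # (N, C)
        return z * gamma[:, :, None, None] + beta[:, :, None, None]

y = AttentiveNorm2d(16)(torch.randn(2, 16, 8, 8))
print(y.shape)  # torch.Size([2, 16, 8, 8])
```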
