
Taming Non-Predictable Systems


How predictable are semiconductor systems? The industry aims to create predictable systems, and yet when a carrot is dangled, offering the possibility of a faster, cheaper, or otherwise better outcome, decision makers invariably decide that some degree of uncertainty is warranted. Understanding uncertainty is at least the first step to making informed decisions, but new tooling is required to assess the im... » read more

Xilinx AI Engines And Their Applications


This white paper explores the architecture, applications, and benefits of using Xilinx's new AI Engine for compute-intensive applications like 5G cellular and machine learning DNN/CNN. 5G requires five to 10 times higher compute density compared with prior generations; AI Engines have been optimized for DSP, meeting both the throughput and compute requirements to deliver the hig... » read more

Manufacturing Bits: April 23


Sorting nuclei
CERN and GSI Darmstadt have begun testing the first of two giant magnets that will serve as part of one of the largest and most complex accelerator facilities in the world. CERN, the European Organization for Nuclear Research, recently obtained two magnets from GSI. The two magnets weigh a total of 27 tons. About 60 more magnets will follow over the next five years. These ... » read more

Deep Learning Hardware: FPGA vs. GPU


FPGAs or GPUs, that is the question. Since machine learning algorithms became popular for extracting and processing information from raw data, FPGA and GPU vendors have been racing to offer a hardware platform that runs computationally intensive machine learning algorithms quickly and efficiently. As Deep Learning has driven most of the advanced machine learning applications, it is r... » read more

Machine Learning Shifts More Work to FPGAs, SoCs


A wave of machine-learning-optimized chips is expected to begin shipping in the next few months, but it will take time before data centers decide whether these new accelerators are worth adopting and whether they actually live up to claims of big gains in performance. There are numerous reports that silicon custom-designed for machine learning will deliver 100X the performance of current opt... » read more

High-Performance Memory At Low Cost Per Bit


Hardware developers of deep learning neural networks (DNN) have a universal complaint – they need more and more memory capacity with high performance, low cost and low power. As artificial intelligence (AI) techniques gain wider adoption, their complexity and training requirements also increase. Large and complex DNN models do not fit on the small on-chip SRAM caches near the processor. This ... » read more

Deep Learning Neural Networks Drive Demands On Memory Bandwidth


A deep neural network (DNN) is a system modeled on our current understanding of biological neural networks in the brain. DNNs are finding use in many applications, advancing at a fast pace, pushing the limits of existing silicon, and impacting the design of new computing architectures. Figure 1 shows a very basic form of neural network that has several nodes in each layer that ... » read more
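The layered node structure the teaser describes can be illustrated with a minimal sketch of one fully connected layer. This is not code from the article; the weights, biases, and ReLU activation below are assumptions chosen purely for illustration.

```python
def dense_layer(inputs, weights, biases):
    """One fully connected layer: each node computes a weighted sum
    of all inputs plus a bias, then applies a ReLU activation."""
    outputs = []
    for w_row, b in zip(weights, biases):
        z = sum(x * w for x, w in zip(inputs, w_row)) + b
        outputs.append(max(0.0, z))  # ReLU: negative sums become 0
    return outputs

# Hypothetical 2-node layer over 3 inputs (illustrative values only)
x = [1.0, 0.5, -0.5]
w = [[0.2, 0.4, 0.1], [0.5, -0.3, 0.8]]
b = [0.1, 0.0]
out = dense_layer(x, w, b)
```

Stacking many such layers, so that each layer's outputs feed the next layer's inputs, is what makes the network "deep"; every extra layer multiplies the weight traffic that has to move between memory and compute, which is the bandwidth pressure the article goes on to discuss.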

Power/Performance Bits: March 27


Equalizing batteries
Engineers at the University of Toledo propose a bilevel equalizer technology to improve the life span of batteries by combining the high performance of an active equalizer with the low cost of a passive equalizer. "Whenever we are talking about batteries, we are talking about cells connected in a series. Over time, the battery is not balanced and limited by the weakest ... » read more

Bridging Machine Learning’s Divide


There is a growing divide between those researching machine learning (ML) in the cloud and those trying to perform inferencing using limited resources and power budgets. Researchers are using the most cost-effective hardware available to them, which happens to be GPUs filled with floating-point arithmetic units. But this is an untenable solution for embedded infere... » read more
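One common way to bridge the floating-point-versus-embedded gap mentioned above is to quantize trained weights to small integers. The sketch below shows symmetric linear int8 quantization in its simplest form; it is an illustrative assumption, not a method taken from the article, and the example weight values are invented.

```python
def quantize_int8(values):
    """Symmetric linear quantization: map floats to int8 codes.
    Returns the codes and the scale needed to reconstruct them."""
    max_abs = max(abs(v) for v in values) or 1.0  # avoid divide-by-zero
    scale = max_abs / 127.0                       # one float per tensor
    q = [max(-128, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats from int8 codes."""
    return [v * scale for v in q]

# Hypothetical trained weights (illustrative values only)
weights = [0.8, -0.35, 0.02, -1.27]
q, s = quantize_int8(weights)
approx = dequantize(q, s)
```

The int8 codes need a quarter of the storage and bandwidth of float32 weights, which is why embedded inference hardware favors integer arithmetic; the cost is the small reconstruction error visible in `approx`.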