The Week In Review: Design


M&A: GlobalFoundries formed Avera Semiconductor, a wholly owned subsidiary focused on custom ASIC designs. While Avera will use its relationship with GF for 14/12nm and more mature technologies, it has a foundry partnership lined up for 7nm. The new company's IP portfolio includes high-speed SerDes, high-performance embedded TCAMs, ARM cores, and performance- and density-optimized embedded SR... » read more

Real-Time Object Recognition At Low Cost/Power/Latency


Most vendors of neural network chips and IP quote ResNet-50 benchmarks (image classification at 224x224 pixels). But we find that the neural network of greatest interest to most customers is real-time object recognition, such as YOLOv3. Direct comparisons aren't possible here because no one publishes a YOLOv3 inferencing benchmark. But it's very possible to improve on the inferencing per... » read more
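
The gap matters because detection networks do far more arithmetic per frame than classifiers. A rough sketch (in Python, using approximate per-inference FLOP counts from the public literature and a hypothetical accelerator rating, not any vendor's benchmark):

```python
# Back-of-the-envelope comparison of per-frame compute for image
# classification (ResNet-50) vs. real-time object detection (YOLOv3).
# FLOP counts below are approximate figures from the public literature;
# the accelerator ratings and utilization factor are hypothetical.

RESNET50_GFLOPS = 3.9   # ~per inference at 224x224 (approximate)
YOLOV3_GFLOPS = 65.9    # ~per inference at 416x416 (approximate)

def max_fps(model_gflops: float, accel_tops: float,
            utilization: float = 0.5) -> float:
    """Upper-bound frames/sec on an accelerator with the given peak TOPS.

    Real designs rarely sustain peak throughput, so a utilization
    factor is applied; 0.5 is an arbitrary illustrative value.
    """
    sustained_gops = accel_tops * 1000 * utilization
    return sustained_gops / model_gflops

for tops in (4, 16):
    print(f"{tops:>2} TOPS peak: "
          f"ResNet-50 ~{max_fps(RESNET50_GFLOPS, tops):.0f} fps, "
          f"YOLOv3 ~{max_fps(YOLOV3_GFLOPS, tops):.0f} fps")
```

On these assumptions, an accelerator that looks comfortably real-time on ResNet-50 can land near the 30 fps floor on YOLOv3, which is why a ResNet-50 number alone says little about real-time object recognition.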

System Bits: Oct. 30


Ethics, regional differences for programming autonomous vehicles: MIT researchers have revealed some distinct global preferences concerning the ethics of autonomous vehicles, as well as some regional variations in those preferences, based on a recently completed survey. (Image caption: "Ethical questions involving autonomous vehicles are the foc...") » read more

Implementing Mathematical Algorithms In Hardware For Artificial Intelligence


Petabytes of data travel efficiently between edge devices and data centers for the processing and computing of AI functions. Accurate, optimized hardware implementations of mathematical functions offload many operations that the processing unit would otherwise have to execute. As the mathematical algorithms used in AI-based systems evolve, and in some cases stabilize, the demand to implement them in hardware increase... » read more
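
As one illustration of what implementing these functions in hardware tends to involve, here is a minimal Python model of a piecewise-linear sigmoid: an approximation built only from compares, multiplies, and adds, with no exponential. The breakpoints follow one classic published scheme (PLAN); an actual design would tune its own segments and fixed-point widths.

```python
import math

def sigmoid_plan(x: float) -> float:
    """Piecewise-linear (PLAN-style) approximation of the sigmoid.

    Hardware-friendly: each segment is y = a*|x| + b with constants
    that reduce to shifts and adds in fixed point.
    """
    ax = abs(x)
    if ax >= 5.0:
        y = 1.0
    elif ax >= 2.375:
        y = 0.03125 * ax + 0.84375
    elif ax >= 1.0:
        y = 0.125 * ax + 0.625
    else:
        y = 0.25 * ax + 0.5
    # Exploit the symmetry sigmoid(-x) = 1 - sigmoid(x).
    return y if x >= 0 else 1.0 - y

# Compare against the exact sigmoid at a few points.
for x in (-6.0, -2.0, -0.5, 0.0, 0.5, 2.0, 6.0):
    exact = 1.0 / (1.0 + math.exp(-x))
    print(f"x={x:+.1f}  approx={sigmoid_plan(x):.4f}  exact={exact:.4f}")
```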

Machine Learning Invades IC Production


Semiconductor Engineering sat down to discuss artificial intelligence (AI), machine learning, and chip and photomask manufacturing technologies with Aki Fujimura, chief executive of D2S; Jerry Chen, business and ecosystem development manager at Nvidia; Noriaki Nakayamada, senior technologist at NuFlare; and Mikael Wahlsten, director and product area manager at Mycronic. What follows are excerpt... » read more

Processors Are Exciting Again


This is a very exciting time in the world of processor architectures. Domain-specific architectures have emerged as the best answer to the challenges of low power and high performance across many applications. Advances in artificial intelligence are leading to exciting new experiences and products, both today and in the future. There have been more advances in deep lear... » read more

Blog Review: Oct. 10


In a video, Cadence's Megha Daga dives into sparsity in neural networks and how it affects bandwidth, performance, and power efficiency. In a video, Mentor's Colin Walls takes a look at efficient embedded code, and why that means different things at different times. Synopsys' Eric Huang argues that in the realm of video standards, HDMI, DisplayPort, and USB Type-C are set to continue comp... » read more

Using ASICs For AI Inferencing


Flex Logix’s Cheng Wang looks at why ASICs are the best way to improve performance and optimize power and area for inferencing, and how to add flexibility into those designs to deal with constantly changing algorithms and data sets. https://youtu.be/XMHr7sz9JWQ » read more

AI Chips Must Get The Floating-Point Math Right


Most AI chips and hardware accelerators that power machine learning (ML) and deep learning (DL) applications include floating-point units (FPUs). The algorithms used in today's neural networks are typically built on multiplication and addition of floating-point values, which then need to be scaled to different sizes for different needs. Modern FPGAs such as Intel's Arria 10 ... » read more
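
To see why accumulation width matters in those FPUs, here is a minimal NumPy sketch, purely illustrative: the float16/float32 pairing stands in for whatever narrow/wide formats a given design supports. It compares a dot product accumulated in the narrow input format against one accumulated in a wider format.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(4096).astype(np.float16)  # weights
x = rng.standard_normal(4096).astype(np.float16)  # activations

# Naive: multiply and accumulate entirely in float16.
acc16 = np.float16(0.0)
for wi, xi in zip(w, x):
    acc16 = np.float16(acc16 + wi * xi)

# Widened: float16 products accumulated in float32, as the MAC arrays
# in many accelerators do.
acc32 = np.float32(0.0)
for wi, xi in zip(w, x):
    acc32 += np.float32(wi) * np.float32(xi)

ref = float(np.dot(w.astype(np.float64), x.astype(np.float64)))
print(f"reference (fp64): {ref:.6f}")
print(f"fp16 accumulate : {float(acc16):.6f}  (error {abs(float(acc16) - ref):.2e})")
print(f"fp32 accumulate : {float(acc32):.6f}  (error {abs(float(acc32) - ref):.2e})")
```

The widened accumulator tracks the fp64 reference far more closely, which is the scaling-to-different-sizes trade the article alludes to: narrow formats for storage and multiplier area, wider ones where rounding error compounds.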

Week In Review: Design, Low Power


Tools & IP: Cadence unveiled its deep neural-network accelerator (DNA) AI processor IP, Tensilica DNA 100, targeted at on-device neural network inference applications. The processor scales from 0.5 TMAC (tera multiply-accumulate operations) to 12 TMACs, or to hundreds of TMACs with multiple processors stacked, and the company claims it delivers up to 4.7X better performance and up to 2.3X more performance p... » read more
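
For readers converting between the units in that announcement: a MAC is commonly counted as two operations (one multiply, one add), so a TMAC/s rating roughly doubles when restated as TOPS. A tiny sketch, assuming that two-ops-per-MAC convention (vendors vary):

```python
# Convert MAC ratings quoted for NN accelerator IP into TOPS.
# Assumption: 1 multiply-accumulate = 2 ops (conventions vary by vendor).
OPS_PER_MAC = 2

for tmacs in (0.5, 12, 100):
    print(f"{tmacs:>5} TMAC/s  ~=  {tmacs * OPS_PER_MAC:g} TOPS")
```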
