PCM-based analog accelerators are a sensible choice for deep learning workloads, even for large natural language processing models like BERT.
Abstract:
“Recent advances in deep learning have been driven by ever-increasing model sizes, with networks growing to millions or even billions of parameters. Such enormous models call for fast and energy-efficient hardware accelerators. We study the potential of Analog AI accelerators based on Non-Volatile Memory, in particular Phase Change Memory (PCM), for software-equivalent accurate inference of natural language processing applications. We demonstrate a path to software-equivalent accuracy for the GLUE benchmark on BERT (Bidirectional Encoder Representations from Transformers), by combining noise-aware training to combat inherent PCM drift and noise sources, together with reduced-precision digital attention-block computation down to INT6.”
View this technical paper here. Published 07/2021.
Spoon K, Tsai H, Chen A, Rasch MJ, Ambrogio S, Mackin C, Fasoli A, Friz AM, Narayanan P, Stanisavljevic M and Burr GW (2021) Toward Software-Equivalent Accuracy on Transformer-Based Deep Neural Networks With Analog Memory Devices. Front. Comput. Neurosci. 15:675741.
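The abstract's two main ideas — noise-aware training against PCM weight noise, and reduced-precision (INT6) digital computation in the attention blocks — can be illustrated with a minimal sketch. The functions below are hypothetical illustrations, not the paper's implementation: `noisy_linear` injects multiplicative Gaussian noise into the weights during the forward pass (a common stand-in for PCM conductance variation), and `quantize_int6` applies symmetric fake-quantization to 6-bit signed levels.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_linear(x, w, noise_std=0.05):
    """Linear layer forward pass with multiplicative weight noise,
    mimicking PCM conductance variation during noise-aware training.
    (Illustrative only; the paper's noise model is more detailed.)"""
    w_noisy = w * (1.0 + noise_std * rng.standard_normal(w.shape))
    return x @ w_noisy

def quantize_int6(x):
    """Symmetric fake-quantization to INT6 levels in [-31, 31],
    a stand-in for reduced-precision attention-block computation."""
    max_abs = np.max(np.abs(x))
    scale = max_abs / 31.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -31, 31)
    return q * scale

# Toy example: one noisy layer followed by INT6 fake-quantization.
x = rng.standard_normal((4, 8))
w = rng.standard_normal((8, 8))
y = quantize_int6(noisy_linear(x, w))
```

With `noise_std=0.0` the noisy layer reduces to an ordinary matrix product, so the noise injection can be disabled cleanly at inference time; in noise-aware training the model learns weights that remain accurate despite these perturbations.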