Technical paper titled “CFU Playground: Full-Stack Open-Source Framework for Tiny Machine Learning (tinyML) Acceleration on FPGAs,” from researchers at Google, Purdue University, and Harvard University.
Abstract
“We present CFU Playground, a full-stack open-source framework that enables rapid and iterative design of machine learning (ML) accelerators for embedded ML systems. Our toolchain tightly integrates open-source software, RTL generators, and FPGA tools for synthesis, place, and route. This full-stack development framework gives engineers access to explore bespoke architectures that are customized and co-optimized for embedded ML. The rapid, deploy-profile-optimization feedback loop lets ML hardware and software developers achieve significant returns out of a relatively small investment in customization. Using CFU Playground’s design loop, we show substantial speedups (55x-75x) and design space exploration between the CPU and accelerator.”
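CFU Playground builds accelerators as Custom Function Units (CFUs) invoked via custom CPU instructions. As a rough illustration of the kind of operation such a unit might implement, the sketch below is a plain-C software reference model of a hypothetical 4-lane 8-bit multiply-accumulate instruction, a common tinyML inner-loop primitive; the function name, lane count, and packing are illustrative assumptions, not taken from the paper.

```c
#include <stdint.h>

/* Hypothetical software reference model of a 4-lane int8
 * multiply-accumulate CFU operation. On the FPGA SoC this
 * would be issued as a single custom RISC-V instruction;
 * here its semantics are modeled in C for clarity.
 * rs1 and rs2 each pack four signed 8-bit values.
 */
static uint32_t cfu_simd_mac(uint32_t acc, uint32_t rs1, uint32_t rs2) {
    for (int lane = 0; lane < 4; lane++) {
        int8_t a = (int8_t)((rs1 >> (8 * lane)) & 0xFF);
        int8_t b = (int8_t)((rs2 >> (8 * lane)) & 0xFF);
        acc += (uint32_t)((int32_t)a * (int32_t)b);
    }
    return acc;
}
```

Replacing a four-iteration byte loop with one such instruction is the sort of small, targeted customization whose cost and benefit the deploy-profile-optimize loop is designed to measure.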
Find the technical paper here. Published Jan. 2022.
arXiv:2201.01863v1. Authors: Shvetank Prakash, Tim Callahan, Joseph Bushagour, Colby Banbury, Alan V. Green, Pete Warden, Tim Ansell, and Vijay Janapa Reddi.
Visit Semiconductor Engineering’s Technical Paper library here and discover many more chip industry academic papers.