Compiling And Optimizing Neural Nets


Edge inference engines often run a slimmed-down real-time engine that interprets a neural-network model, invoking kernels as it goes. But higher performance can be achieved by pre-compiling the model and running it directly, with no interpretation — as long as the use case permits it. At compile time, optimizations are possible that wouldn’t be available if interpreting. By quantizing au... » read more
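The quantization the excerpt alludes to can be illustrated independently of any particular compiler. The sketch below assumes simple symmetric per-tensor int8 quantization (not the scheme of any specific vendor toolchain) and shows how float32 weights can be reduced ahead of time to 8-bit integers plus a scale factor, so the deployed binary only needs integer kernels:

    import numpy as np

    def quantize_int8(weights):
        # Choose a scale so the largest-magnitude weight maps to +/-127.
        scale = np.abs(weights).max() / 127.0
        q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q, scale):
        # Recover an approximation of the original float weights.
        return q.astype(np.float32) * scale

    w = np.random.randn(64, 64).astype(np.float32)
    q, s = quantize_int8(w)
    print("worst-case rounding error:", np.abs(w - dequantize(q, s)).max())

The point of doing this at compile time is that the rounding decisions are made once, offline, and the runtime never touches floating-point weights.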

The Evolution Of High-Level Synthesis


High-level synthesis is getting yet another chance to shine, this time from new markets and new technology nodes. But it's still unclear how fully this technology will be used. Despite gains, it remains unlikely to replace the incumbent RTL design methodology for most of the chip, as originally expected. Seen as the foundational technology for the next generation of EDA companies around the ... » read more

Week In Review: Design, Low Power


Tools & IP Arm unveiled several new processor IPs. Targeting next-gen smartphones, the Cortex-A78 CPU provides a 20% increase in sustained performance over Cortex-A77-based devices within a 1-watt power budget, and more efficient management of compute workloads and on-device ML. The Mali-G78 GPU provides a 25% increase in performance over the Mali-G77. It supports up to 24 cores and in... » read more

What Machine Learning Can Do In Fabs


Semiconductor Engineering sat down to discuss the issues and challenges with machine learning in semiconductor manufacturing with Kurt Ronse, director of the advanced lithography program at Imec; Yudong Hao, senior director of marketing at Onto Innovation; Romain Roux, data scientist at Mycronic; and Aki Fujimura, chief executive of D2S. What follows are excerpts of that conversation. L-R:... » read more

Improving Algorithms With High-Level Synthesis


Most computer algorithms today are developed in high-level languages on general-purpose computers. But someday they may be deployed in embedded systems where the development, verification, and validation of algorithms are done in languages like Python, Java, C++, or even numerical frameworks like MATLAB. This is the goal of high-level synthesis (HLS), and it aims to solve a fundamental proble... » read more

Checkmate: Breaking The Memory Wall With Optimal Tensor Rematerialization


Source: Published on arXiv 10/7/2019. Paras Jain, Ajay Jain, Aniruddha Nrusimha, Amir Gholami, Pieter Abbeel, Kurt Keutzer, Ion Stoica, Joseph E. Gonzalez. A recent paper published on arXiv by a team of UC Berkeley researchers notes that neural networks are increasingly impeded by the limited capacity of on-device GPU memory. The UC Berkeley team uses off-the-shel... » read more
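The trade-off the paper optimizes, recomputing activations instead of keeping them resident in GPU memory, can be tried with a stock framework. The sketch below uses PyTorch's generic gradient-checkpointing utility as a stand-in: intermediate activations inside the block are discarded after the forward pass and recomputed during backward. Unlike Checkmate, it does not solve for an optimal rematerialization schedule; the layer sizes are arbitrary placeholders.

    import torch
    import torch.nn as nn
    from torch.utils.checkpoint import checkpoint

    # A stack of layers whose intermediate activations would normally be
    # held in memory until the backward pass.
    block = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 1024))

    x = torch.randn(32, 1024, requires_grad=True)
    # Forward pass discards the block's activations; backward recomputes
    # them on demand, trading extra compute for lower peak memory.
    y = checkpoint(block, x)
    y.sum().backward()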

Machine Learning Drives High-Level Synthesis Boom


High-level synthesis (HLS) is experiencing a new wave of popularity, driven by its ability to handle machine-learning matrices and iterative design efforts. The obvious advantage of HLS is the boost in productivity designers get from working in C, C++ and other high-level languages rather than RTL. The ability to design a layout that should work, and then easily modify it to test other confi... » read more

From AI Algorithm To Implementation


Semiconductor Engineering sat down to discuss the role that EDA has in automating artificial intelligence and machine learning with Doug Letcher, president and CEO of Metrics; Daniel Hansson, CEO of Verifyter; Harry Foster, chief scientist verification for Mentor, a Siemens Business; Larry Melling, product management director for Cadence; Manish Pandey, Synopsys fellow; and Raik Brinkmann, CEO ... » read more

The Automation Of AI


Semiconductor Engineering sat down to discuss the role that EDA has in automating artificial intelligence and machine learning with Doug Letcher, president and CEO of Metrics; Daniel Hansson, CEO of Verifyter; Harry Foster, chief scientist verification for Mentor, a Siemens Business; Larry Melling, product management director for Cadence; Manish Pandey, Synopsys fellow; and Raik Brinkmann, CEO ... » read more

Edge Inferencing Challenges


Geoff Tate, CEO of Flex Logix, talks about balancing different variables to improve performance and reduce power at the lowest cost possible in order to do inferencing in edge devices. https://youtu.be/1BTxwew--5U » read more
