The Evolution Of High-Level Synthesis


High-level synthesis is getting yet another chance to shine, this time from new markets and new technology nodes. But it's still unclear how fully this technology will be used. Despite gains, it remains unlikely to replace the incumbent RTL design methodology for most of the chip, as originally expected. Seen as the foundational technology for the next generation of EDA companies around the ... » read more

Designing For Extreme Low Power


There are several techniques available for low power design, but whenever a nanowatt or picojoule matters, all available methods must be used. Some of the necessary techniques are different from those used for high-end designs. Others have been lost over time because their impact was considered too small, or not worth the additional design effort. But for devices that last a lifetime on a si... » read more

ML Opening New Doors For FPGAs


FPGAs have long been used in the early stages of any new digital technology, given their utility for prototyping and rapid evolution. But with machine learning, FPGAs are showing benefits beyond those of more conventional solutions. This opens up a hot new market for FPGAs, which traditionally have been hard to sustain in high-volume production due to pricing, and hard to use for battery-dri... » read more

New Ways To Optimize Machine Learning


As more designers employ machine learning (ML) in their systems, they’re moving from simply getting the application to work to optimizing the power and performance of their implementations. Some techniques are available today. Others will take time to percolate through the design flow and tools before they become readily available to mainstream designers. Any new technology follows a basic... » read more

Speeding Up Verification Using SystemC


Brett Cline, senior vice president at OneSpin Solutions, explains how adding formal verification into the high-level synthesis flow can reduce the time spent in optimization and debug by about two-thirds, and why this needs to be done well ahead of RTL, starting with problems such as initialization and out-of-bounds memory accesses that are difficult to find in simulation. » read more

Machine Learning At The Edge


Moving machine learning to the edge imposes critical requirements on power and performance. Using off-the-shelf solutions is not practical: CPUs are too slow, GPUs/TPUs are expensive and consume too much power, and even generic machine learning accelerators can be overbuilt and not optimal for power. In this paper, learn about creating new power/memory-efficient hardware architectures to meet n... » read more

Divided On System Partitioning


Building an optimal implementation of a system using a functional description has been an industry goal for a long time, but it has proven to be much more difficult than it sounds. The general idea is to take software designed to run on a processor and to improve performance using various types of alternative hardware. That performance can be specified in various ways and for specific applic... » read more

Improving Algorithms With High-Level Synthesis


Most computer algorithms today are developed in high-level languages on general-purpose computers. But someday they may be deployed in embedded systems, with the development, verification, and validation of algorithms done in languages like Python, Java, and C++, or even numerical frameworks like MATLAB. This is the goal of high-level synthesis (HLS), and it aims to solve a fundamental proble... » read more

Optimizing Power And Performance For Machine Learning At The Edge


While machine learning (ML) algorithms commonly run on enterprise cloud systems to train neural networks, AI/ML chipsets for edge devices are growing at a triple-digit rate, according to Tractica's “Deep Learning Chipsets” report (Figure 1). Edge devices include automobiles, drones, and mobile devices, all of which employ AI/ML to provide valuable functionality. Figure 1: Marke... » read more

