Changes In Sensors And DSPs


Pulin Desai, group director for product marketing, management and business development at Cadence, talks about why processing is moving closer to the end point, how to save energy through reduced area and sensor fusion, and the impact of specialization, 3D capture and always-on circuits. » read more

Hyperconnectivity, Hyperscale Computing, And Moving Edges


As described in “The Four Pillars of Hyperscale Computing” last year, the four core components that development teams consider for data centers are computing, storage, memory, and networking. Over the past decade, requirements for programmability have fundamentally changed data centers. Just over a decade ago, in 2010, virtual machines would compute user workloads on CPU-centric archite... » read more

One-On-One: Lip-Bu Tan


Lip-Bu Tan, CEO of Cadence, sat down with Semiconductor Engineering to talk about the impact of massive increases in data across a variety of industries, the growing need for computational software, and the potential implications of U.S.-China relations. What follows are excerpts of that discussion.

SE: What do you see as the biggest change for the chip industry?

Tan: We're in our fifth g... » read more

Machine Learning At The Edge


Moving machine learning to the edge imposes critical requirements on power and performance. Using off-the-shelf solutions is not practical. CPUs are too slow, GPUs/TPUs are expensive and consume too much power, and even generic machine learning accelerators can be overbuilt and suboptimal for power. In this paper, learn about creating new power/memory-efficient hardware architectures to meet n... » read more

Developers Turn To Analog For Neural Nets


Machine-learning (ML) solutions are proliferating across a wide variety of industries, but the overwhelming majority of commercial implementations still rely on digital logic. With the exception of in-memory computing, analog approaches have mostly been restricted to universities and attempts at neuromorphic computing. However, that’s starting to change. “Everyon... » read more
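The foothold the excerpt gives analog, in-memory computing, can be made concrete with a toy model. The sketch below is my illustration, not something from the article: weights are stored as conductances, inputs arrive as voltages, and Kirchhoff's current law sums the per-cell currents on each column, so the column currents form a vector-matrix multiply. The names and values are hypothetical.

    import numpy as np

    def crossbar_vmm(voltages, conductances):
        """Ideal crossbar: column current I_c = sum over rows of V_r * G[r, c]."""
        return conductances.T @ voltages  # Ohm's law per cell, Kirchhoff's law per column

    V = np.array([0.1, 0.3, 0.2])        # input activations applied as voltages (V)
    G = np.random.rand(3, 4) * 1e-6      # hypothetical weights stored as conductances (S)
    I = crossbar_vmm(V, G)               # one output current per column: a 4-wide VMM

Real analog arrays add noise, nonlinearity, and drift on top of this ideal model, which is part of why digital logic still dominates commercially.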

More Data Drives Focus On IC Energy Efficiency


Computing workloads are becoming increasingly interdependent, raising the complexity level for chip architects as they work out exactly where that computing should be done and how to optimize it for shrinking energy margins. At a fundamental level, there is now more data to compute and more urgency in getting results. This situation has forced a rethinking of how much data should be moved, w... » read more
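To see why moving less data matters, a back-of-envelope Python sketch follows. The per-operation energy figures are illustrative assumptions, not numbers from the article; the point is only the ratio between computing on data and hauling it in from off-chip.

    # Illustrative assumptions (pJ); real values vary widely with process node
    # and memory hierarchy. These are not figures from the article.
    PJ_PER_MAC      = 1.0     # one 32-bit multiply-accumulate
    PJ_PER_SRAM_BIT = 0.1     # read one bit from on-chip SRAM
    PJ_PER_DRAM_BIT = 20.0    # read one bit from off-chip DRAM

    def energy_uj(n_macs, sram_bits, dram_bits):
        """Total energy (microjoules) for a workload under the assumptions above."""
        pj = n_macs * PJ_PER_MAC + sram_bits * PJ_PER_SRAM_BIT + dram_bits * PJ_PER_DRAM_BIT
        return pj / 1e6

    # Same 1M-MAC workload, two data placements:
    print(energy_uj(1_000_000, sram_bits=32_000_000, dram_bits=0))  # operands kept on-chip
    print(energy_uj(1_000_000, sram_bits=0, dram_bits=32_000_000))  # operands streamed from DRAM

Under these assumptions the off-chip version costs roughly two orders of magnitude more energy, which is why architects now weigh data placement as heavily as raw compute.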

New Uses For AI


AI is being embedded into an increasing number of technologies that are commonly found inside most chips, and initial results show dramatic improvements in both power and performance. Unlike high-profile AI implementations, such as self-driving cars or natural language processing, much of this work flies well under the radar for most people. It generally takes the path of least disruption, b... » read more

Customized Micro-Benchmarks For HW/SW Performance


Raw performance used to be the main focus of benchmarks, but such benchmarks may have outlived their usefulness for many applications. Dana McCarty, vice president of sales and marketing for AI Inference Products at Flex Logix, talks about why companies need to develop and utilize their own specific models to accurately gauge hardware and software performance, which can be slowed by bottlenecks in I/O and... » read more
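A minimal sketch of the kind of model-specific micro-benchmark being described, assuming hypothetical load_batch() and run_inference() callables wrapped around the pipeline and model under test. Timing I/O and compute separately is one simple way to surface the bottlenecks mentioned above.

    import statistics
    import time

    def benchmark(load_batch, run_inference, n_iters=100):
        """Median load and compute latencies (ms), timed separately per iteration."""
        load_ms, compute_ms = [], []
        for _ in range(n_iters):
            t0 = time.perf_counter()
            batch = load_batch()        # I/O: fetch and preprocess one batch
            t1 = time.perf_counter()
            run_inference(batch)        # compute: one forward pass of the model under test
            t2 = time.perf_counter()
            load_ms.append((t1 - t0) * 1e3)
            compute_ms.append((t2 - t1) * 1e3)
        return statistics.median(load_ms), statistics.median(compute_ms)

    # Stand-in callables; replace with the real data loader and model:
    load, compute = benchmark(lambda: [0] * 4096, lambda batch: sum(batch))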

Making Sense Of New Edge-Inference Architectures


New edge-inference machine-learning architectures have been arriving at an astounding rate over the last year. Making sense of them all is a challenge. To begin with, not all ML architectures are alike. One of the complicating factors in understanding the different machine-learning architectures is the nomenclature used to describe them. You’ll see terms like “sea-of-MACs,” “systolic... » read more
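For readers new to the nomenclature, here is a toy Python sketch (my illustration, not a definition from the article) of the computation a “sea of MACs” performs: a grid of multiply-accumulate units, each owning one output element and absorbing one partial product per cycle.

    import numpy as np

    def mac_array_matmul(a, b):
        """Compute a @ b the way a grid of MACs would: one partial product per cycle."""
        m, k = a.shape
        _, n = b.shape
        acc = np.zeros((m, n))                       # one accumulator per MAC unit
        for step in range(k):                        # one "cycle" per reduction step
            acc += np.outer(a[:, step], b[step, :])  # every MAC in the grid fires in parallel
        return acc

    a, b = np.random.rand(4, 8), np.random.rand(8, 4)
    assert np.allclose(mac_array_matmul(a, b), a @ b)

Systolic arrays organize the same MAC grid differently, pipelining operands between neighboring units instead of broadcasting them.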

Edge-Inference Architectures Proliferate


Part one of two; the second part will dive into basic architectural characteristics. The last year has seen a vast array of announcements of new machine-learning (ML) architectures for edge inference. Unburdened by the need to support training, but tasked with low latency, the devices exhibit extremely varied approaches to ML inference. “Architecture is changing both in the comp... » read more
