Making Sense Of Inferencing Options


Ian Bratt, fellow in Arm’s machine learning group, sheds light on all the different processing elements in machine learning, how different end user requirements affect those choices, why CPUs are a critical element in orchestrating what happens in these systems, and how power and software play into these choices. » read more

Making Sense Of ML Metrics


Steve Roddy, vice president of products for Arm’s Machine Learning Group, talks with Semiconductor Engineering about what different metrics actually mean, and why they can vary by individual applications and use cases. » read more

More Data, More Processing, More Chips


Simon Segars, CEO of Arm, sat down with Semiconductor Engineering to talk about the impact of heterogeneous computing and new packaging approaches on IP, the need for more security, and how 5G and the edge will impact compute architectures and the chip industry. SE: There are a whole bunch of new markets opening up. How does Arm plan to tackle those? Segars: Luckily for us, we can design ... » read more

Where Is The Edge?


Mike Fitton, senior director of strategic planning at Achronix, talks about what the edge will look like, how that fits in with the cloud, what the requirements are both for processing and for storage, and how this concept will evolve. » read more

Machine Learning Inferencing At The Edge


Ian Bratt, fellow in Arm's machine learning group, talks about why machine learning inferencing at the edge is so difficult, what the tradeoffs are, how to optimize and accelerate data movement, and how this differs from developing other types of processors. » read more

What’s Powering Artificial Intelligence?


While artificial intelligence (AI) and machine learning (ML) applications soar in popularity, many organizations are questioning where ML workloads should be performed. Should they be done on a central processing unit (CPU), a graphics processing unit (GPU), or a neural processing unit (NPU)? The choice most teams are making today will surprise you. To scale artificial intelligence (AI) and machine learning (... » read more

Week in Review – IoT, Security, Autos


Products/Services Rambus entered an exclusive agreement to acquire the Silicon IP, Secure Protocols, and Provisioning business from Verimatrix, formerly known as Inside Secure. Financial terms were not revealed. The transaction is expected to close this year. Rambus will use the Verimatrix offerings in such demanding applications as artificial intelligence, automotive, the Internet of Things, ... » read more

The Race For Better Computational Software


Anirudh Devgan, president of Cadence, sat down with Semiconductor Engineering to talk about computational software, why it's so critical at the edge and in AI systems, and where the big changes are across the semiconductor industry. What follows are excerpts of that conversation. SE: There is no consistent approach to how data will be processed at the edge, in part because there is no consis... » read more

Chiplets, Faster Interconnects, More Efficiency


Big chipmakers are turning to architectural improvements such as chiplets, faster on-chip and off-chip throughput, and packing more work into each operation or cycle in order to ramp up processing speed and efficiency. Taken as a whole, this represents a significant shift in direction for the major chip companies. All of them are wrestling with massive increases in processing demands ... » read more

Why Scaling Must Continue


The entire semiconductor industry has come to the realization that the economics of scaling logic are gone. By any metric—price per transistor, price per watt, price per unit area of silicon—the numbers are no longer in the plus column. So why continue? The answer is more complicated than it first appears. This isn't just about inertia and continuing to miniaturize what was proven in t... » read more
