New Ways To Optimize Machine Learning


As more designers employ machine learning (ML) in their systems, they’re moving from simply getting the application to work to optimizing the power and performance of their implementations. Some techniques are available today. Others will take time to percolate through the design flow and tools before they become readily available to mainstream designers. Any new technology follows a basic... » read more

HBM Issues In AI Systems


All systems face limitations, and as one limitation is removed, another is revealed that had remained hidden. It is highly likely that this game of Whac-A-Mole will play out in AI systems that employ high-bandwidth memory (HBM). Most systems are limited by memory bandwidth. Compute systems in general have maintained an increase in memory interface performance that barely matches the gains in... » read more
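The bandwidth-bound behavior described above can be illustrated with a quick roofline-style estimate. The sketch below is not from the article; the peak-compute and HBM figures are assumptions chosen only to show how arithmetic intensity decides whether the math units or the memory interface is the next "mole" to pop up.

```python
# Hypothetical roofline-style estimate (illustrative only): attainable
# throughput is capped by the lower of peak compute and bandwidth * intensity.

def attainable_tflops(peak_tflops, bandwidth_gbs, arithmetic_intensity):
    """Attainable throughput (TFLOPS) = min(peak, bandwidth * intensity).

    arithmetic_intensity is FLOPs performed per byte moved from memory.
    """
    bandwidth_tbs = bandwidth_gbs / 1000.0  # GB/s -> TB/s so units match TFLOPS
    return min(peak_tflops, bandwidth_tbs * arithmetic_intensity)

# Assumed figures: a 100-TFLOPS accelerator fed by four HBM2 stacks at
# roughly 256 GB/s each (~1 TB/s aggregate).
peak = 100.0
hbm_bandwidth = 4 * 256.0

for intensity in (10, 50, 100, 200):
    print(f"{intensity:>4} FLOPs/byte -> "
          f"{attainable_tflops(peak, hbm_bandwidth, intensity):6.1f} TFLOPS")
# Below roughly 100 FLOPs/byte this hypothetical design is bandwidth-bound:
# the memory interface, not the compute array, sets the ceiling.
```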

High-Performance Memory For AI And HPC


Frank Ferro, senior director of product management at Rambus, examines the current performance bottlenecks in high-performance computing, drills down into power and performance for different memory options, and explains which solutions are best for different applications and why. » read more

Brighter Future For Photonics


Photons increasingly are taking over where electrons are failing in communications, but mixing the two never has been easy. There always have been two potential implementation paths — building each on its own substrate and then stacking them, or building them on a single substrate. The tradeoff between the two solutions is more complex than it may initially appear, and ongoing improvements... » read more

Tradeoffs In Embedded Vision SoCs


Gordon Cooper, product marketing manager for embedded vision processors at Synopsys, talks with Semiconductor Engineering about the need for more performance in these devices, how that impacts power, and what can be done to optimize both prior to manufacturing. » read more

CXL Vs. CCIX


Kurt Shuler, vice president of marketing at ArterisIP, explains how these two standards differ, which one works best where, and what each was designed for. » read more

Die-To-Die Connectivity


Manmeet Walia, senior product marketing manager at Synopsys, talks with Semiconductor Engineering about how die-to-die communication is changing as Moore’s Law slows down, new use cases such as high-performance computing, AI SoCs, and optical modules, and where the tradeoffs are for different applications. » read more

GDDR6 Drilldown: Applications, Tradeoffs And Specs


Frank Ferro, senior director of product marketing for IP cores at Rambus, drills down on tradeoffs in choosing different DRAM versions, where GDDR6 fits into designs versus other types of DRAM, and how different memories are used in different vertical markets. » read more

GDDR6 Pushes The Memory Envelope For AI And ADAS


Memory bandwidth is an increasingly critical bottleneck for a wide range of use cases and applications. These include artificial intelligence (AI), machine learning (ML), and advanced driver-assistance systems (ADAS), as well as 5G wireless and wireline infrastructure. In addition to memory bottlenecks, the above-mentioned use cases and applications are rapidly hitting the real-world limits of t... » read more
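To make the bandwidth claim concrete, here is a back-of-the-envelope check of what a GDDR6 interface can deliver. The per-pin rate, interface width, and device count below are assumed illustrative figures, not values taken from the article.

```python
# Back-of-the-envelope GDDR6 bandwidth check (illustrative figures):
# per-pin data rate * interface width gives peak per-device bandwidth.

def device_bandwidth_gbs(data_rate_gbps_per_pin, interface_width_bits):
    """Peak bandwidth in GB/s for one DRAM device."""
    return data_rate_gbps_per_pin * interface_width_bits / 8.0

# A GDDR6 device commonly uses a 32-bit interface; at an assumed 16 Gb/s per
# pin that is 64 GB/s per device, so eight devices reach roughly 512 GB/s.
per_device = device_bandwidth_gbs(16, 32)   # 64.0 GB/s
system = 8 * per_device                      # 512.0 GB/s
print(f"per device: {per_device:.0f} GB/s, 8-device system: {system:.0f} GB/s")
```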

Tricky Tradeoffs For LPDDR5


LPDDR5 is slated as the next-gen memory for AI, autonomous driving, 5G networks, advanced displays, and leading-edge camera applications, and it is expected to compete with GDDR6 for these applications. But as with all next-gen technologies, balancing power, performance, and area concerns against new technology options is not straightforward. These are interesting times in the memory... » read more
