Monitoring Heat On AI Chips


Stephen Crosher, CEO of Moortec, talks about monitoring on-chip temperature differences in AI chips, how to make the most of the power that can be delivered to a device, and why accuracy is so critical. » read more

The New CXL Standard


Gary Ruggles, senior staff product marketing manager at Synopsys, digs into the new Compute Express Link standard, why it’s important for high bandwidth in AI/ML applications, where it came from, and how to apply it in current and future designs. » read more

Making Better Use Of Memory In AI


Steven Woo, Rambus fellow and distinguished inventor, talks about using number formats to extend memory bandwidth, what the impact can be on fractional precision, how modifications of precision can play into that without sacrificing accuracy, and what role stochastic rounding can play. » read more
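Stochastic rounding is worth a quick illustration: instead of always rounding to the nearest representable value, the rounding direction is chosen at random with probability proportional to proximity, so the error averages out to zero over many operations. The Python sketch below is a minimal illustration of the idea, not code from the talk; the step size and sample value are arbitrary assumptions.

```python
import random

def stochastic_round(value, step):
    """Round value to a multiple of step, choosing the upper or lower
    neighbor with probability proportional to proximity, so the
    expected result equals the original value."""
    lower = (value // step) * step
    frac = (value - lower) / step          # fractional distance, in [0, 1)
    return lower + step if random.random() < frac else lower

# Rounding 0.3 to a grid of 1.0 repeatedly: ~30% of samples land on 1.0,
# so the mean stays near 0.3 instead of collapsing to 0.0 as it would
# with round-to-nearest.
samples = [stochastic_round(0.3, 1.0) for _ in range(10_000)]
print(sum(samples) / len(samples))   # roughly 0.3
```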

Machine Learning Inferencing At The Edge


Ian Bratt, fellow in Arm's machine learning group, talks about why machine learning inferencing at the edge is so difficult, what the tradeoffs are, how to optimize data movement, how to accelerate that movement, and how it differs from developing other types of processors. » read more

In-Memory Computing


Gideon Intrater, CTO at Adesto Technologies, talks about why in-memory computing is now being taken seriously again, years after it was first proposed as a possible option. What's changed is an explosion in data, and a recognition that it's too time- and energy-intensive to send all of that data back and forth between memories and processors on the same chip, let alone to the cloud and back. » read more

In Memory And Near-Memory Compute


Steven Woo, Rambus fellow and distinguished inventor, talks about the amount of power required to store data and to move it out of memory to where processing is done. Solutions can include changes to memory, but they also can include rethinking compute architectures from the ground up to achieve up to 1 million times better performance in highly specialized systems. » read more
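To see why moving data dominates the energy budget, a back-of-the-envelope comparison helps. The Python sketch below uses rough, commonly cited ballpark energies for on-chip arithmetic, on-chip SRAM access, and off-chip DRAM access; the numbers are order-of-magnitude assumptions for illustration only, not figures from the talk.

```python
# Illustrative comparison of the energy spent moving a 32-bit word versus
# operating on it. The values below are rough, frequently cited ballpark
# figures (order of magnitude only) assumed for this sketch.
ENERGY_PJ = {
    "32-bit add (on-chip ALU)":        0.1,
    "32-bit read, small on-chip SRAM": 5.0,
    "32-bit read, off-chip DRAM":      640.0,
}

alu = ENERGY_PJ["32-bit add (on-chip ALU)"]
for op, pj in ENERGY_PJ.items():
    print(f"{op:32s} {pj:8.1f} pJ  (~{pj / alu:,.0f}x an ALU add)")

# Fetching an operand from DRAM costs thousands of times more energy than
# the arithmetic itself, which is the motivation for in-memory and
# near-memory compute.
```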

Memory In Microcontrollers


Gideon Intrater, CTO of Adesto, talks about how to use microcontrollers for applications where more memory is required, such as automotive, communication, and AI at the edge. Options include moving MCUs to a more aggressive process node, adding external non-volatile memory, and adopting execute-in-place architectures. » read more

Memory Options And Tradeoffs


Steven Woo, Rambus fellow and distinguished inventor, talks with Semiconductor Engineering about different memory options, why some are better than others for certain tasks, and what the tradeoffs are between the different memory types and architectures. » read more

Latency Under Load: HBM2 vs. GDDR6


Steven Woo, Rambus fellow and distinguished inventor, explains why data traffic and bandwidth are critical to choosing the type of DRAM, what the options are for improving traffic flow in different memory types, and how this works when multiple memory types are used. » read more
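As a quick point of reference for the bandwidth side of the comparison, peak bandwidth is simply interface width times per-pin data rate. The Python sketch below uses typical headline configurations for one HBM2 stack and one GDDR6 device; the bus widths and data rates are assumptions for illustration and are not taken from the video.

```python
def peak_bandwidth_gbs(bus_width_bits, data_rate_gbps_per_pin):
    """Peak bandwidth in GB/s: pin count * per-pin data rate / 8 bits per byte."""
    return bus_width_bits * data_rate_gbps_per_pin / 8

# Assumed, typical headline configurations: one HBM2 stack with a 1024-bit
# interface at 2.0 Gb/s per pin, and one GDDR6 device with a 32-bit
# interface at 16 Gb/s per pin.
hbm2_stack = peak_bandwidth_gbs(1024, 2.0)   # ~256 GB/s per stack
gddr6_chip = peak_bandwidth_gbs(32, 16.0)    # ~64 GB/s per device

print(f"HBM2 stack : {hbm2_stack:.0f} GB/s")
print(f"GDDR6 chip : {gddr6_chip:.0f} GB/s")

# Peak numbers say nothing about latency under load; that depends on how
# well the traffic pattern keeps banks busy, which is the focus of the talk.
```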

New Challenges For Data Centers


Rita Horner, senior technical marketing manager in Synopsys’ Solutions Group, looks at the impact of a significant rise in data, why this often leads to big cost increases, and where the bottlenecks are occurring. » read more
