Optical Interconnectivity At 224 Gbps


AI is generating so much traffic that traditional copper-based approaches for moving data inside a chip, between chips, and between systems are running out of steam. Simply adding more channels is no longer viable: driving more signals requires more power, and the distance those signals can travel without excessive loss is shrinking. Mike Klempa, product marketing specialist at Alphawave Semi, di... » read more

Speeding Up Die-To-Die Interconnectivity


Disaggregating SoCs, coupled with the need to process more data faster, is forcing engineering teams to rethink the electronic plumbing in a system. Wires don't shrink, and cramming more or thicker wires into a package is not a viable solution. Kevin Donnelly, vice president of strategic marketing at Eliyan, talks about how to speed up data movement between chiplets with bi-direction... » read more

What’s Changing In SerDes


SerDes is all about pushing data through the smallest number of physical channels. But when it comes to AI, more data needs to be moved, and it has to move more quickly. Todd Bermensolo, product marketing manager at Alphawave Semi, talks about the impact of faster data movement on the transmitter (more power) and on the receiver (gain and advanced equalization), how to ensure signal inte... » read more
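The equalization mentioned above can be illustrated with a toy model. The sketch below applies a 3-tap feed-forward equalizer (FFE) to data distorted by a lossy channel; the channel response and tap weights are illustrative assumptions, not values from any real PHY.

```python
import numpy as np

# Hypothetical channel impulse response: pre-cursor, main cursor, post-cursor.
# Energy leaking into adjacent symbols is inter-symbol interference (ISI).
channel = np.array([0.1, 0.7, 0.25])

# Assumed FFE tap weights: negative pre/post taps cancel most of the ISI.
ffe_taps = np.array([-0.15, 1.0, -0.3])

symbols = np.array([1, -1, 1, 1, -1, -1, 1, -1])  # NRZ data
received = np.convolve(symbols, channel)           # ISI-distorted signal
equalized = np.convolve(received, ffe_taps)        # FFE output

# The main cursor sits at delay 2 (one sample from the channel, one from
# the FFE), so slice there and make hard decisions.
sliced = np.sign(equalized[2:2 + len(symbols)]).astype(int)
print(sliced.tolist())  # → [1, -1, 1, 1, -1, -1, 1, -1]
```

With these numbers the combined channel-plus-FFE response has a main cursor of about 0.63 and residual ISI terms an order of magnitude smaller, so the eye is opened enough for a simple slicer to recover the data.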

Optimizing Data Movement In SoCs And Advanced Packages


The amount of data that needs to move around a chip is growing exponentially, driven by the rollout of AI and more sensors everywhere. There may be hundreds of IP blocks, more compute elements, and many more wires to contend with. Andy Nightingale, vice president of product management and marketing at Arteris, talks about the demand for low-latency on-chip communication in increasingly complex ... » read more

Scenario Coverage In Formal Verification


A rapid increase in complexity with heterogeneous assemblies and advanced-node chips is raising all sorts of questions on the formal verification side about the completeness of coverage. Engineers may assume proofs are complete, but in many cases they're black boxes that provide little or no insight into what's actually being proven. This is where scenario coverage comes into play. Ashish Darb... » read more

Cracking The Memory Wall


Processor performance continues to improve exponentially, with more processor cores, parallel instructions, and specialized processing elements, but it is far outpacing improvements in memory bandwidth. That gap, the so-called memory wall, has persisted throughout most of this century, but now it is becoming more pronounced. SRAM scaling is slowing at advanced nodes, which means SRAM takes ... » read more

PCIe Over Optics


Moving data through a chip or package, and between packages and systems, is becoming a much bigger challenge as the volume of data continues to explode, and as more compute resources are deployed to work on data-intensive problems such as training AI algorithms or running long and complex simulations. There is more data to process in more places, more levels of data storage and access, and any ... » read more

Livelocks And Deadlocks In NoCs


Devices that are stuck in a specific state, or which appear to be making progress even though they are not, are common problems in complex systems. Processing elements that need to fetch data they don't have from routers may be frozen out by other processors, a problem that is exacerbated by common bus protocols. Ashish Darbari, CEO of Axiomise, talks about how to identify potential bottlenecks, why... » read more
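A classic way to reason about the deadlocks described above is a wait-for graph: an edge A → B means agent A is blocked on a resource held by agent B, and a cycle means circular wait, the hallmark of deadlock. The sketch below is a minimal illustration with made-up agent names, not a model of any particular NoC.

```python
# Detect potential deadlock as a cycle in a wait-for graph (DFS with a
# recursion stack; a back edge indicates circular wait).
def has_cycle(wait_for):
    visited, on_stack = set(), set()

    def dfs(node):
        visited.add(node)
        on_stack.add(node)
        for nxt in wait_for.get(node, []):
            if nxt in on_stack:
                return True            # back edge: circular wait
            if nxt not in visited and dfs(nxt):
                return True
        on_stack.discard(node)
        return False

    return any(dfs(n) for n in wait_for if n not in visited)

# Two cores each holding a buffer credit the other one needs:
deadlocked  = {"core0": ["core1"], "core1": ["core0"]}
progressing = {"core0": ["core1"], "core1": []}
print(has_cycle(deadlocked), has_cycle(progressing))  # → True False
```

Livelock is harder to catch this way, since the graph keeps changing while no agent makes real progress, which is one reason formal liveness checks are valuable.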

Distributed Voltage And Frequency Scaling Gaining Traction


DVFS has been used in smartphones for more than a decade as a way of trading off power and performance when both are constrained, but much of the semiconductor industry has avoided this technique because it's too difficult to work with. That's starting to change as processing demands increase, driven by the rollout of AI everywhere and an increase in the number of features in advanced packages... » read more
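The power/performance tradeoff behind DVFS follows from dynamic power scaling roughly as P ≈ C·V²·f: lowering voltage and frequency together yields a superlinear power saving for a linear performance loss. The numbers below are illustrative assumptions.

```python
# Dynamic power model P = C * V^2 * f with assumed operating points.
def dynamic_power(c_eff, volts, freq_hz):
    return c_eff * volts**2 * freq_hz

C = 1e-9                                  # effective switched capacitance (assumed)
p_full = dynamic_power(C, 1.0, 2.0e9)     # 1.0 V @ 2.0 GHz
p_dvfs = dynamic_power(C, 0.8, 1.5e9)     # 0.8 V @ 1.5 GHz

saving = 1 - p_dvfs / p_full
print(p_full, p_dvfs, saving)  # → 2.0 0.96 0.52
```

Here dropping the clock 25% (with a matching voltage reduction) cuts dynamic power by roughly half, which is why DVFS is attractive wherever workloads fluctuate.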

The Evolution Of HBM


High-bandwidth memory originally was conceived as a way to increase capacity in memory attached to a 2.5D package. It has since become a staple for all high-performance computing, in some cases replacing SRAM for L3 cache. Archana Cheruliyil, senior product marketing manager at Alphawave Semi, talks about how and where HBM is used today, how it will be used in the future, why it is essential fo... » read more
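Why HBM is essential for high-performance computing comes down to simple arithmetic: a very wide interface multiplied by the per-pin data rate. The sketch below uses figures typical of an HBM3-class stack (1,024-bit interface, 6.4 GT/s per pin); treat them as illustrative rather than a specific product's spec.

```python
# Back-of-the-envelope per-stack HBM bandwidth:
# (interface width in bytes) * (transfers per second per pin).
width_bits = 1024          # pins per stack
data_rate  = 6.4e9         # transfers/s per pin (HBM3-class, assumed)

per_stack = (width_bits / 8) * data_rate   # bytes/s
print(per_stack / 1e9)                     # → 819.2  (GB/s per stack)
```

Multiply by several stacks on a 2.5D interposer and the aggregate bandwidth dwarfs what a conventional DIMM interface can deliver, which is why HBM keeps displacing other memory tiers in AI and HPC designs.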
