Working With Chiplets


The usual method of migrating to the next process node to cram more features onto a piece of silicon no longer works. It's too expensive, and too limited for most applications. The path forward is now heterogeneous chiplets targeted at specific markets, and while logic will continue to scale, other features are being separated out into chiplets developed using different process technologies. Th... » read more

Barriers To Chiplet Sockets


Experts At The Table: Demand for chiplets is growing, but debate continues about whether standards and general-purpose chiplets will kick-start the commercialization boom, or whether success will come through customization of those chiplets. Semiconductor Engineering sat down to discuss these and other related issues with Elad Alon, CEO of Blue Cheetah; Mark Kuemerle, vice president of technol... » read more

Enabling Innovative Multi-Vendor Chiplet-Based Designs


Chiplets have emerged as a critical implementation paradigm for semiconductor products, primarily because they can deliver cost benefits relative to a non-chiplet-based approach. The first, best-proven, and most obvious benefit of a chiplet-based approach is manufacturing cost. Manufacturing cost benefits are accrued either from the appropriate selection of chiplet die size, or by optimizin... » read more

Optimizing Wafer Edge Processes For Chip Stacking


Stacking chiplets vertically using short and direct wafer-to-wafer bonds can reduce signal delay to negligible levels, enabling smaller, thinner packages with faster memory/processor speeds and lower power consumption. The race is on to implement wafer stacking and die-to-wafer hybrid bonding, now considered essential for stacking logic and memory, 3D NAND, and possibly multi-layer DRAM stac... » read more

Memory Fundamentals For Engineers


Memory is one of the few electronic components essential to virtually any electronic system. Modern electronics perform extraordinarily complex duties that would be impossible without memory. Your computer obviously contains memory, but so does your car, your smartphone, your doorbell camera, your entertainment system, and any other gadget benefiting from digital electronics. This eBook prov... » read more

HBM4 Feeds Generative AI’s Hunger For More Memory Bandwidth


Generative AI (Gen AI), built on the exponential growth of Large Language Models (LLMs) and their kin, is one of today’s biggest drivers of computing technology. Leading-edge LLMs now exceed a trillion parameters and offer multimodal capabilities, so they can take a broad range of inputs, whether in the form of text, speech, images, video, or code, and generate an equally broa... » read more

A New Generation Of 7400 Socket


When I was 18 and had just been accepted at Brunel University in West London to start my undergraduate degree in electrical and electronic engineering, I sent off a letter to Texas Instruments telling them about the journey ahead of me and asking if they could send me a copy of their TTL Data Book. A few weeks later a package arrived, and there it was: this incredible brown/orange book, thicker... » read more

3.5D: The Great Compromise


The semiconductor industry is converging on 3.5D as the next best option in advanced packaging, a hybrid approach that includes stacking logic chiplets and bonding them separately to a substrate shared by other components. This assembly model satisfies the need for big increases in performance while sidestepping some of the thorniest issues in heterogeneous integration. It establishes a midd... » read more

Freeing Up Near-Memory Capacity For Cache Using Compression Techniques In A Flat Hybrid-Memory Architecture


A technical paper titled “HMComp: Extending Near-Memory Capacity using Compression in Hybrid Memory” was published by researchers at Chalmers University of Technology and ZeroPoint Technologies. Abstract: "Hybrid memories, especially combining a first-tier near memory using High-Bandwidth Memory (HBM) and a second-tier far memory using DRAM, can realize a large and low cost, high-bandwi... » read more
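As a rough, hedged illustration of the general idea in the abstract (not the paper's actual mechanism), the back-of-envelope sketch below estimates how much near-memory (HBM) capacity compression could free up for use as a cache for far-memory (DRAM) traffic. The capacity, compression ratio, and compressible fraction are placeholder assumptions, not figures from the paper.

# Toy estimate: near-memory (HBM) capacity freed by compressing resident data,
# which could then be repurposed as a cache for far memory (DRAM).
# All numbers are placeholder assumptions, not values from the HMComp paper.
HBM_CAPACITY_GB = 16          # assumed near-memory capacity
COMPRESSION_RATIO = 2.0       # assumed average compression ratio
COMPRESSIBLE_FRACTION = 0.75  # assumed share of data that compresses well

compressed_footprint = HBM_CAPACITY_GB * (
    COMPRESSIBLE_FRACTION / COMPRESSION_RATIO + (1 - COMPRESSIBLE_FRACTION)
)
freed_for_cache = HBM_CAPACITY_GB - compressed_footprint
print(f"Footprint after compression: {compressed_footprint:.1f} GB")
print(f"Capacity freed for caching far memory: {freed_for_cache:.1f} GB")

With these placeholder numbers, roughly 6 GB of a 16 GB HBM stack would become available as cache capacity, which is the kind of capacity-for-cache trade-off in a flat hybrid-memory architecture that the paper's title describes.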

HBM3E: All About Bandwidth


The rapid rise in size and sophistication of AI/ML training models requires increasingly powerful hardware deployed in the data center and at the network edge. This growth in complexity and data stresses the existing infrastructure, driving the need for new and innovative processor architectures and associated memory subsystems. For example, even GPT-3 at 175 billion parameters is stressing the... » read more
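To make the scale concrete, here is a quick back-of-envelope calculation; it is a sketch that assumes FP16 weights and representative HBM3E interface figures (1024-bit interface, 9.6 Gb/s per pin), not numbers taken from the article.

# Rough arithmetic for why a 175-billion-parameter model stresses memory.
# The parameter count comes from the article; the data-type size and the
# HBM3E device figures below are representative assumptions for illustration.
PARAMS = 175e9                 # GPT-3 parameter count cited above
BYTES_PER_PARAM = 2            # assuming FP16/BF16 weights

weight_footprint_gb = PARAMS * BYTES_PER_PARAM / 1e9
print(f"Weights alone: ~{weight_footprint_gb:.0f} GB")          # ~350 GB

pins, gbps_per_pin = 1024, 9.6  # representative HBM3E device interface
device_bw_tbs = pins * gbps_per_pin / 8 / 1000
print(f"Per-device bandwidth: ~{device_bw_tbs:.2f} TB/s")       # ~1.23 TB/s

# Time to stream the weights once through a single device:
print(f"One pass over the weights: ~{weight_footprint_gb / (device_bw_tbs * 1000):.2f} s")

Even a single pass over the weights ties up a device-class HBM3E interface for a noticeable fraction of a second, which is why accelerators gang multiple HBM stacks in parallel and why each HBM generation pushes per-device bandwidth higher.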
