HBM Options Increase As AI Demand Soars


High-bandwidth memory (HBM) sales are spiking as the amount of data that needs to be processed quickly by state-of-the-art AI accelerators, graphics processing units, and high-performance computing applications continues to explode. HBM inventories are sold out, driven by massive efforts and investments in developing and improving large language models such as ChatGPT. HBM is the memory of ch... » read more

A New Generation Of 7400 Socket


When I was 18 and had just been accepted at Brunel University in West London to start my undergraduate degree in electrical and electronic engineering, I sent off a letter to Texas Instruments telling them about the journey ahead of me and asking if they could send me a copy of their TTL Data Book. A few weeks later a package arrived, and there it was: this incredible brown/orange book, thicker... » read more

HBM3E: All About Bandwidth


The rapid rise in size and sophistication of AI/ML training models requires increasingly powerful hardware deployed in the data center and at the network edge. This growth in complexity and data stresses the existing infrastructure, driving the need for new and innovative processor architectures and associated memory subsystems. For example, even GPT-3 at 175 billion parameters is stressing the... » read more

Are You Ready For HBM4? A Silicon Lifecycle Management (SLM) Perspective


Many factors are driving system-on-chip (SoC) developers to adopt multi-die technology, in which multiple dies are stacked in a three-dimensional (3D) configuration. Multi-die systems may make power and thermal issues more complex, and they have required major innovations in electronic design automation (EDA) implementation and test tools. These challenges are more than offset by the advantages... » read more

What’s Missing In 2.5D EDA Tools


Gaps in EDA tool chains for 2.5D designs are limiting the adoption of this advanced packaging approach, which so far has been largely confined to high-performance computing. But as the rest of the chip industry begins migrating toward advanced packaging and chiplets, the EDA industry is starting to change direction. There are learning periods with all new technologies, and 2.5D advanced pack... » read more

Enabling Scalable Accelerator Design On Distributed HBM-FPGAs (UCLA)


A technical paper titled “TAPA-CS: Enabling Scalable Accelerator Design on Distributed HBM-FPGAs” was published by researchers at the University of California, Los Angeles. Abstract: "Despite the increasing adoption of Field-Programmable Gate Arrays (FPGAs) in compute clouds, there remains a significant gap in programming tools and abstractions which can leverage network-connected, cloud-scale... » read more

The Power Of HBM3 Memory For AI Training Hardware


AI training data sets are constantly growing, driving the need for hardware accelerators capable of handling terabyte-scale bandwidth. Among the array of memory technologies available, High Bandwidth Memory (HBM) has emerged as the memory of choice for AI training hardware, with the most recent generation, HBM3, delivering unrivaled memory bandwidth. Let’s take a closer look at this important... » read more
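As a rough back-of-the-envelope illustration (not taken from the article above), the peak bandwidth of a single HBM stack follows directly from its per-pin data rate and its 1024-bit interface; the sketch below assumes the 6.4 Gb/s per-pin rate commonly quoted for first-generation HBM3 devices.

```python
# Back-of-the-envelope HBM3 bandwidth estimate (assumed figures, not from the article).
PIN_RATE_GBPS = 6.4      # per-pin data rate in Gb/s, typical of first-generation HBM3
BUS_WIDTH_BITS = 1024    # interface width of one HBM stack

def stack_bandwidth_gb_per_s(pin_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak bandwidth of one HBM stack in GB/s."""
    return pin_rate_gbps * bus_width_bits / 8  # convert bits to bytes

per_stack = stack_bandwidth_gb_per_s(PIN_RATE_GBPS, BUS_WIDTH_BITS)
print(f"One HBM3 stack:            ~{per_stack:.0f} GB/s")            # ~819 GB/s
print(f"Accelerator with 6 stacks: ~{6 * per_stack / 1000:.1f} TB/s") # ~4.9 TB/s
```

Raising the per-pin rate, as later HBM generations do, scales the per-stack figure proportionally, which is why each new generation is pitched primarily on bandwidth.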

DRAM Test And Inspection Just Gets Tougher


DRAM manufacturers continue to demand cost-effective solutions for screening and process improvement amid growing concerns over defects and process variability, but meeting that demand is becoming much more difficult with the rollout of faster interfaces and multi-chip packages. DRAM plays a key role in a wide variety of electronic devices, from phones and PCs to ECUs in cars and servers ins... » read more

Generative AI Training With HBM3 Memory


One of the biggest, most talked-about application drivers of hardware requirements today is the rise of Large Language Models (LLMs) and the generative AI they make possible. The most well-known example of generative AI right now is, of course, ChatGPT. The large language model underlying ChatGPT, GPT-3, utilizes 175 billion parameters. The fourth-generation GPT-4 will reportedly boost the number of... » read more
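To put the 175-billion-parameter figure in perspective, the short sketch below estimates how much memory the model weights alone occupy at different numeric precisions and how many HBM3 stacks that implies. The 24 GB-per-stack capacity is an illustrative assumption, not a number from the article, and training additionally needs room for optimizer state and activations.

```python
import math

# Rough weight-storage estimate for a GPT-3-class model (illustrative assumptions only).
PARAMS = 175e9           # parameter count cited for GPT-3
STACK_CAPACITY_GB = 24   # assumed capacity of a single HBM3 stack, for illustration

for label, bytes_per_param in [("FP32", 4), ("FP16/BF16", 2), ("INT8", 1)]:
    weights_gb = PARAMS * bytes_per_param / 1e9
    stacks = math.ceil(weights_gb / STACK_CAPACITY_GB)
    print(f"{label:>9}: ~{weights_gb:,.0f} GB of weights -> at least {stacks} HBM3 stacks")
```

Even at reduced precision, the weights alone span multiple stacks, which is why model size growth translates so directly into demand for HBM capacity and bandwidth.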

HBM’s Future: Necessary But Expensive


High-bandwidth memory (HBM) is becoming the memory of choice for hyperscalers, but there are still questions about its ultimate fate in the mainstream marketplace. While it’s well-established in data centers, with usage growing due to the demands of AI/ML, wider adoption is inhibited by drawbacks inherent in its basic design. On the one hand, HBM offers a compact 2.5D form factor that enables... » read more
