The Challenges Of Designing An HBM2 PHY

As designers work to move higher bandwidth closer to the CPU, HBM is gaining momentum in server and networking systems.

Originally targeted at the graphics industry, HBM continues to gain momentum in the server and networking markets as system designers work to move higher bandwidth closer to the CPU. Expanding DRAM capacity – which boosts overall system performance – allows data centers to maximize local DRAM storage for high-throughput workloads.

The HBM DRAM architecture effectively increases system memory bandwidth by providing the SoC with a wide, 1024-bit interface. HBM2 runs at a maximum of 2Gbits/s per pin, for a total bandwidth of 256Gbytes/s per stack. Although that per-pin rate is similar to DDR3 at 2.1Gbps, the eight 128-bit channels give HBM approximately 15x the bandwidth of a 64-bit DDR3 interface.
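The arithmetic behind these figures is easy to check. The short sketch below reproduces it; the 64-bit DDR3-2133 DIMM channel used as the baseline is an assumption, since the article does not spell out the comparison point.

```python
# Back-of-the-envelope check of the bandwidth figures above.
# Pin counts and data rates mirror the article; the DDR3 baseline
# assumes a standard 64-bit DIMM channel at 2.133 Gb/s.

def bandwidth_gbytes_per_s(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth = bus width x per-pin data rate, converted to bytes."""
    return bus_width_bits * pin_rate_gbps / 8

hbm2 = bandwidth_gbytes_per_s(1024, 2.0)   # 8 channels x 128 bits
ddr3 = bandwidth_gbytes_per_s(64, 2.133)   # one DDR3-2133 DIMM channel

print(f"HBM2 stack: {hbm2:.0f} GB/s")      # 256 GB/s
print(f"DDR3 DIMM:  {ddr3:.1f} GB/s")      # ~17 GB/s
print(f"Ratio:      {hbm2 / ddr3:.1f}x")   # ~15x
```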

HBM modules are connected to the SoC via a silicon or organic interposer. The short, controlled channel between the memory and the SoC requires less drive from the memory interface, reducing power compared to DIMM interfaces. In addition, because the interface is so wide, system designers can achieve very high bandwidth at a lower clock frequency.
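To see the wide-and-slow tradeoff concretely, here is a minimal sketch of the per-pin rate needed to hit a 256Gbytes/s target at different interface widths; the narrower bus widths are hypothetical comparison points, not figures from the article.

```python
# Illustration of the wide-and-slow tradeoff: the per-pin rate needed
# to hit a target bandwidth falls in proportion to interface width.
# The 256 GB/s target matches the HBM2 figure above; the narrower
# widths are hypothetical comparison points.

def required_pin_rate_gbps(target_gbytes_per_s: float, bus_width_bits: int) -> float:
    """Per-pin data rate (Gb/s) needed to reach the target bandwidth."""
    return target_gbytes_per_s * 8 / bus_width_bits

for width in (64, 256, 1024):
    rate = required_pin_rate_gbps(256, width)
    print(f"{width:>5}-bit bus: {rate:5.1f} Gb/s per pin")
# 64-bit: 32 Gb/s, 256-bit: 8 Gb/s, 1024-bit: 2 Gb/s
```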

Perhaps not surprisingly, there are multiple challenges associated with the design of robust HBM2 PHYs. One such challenge is maintaining signal integrity at 2Gbits/s per pin across the interposer. Extensive modeling of both signal and power integrity is essential to achieve reliable operation in the field. As such, HBM PHY design engineers should possess deep knowledge of 2.5D design techniques, along with a comprehensive understanding of system behavior under varying conditions, including temperature and voltage.
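As a rough illustration of what such corner modeling looks like in practice, the sketch below sweeps a handful of voltage/temperature corners against a toy timing budget. Every jitter, ISI and aperture figure here is a hypothetical placeholder; a real flow would pull these numbers from channel and power-integrity simulation.

```python
# A minimal sketch of a PVT corner sweep of the kind signal-integrity
# modeling implies. All numbers are hypothetical placeholders.

UI_PS = 500.0  # unit interval at 2 Gb/s = 500 ps

corners = {
    # (voltage, temperature): (jitter ps, ISI ps) -- illustrative only
    ("nominal", "25C"):  (40.0, 120.0),
    ("low-V",   "85C"):  (90.0, 280.0),
    ("high-V",  "-40C"): (55.0, 140.0),
}

SETUP_HOLD_PS = 150.0  # hypothetical receiver aperture

for (volt, temp), (jitter, isi) in corners.items():
    margin = UI_PS - jitter - isi - SETUP_HOLD_PS
    status = "OK" if margin > 0 else "FAIL"
    print(f"{volt:>7}/{temp:>4}: margin {margin:6.1f} ps  [{status}]")
```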

Determining signal routing tradeoffs on the interposer presents engineers with another significant challenge. More specifically, these tradeoffs entail balancing optimal system performance against keeping the cost of the interposer as low as possible. For example, design teams must decide whether to use one or two signal routing layers in the interposer. Although a single routing layer saves cost, it demands a more challenging design with narrower trace widths and spacing, and therefore higher crosstalk. Design teams must also determine how far the ASIC can sit from the HBM DRAM modules on the interposer. While greater distance can help with thermal dissipation, each additional millimeter increases the likelihood of signal integrity issues.
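The layer-count tradeoff can be made concrete with simple geometry. In the sketch below, the routable edge span and signal count are hypothetical round numbers chosen only to show the scaling, not actual interposer dimensions.

```python
# Rough illustration of the one- vs two-layer routing tradeoff.
# The edge span and signal count are hypothetical round numbers.

EDGE_WIDTH_UM = 6000.0   # assumed routable edge span, in microns
SIGNALS = 1024           # data bits only; ignores clocks/control for simplicity

for layers in (1, 2):
    traces_per_layer = SIGNALS / layers
    pitch_um = EDGE_WIDTH_UM / traces_per_layer
    print(f"{layers} layer(s): {traces_per_layer:.0f} traces/layer, "
          f"~{pitch_um:.1f} um pitch")
# Halving the layer count halves the available pitch, which forces
# narrower traces and spacing -- and correspondingly higher crosstalk.
```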

The implementation of 2.5D technology in HBM2 systems adds numerous manufacturing complexities, requiring PHY vendors to work closely with multiple entities, including the semiconductor manufacturing partner (foundry) and the packaging house. Careful design of the entire system – including the SoC, interposer, DRAM and package – is essential to ensure high yield and proper system operation. A high-yielding module is a critical element of keeping costs in check, given the number of expensive components involved: the SoC, multiple HBM die stacks and the interposer.
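The yield economics can be sketched with a simple model: every assembled module consumes a full set of expensive components, so the cost of a good module scales inversely with assembly yield. All cost and yield figures below are hypothetical placeholders.

```python
# Sketch of why module yield dominates cost: a failed assembly scraps
# several expensive known-good components at once.
# All yield and cost figures are hypothetical placeholders.

SOC_COST, HBM_STACK_COST, INTERPOSER_COST = 100.0, 80.0, 25.0  # assumed $
N_STACKS = 4

component_cost = SOC_COST + N_STACKS * HBM_STACK_COST + INTERPOSER_COST

for assembly_yield in (0.99, 0.95, 0.90):
    # Cost per *good* module scales with 1/yield.
    cost_per_good = component_cost / assembly_yield
    print(f"assembly yield {assembly_yield:.0%}: "
          f"${cost_per_good:.0f} per good module")
```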

Even with these challenges, the advantages of bringing increased memory bandwidth and density closer to the CPU clearly improve overall system efficiency for server and networking systems.



1 comment

Steve Casselman says:

What’s the random access performance? Is it better than HMC?
