Multi-DRAM Memory Subsystems In SoCs

When combining multiple individual DRAMs to create higher density memories, watch out for these requirements.


Even as DRAM capacity grows with each generation, the memory density demanded by a variety of applications is growing at an even faster rate. To support these high density and bus width requirements (typically more than a single DRAM can provide), almost all new-generation memory subsystems and SoCs combine multiple DRAM dies to effectively create higher density memories with a wider bus than an individual DRAM. There are many ways in which multiple individual DRAMs can be combined into a higher effective density. One of the most common is to build DIMMs with DRAMs organized in a defined structure that has multiple Ranks, each Rank containing multiple component DRAMs. Other options include die stacking, with multiple DRAM dies placed on top of each other to create 3DS SDRAM (like DDR4/DDR5 3DS); those stacks can in turn be used as the DRAM components of DIMMs.
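The capacity and bus-width arithmetic behind these configurations is simple to sketch. The helper below is illustrative only (the function and parameter names are not from any specification): Ranks share the data bus, so bus width comes from one Rank's components, while capacity multiplies across stack height, components per Rank, and Rank count.

```python
# Hypothetical sizing helper for a multi-DRAM configuration.
# All names are illustrative, not taken from a JEDEC specification.

def effective_config(die_density_gb, die_width_bits, dies_per_stack,
                     components_per_rank, ranks):
    """Return (total capacity in Gb, data bus width in bits).

    Ranks share the same data bus, so the bus width is set by one
    Rank's components; capacity multiplies across all dimensions.
    """
    capacity_gb = die_density_gb * dies_per_stack * components_per_rank * ranks
    bus_width = die_width_bits * components_per_rank
    return capacity_gb, bus_width

# Example: a 2-Rank DIMM with 8 x8 components per Rank,
# built from 16 Gb monolithic dies:
cap, width = effective_config(16, 8, 1, 8, 2)
# -> 256 Gb (32 GB) total, 64-bit data bus
```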

With a multi-DRAM system, there are several things the system designer/Host must consider for the system to work as expected, in addition to the requirements that apply to a single DRAM. These are vital for the memory subsystem to perform its desired function.

Here is a list of some of the most important considerations for multi-DRAM systems:

  1. Typical multi-Rank systems share the same ZQ Resistor (which may be external). In such cases, the SDRAM controller has to ensure that ZQ Calibration commands don’t overlap. The controller should also make sure that the ZQ Latch has completed before the Data phase of any Read/Write command, even one to a non-Target Rank.
  2. Read/Write Data phases to DRAMs in different Ranks should be separated by enough time to ensure there is no data collision/contention, which can lead to incorrect Data being sampled by the Host for Reads and by the DRAM for Writes. It can even lead to DRAM failure/burnout.
  3. The Host has to make sure the On-Die Termination and Non-Target On-Die Termination are set appropriately for both the Target and non-Target Ranks. Setting the right combination of ODT and NT-ODT is critical for a multi-Rank system to work optimally.
  4. For memory systems with non-homogeneous memories, Read/Write latencies can differ from the case where all memories in the subsystem are of the same type. For example, mixed LPDDR5 packages configured with both x16 and Byte-Mode dies support Byte-Mode latency parameters only, even though some DRAMs are x16.
  5. In a multi-Rank/channel system sharing the command bus, the Host should train the terminated die first, followed by the non-terminated die(s).
  6. For efficient module power supply design, the maximum number of DRAM dies that can be in simultaneous or overlapping activity is typically limited, and the Host should be aware of these limits.
  7. For 3DS/Stacked Dies, the Host has to adhere to additional timings when accessing different logical Ranks, which typically share the Mode Registers.
  8. The Host has to account for Temperature-dependent Refresh and other timing requirements of different DRAMs, as the Temperature can vary from one DRAM to another.
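Items 1 and 2 above are, at their core, scheduling constraints the controller (or a verification check) can evaluate over command timelines. The sketch below shows two such checks under stated assumptions: event windows are measured in controller clock cycles, and the parameter name `t_rtrs` (rank-to-rank turnaround gap) is illustrative rather than quoted from a specification.

```python
# Minimal sketch of two controller-side checks for a multi-Rank system:
# (1) ZQ Calibration windows on Ranks sharing one ZQ Resistor must not
#     overlap, and (2) consecutive data bursts to *different* Ranks must
#     be separated by a rank-to-rank turnaround gap to avoid bus
#     contention. Names and units here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Event:
    rank: int
    start: int   # window start, in controller clock cycles
    end: int     # window end, in controller clock cycles

def zq_cals_overlap(cals):
    """True if any two ZQ Calibration windows on the shared resistor overlap."""
    cals = sorted(cals, key=lambda e: e.start)
    return any(a.end > b.start for a, b in zip(cals, cals[1:]))

def rank_switch_violation(bursts, t_rtrs):
    """True if data bursts on different Ranks are closer than t_rtrs cycles."""
    bursts = sorted(bursts, key=lambda e: e.start)
    return any(b.start - a.end < t_rtrs
               for a, b in zip(bursts, bursts[1:]) if a.rank != b.rank)

# Two ZQ Calibrations that overlap in time on a shared resistor:
print(zq_cals_overlap([Event(0, 0, 512), Event(1, 400, 912)]))   # True
# A Rank switch with only a 2-cycle gap when 4 cycles are required:
print(rank_switch_violation([Event(0, 0, 8), Event(1, 10, 18)], 4))  # True
```

A real controller would enforce these constraints when scheduling commands rather than checking after the fact, but the same interval logic applies either way.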

As memory subsystems evolve to meet growing memory demands, memory standards are also evolving to define the requirements and expectations of the multi-DRAM environment. The JEDEC DDR5 and LPDDR5 memory specifications are good examples of this trend, with extensive requirements and usage considerations included in the standards.

Cadence MMAV VIPs for DDR5 SDRAM, DDR5 DIMM, LPDDR5/LPDDR5X, and LPDDR5 Multi Die Package are VIP solutions that support all of the features listed above for a multi-DRAM system.

More information on Cadence DDR5 and LPDDR5 VIP is available at the Cadence VIP Memory Models website.
