Engineering The Signal For GDDR6

Advanced extraction tools and accurate models are key to a successful GDDR6 implementation.


DDR1 through DDR3 had their challenges, but data rates stayed below one gigabit per second, and signal integrity (SI) work centered on static timing analysis and pseudo-random binary sequence (PRBS) simulations. Now, with GDDR6, we are working with 16 to 20 gigabit-per-second (Gbps) signaling, with even faster rates on the near horizon. As a result, engineering the signal for GDDR6 requires careful simulation of the entire system at low bit error rates (BERs), including power, jitter, and circuit equalization.
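For earlier DDR generations, a PRBS pattern was often stimulus enough. As a rough illustration of what such a stimulus looks like, here is a minimal PRBS-7 generator in Python, using the common x^7 + x^6 + 1 polynomial (a sketch only, not any particular tool's pattern generator):

```python
def prbs7(n_bits, seed=0x7F):
    """Generate n_bits of a PRBS-7 sequence (x^7 + x^6 + 1 LFSR)."""
    state = seed & 0x7F
    out = []
    for _ in range(n_bits):
        # Feedback is the XOR of taps at bit positions 7 and 6 (1-indexed)
        fb = ((state >> 6) ^ (state >> 5)) & 1
        out.append(fb)
        state = ((state << 1) | fb) & 0x7F
    return out

seq = prbs7(254)  # two full periods of the 127-bit sequence
```

A maximal-length LFSR like this repeats every 2^7 - 1 = 127 bits, which is why single-pattern PRBS simulation stops being sufficient once equalization and low-BER statistics enter the picture.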

What it all boils down to is applying the same quality of engineering to signal and power integrity (PI) that goes into designing such systems, including the device package, printed circuit board (PCB), connectors, and associated assemblies. Another factor is how early SI engineers can influence the circuit design, so that the features and enablers needed for robust signaling are built into the controller from the start.

There is no question about it: implementing GDDR6 in an SoC or a system is a challenge. And that's where savvy SI and PI engineers come in. They must perform a considerable number of precise simulations to ensure every bit of noise is simulated and accounted for.

For instance, significant effort needs to go into the bump and BGA ball-out definition to reduce crosstalk. Questions have to be asked about how the package is designed to minimize and mitigate reflection and crosstalk. You also have to ask: How is the PCB designed? How are the vias and plated through-hole (PTH) vias placed? What type of transmission line is selected for each different type of signal?

Furthermore, you need to fully understand the mechanisms and behavior of the different kinds of jitter in GDDR6 and how to simulate or budget for them. Then, you have to determine what role simultaneous switching noise (SSN) plays in vertical eye closure and how to simulate power supply induced jitter (PSIJ).
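As a back-of-the-envelope illustration of what "budgeting" for jitter means, the sketch below scales random jitter to a target BER (using the usual dual-Dirac style Q multiplier) and sums it with the deterministic terms. Every number here is hypothetical, chosen only to show the arithmetic:

```python
UI_PS = 62.5   # one unit interval at 16 Gbps, in picoseconds
Q_BER = 7.03   # approx. per-edge sigma multiplier for BER = 1e-12

rj_rms_ps = 0.6                  # assumed random jitter, rms
dj_pp_ps = {"ISI": 4.0,          # assumed deterministic jitter terms,
            "PSIJ": 3.0,         # peak-to-peak (all values illustrative)
            "SSN/crosstalk": 2.5}

# Total jitter: RJ scaled to the BER target plus linear sum of DJ terms
tj_pp_ps = 2 * Q_BER * rj_rms_ps + sum(dj_pp_ps.values())
eye_width_margin_ps = UI_PS - tj_pp_ps
```

In practice the terms are not independent (SSN modulates PSIJ, for instance), which is exactly why full-system simulation is needed rather than a spreadsheet budget alone.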

All layouts and channels must go through heavy-duty extraction and simulation. Transmission line models are no longer sufficient. Instead, high-frequency, full-wave 3D extraction tools, which can produce accurate models up to 50GHz and beyond, may need to be used. In the GDDR6 era, you have to learn these tools and be able to use them effectively.

Running these advanced extraction tools and generating accurate models that can predict crosstalk and reflection is the key to a successful GDDR6 implementation at the system level. Furthermore, these models need to be correlated with lab measurements using vector network analyzers (VNAs). That’s step one.
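The correlation step typically compares quantities such as insertion loss between the extracted model and the VNA measurement. As a trivial sketch of that bookkeeping (the S21 value below is invented, not real channel data), converting a complex S21 sample into insertion loss in dB looks like:

```python
import cmath
import math

def insertion_loss_db(s21):
    """Insertion loss in dB from a complex S21 sample."""
    return -20.0 * math.log10(abs(s21))

# Hypothetical S21 at the Nyquist frequency: |S21| = 0.5, phase -120 deg
s21_nyq = cmath.rect(0.5, math.radians(-120.0))
il_db = insertion_loss_db(s21_nyq)   # about 6.02 dB
```

Real correlation work sweeps this across frequency for every port pair and also compares return loss and crosstalk terms, but the unit of comparison is the same.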

Step two involves simulation tools, or the lack thereof, since there aren't many tools and flows available to address GDDR6's single-ended simulation environment. Only a few channel simulation tools are available for single-ended DDR4 and DDR5, and these can also be utilized for GDDR6 simulations.

Even these tools are missing certain features that are important for source-synchronous channel simulations. An example is a sampling receiver that allows the data to be clocked (strobed) using a non-ideal clock, one that carries the correct loss, noise and jitter.
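The distinction matters because an ideal strobe hides all of the noise on the clock path. A toy sketch of the effect (all values assumed; a real tool would derive the strobe waveform through the same channel models as the data):

```python
import random

random.seed(7)                  # fixed seed, for a deterministic example
UI_S = 62.5e-12                 # unit interval at 16 Gbps, seconds
STROBE_RJ_RMS_S = 0.5e-12       # assumed rms jitter on the strobe path

def strobe_sample_times(n_bits):
    """Sampling instants: ideal UI centers displaced by strobe jitter."""
    return [(k + 0.5) * UI_S + random.gauss(0.0, STROBE_RJ_RMS_S)
            for k in range(n_bits)]

times = strobe_sample_times(8)
```

With an ideal clock, every sample lands exactly at the UI center; with a non-ideal strobe, each sampling instant wanders, which directly trades against the horizontal eye margin.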

The new equalization options now applied to single-ended source-synchronous buses, such as decision feedback equalizers (DFEs) and continuous time linear equalizers (CTLEs), require the kind of behavioral modeling that AMI models provide for SerDes. However, AMI models are designed for differential signaling, and there are no single-ended versions. The IBIS committee is working on this, but currently there is no solution.

So GDDR6 signal integrity simulations are left to the ingenuity and experience of the SI/PI engineer. One solution is to use mathematical tools such as MATLAB, or to apply the ideal DFE or other equalization available in some generic simulators. That way, you can at least estimate the performance of the link and allow additional budget to cover the difference between the ideal and the actual design.
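A minimal version of that ideal-DFE estimate can be sketched from a sampled pulse response (one sample per UI). An ideal N-tap DFE cancels the first N post-cursors exactly, so the residual ISI is just the pre-cursors plus the uncovered tail. The pulse-response values below are hypothetical:

```python
# Hypothetical sampled pulse response, one sample per UI:
# [pre-cursor, main cursor, post-cursor 1, post-cursor 2, post-cursor 3]
pulse = [0.05, 1.00, 0.30, 0.12, 0.04]
MAIN = 1
N_TAPS = 2

# Ideal DFE tap weights equal the post-cursors they cancel
taps = pulse[MAIN + 1 : MAIN + 1 + N_TAPS]

# Worst-case residual ISI: every cursor except the main one and
# the cancelled post-cursors
residual = sum(abs(c) for i, c in enumerate(pulse) if i != MAIN) \
           - sum(abs(t) for t in taps)
eye_opening = pulse[MAIN] - residual
```

A real DFE has finite tap resolution, latency, and error propagation, so the gap between this ideal estimate and silicon behavior is exactly the margin the text suggests setting aside.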

Step three: once SI and system designers have a good handle on the extraction tools and have an accurate simulation environment that is correlated with measurements, it's time to determine the correct route to take for implementation. The GDDR6 roadmap has branched out from graphics into several directions, each focused on the end application, whether it's networking, automotive, AI, or another flavor of system design.

An easy way to differentiate the system applications is based on their PCB and device packaging requirements. For example, most networking applications use thick PCBs with large numbers of layers.

These thick, multi-layer boards require PTHs to be placed close to one another (near the BGA fanout), which can induce a considerable amount of crosstalk. There are certain package and PCB design rules that allow for better spacing and shielding between vias and PTHs, and these rules need to be followed to make packages work better for networking applications. Another option is a BGA ball-out that allows for staggered or shielded PTHs under the package to reduce crosstalk.

The requirements of GDDR6 automotive (self-driving) applications are different. PCBs for general automotive applications typically have six, maybe eight layers, though more complex applications can go up to 16 layers. These boards tend to use less expensive materials (with higher dielectric loss), and traditionally the lower-cost PTH vias have been acceptable. Now, however, with the high data rates of GDDR6, better dielectric materials and blind and buried vias or back drilling are encouraged to help mitigate insertion loss, crosstalk, and stub resonance.
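To see why via stubs matter at GDDR6 rates, the first stub resonance can be approximated with a quarter-wave formula, f ≈ c / (4·L·√Dk). The stub lengths and dielectric constant below are illustrative:

```python
import math

C_MM_PER_S = 2.998e11   # speed of light, in mm/s

def stub_resonance_ghz(stub_len_mm, dk_eff):
    """Approximate first quarter-wave resonance of a via stub."""
    return C_MM_PER_S / (4.0 * stub_len_mm * math.sqrt(dk_eff)) / 1e9

short_stub = stub_resonance_ghz(0.5, 3.8)   # back-drilled stub: ~77 GHz
long_stub = stub_resonance_ghz(5.0, 3.8)    # full-length stub: ~7.7 GHz
```

At 16 Gbps the Nyquist frequency is 8 GHz, so a long stub on a thick board can put a resonant notch right in the signal band, while back drilling pushes it well above the frequencies of interest.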

So, you can see that GDDR6 brings a mixed bag of signal integrity issues that demand extra careful scrutiny, the right tools, and a full measure of patience and understanding of the chip-package-system. For a successful outcome, SI and PI need to be co-designed at the heart of the design cycle, from product definition and architecture through to the end of validation and system-level testing.

To learn more about Rambus' GDDR6 solution, download the GDDR6 eBook today.
