Part 2: The need grows for co-simulation of signal integrity analysis and more accurate models for DDR simulation.
By Matt Elmore
In Part 1, we reviewed the importance of simultaneous switching output (SSO) timing and the challenges associated with double data rate (DDR) simulation complexity.
DDR memory interfacing has reached incredible levels of performance (17 Gb/s), requiring that noise be precisely quantified and reduced. To account for each noise contributor, we must model the system end-to-end: from the die, through the package/PCB channel, to the receiving memory. As most signal integrity (SI) engineers will attest, previous-generation modeling technology simply doesn’t allow full I/O banks (128+ bits) to be simulated in a reasonable turnaround time. A solution that addresses both the complexity and the accuracy concerns of DDR timing simulation with a range of modeling technologies is critical to successful high-speed interface design.
Accurate, compact reduction of I/O buffer models is necessary for full-channel simulation and fast simulation performance. A typical 28nm I/O cell contains thousands of transistors and tens of thousands of extracted resistive and capacitive components. Consequently, I/O cell complexity is a primary limiting factor in the speed and capacity of SSO simulation. A compact non-linear macro-model of an I/O cell can reduce the node count by orders of magnitude compared to traditional SPICE I/O models. Such a model, which includes the impact of power and ground noise, can deliver SPICE accuracy with full I/O bank simulation capacity and a 5 to 10x speed-up over SPICE, enabling full-channel simulation where only single bytes could be simulated before.
I/O buffer performance is highly susceptible to on-die power noise. I/O buffers firing simultaneously draw current from the power delivery network in sharp increments, causing voltage droop and a shared fluctuation of the effective supply level (Vdd-Vss). To account for this, the power/ground routing of the I/O ring must also be modeled. A chip power model built with power/ground extraction and reduction technology provides a compact model of the resistive, capacitive, and inductive coupling of the I/O ring power grid.
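The supply collapse described above can be estimated with a back-of-the-envelope calculation: N buffers switching together multiply both the inductive L·dI/dt term and the resistive IR term of the shared power path. The sketch below illustrates the scaling; all component values and buffer currents are illustrative assumptions, not figures from any particular design.

```python
# Back-of-the-envelope SSO supply droop: N output buffers switching
# simultaneously draw current through a shared package power path.
# All numbers below are illustrative assumptions, not measured values.

def sso_droop(n_buffers, di_dt_per_buffer, i_per_buffer, l_pkg, r_pkg):
    """Worst-case collapse of the effective supply (Vdd - Vss):
    inductive term L*dI/dt plus resistive IR drop, both scaling with N."""
    di_dt_total = n_buffers * di_dt_per_buffer
    i_total = n_buffers * i_per_buffer
    return l_pkg * di_dt_total + r_pkg * i_total

# 32 buffers, each ramping 10 mA over 1 ns, sharing a 0.5 nH / 10 mOhm path
droop = sso_droop(32, 10e-3 / 1e-9, 10e-3, 0.5e-9, 10e-3)
print(f"Supply droop = {droop * 1e3:.1f} mV")  # prints "Supply droop = 163.2 mV"
```

Even with these modest assumed values the droop is a meaningful fraction of a modern I/O supply, which is why the I/O ring power grid must be part of the simulated model rather than an ideal source.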
The communication channel from the ASIC to the DDR memory requires broadband-accurate modeling of both the signal lines and the power and ground domains. Typically, 3D extraction of the package and PCB channel produces an S-parameter model, which cannot be optimized and is susceptible to convergence issues in the time domain. A channel builder that instead produces a full-wave equivalent SPICE model of the channel can exploit the signal integrity concept of victims and aggressors: when solving for a particular victim, non-contributing aggressors may be optimized out of the channel model, reducing turnaround time.
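The aggressor-pruning idea can be sketched as a simple threshold filter over per-net coupling estimates: nets whose worst-case coupling to the chosen victim falls below a noise budget are dropped before netlisting, shrinking the model SPICE must solve. The net names, coupling coefficients, and threshold below are hypothetical placeholders, not values from any real channel.

```python
# Sketch of aggressor pruning for a channel model. Each entry maps an
# aggressor net to an estimated peak coupling coefficient at the victim;
# nets below the noise-budget threshold are optimized out of the model.
# All names and values are hypothetical placeholders.

def prune_aggressors(coupling_to_victim, threshold):
    """Split aggressor nets into those kept in the channel model and
    those dropped as non-contributing, based on a coupling threshold."""
    kept = [net for net, k in coupling_to_victim.items() if k >= threshold]
    dropped = [net for net, k in coupling_to_victim.items() if k < threshold]
    return kept, dropped

coupling = {"DQ1": 0.08, "DQ2": 0.03, "DQ3": 0.009, "DQS": 0.12, "A14": 0.002}
kept, dropped = prune_aggressors(coupling, threshold=0.01)
print("simulate with:", kept)     # DQ1, DQ2, DQS remain in the model
print("optimized out:", dropped)  # DQ3, A14 removed to cut node count
```

The payoff is that the time-domain simulation only carries the nets that can actually disturb the victim, which is what makes full-channel turnaround times practical.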
SPICE is still the de facto simulation engine for DDR signal and power integrity analysis. A tool such as Apache’s Sentinel-SSO solution bundles these modeling technologies in a single interface designed for signal integrity. The netlisting and connection of the models is automated by the tool and fed into the SPICE simulator of choice. After simulation, the timing, jitter, and noise waveforms and metrics are displayed for review. Full-channel SSO simulations of the chip, package, and PCB were previously unattainable; with today’s modeling technology, designers can sign off their systems within reasonable turnaround times.
This article is Part 2 of a discussion about the growing need for co-simulation of signal integrity (SI) analysis and addressing DDR memory simulation complexity with accurate and compact models.
—Matt Elmore is a principal application engineer at Apache Design (a subsidiary of ANSYS).