SPONSOR BLOG

Signal Integrity’s Growing Complexity

Part 1: Challenges in simulation and how to address them with accurate and compact models.


By Matt Elmore
While in the market for a memory upgrade recently, I was surprised by what is available in commercial DDR memory. You can get 8GB of DDR3 memory, transferring 17GB/s, relatively inexpensively.

The progress in memory design is outstanding. From smartphones to gaming PCs, fast communication between the IC and off-chip memory is key to the performance we demand of these devices. These speeds are enabled by the fast transition times of emerging process technologies and by breakthrough architectural developments in memory interfaces such as DDR and three-dimensional integrated circuits (3D-ICs). As with most technological advancements, DDR performance gains come with challenges.

Signal integrity (SI) and various forms of simultaneous switching output (SSO) simulation of chip-to-chip communication have been around for decades, but the complexity challenges SI designers face today are unique. As clock frequencies increase, timing and noise margins shrink. Smaller, stronger transistors produce faster transition times and more power and signal noise, further eroding design margins. To meet performance specifications, designers must guard every picosecond and optimize each component of the communication channel (chip, package, and PCB) for signal integrity.
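
To put those margins in perspective, here is a back-of-the-envelope timing budget in Python. The data rate matches the DDR3-2133 module mentioned above, but every budget entry is an illustrative assumption, not a measured figure.

    # Illustrative timing-budget arithmetic for a DDR3-2133 interface.
    # The budget entries are generic example values, not figures from this article.
    data_rate = 2133e6                      # transfers per second (DDR: both clock edges)
    unit_interval_ps = 1e12 / data_rate     # ~469 ps per bit
    print(f"Unit interval: {unit_interval_ps:.0f} ps")

    # Every contributor below eats into that same ~469 ps window.
    budget_ps = {
        "tx setup/hold and skew": 150,
        "package/PCB flight-time mismatch": 80,
        "SSO / power-noise induced jitter": 90,
        "crosstalk-induced jitter": 60,
        "receiver setup/hold": 60,
    }
    margin_ps = unit_interval_ps - sum(budget_ps.values())
    print(f"Remaining margin: {margin_ps:.0f} ps")   # only a few tens of ps are left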

SSO timing analysis requires chip-package-system (CPS) co-simulation. Noise starts at the die, where switching I/O cells generate on-chip power noise that degrades the performance of the driver buffers. As the signals travel out through the package and PCB, coupling occurs between signal interconnects, as well as between signal and power/ground (PG) interconnects. When 128 bits of a DDR bus fire simultaneously, the power/ground and coupling noise skews the delays between bits and distorts the waveforms at the far, receiving end. A full CPS signal integrity simulation must include the on-die I/O ring power delivery network (PDN), the package/printed circuit board (PCB) channel with both signal and power routing, and the termination load of the memory with which the chip communicates.

Fig. 1: Signal and power coupling noise affects the channel at the chip, package and PCB levels.
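
For a feel of why 128 simultaneously switching bits stress the power delivery network, a first-order L·di/dt estimate is sketched below. The inductance, current and edge-rate values are assumptions chosen for illustration; a real CPS simulation replaces this single effective inductance with the full die, package and PCB PDN model.

    # First-order SSO noise estimate: V = N * L_shared * di/dt.
    # All numeric values are illustrative assumptions, not extracted data.
    n_switching = 128          # DDR bus bits firing simultaneously
    l_shared = 5e-12           # effective shared power/ground loop inductance (5 pH)
    i_swing = 10e-3            # current swing per driver (10 mA)
    t_edge = 100e-12           # output transition time (100 ps)

    di_dt = i_swing / t_edge                       # A/s per driver
    v_bounce = n_switching * l_shared * di_dt      # collective supply bounce
    print(f"Estimated supply bounce: {v_bounce * 1e3:.0f} mV")   # ~64 mV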

The biggest challenge in CPS SI co-simulation is managing the complexity. Starting with the on-chip I/O driver buffers, detailed SPICE models are too complex for a full I/O bank simulation; the runtime for even one or two bytes can be extremely long. Modeling the on-die PDN is essential to capture the coupling between the switching I/Os, yet these power grids can be quite dense, with millions of nodes making up the network. The channel model connecting the application chip to the memory is another source of complexity. For complete accuracy, package and PCB models require 3D extraction of both power and signal nets with full-wave electromagnetic tools. The typical output format is S-parameters, which bring convergence issues given the number of ports implied by 64- or 128-bit bus widths.
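
To make the port-count problem concrete, the short calculation below shows how quickly an S-parameter model grows for a 128-bit bus. The per-net and power/ground port counts are hypothetical; actual packages and boards vary.

    # Rough illustration of why broadband S-parameter channel models get unwieldy.
    # Port-count assumptions are hypothetical.
    signal_bits = 128
    ends_per_net = 2                 # a die-side and a memory-side port per signal
    pg_ports = 64                    # assumed lumped power/ground ports

    n_ports = signal_bits * ends_per_net + pg_ports
    freq_points = 2000               # broadband sweep, DC to tens of GHz

    entries = n_ports * n_ports * freq_points      # one complex S-matrix per frequency
    size_gb = entries * 16 / 1e9                   # complex double = 16 bytes
    print(f"{n_ports} ports -> {entries / 1e6:.0f}M complex entries (~{size_gb:.1f} GB raw)")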

One possible way to address the complexity of SSO simulation is simply to divide the simulation into byte-sized pieces. However, this approach ignores the coupling between adjacent bytes, which can be fatally optimistic considering how tightly packed the bus routing is. An accurate simulation that properly reflects the timing characteristics of the interface requires a full CPS simulation framework for the entire I/O bank.
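
The sketch below illustrates, with a synthetic coupling matrix, what byte-wise partitioning throws away: every off-diagonal block, i.e. the coupling between bits that sit in different bytes, is silently zeroed. The numbers are made up purely for illustration.

    import numpy as np

    # Toy 16-bit (two-byte) normalized coupling matrix; coupling is assumed to
    # fall off with the distance between bit positions. Values are synthetic.
    bits = 16
    idx = np.arange(bits)
    coupling = 1.0 / (1.0 + np.abs(idx[:, None] - idx[None, :]) ** 2)

    # Byte-wise partitioning keeps only the 8x8 blocks on the diagonal.
    byte = idx // 8
    partitioned = np.where(byte[:, None] == byte[None, :], coupling, 0.0)

    ignored = coupling - partitioned
    print(f"Fraction of total coupling ignored: {ignored.sum() / coupling.sum():.1%}")
    # Bits 7 and 8 are physical neighbors but land in different bytes:
    print(f"Dropped neighbor coupling between bits 7 and 8: {coupling[7, 8]:.2f}")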

Given the simulation size, it is critical to optimize the models while maintaining accuracy. Advances in modeling are making 128-bit CPS SSO simulations possible. New behavioral models accurately capture both the signal and power characteristics of the I/O cells in a compact form, with vast improvements in runtime; prior models have struggled to represent power noise accurately when the I/Os switch. For on-die PDN modeling, long-standing technology such as chip power models can be leveraged to deliver a compact, reduced representation of the power grid. Advances in channel model reduction retain the broadband validity of the S-parameters while creating a compact equivalent model, making convergence possible. In Part 2 of this article, we will discuss these modeling advancements in detail.
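
Ahead of Part 2, the snippet below shows the general idea behind channel model reduction using the open-source scikit-rf package's vector-fitting routine, which replaces tabulated S-parameters with a compact pole-residue equivalent. This is a generic sketch, not the commercial flow discussed here; the file name channel.s8p and the pole counts are assumptions.

    # Generic sketch of rational (vector) fitting of an extracted channel model,
    # assuming scikit-rf is installed and 'channel.s8p' exists.
    import skrf
    from skrf.vectorFitting import VectorFitting

    channel = skrf.Network('channel.s8p')            # hypothetical extracted channel

    vf = VectorFitting(channel)
    vf.vector_fit(n_poles_real=4, n_poles_cmplx=12)  # fit a compact pole-residue model
    print(f"RMS fit error: {vf.get_rms_error():.2e}")

    # Export the fitted model as a SPICE subcircuit for transient co-simulation.
    vf.write_spice_subcircuit_s('channel_fit.sp')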

—Matt Elmore is a principal application engineer at Apache Design, a subsidiary of ANSYS.


