Understanding how to correctly optimize communication within a system—including IC, package, board and interconnects—is increasingly critical.
By Ann Steffora Mutschler
Managing the electrical behavior of signal paths between IC, package, board and system is no small task, and it’s only growing in complexity. Understanding how to correctly optimize communication within a system is critical, because I/O power is becoming a significant portion of overall chip power as both the number of bits and the speed at which they operate continue to increase.
To rein in that power, engineering teams are reducing supply voltages on SoCs, so an interface that used to operate at, say, 1.8 volts now operates at 900 mV. That leaves far less headroom, making designs much more sensitive to noise fluctuations on the supply.
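The arithmetic behind that sensitivity is straightforward. A minimal sketch, assuming a hypothetical 100 mV noise event (a value chosen for illustration, not taken from this article):

```python
# Back-of-the-envelope: the same absolute noise excursion consumes a much
# larger share of the margin on a lower supply rail. The 100 mV noise
# figure is an assumed example, not a number from the article.
NOISE_V = 0.100  # hypothetical 100 mV noise event

for vdd in (1.8, 0.9):
    print(f"VDD = {vdd:.1f} V: 100 mV of noise is {NOISE_V / vdd:.1%} of the rail")
```

The same disturbance that eats about 5.6% of a 1.8 V rail eats about 11% of a 900 mV rail, which is why halving the supply more than halves the noise margin in practice.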
Another problem is interface width. A DDR interface is no longer just 16 bits wide; it’s now 128 bits.
“When you have 128 bits and 110 of those 128 bits are switching, they will draw a lot more current than, say, 2 of the bits switching,” said Aveek Sarkar, vice president of product engineering and support at Apache Design. “So you have 2 drivers switching versus 110 drivers switching, and the amount of current drawn out of the package and the board is going to be significantly different. The voltage drop in one case is going to be much different than in the other. When you have two bits switching, let’s say you have a 20 mV drop, and then you have 110 bits switching with a 300 mV drop. The signal propagation is going to be affected. If you have a 20 mV drop, the signal goes much faster compared to when you have a 300 mV drop, when the signal slows down considerably. This variability in the signal transmission is what people dread. If it came in slow all the time, that would be fine; you can plan for it,” he said.
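The timing impact Sarkar describes can be sketched with the classic alpha-power-law delay model, in which gate delay scales roughly as V/(V − Vt)^α. Plugging in his 20 mV and 300 mV droops (with an assumed 0.9 V rail, 0.3 V threshold and α = 1.3, none of which come from the article) shows the spread:

```python
# Alpha-power-law delay model: gate delay ~ V / (V - Vt)**ALPHA.
# VDD, VT and ALPHA are illustrative assumptions, not values from the article.
VDD, VT, ALPHA = 0.9, 0.3, 1.3

def delay(v: float) -> float:
    """Relative gate delay at supply voltage v (arbitrary units)."""
    return v / (v - VT) ** ALPHA

for droop_mv in (20, 300):
    slowdown = delay(VDD - droop_mv / 1e3) / delay(VDD)
    print(f"{droop_mv:>3} mV droop -> {slowdown:.2f}x nominal delay")
```

With these assumed parameters, the 20 mV droop costs about 2% in delay while the 300 mV droop costs roughly 60%, which is exactly the data-dependent variability that is so hard to plan for.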
Historically, engineering teams treated noise on the I/O interface purely as a signal integrity problem, because the voltage drop effect was less important when supply voltages were much higher and bit counts were lower. “What they would do is match the impedance to make sure that every single line on every single driver sees the same impedance from one driver to another,” Sarkar explained. “If the impedance is matched, then everybody sees the same sort of coupling, so from a signal integrity point of view it is addressed.”
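The matching Sarkar describes is about suppressing reflections: the reflection coefficient Γ = (ZL − Z0)/(ZL + Z0) goes to zero when the load matches the line. A minimal sketch, with illustrative impedance values:

```python
# Reflection coefficient at an impedance discontinuity:
# gamma = (ZL - Z0) / (ZL + Z0); a matched load (ZL = Z0) reflects nothing.
def reflection_coefficient(z_load: float, z0: float = 50.0) -> float:
    return (z_load - z0) / (z_load + z0)

for zl in (50.0, 40.0, 75.0):  # illustrative load impedances in ohms
    print(f"ZL = {zl:5.1f} ohm -> gamma = {reflection_coefficient(zl):+.3f}")
```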
From a power integrity perspective, however, many buffers switching versus few buffers switching is a different analysis altogether. Matching the impedance does not take care of that, and designers have only lately been waking up to the fact, Sarkar noted.
Another important consideration is the PHY, which talks to the outside world (the board, the interconnect, the connectors and cables, etc.). Just meeting the base specification of the PHY is no longer good enough because it is specified almost in isolation. The relationship between the IC, package, board and system must be understood.
“Traditionally on the protocol side, if you are looking at, say, PCI Express Gen 3, USB 3.0 or DDR, the electrical specifications are defined by the protocol, and they define what is required to meet electrical compliance,” said Navraj Nandra, senior director of marketing for DesignWare Analog and MSIP at Synopsys. “But in some applications, such as a SerDes or PCI Express, you’ve got a channel, and that channel can be a backplane in a line card in a big rack-mounted system, or it can be chip-to-chip. People are using high-speed PHYs to communicate chip-to-chip, or it could be portside: you’ve got the PHY on the line card, and on the other side you’ve basically got a big cable that runs over a long distance.”
However, Nandra noted, none of that is actually captured in the electrical specification for compliance. “So you might get the compliance certification for your piece of IP, but it may not necessarily support the channels, because the channels in the system can extend beyond the reach afforded by the standard. That’s the key point: The standard doesn’t necessarily reflect what happens in real systems. It’s not enough that the PHY meets the specification in isolation. You have to understand how the PHY or the SerDes is used in the customer’s environment, how it’s used in the ASIC, how it is going to be packaged, how the PHY interfaces across the different channels, across the different links, the PCB traces, the connectors, the cables, whether it’s optical or copper, and so on.”
Bradley Brim, product engineer in the silicon-package-board division at Cadence, agreed. “If I am building a chip and I need to verify my high-speed I/O drivers and how they behave when I actually put this in the package and sell it to a customer, I am performing chip-centric simulation—but it is a system simulation. On the other side, if you are an OEM who has purchased this chip, you are doing system simulation to see if your DDR bus has excessive noise and things like that. So ‘system’ can really mean different things to different people.”
The path forward
To address this complexity, engineering teams typically turn to SPICE simulations, Sarkar noted.
“The most common approach is they’ll take, say, one byte of it, or even a couple of bits, and they will switch a couple of those drivers and keep one of them quiet. The ones that are switching they call the aggressors, and the one that is quiet they call the victim. They will see what voltage drop noise they are injecting on the victim, and how it changes when all of the aggressors are switching versus one of them switching. The problem with something like that is when people simulate a few of the buffers, or one byte, they are not able to predict what happens when 128 bits are present, because the amount of current you draw if you switch two buffers versus 128 buffers is very different. And you cannot really scale that. You cannot say that the voltage drop is going to be 10 mV when one switches and it’s going to be 100 mV if 10 of them are switching; it does not scale like that,” he continued.
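One intuition for why it doesn’t scale: switching drivers draw their current from the already-drooping rail, so the droop partially self-limits as more bits switch. A toy self-consistent model (the per-driver conductance and shared PDN resistance below are assumptions for illustration, not Apache’s numbers) reproduces that sub-linear behavior:

```python
# Toy model: each switching driver draws I = G * (VDD - V_drop) through a
# shared PDN resistance, so V_drop = N*G*R*(VDD - V_drop). Solving gives
# the closed form below. G and R_SHARED are illustrative assumptions.
VDD = 0.9        # volts, the 900 mV rail mentioned earlier
G = 0.05         # effective per-driver conductance in siemens (assumed)
R_SHARED = 0.1   # shared package/board PDN resistance in ohms (assumed)

def supply_droop(n_bits: int) -> float:
    """Self-consistent supply droop when n_bits drivers switch together."""
    k = n_bits * G * R_SHARED
    return VDD * k / (1.0 + k)

for n in (1, 2, 16, 110, 128):
    print(f"{n:>3} bits switching -> {supply_droop(n) * 1e3:6.1f} mV droop")
```

With these assumed values, 2 bits produce roughly 9 mV of droop while 110 bits produce about 320 mV, in the neighborhood of Sarkar’s figures. The per-bit droop at 110 bits is far below the per-bit droop at 2 bits, which is precisely why extrapolating from a two-bit simulation misleads.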
Ideally, the entire bank of I/Os needs to be simulated in one shot, Sarkar said. “Not only that, but you have to have the rest of the environment present, which is the I/O ring itself, because it has a lot of parasitics from the power grid routing. You may not be putting enough decap next to these buffers. You may have some missing or weak routing coming from the pads, coming from the bumps to these pads. The I/O ring parasitics have to be considered, and then on top of that the package and the board have to be considered; not only the signal routing parasitics but the power/ground routing parasitics and the coupling that exists between them. The coupling between power and ground, power to signal, signal to ground and signal to signal—all of those are couplings we have to consider.”
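A sketch of that bookkeeping (the net classes and the partial model below are hypothetical): enumerate every coupling pair the extraction must cover, then flag whatever the model is missing.

```python
# Completeness check: every pairwise coupling Sarkar lists (power-ground,
# power-signal, signal-ground, signal-signal, plus self terms) must appear
# in the extracted model. The partial model below is hypothetical.
from itertools import combinations_with_replacement

required = set(combinations_with_replacement(("power", "ground", "signal"), 2))
extracted = {("power", "ground"), ("signal", "signal")}  # assumed partial model

for pair in sorted(required - extracted):
    print("missing coupling:", " <-> ".join(pair))
```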
Dave Wiens, business development manager for the System Design Division at Mentor Graphics, believes the best route is modeling with a combination of 2D and 3D technology to accurately represent the signal path from chip to chip—through the bond wire into the package, through the package interconnect, out the back of the package to the BGA ball, onto the board, across the board and back up again. “And, of course, often that signal isn’t traveling just on one board. It’s going across multiple boards. So you have to be able to model connectors accurately, as well, in that system interconnect.”
As for when to use 2D versus 3D models, Dave Kohlmeier, product line director for analysis products at Mentor Graphics, said that in a perfect world with infinite compute resources it would be possible to model everything in 3D. That’s far from reality, though. “The place to use 3D is wherever the return current is not totally in line with the outbound signal path, so that would be any place there is a via or pins from a connector, for example.” The rest can be modeled using 2D transmission line/planar models, which are accurate to 50 GHz and run much faster.
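Kohlmeier’s rule of thumb can be written down directly: route any structure where the return current departs from the signal path to a 3D solver, and leave the rest to faster 2D models. The segment taxonomy and helper below are hypothetical, purely to illustrate the partitioning:

```python
# Partitioning sketch: 3D full-wave where the return path diverges from the
# signal path (vias, connector pins); faster 2D planar models elsewhere.
# The segment types here are a hypothetical taxonomy for illustration.
NEEDS_3D = {"via", "connector_pin"}

def pick_solver(segment_type: str) -> str:
    """Choose a solver for one segment of the chip-to-chip signal path."""
    return "3D full-wave" if segment_type in NEEDS_3D else "2D planar (to ~50 GHz)"

for seg in ("microstrip", "via", "stripline", "connector_pin"):
    print(f"{seg:>13}: {pick_solver(seg)}")
```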
Toward concurrent design
As for the futuristic dream of an integrated co-design system flow, Wiens said progress depends on whether the SoC is actually being co-designed with the rest of the system.
“In that situation, it’s certainly possible and desirable to do it that way,” he added. “The question is the amount of data. If you look at the pure volume of data at the SoC level and the number of pins, let alone as everything goes out into the package and bond wires and everything like that, being able to model what you see on one signal path is one thing. Now multiply that by a few thousand and you see the problem with trying to accurately simulate that. That’s where defining a good model and setting up good constraints to drive what happens on the board become important. Just thinking about designing an SoC at the same time as the rest of the system and modeling it all concurrently—that gets to be a bit of a scope thing.”