The Deafening Problem Of High-Speed I/O

Lowering the power makes components more susceptible to noise; good analysis remains the best approach.


By Ann Steffora Mutschler
The performance of digital systems today is limited by the interconnection bandwidth between chips, boards, and cabinets. This has driven I/O speeds up into the gigabits per second. While this boosts performance, it also opens the door to a host of new problems within the chip, board, and system. Add low-power requirements to the mix and it is a recipe for huge headaches.

One of the most rapidly emerging problems with high-speed I/O is the integrity of the power being delivered to the ICs. “In attempts to lower power while increasing the speed, power voltages are going lower even to sub-1V voltage levels. As the voltages become lower, the tolerances become more difficult to meet. Therefore the ability to deliver sufficient and clean power to the ICs is an ever-increasing problem,” explained Patrick Carrier, technical marketing engineer in the systems design division of Mentor Graphics.

Noise matters, too. Lowering power is like filling a room with equipment that is more sensitive to noise. “When you go into process nodes of 28nm, for example, you are now down to core voltage supplies of about 1V or 0.9V, and when you are operating at these really low voltages you universally have a problem with noise because you are dealing with threshold voltages that have a much lower difference. The noise coming in, whether it is from the core or from the high-speed I/Os, is the same or larger in absolute terms, so in proportion to the voltage it is bigger,” noted Eric Huang, product marketing manager for USB digital cores at Synopsys.
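To see the proportionality Huang describes in rough numbers, consider a fixed amount of coupled noise against shrinking supply rails. The 50mV figure in this sketch is an arbitrary assumption for illustration, not a number from Synopsys.

```python
# Illustrative arithmetic only: a fixed amount of coupled noise is a bigger share of a lower rail.
noise_mv = 50.0  # assumed 50 mV of coupled noise, an arbitrary example value

for supply_v in (1.8, 1.2, 0.9):
    fraction = (noise_mv / 1000.0) / supply_v
    print(f"{supply_v:.1f} V supply: {noise_mv:.0f} mV of noise is {fraction:.1%} of the rail")
```

The same 50mV that was under 3% of a 1.8V rail becomes more than 5% of a 0.9V rail, eating most of a typical tolerance band.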

EDA consultant Les Spruiell agreed. “The biggest problem with high-speed I/O is driving something that is off-chip, which means there is a lot of impedance. It requires a lot of power to get nice, sharp, clean edges. People build up their chip in different voltage domains, so the I/O will typically have a higher voltage than the core logic. As the power to the core logic is lowered, the overall power usage of the system drops. However, this still creates an enormous amount of noise, especially at gigabit transfer rates, because when the switching is happening that fast the noise is banging on the overall power. If you are not careful it will get back up through and into your core logic. As the voltage in the core logic drops, the same amount of noise is worse.”

Compounding this problem is the fact that many different levels of voltage (and grounds) must be delivered to the ICs forming very complex power distribution networks (PDNs) on the PCB. “We have some customers that are required to put more than 30 unique PDNs on a single PCB. This requires sophisticated power integrity analysis on the PCB that can simulate both DC and AC conditions, and allow the designer to adjust the design to assure clean and sufficient power delivery,” Mentor’s Carrier said.

Other high-speed I/O problems include system timing, which is especially difficult for DDR2 and DDR3 interfaces with their very complicated timing relationships and timing margins measured in picoseconds. There also is the task of qualifying SerDes: meeting bit error rate requirements below 1×10^-15, optimizing pre-emphasis and equalization settings, and testing performance on links that can’t be measured at the die.
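To put that bit error rate target in perspective, a quick back-of-the-envelope calculation shows how rarely errors occur at that level. The 10Gb/s line rate below is an assumed example, not a figure from the article.

```python
# Rough arithmetic: mean time between errors for a link running at a target BER.
line_rate_bps = 10e9   # assumed 10 Gb/s lane, purely for illustration
target_ber = 1e-15     # the sub-1e-15 requirement mentioned above

seconds_per_error = 1.0 / (target_ber * line_rate_bps)
print(f"Mean time between errors: {seconds_per_error:,.0f} s (~{seconds_per_error / 3600:.1f} hours)")
# Roughly 28 hours of continuous, error-free traffic for every expected bit error,
# which is why SerDes qualification leans so heavily on simulation and statistical
# extrapolation rather than on direct measurement alone.
```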

“We can segment this problem into two parts,” said Aveek Sarkar, vice president of product engineering and support at Apache Design Solutions. “The first is where you have a lot of buffers switching at the same time. If you have a 64-bit interface and you switch 56 at the same time, there is a lot of effect on the propagation of the signal. In the past we were mostly worried about crosstalk. Now we’ve got a power integrity issue. The number of buffers is increasing in the I/O ring, and the package/board designs are using fewer layers because of cost. On top of that, the decap efficiency is going down. So with a high-speed I/O interface you’ve got to model the I/O, the buffer, the package and board parasitics, and the receiver, and then simulate the jitter from switching in the I/O ring.”
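A first-order way to see why simultaneous switching turns into a power integrity problem is the classic ground-bounce estimate, ΔV ≈ L × N × dI/dt. Every number in the sketch below is an assumption chosen for illustration; it is no substitute for the extraction-and-simulation flow Sarkar describes.

```python
# First-order simultaneous switching noise (ground bounce) estimate: dV ~= L * N * dI/dt.
# Every value below is an assumed illustration, not data from Apache or Mentor.
n_switching = 56            # drivers toggling together on a 64-bit interface
di_per_driver_a = 0.010     # assumed current swing per driver (A)
edge_time_s = 200e-12       # assumed edge rate (s)
l_effective_h = 0.05e-9     # assumed shared supply-loop inductance after paralleling pads (H)

didt_total = n_switching * di_per_driver_a / edge_time_s
bounce_v = l_effective_h * didt_total
print(f"First-order supply bounce: {bounce_v * 1000:.0f} mV")
# ~140 mV of bounce on a ~1 V rail dwarfs a 5% tolerance band, which is why the full
# I/O ring, package, and board parasitics have to be modeled and simulated together.
```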

To deal with these issues, two forms of power integrity analysis are needed. DC drop analysis examines the PDN from the voltage supply to all of the IC pins requiring that voltage level, identifying places in the PDN where the voltage will fall below acceptable levels. Those levels can be raised by modifying the PDN shape, for example by adding more metal or additional power vias.

“Included in this analysis are results showing current density, which highlights neck-downs in the PDN that may result in higher than acceptable current density. This will affect the DC drop and may also result in a situation where the neck-down can overheat over time and cause either PCB de-lamination or fusing. Again either more metal or power vias can solve the problem,” he explained.
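As a rough illustration of what the DC side of that analysis is checking, the sketch below computes the IR drop and current density through a single copper neck-down. The geometry and load current are assumed values, not results from any tool.

```python
# Toy DC-drop and current-density check for a single PDN neck-down; all values assumed.
supply_v = 1.0               # nominal rail (V)
tolerance = 0.05             # assumed 5% DC budget
load_current_a = 2.0         # assumed current carried by this copper segment (A)

rho_cu = 1.7e-8              # copper resistivity (ohm*m)
width_m, thick_m, length_m = 2e-3, 35e-6, 10e-3   # 2 mm wide, 1 oz copper, 10 mm long
resistance_ohm = rho_cu * length_m / (width_m * thick_m)

ir_drop_v = load_current_a * resistance_ohm
current_density = load_current_a / (width_m * thick_m)   # A/m^2

print(f"IR drop: {ir_drop_v * 1000:.1f} mV (budget {supply_v * tolerance * 1000:.0f} mV)")
print(f"Current density: {current_density / 1e6:.1f} A/mm^2")
# Widening the neck-down or adding power vias lowers both numbers, which is the kind
# of fix the DC analysis is meant to steer the designer toward.
```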

Another type of analysis is AC analysis, where switching of the IC can result in current spikes and produce waves along the PDN, Carrier noted. “This can result in PDN voltages that are not clean (beyond tolerance), or it can even affect signal-carrying interconnects adjacent to the PDN. The addition of decoupling caps or stitching vias, modification of cap mounting and/or location, or changes in stackup, including the use of different dielectric materials, can correct this problem.”
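One common way to frame that AC problem is a target impedance for the PDN, together with the self-resonant frequency of each decoupling capacitor. The numbers in the sketch below are assumptions for illustration only.

```python
import math

# Target-impedance view of the AC problem; all numbers are assumed for illustration.
supply_v = 1.0
ripple_allowed = 0.05        # assumed 5% ripple budget
transient_current_a = 1.5    # assumed worst-case switching current step (A)

z_target_ohm = supply_v * ripple_allowed / transient_current_a
print(f"Target PDN impedance: {z_target_ohm * 1000:.1f} mOhm")

# A single decoupling cap only helps near its self-resonant frequency, which is set
# by its capacitance and its mounting/parasitic inductance.
cap_f = 100e-9               # assumed 100 nF ceramic capacitor
l_mount_h = 1e-9             # assumed ~1 nH of mounting plus package inductance
f_res_hz = 1 / (2 * math.pi * math.sqrt(l_mount_h * cap_f))
print(f"Self-resonant frequency: {f_res_hz / 1e6:.0f} MHz")
# Covering a wide frequency band therefore takes a mix of capacitor values and careful
# mounting, which is exactly what the AC analysis helps the designer choose.
```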

System timing requires an understanding of flight times on the PCB: the delay of each signal as it passes through the board, which is affected by board stackup, loading, and crosstalk. This can be taken a step further and automatically integrated with on-chip timing so that margins can be analyzed. Further, SerDes qualification can be done through simulation of SerDes buses.
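A simple flight-time estimate shows why those delays matter at DDR timing margins. The dielectric constant and trace length below are assumed example values.

```python
import math

# Flight-time estimate for one PCB trace; the stackup numbers are assumed.
c_m_per_s = 3.0e8            # speed of light in vacuum
er_eff = 4.0                 # assumed effective dielectric constant (FR-4-class stripline)
length_m = 0.10              # assumed 10 cm routed length

flight_time_s = length_m * math.sqrt(er_eff) / c_m_per_s
print(f"Flight time: {flight_time_s * 1e12:.0f} ps")
# About 667 ps for 10 cm, or roughly 6.7 ps per millimeter, so with DDR margins measured
# in picoseconds even small routed-length mismatches consume a real share of the budget.
```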

Synopsys’ approach is to figure out whether power domains have been properly isolated by taking in the user’s power intent specification via UPF. The specification can include power domain information along with strategies for power management cells (special cells such as isolation cells, level shifters, and retention registers). The goal is to check the power domains and special cells to ensure the power management cell strategies are properly inserted, said Mary Ann White, director of Galaxy Power marketing at Synopsys.
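Conceptually, such a check is a structural walk over domain crossings. The Python sketch below is only a toy illustration of that idea with made-up domain names; it is not the UPF flow or any Synopsys tool, which would read the actual power intent and netlist.

```python
# Toy structural check, purely illustrative: every net crossing from a power-gated
# domain into an always-on domain should pass through an isolation cell.
crossings = [
    # (net, source_domain, sink_domain, has_isolation_cell)
    ("cpu_req",  "PD_CPU", "PD_AON", True),
    ("dma_done", "PD_DMA", "PD_AON", False),
    ("usb_wake", "PD_USB", "PD_AON", True),
]
power_gated = {"PD_CPU", "PD_DMA", "PD_USB"}   # assumed switchable domains

for net, src, dst, isolated in crossings:
    if src in power_gated and dst not in power_gated and not isolated:
        print(f"VIOLATION: {net} crosses {src} -> {dst} without an isolation cell")
```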

High-speed I/O issues require dedicated resources
Dealing with high-speed I/Os is complicated, though, and because of the speeds involved the design must be finely tuned. That puts tremendous pressure on designers.

Within design teams today there is a group of people that do nothing but focus on the I/O pad, Spruiell said. “When you get into flip-chip packaging it used to be that the I/O pads were all around the edge of the chip, but that is not true anymore. You could theoretically have an I/O pad banging away in the core of your chip, and getting that isolation down is a tough problem. Typically it is a small group of analog experts that treat the I/O path pretty much as an analog/mixed signal problem because you have to balance so many different things. It isn’t just ‘turning on a logic gate,’ ‘turning off a logic gate.’ When that thing switches and the power starts flowing out of the chip, it has a tendency to yank down the power grid, which affects stuff around it. It’s like being on a trampoline. If I am bouncing on a trampoline and there is nobody else there, I don’t have a problem. But if you put somebody else on the trampoline then you end up with a problem.”

Related to this, the pads must be isolated from each other because they are connected to a common voltage domain. “Yanking on one will cause a yank on the other so it becomes a very delicate balancing problem,” he explained. “This is why companies like Xilinx with their big Virtex chip spend a huge amount of time on getting those signals on and off chip because the core of what they do is programmable—the core logic elements themselves are fairly adaptable to custom digital design techniques.”

To do the isolation, Synopsys’ Huang explained that there must be enough pads and pins. “For example, you might have a number of pads for the analog logic with their own voltage supply. You need to have separate power domains for each element in your chip, whether it’s a PHY or the digital logic. By having separate power domains you isolate and decouple the different things so that you don’t end up with noise crossing from one into the other because of a coupling effect. Once you have separate power domains, you need to make sure you provide enough of those power and ground pins or pads to each one and that increases the number of pads in your design which is also a problem, because if you have more pads it’s more area and less space along the edge of your chip.”
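A rough pad-budget estimate shows where that area pressure comes from. All of the values in this sketch are assumptions for illustration, not figures from Synopsys.

```python
import math

# First-order estimate of how many power/ground pad pairs one I/O bank needs to keep
# its own supply bounce inside a noise budget. Every value is assumed for illustration.
l_per_pair_h = 1.0e-9        # assumed inductance of a single power/ground pad pair (H)
n_drivers = 32               # drivers switching together in this bank
di_per_driver_a = 0.010      # assumed current swing per driver (A)
edge_time_s = 200e-12        # assumed edge rate (s)
bounce_budget_v = 0.05       # assumed allowed supply bounce (V)

didt_total = n_drivers * di_per_driver_a / edge_time_s
pairs_needed = l_per_pair_h * didt_total / bounce_budget_v
print(f"Power/ground pad pairs needed: {math.ceil(pairs_needed)}")
# With these assumptions the bank needs roughly one supply pair per signal driver,
# which is exactly the pad-count and edge-space pressure Huang describes.
```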

Noise must be decoupled at the substrate. “Most processes have very well-defined guard ring rules and that typically comes from the foundry. The foundry itself has done a lot of that characterization. Theoretically, if you follow the design rules you’ve got a certain level of noise isolation but it depends on what you’re trying to isolate noise from. When you get down into the really small geometries the design rules get really tough,” Spruiell added.

According to a Synopsys white paper, “advanced processes are no longer driven by simple spacing and enclosure checks, but now contain complex and situation-dependent rules. At 45nm there are almost 1,400 rules, mostly described as complex mathematical equations. At 28nm and 22nm, design rule counts exceed 1,800.”

In this way, the foundries play a huge role in defining how the designer can implement isolation techniques.

At the end of the day, while the solutions to these problems sound simple (add more metal, more vias, more decaps), Mentor’s Carrier noted, “adding these increases product cost: more layers, more drill time, more components and space. So good analysis in the hands of the engineer and designer can prevent an overly conservative design and also ensure the product will work as planned, reliably, and over long periods of time.”


