Signal And Power Integrity Cross Paths

What used to be separate concerns are merging as more power domains are added into SoCs.

Signal integrity and power integrity historically have been relatively independent issues, and engineers with expertise in one area generally operate independently of the other. But as more power domains are added to conserve energy and allow more features, as voltages are reduced to save battery life, and as dynamic power becomes more of a concern at advanced nodes, these worlds are suddenly merging and frequently colliding.

The result is that SoC designs are very quickly becoming even more complex, including those done at older process nodes. More analysis is required, more tradeoffs need to be made between signal strength, wire length and thickness, and drive current, and more considerations such as package, power delivery network and basic architecture need to be added into the mix.

Even then, more unforeseen issues are likely to crop up. The addition of differing voltages, varying rates of ramping power up and down across various sleep states, and on-chip traffic that looks more like a plotted distribution than a fixed number adds up to unprecedented levels of complexity and overlap that can alter both functionality and power levels.

“As voltage drops closer to the threshold, variability goes up significantly,” noted Robert Hoogenstryd, senior director of marketing for design analysis and signoff at Synopsys. “That has a bigger impact on the signal. The waveform of signals becomes distorted, and that can have an impact on timing.”
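
To make that concrete, here is a toy calculation using the well-known alpha-power-law delay model, with an assumed threshold voltage and a 5% supply droop rather than numbers from any particular process:

```python
# Toy sketch, not any foundry's model: the alpha-power-law delay
#   delay ~ Vdd / (Vdd - Vth)^alpha
# illustrates why a fixed percentage of supply droop costs far more
# timing margin as the supply scales toward the threshold voltage.

VTH = 0.35     # assumed threshold voltage (V)
ALPHA = 1.3    # assumed velocity-saturation exponent
DROOP = 0.05   # assumed 5% droop from the power delivery network

def rel_delay(vdd):
    """Relative gate delay (arbitrary units) from the alpha-power law."""
    return vdd / (vdd - VTH) ** ALPHA

for vdd in (1.0, 0.8, 0.6, 0.5, 0.45):
    slowdown = rel_delay(vdd * (1 - DROOP)) / rel_delay(vdd) - 1
    print(f"Vdd = {vdd:.2f} V: a {DROOP:.0%} droop slows the gate by {slowdown:.1%}")
```

The same 5% droop that costs a few percent of delay at nominal supply costs tens of percent near threshold, which is exactly where the signal and power worlds start to collide.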

That’s only part of the picture, though. In the memory space, design teams need to make sure the impedance from the driver to the receiver is balanced. But as the voltage decreases or jitter increases and the I/O buffers get wider, the supply voltage from one side of the I/O buffer to the other may differ enough to cause problems.

“If you have 64 bits and the first four bits are switching and the last bit is switching, on bits 0 to 3 you will see more current than on bit number 63,” said Aveek Sarkar, vice president of product engineering and support at Ansys-Apache. “But the signal out of bit 63 will be faster than at bits 0 to 3. You have a fluctuation because of the voltage drop. But two cycles later you fire all 63 bits, and for bit number 63 the signal was faster at first, now it’s slower.”
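
A lumped back-of-envelope sketch, with illustrative resistance and current values rather than a real I/O buffer model, captures the cycle-to-cycle behavior Sarkar describes:

```python
# Toy, lumped sketch of the effect described above; every number here is
# an assumption for illustration, not a real I/O buffer characterization.

R_SHARED = 0.05      # assumed shared supply-path resistance (ohms)
I_DRIVER = 0.02      # assumed peak switching current per output driver (A)
DELAY_SENS = 0.10    # assumed delay penalty: +10% per 100 mV of rail droop

def rail_droop_mv(bits_switching):
    """Lumped IR droop (mV) when this many bus drivers fire in one cycle."""
    return bits_switching * I_DRIVER * R_SHARED * 1000

for n in (1, 4, 64):   # a lone bit, a few bits, then the whole 64-bit bus
    droop = rail_droop_mv(n)
    penalty = DELAY_SENS * droop / 100
    print(f"{n:2d} bits switching: droop = {droop:4.1f} mV, "
          f"output edge about {penalty:.1%} slower than on a quiet rail")
```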

This is like floor planning on steroids—more things to consider with much higher stakes if something goes wrong, and that doesn’t even begin to take into account that it’s harder to find those problems when they do occur.

“Placement is still a very critical step in the design process, because later on, changing anything at this scale is very difficult,” said PV Srinivas, senior director of engineering for the Place & Route Division at Mentor Graphics. “So when you’re talking about high-level changes like signal integrity, it’s too late to move anything around. That means things like communication density have to be considered during layout, particularly in the presence of multiple voltage islands.”

While these kinds of considerations are relatively new to many SoC designers, particularly those migrating down to 45/40nm and 28nm, there is at least some precedent to draw upon. EDA companies have been working on these problems for at least four process nodes with their most advanced customers.

“This didn’t show up on the radar until about 10 years ago, when we started seeing static IR drop,” said Jerry Zhao, product marketing director for IC power signoff at Cadence. “But it’s becoming extremely important on advanced nodes. And it’s not just power integrity. It’s also how fast you can run the chip. It’s about load and frequency.”
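
A back-of-envelope calculation shows how load and frequency feed directly into that static picture; all of the capacitance, activity and grid-resistance values below are assumptions chosen purely for illustration:

```python
# Back-of-envelope sketch tying "load and frequency" to static IR drop.
# Every value below is an assumption chosen only for illustration.

C_SWITCHED = 20e-9   # assumed total switched capacitance per cycle (F)
ACTIVITY = 0.15      # assumed average activity (toggle) factor
VDD = 0.9            # assumed core supply (V)
F_CLK = 1.0e9        # assumed clock frequency (Hz)
R_GRID = 0.02        # assumed effective power-grid resistance (ohms)

p_dyn = ACTIVITY * C_SWITCHED * VDD**2 * F_CLK   # dynamic power, a*C*V^2*f
i_avg = p_dyn / VDD                              # average supply current
ir_drop = i_avg * R_GRID                         # static IR drop on the grid

print(f"dynamic power   ~ {p_dyn:.2f} W")
print(f"average current ~ {i_avg:.2f} A")
print(f"static IR drop  ~ {ir_drop * 1000:.0f} mV ({ir_drop / VDD:.1%} of the supply)")
```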

The memory and high-speed interface segments have been wrestling with these problems for nearly a decade, as well.

“We almost say signal integrity and power integrity in the same sentence,” said Frank Ferro, senior director of product management at Rambus. “In one sense, the industry is catching up to where we’ve been for 10 years. We hit 3.2 gigabits for DDR4 and LPDDR4. A lot of customers say everything was okay at 1600 (megabits per second) and they struggled at 2133, but at 3.2 gigabits they are struggling to maintain good signal integrity. It’s a little easier in mobile where they’re using package-on-package, but DRAM on a board has a longer channel and you see more questions being asked about how to work with it.”
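
Some simple arithmetic shows why the margins evaporate as data rates climb. The fixed impairment budget below is an assumed number, but the shrinking unit interval follows directly from the data rates Ferro cites:

```python
# Simple arithmetic on the data rates in the quote: the unit interval
# shrinks with data rate, while impairments such as ISI, crosstalk and
# supply-induced jitter do not shrink with it. The 250 ps impairment
# budget is an assumed, illustrative number.

FIXED_IMPAIRMENTS_PS = 250   # assumed fixed timing loss per bit (ps)

for rate_mtps in (1600, 2133, 3200):
    ui_ps = 1e6 / rate_mtps                  # unit interval in picoseconds
    margin_ps = ui_ps - FIXED_IMPAIRMENTS_PS
    print(f"{rate_mtps} MT/s: UI = {ui_ps:5.1f} ps, margin left = {margin_ps:5.1f} ps")
```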

Design changes
One of the challenges is that the overlap of power and signal integrity needs to be dealt with on a number of fronts—architecture and architectural power issues, methodology, layout, hardware-software co-design, verification and debug, and even during manufacturing. Each of these is a vital part of the design flow, and changes in any area can have an effect somewhere else.

“The end goal is coming up with a power-efficient RTL design,” said Mark Milligan, vice president of marketing at Calypto. Achieving that goal is another matter. “The microarchitecture has a huge impact, but how you factor that in is a combination of measurement, optimization and exploration. Just doing power estimation at RTL is an unsatisfactory solution, though, because it tells you the bad news, not what to do about it.”

Milligan said design teams are doing two things to improve power. One is making changes to their libraries. The second is focusing on microarchitectures—improving designs without necessarily migrating to the next node. “We’re seeing designs repeating at 28nm instead of moving to 16 or 14nm,” he said.
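
A generic toggle-count estimate, with assumed per-event energies and clock gating chosen purely as an example of a microarchitectural lever, illustrates why those choices move the needle so much:

```python
# Generic toggle-count estimate, not any vendor's flow; per-event energies
# and activity are assumptions. Clock gating is used here purely as an
# example of a microarchitectural lever.

E_CLK_TOGGLE = 5e-15    # assumed clock-pin energy per register per cycle (J)
E_DATA_TOGGLE = 20e-15  # assumed datapath energy per register update (J)
N_REGS = 10_000         # registers in the block
F_CLK = 8e8             # assumed clock frequency (Hz)
DATA_ACTIVITY = 0.05    # fraction of cycles a register loads new data

def block_power(clock_gated):
    """Estimated power (W): clocking energy plus data-update energy."""
    clocked_fraction = DATA_ACTIVITY if clock_gated else 1.0  # gating stops idle clocking
    p_clk = N_REGS * clocked_fraction * E_CLK_TOGGLE * F_CLK
    p_data = N_REGS * DATA_ACTIVITY * E_DATA_TOGGLE * F_CLK
    return p_clk + p_data

ungated, gated = block_power(False), block_power(True)
print(f"ungated: {ungated * 1000:.1f} mW, gated: {gated * 1000:.1f} mW "
      f"({1 - gated / ungated:.0%} saved)")
```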

That seems to be a recurring theme these days. While EDA companies have been predicting the need for more advanced tools for any designs at 65nm and below, the pressure to extend battery life has finally made those tools essential. Krishna Balachandran, product marketing director for low power at Cadence, said that even at 28nm and 45nm, designs need the same kinds of complex tools for analyzing what gets turned on and off.

“Electromigration issues and IR drop are rearing their heads even at older process nodes,” Balachandran said. “Susceptibility to noise increases, and unless you have a robust methodology in place you may have issues. Power signoff is becoming a key part of the design flow and it’s an essential part of what you need to look at. It’s only getting worse with finFETs, but it’s not just about finFETs.”

Also problematic are the power models, which are incredibly complex. “You have to create models and run simulations on all the buffers,” said Ansys-Apache’s Sarkar. “That means you need a chip signal model that can connect to the package and board parasitics. Essentially you’re running a signal with power integrity.”
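
The kind of check such a combined model supports can be sketched with a crude lumped example. The target-impedance rule of thumb is standard practice, but every parasitic value below is an assumption:

```python
# Illustrative sketch only: compare a crude lumped package/board decap
# branch against the widely used target impedance
#   Z_target = Vdd * ripple / I_transient.
# Every parasitic value below is an assumption, not extracted data.
import math

VDD = 0.9            # assumed supply (V)
RIPPLE = 0.05        # allowed ripple as a fraction of Vdd
I_TRANSIENT = 2.0    # assumed worst-case current step (A)
Z_TARGET = VDD * RIPPLE / I_TRANSIENT

R_ESR, L_ESL, C_DECAP = 2e-3, 50e-12, 10e-6   # assumed decap ESR, ESL, capacitance

def z_pdn(freq_hz):
    """|Z| of a series R-L-C branch at the given frequency."""
    w = 2 * math.pi * freq_hz
    return abs(complex(R_ESR, w * L_ESL - 1 / (w * C_DECAP)))

print(f"target impedance ~ {Z_TARGET * 1000:.1f} mOhm")
for f in (1e6, 10e6, 100e6, 1e9):
    z = z_pdn(f)
    verdict = "OK" if z <= Z_TARGET else "exceeds target"
    print(f"{f / 1e6:6.0f} MHz: |Z| ~ {z * 1000:7.2f} mOhm  ({verdict})")
```

In this simplified view the inductive term takes over at roughly 100MHz and above, which is the region where on-die decoupling, and therefore the chip-side model, has to carry the load.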

Tradeoffs
Part of the challenge is being able to run “what if” scenarios in a complex design, swapping out one design component or IP block for another. Design services companies such as eSilicon and Open-Silicon have focused much of their attention in this area, and big IP companies now offer integration services along with their IP.

A large part of what makes these kinds of services so attractive is the need for information about which IP offers better energy savings and how it should be configured to ensure both power and signal integrity. One IP block or memory type may wake up more quickly according to its specs, but it may not be the best choice in a complex design.

“It’s not just about whether the signal is observable at a certain point and stable,” said Anand Iyer, director of product marketing at Calypto. “It’s about for how many cycles is it stable.”

This is particularly true in memory, which contributes about 30% of the overall power budget. Saving power in memory can be very significant for low-power designs, but it also has to be done with signal integrity in mind.

“You need to more effectively turn memory on and off, which is why we’ve taken an asymmetric approach with DRAM and the PHY controller,” said Ferro. “You turn the memory on quickly, do a transaction, and then turn it off quickly. That also allows you to save 25% of the power in memory.”
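
The arithmetic behind that approach is straightforward, even with purely assumed power and timing numbers rather than Rambus figures:

```python
# Rough arithmetic with assumed power and timing numbers (not Rambus
# figures): the faster the DRAM and PHY can drop into a low-power state
# between bursts, the less of each idle gap is burned at active power.

P_ACTIVE = 400e-3   # assumed active interface power (W)
P_SLEEP = 40e-3     # assumed power-down power (W)
T_BURST = 200e-9    # assumed transaction burst length (s)
T_GAP = 800e-9      # assumed idle gap between bursts (s)

def avg_power(t_entry_exit):
    """Average power over one burst-plus-gap period."""
    t_sleep = max(T_GAP - t_entry_exit, 0.0)      # usable low-power time
    t_active = T_BURST + (T_GAP - t_sleep)        # burst plus transition overhead
    return (t_active * P_ACTIVE + t_sleep * P_SLEEP) / (T_BURST + T_GAP)

slow, fast = avg_power(600e-9), avg_power(100e-9)
print(f"slow entry/exit: {slow * 1000:.0f} mW, "
      f"fast entry/exit: {fast * 1000:.0f} mW ({1 - fast / slow:.0%} saved)")
```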

Shrinking the distance
One way to soften the combined effects of power and signal integrity issues is to use 2.5D or 3D-IC stacked die configurations. The interconnect pipes are wide enough to avoid RC delay, the distances signals need to travel are shorter, and for both of those reasons less power is required to drive those signals. That tends to minimize both power and signal integrity issues, and companies have been turning out test chips to evaluate this approach.
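
A toy comparison with assumed per-millimeter parasitics shows where the benefit comes from:

```python
# Toy comparison with assumed per-millimeter parasitics: both the
# distributed RC delay and the CV^2 drive energy scale with route length,
# which is where the stacked-die benefit comes from.

R_PER_MM = 1_000.0   # assumed wire resistance per mm (ohm)
C_PER_MM = 0.2e-12   # assumed wire capacitance per mm (F)
VDD = 0.9            # assumed supply (V)

def wire_metrics(length_mm):
    r, c = R_PER_MM * length_mm, C_PER_MM * length_mm
    delay_ps = 0.38 * r * c * 1e12    # distributed-RC (Elmore) wire delay
    energy_fj = c * VDD**2 * 1e15     # energy to charge the wire, per toggle
    return delay_ps, energy_fj

for name, length in (("cross-die 2D route", 5.0), ("stacked-die route", 0.5)):
    delay, energy = wire_metrics(length)
    print(f"{name} ({length} mm): delay ~ {delay:.0f} ps, "
          f"energy ~ {energy:.0f} fJ per toggle")
```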

“There are multiple types of 3D,” said Rob Aitken, an ARM fellow. “One is the ultra-dense 3D method. The other is vertical nanowires where you can have multiple active layers. And then there are other varieties in between. All of it looks very interesting, especially as you push out to the N+3 generation. It’s like a lot of other optimizations in that it really can reduce your wire length and it can do a lot of clever things, but if you try to retrofit it to an existing core it’s not going to work. It has to be redesigned from scratch, or it can be part of a subsystem that will allow this core on this level that will communicate with this cache above it.”

One such approach, which has been attracting attention from some of the biggest chipmakers, is the monolithic 3D IC.

“Some of the big companies are getting much more interested in monolithic 3D,” said Bernard Murphy, chief technology officer at Atrenta. “The TSVs are getting much smaller—they’re about 1/100th the size of a regular TSV—so you can have true fine-grained connectivity between the layers. You’re not really stacking die anymore. They’ve become an integral part of one chip. It looks a lot more like real 3D, and if that works it could bring Moore’s Law back to life again.”

But no matter how these problems get solved—by architecture, by tools, or on a one-by-one basis—it’s going to get harder at every process node. The days of hanging back a few process nodes and reaping the benefits of the trailblazing companies are over. It’s getting more complex everywhere.



2 comments

Bill Martin says:

I am surprised that no one discussed PDN target impedance coupled with SI/PI analysis. If the PDN impedance is too high, it can lead to significant performance loss along with additional guardbanding of the power supply. Power Integrity Modeling and Design for Semiconductors and Systems by Swaminathan and Engin is an excellent book, published in 2008 (Prentice Hall), that shows the impact and why PDN analysis along with co-simulation of SI/PI is required. A few of Prof. Swaminathan’s tutorial videos are also published at https://www.youtube.com/user/esystemdesign that show how important PDNs are.

Bill Martin says:

Oops, the YouTube link I provided showed a short ad. Go to the YouTube site and you will see the longer videos listed at the bottom. Bill
