Signal Integrity Issues

Experts at the table, part one: The impact of increased density and more components; where the delays will be, or at least should be; how to reduce noise and clock jitter; and what happens when new materials are added.


Semiconductor Engineering sat down to discuss signal integrity with Rob Aitken, research fellow at ARM; PV Srinivas, senior director of engineering for the Place & Route Division of Mentor Graphics; and Bernard Murphy, chief technology officer at Atrenta. What follows are excerpts of that conversation.

SE: As density increases, and as we add in more components from third parties, what impact is this having on signal integrity?

Srinivas: First, the drive to lower technology nodes is allowing us to build larger and larger chips. With 14nm and 10nm, you can pack many more instances onto one device. This, combined with double patterning, increases the complexity of timing closure. Coupled with that, signal integrity and timing analysis have taken on more of an analog nature. You can't characterize the delay nicely anymore. We're seeing more and more of a waveform effect due to the long interconnect. It's no longer a nice, smooth shift. It has a very long tail, and the analysis of noise on this very long tail is more complicated. We have been forced to model waveform distortion and propagation. In characterization steps, people are already calculating normalized waveforms and waveform propagation, but the drive to 0.5V and 0.6V supplies, and the increased sensitivity to noise that comes with them, have to be taken into account in how the waveform propagates.
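
As a rough illustration of that long-tail behavior, the sketch below lumps a long interconnect into an RC ladder and drives it with a step; the R and C values and the timestep are purely illustrative assumptions, not tied to any node or foundry.

```python
# Rough illustration (not from the panel): a long interconnect lumped into an RC
# ladder, driven by a step. All element values are illustrative, not foundry data.
import numpy as np

def rc_ladder_step_response(r_total, c_total, n_seg=50, t_end=1e-9, dt=1e-14):
    """Forward-Euler simulation of a unit step into an n-segment RC ladder.
    Returns the far-end (open-ended) node voltage over time."""
    r = r_total / n_seg                 # resistance per segment (ohms)
    c = c_total / n_seg                 # capacitance per segment (farads)
    v = np.zeros(n_seg)                 # node voltages, initially 0 V
    steps = int(t_end / dt)
    far_end = np.empty(steps)
    for k in range(steps):
        v_prev = np.concatenate(([1.0], v[:-1]))                # 1 V step at the driver
        i_in = (v_prev - v) / r                                  # current arriving at each node
        i_out = np.concatenate(((v[:-1] - v[1:]) / r, [0.0]))    # last node is open-ended
        v += dt * (i_in - i_out) / c
        far_end[k] = v[-1]
    return far_end

DT = 1e-14                              # 10 fs timestep keeps the explicit scheme stable here
wave = rc_ladder_step_response(1e3, 200e-15, dt=DT)   # 1 kohm / 200 fF total, illustrative
t50 = np.argmax(wave > 0.5) * DT * 1e12
t90 = np.argmax(wave > 0.9) * DT * 1e12
# The far end rises quickly at first, then creeps toward 1 V with a long tail, so the
# 50% and 90% crossings land far apart and a single delay number no longer describes it.
print(f"50% crossing ~{t50:.0f} ps, 90% crossing ~{t90:.0f} ps")
```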

Aitken: What we've seen in sub-20nm nodes is there really is an emphasis on analog behavior. There's also an emphasis on careful accounting. It's not always obvious where the gate delays and wire delays are. But it's also not obvious what's included in the device model, what's included in the extraction, what's included in the library characterization, and so on. We want to do enough accounting to make sure each effect is modeled, but that it's only modeled once. We don't want to count it two or three times. And that changes among different devices and different foundries, so we have to be careful. We also see that as we push designs into higher and higher frequency realms, some of the signal integrity effects are not things that are readily calculated just by looking at the netlist. They're also workload-dependent, so you have to know what the thing is doing. That's especially true with the power network. There are signals roaming around the power network in the kilohertz or megahertz realm in gigahertz designs, and we need to know how those affect things. The other angle that's interesting for signal integrity is in the IoT space, where we're not pushing finFET designs. We're using an older process, but we're pushing it to places it hasn't been used before. That goes back to lower voltages, and understanding the nature of the signal integrity challenges at those older nodes, especially with regard to the variability in delay across process corners, wiring corners and the clock network. There are a lot of different pieces of variability to take into account. Especially in that area, the methodology is still under development.
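
For a sense of why power-network noise lives at much lower frequencies than the clock, the first-order resonance of the package inductance against on-die decoupling can be estimated as f = 1/(2π√(LC)). The values in the sketch below are illustrative assumptions, not any product's numbers.

```python
# Small sketch of the point that the power network has its own, much slower dynamics
# than the clock: the package-inductance / on-die-decap resonance sits in the
# MHz-to-hundreds-of-MHz range, so workload patterns at those rates excite it.
# L and C values here are illustrative assumptions.
import math

L_PACKAGE_H = 100e-12      # ~100 pH effective package/bump inductance (assumed)
C_ONDIE_F = 100e-9         # ~100 nF of on-die decoupling capacitance (assumed)

f_res = 1.0 / (2 * math.pi * math.sqrt(L_PACKAGE_H * C_ONDIE_F))
print(f"PDN resonance ~ {f_res / 1e6:.0f} MHz")   # tens of MHz for these values
# A workload that ramps activity near this rate (bursts of compute every few hundred
# nanoseconds, say) couples strongly into supply noise, which is why this part of the
# analysis is workload-dependent rather than something read off the netlist.
```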

Srinivas: It’s easy to overdesign if you don’t properly account for everything. If there is room on silicon you can overdesign. But when you have to meet frequency challenges in a limited space, then you have to make sure you don’t overdesign.

Aitken: But if you overdesign and you use twice as much power as you ought to, that’s not good either.

Murphy: I was looking at the impact of power switching on power integrity. If you have all of these domains banging up and down, that's not very good for things like clock jitter. You can say you're not going to switch from 1 volt to zero in a short period of time, or maybe you do that in stages. But you also can just slide from one to the other. That takes a lot of the harmonics out of the transition, so you could ideally make it fairly noise-free. There are other consequences to that approach, of course. It isn't free. But if you use a smoother approach, that can help with noise, particularly clock jitter, which will affect things like DDR interfaces. There also is a question of whether you can come up with noise-tolerant communication, on-chip and on-board. You can imagine just encoding buses. You can have error correction on that data, or you can gray code it or put shielding on the bus. But you also can use RF encoding, so you essentially do QAM (quadrature amplitude modulation) with a modem at each end. You can get some compression on that and add quite a bit of data into the signal.
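
As a concrete illustration of the bus-encoding idea, the short sketch below shows Gray coding, one of the techniques mentioned above: consecutive values differ in exactly one bit, so a bus carrying sequential values toggles far fewer wires per cycle. The helper functions are hypothetical, written only for this example.

```python
# Gray coding sketch: on a bus carrying sequential values (an address bus walking
# through memory, say), consecutive Gray-coded words differ in one bit, so fewer
# wires switch per cycle and there is less simultaneous-switching noise and coupling.

def binary_to_gray(x: int) -> int:
    """Standard reflected-binary Gray code."""
    return x ^ (x >> 1)

def gray_to_binary(g: int) -> int:
    """Invert the Gray code by XOR-folding the shifted word back down."""
    x = 0
    while g:
        x ^= g
        g >>= 1
    return x

def toggles(a: int, b: int) -> int:
    """Number of bus wires that switch between two consecutive words."""
    return bin(a ^ b).count("1")

# Compare switching activity for an 8-bit bus counting 0..255.
plain = sum(toggles(i, i + 1) for i in range(255))
gray = sum(toggles(binary_to_gray(i), binary_to_gray(i + 1)) for i in range(255))
print(f"plain binary: {plain} bit toggles, Gray coded: {gray} bit toggles")
assert all(gray_to_binary(binary_to_gray(i)) == i for i in range(256))  # round-trip check
```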

Srinivas: Is that mostly for long distance?

Murphy: Yes, you’re going from one side of the chip to the other, or on the board. One company is doing 100 GHz signals on board. That’s below deep infrared. It’s very fast, and you cannot send that signal through a trace. You need braided wire or optical, or you can do RF.

SE: As we get down to 10nm, one of the issues we're encountering is that electron mobility is slowing or becoming inconsistent. We've been hearing about electron crashes for a decade, and now we've got quantum effects in memory. How does that affect signal integrity? Do we need new materials, or do we need a different way of designing?

Aitken: At 10nm, we're definitely going to see changes in the channel to improve mobility, whether everyone goes to III-V materials or they go to silicon germanium. Something different will happen, because just shrinking silicon down to that dimension isn't going to work. The other place this starts to happen is on the metal itself. There we're starting to see some interesting work comparing the conductor material and the barrier layer. If you look at a copper wire, typically it has a titanium nitride or some other barrier around it in order to keep it isolated from the silicon. At 90nm, that barrier layer is insignificant, but at 10nm it's a significant chunk of the overall wire and it's carrying a significant amount of current. The combination of titanium nitride and copper is potentially not as good as a barrier-free material on its own. Even if that barrier-free material isn't as good a conductor as copper, without the barrier layer it can turn out to be better overall. There is a lot of materials research going on in that area. Whether it makes it into 10nm or it has to wait isn't certain. But those kinds of issues are really important. At least from ARM's perspective, we know a lot more about materials than we did a few years ago.
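
A back-of-the-envelope sketch makes the barrier-layer arithmetic concrete. The wire dimensions and barrier thickness below are illustrative assumptions, and the model deliberately ignores the rise in copper resistivity itself in narrow lines, but it shows how a roughly fixed-thickness barrier goes from insignificant to a significant chunk of the wire.

```python
# Illustrative only: a barrier of roughly fixed thickness eats a growing share of a
# narrow wire's cross-section, which is the opening a barrier-free metal exploits.

def barrier_penalty(width_nm, height_nm, barrier_nm=2.0):
    """Return (fraction of the cross-section left for copper, resistance penalty factor),
    assuming the barrier lines the bottom and both sidewalls and carries no current."""
    core = (width_nm - 2 * barrier_nm) * (height_nm - barrier_nm)
    frac = max(core, 0.0) / (width_nm * height_nm)
    return frac, (1.0 / frac if frac else float("inf"))

for width, height in [(90, 180), (14, 28)]:   # rough 90nm-class vs 10nm-class wires (assumed)
    frac, penalty = barrier_penalty(width, height)
    print(f"{width}x{height} nm wire: {frac:.0%} of the cross-section is copper, "
          f"~{penalty:.2f}x resistance penalty from the barrier alone")
```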

Srinivas: In general, there is inherent variability. People are starting to move to a statistical way of dealing with that variability. Things like parametric OCV (parametric on-chip variation) are becoming more popular. They're more representative of what really happens. Whenever you are doing static analysis of signal integrity, you are limited by the fact that you don't have an accurate time waveform. You don't know exactly when a signal switches. You just know a window in which it switches. Within that activity window you can do alignment and come up with a worst-case scenario, but essentially you are doing worst-case analysis.
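
The pessimism gap between flat worst-casing and the statistical view can be seen with a toy example: a path of independent stage delays. The numbers below are made up and this is not any particular tool's model, but the square-root-of-N scaling is the essence of why the statistical approach is less pessimistic.

```python
# Toy comparison, illustrative numbers only: a path of N stages, each with an
# independent random delay component. Flat derating takes +3 sigma on every stage;
# statistically, independent variations partially cancel and the path sigma grows
# only as sqrt(N).
import math
import random

N_STAGES = 20
MEAN_PS, SIGMA_PS = 50.0, 5.0          # per-stage delay, made-up values

# Flat worst case: every stage simultaneously at +3 sigma.
flat_worst = N_STAGES * (MEAN_PS + 3 * SIGMA_PS)

# Statistical view: path sigma is sqrt(N) * stage sigma for independent variation.
stat_3sigma = N_STAGES * MEAN_PS + 3 * SIGMA_PS * math.sqrt(N_STAGES)

# Monte Carlo check of the statistical estimate.
random.seed(0)
samples = sorted(sum(random.gauss(MEAN_PS, SIGMA_PS) for _ in range(N_STAGES))
                 for _ in range(20000))
mc_99_9 = samples[int(0.999 * len(samples))]   # roughly the +3 sigma quantile

print(f"flat +3 sigma per stage : {flat_worst:.0f} ps")
print(f"statistical 3-sigma path: {stat_3sigma:.0f} ps")
print(f"Monte Carlo 99.9th pct  : {mc_99_9:.0f} ps")
```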

SE: This is more of a bell-curve distribution than fixed numbers, right?

Srinivas: Yes.

Murphy: Eventually you have to look at different transport mechanisms, such as spin waves. We already have spintronic switches. Intel is doing work with Georgia Tech on spin-based interconnect. You have a couple of magnetic layers with an insulator in between. This is an important technology. Doing the switches is not that difficult. Doing the interconnect is the hard part. If you just do the switches, then you have to convert back to electrical and you've lost a lot of power.

Aitken: The transition from transistor to interconnect is also critical. If you take a fin, the channel is conductive, but as soon as it gets out into the source and drain region it falls to pieces and the contact layers are a disaster. You really have to look at it holistically.

Srinivas: You also see with all the double and triple patterning there is variation in the coupling capacitance. You have this colored layer next to another colored layer, and there is variation in that, too. So there can be signal variation, depending on how you route it. One other problem we see is that signal integrity information comes late in the game. When you are doing placement, that information isn't available yet; it only becomes available once you've done final routing, and by then your flexibility to make design changes is much smaller. You become aware of all the signal integrity issues, but your timing closure loop becomes longer, typically at the tail end.
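
A small sketch of that coupling-capacitance effect: if the wires on the two mask colors shift relative to each other, the spacing to one neighbor shrinks while the other grows, and because capacitance goes roughly as one over spacing, the two sides do not average back out. All numbers below are illustrative.

```python
# Illustrative only: mask-to-mask overlay shift in double patterning skews the left
# and right coupling capacitances, modeled here with a crude 1/spacing scaling.

def coupling_caps(nominal_space_nm, overlay_shift_nm, c_nominal_ff=1.0):
    """Coupling capacitance to the left and right neighbors after an overlay shift,
    scaling a nominal per-side capacitance by nominal_space / actual_space."""
    left = c_nominal_ff * nominal_space_nm / (nominal_space_nm - overlay_shift_nm)
    right = c_nominal_ff * nominal_space_nm / (nominal_space_nm + overlay_shift_nm)
    return left, right

nominal, shift = 24.0, 3.0     # nm; made-up spacing and mask-to-mask overlay error
left, right = coupling_caps(nominal, shift)
print(f"left/right coupling: {left:.3f} fF / {right:.3f} fF "
      f"(total {left + right:.3f} fF vs 2.000 fF nominal)")
# Worst-case switching, with both neighbors switching opposite to the victim, doubles
# the effective coupling (Miller factor 2), so this spread lands directly on delay.
print(f"worst-case switched coupling: {2 * (left + right):.3f} fF")
```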

Aitken: It’s also challenging from a standard cell perspective because you have to characterize the cells to an existing environment. You have to determine whether that environment is going to be a typical environment, a worst-case environment, or something in between. The choices you make there have direct impact on the timing of the overall design.


