Signal Integrity Issues

Experts at the table, part two: Dealing with longer wires and multi-dimensional coupling; the impact of finFETs on signal strength; effects of use cases and applications on signals.


Semiconductor Engineering sat down to discuss signal integrity with Rob Aitken, research fellow at ARM; PV Srinivas, senior director of engineering for the Place & Route Division of Mentor Graphics; and Bernard Murphy, chief technology officer at Atrenta. What follows are excerpts of that conversation.

SE: We’re looking at a combination of use cases plus the electrical characteristics of what you’re designing plus the unknowns on the black boxes you’re putting into a device. How does this modular approach affect signal integrity?

Aitken: It’s when it gets instantiated, what wiring is above it, what are they doing—all of those things.

Srinivas: Before 14nm, a lot of the coupling was from things next to you. Now it’s coupling from above, from below—everything adds up to a small bump here, a small bump there.

Murphy: Does this argue for a more chiplet approach? You have more isolated and shielded and buffered chiplets?

Aitken: It’s possible. With a hierarchical design style, anything can help.

SE: Is there a limit to how long a wire can be?

Murphy: Without repeaters, yes. And we’re already there. You have to pipeline long wires.

Aitken: You definitely have to consider long wires as one of the paths that you optimize in a design. A typical CPU architecture will consider long wires and when you’re implementing that, you have to decide whether you’re going to buffer it or not. Buffering introduces its own challenges, especially with multiple power domains.
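Aitken's point about buffering long wires can be sketched with a toy Elmore-delay calculation. All R/C and buffer-delay numbers below are illustrative assumptions, not real process data: the point is only that an unbuffered wire's delay grows quadratically with length, while inserting repeaters makes it roughly linear.

```python
# Why long wires get repeated: Elmore delay of an unbuffered wire
# scales with length^2; splitting it with buffers makes it ~linear.
# All numbers are illustrative placeholders, not process data.

R_PER_MM = 1000.0   # wire resistance, ohms/mm (assumed)
C_PER_MM = 0.2e-12  # wire capacitance, F/mm (assumed)

def elmore_delay(length_mm: float) -> float:
    """Elmore delay of a distributed RC wire: 0.5 * R_total * C_total,
    which grows quadratically with length."""
    r = R_PER_MM * length_mm
    c = C_PER_MM * length_mm
    return 0.5 * r * c

def repeated_delay(length_mm: float, n_stages: int,
                   t_buf: float = 20e-12) -> float:
    """Split the wire into n_stages segments with a repeater
    (fixed delay t_buf, assumed) driving each segment: total
    delay is now roughly linear in length."""
    seg = length_mm / n_stages
    return n_stages * (elmore_delay(seg) + t_buf)

unbuffered = elmore_delay(10.0)     # 10 mm driven in one shot
buffered = repeated_delay(10.0, 5)  # same wire, 5 repeater stages
```

With these toy values the repeated wire is several times faster, which is why long paths are pipelined or buffered; the trade-off Aitken mentions is that each repeater costs area, power, and (across power domains) level-shifting complexity.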

Murphy: There’s an interesting question: ‘Do timing and quality of service become a problem before signal integrity becomes a problem?’ The reason NoCs have taken off in big chips is because it wasn’t possible to manage quality of service through a traditional switch. You’ve already got these stages of timing management, which can help with signal integrity.

SE: We’re used to thinking of timing and quality of service as separate from signal integrity. As we look forward, are they blurring?

Srinivas: Yes. Signal integrity can mean two different things. One is the traditional meaning. The other is the timing effect of it. If you delay a transition, it can impact signal integrity. We’re also seeing problems at 10nm because the transistors are stronger and the interconnects are longer. The waveform is no longer nice and smooth. Most libraries have been characterized with some kind of standard flow, and that starts to break down with all this long tail.

Aitken: Or you have to use some kind of current-sourcing methodology so you can accommodate different shapes of wires. Close to the driver the waveform will look significantly different than something that’s far away.

SE: Why are the transistors getting stronger? Is it because with lower leakage you can do more?

Srinivas: Yes, the drive strengths are stronger.

Aitken: The device current is good. Self-loading is an issue, though. In previous generations if you doubled the size of a buffer it automatically was twice as strong as before. In a finFET design, if you’re not careful, it’s exactly the same strength as before because all your extra drives are eaten up with all the extra capacitance you’ve added. You have to be clever when you’re designing in order to take advantage of the device.
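Aitken's self-loading point can be illustrated with toy RC numbers (the resistance and capacitance values below are assumptions for illustration, not finFET process data): doubling a buffer halves its effective resistance but also doubles its parasitic self-capacitance, so whether the stage actually gets faster depends on how large the external load is.

```python
# Self-loading sketch: upsizing a buffer doubles its drive (halving
# effective resistance) AND doubles its own parasitic capacitance.
# With a big external load upsizing helps; with a tiny load the extra
# drive is eaten by the extra self-capacitance.
# All values are illustrative assumptions, not process data.

def stage_delay(size: float, c_load: float,
                r_unit: float = 1000.0,
                c_self_unit: float = 2e-15) -> float:
    """RC delay of a buffer of the given size driving c_load.
    r_unit: effective resistance of a 1x buffer, ohms (assumed)
    c_self_unit: self-capacitance of a 1x buffer, F (assumed)"""
    r = r_unit / size                 # bigger buffer -> lower resistance
    c = c_load + c_self_unit * size   # ...but more self-capacitance
    return r * c

# Large external load: the 2x buffer is nearly twice as fast.
big_1x = stage_delay(1.0, c_load=100e-15)
big_2x = stage_delay(2.0, c_load=100e-15)

# Tiny external load: the 2x buffer is barely faster than the 1x,
# because most of the added drive goes into driving itself.
small_1x = stage_delay(1.0, c_load=1e-15)
small_2x = stage_delay(2.0, c_load=1e-15)
```

In a finFET library, where the per-unit parasitic capacitance is relatively high, the "tiny load" regime shows up much sooner, which is the care-in-design Aitken is pointing at.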

SE: We’re getting lots of different IP blocks from different companies. How does this affect signal integrity? Is the characterization really good enough?

Murphy: A lot of signal integrity is dependent on the application. I don’t know how an IP provider is going to do any reasonable characterization until they see it inside that chip.

Aitken: As a standard cell provider we can characterize for noise, we can provide the current source models, and we can explain how the characterization happens. But at some point, whoever is implementing the design is actually going to have to do the work. How much noise is there? How much shielding do I need? What’s actually going on in my design? And we can come at it the other way as a core provider. We can build test implementations and go through the whole process and say, ‘We did this and this is what we see.’ But eventually the buck stops with the people who do the chip and they’re going to have to do the work.

SE: You’re not just selling cores or IP. You’re putting it together ahead of time and making sure it all works together, right?

Murphy: Yes, you’re moving up into subsystems.

Aitken: It’s an interesting challenge because it goes back to the debate over hard versus soft IP. People like soft IP. It’s more flexible and it can do what they want. But they also like pre-defined recipes, with a demonstration showing how everything works together, along with a library and an explanation of how it all works. It has gotten to that stage. Whether it goes to the next stage, which is, ‘Here’s a chiplet,’ may never happen in the high-end world. But it might happen in the IoT space. If you’re running a little shop and you have a brilliant idea, you probably don’t want to spend time figuring out how to make an on-chip Bluetooth device work.

Srinivas: The recipe for getting to 1.2 GHz with soft IP works.

Murphy: There’s certainly a challenge with subsystems. It’s difficult to deal with a fixed pin-out for something you’re going to drop into a larger system. In one application a pin-out may be fine. In another it might have to be completely different because the DDR is over here and another component is over here. Freezing it is difficult.

Srinivas: And the technology we’re targeting isn’t so easy. You might need to make changes along the way.

SE: The contention for memory right now is enormous. There are wires coming in from all directions. How does that affect all of this?

Srinivas: That’s a design-specific thing you have to manage. With functional modes, there are design techniques you can adopt to stagger them, such as using inverters to make sure that simultaneous switching is minimized. Those are standard techniques, but beyond that there aren’t any I’m aware of.

Aitken: Most of that goes into the memory generators. You have to margin it so it will function in case there’s a massive power spike right when it’s trying to do something. We’ve also seen that the demand for true dual-port functionality is going down. People are saying it’s too complicated. Trying to manage some of the simultaneous switching issues is difficult. If people have a dual-port in their design it’s probably because they can’t do it any other way.

SE: Does cache coherency enter into this, as well? Now you have things that have to be balanced across different cores.

Aitken: That’s handled at a higher level. The cache management system keeps track of which requests go where and who’s snooping on what. It’s managed at the CPU-system level rather than trying to make it all work at the system level.

SE: Everyone is trying to reduce the number of metal layers. What effect does that have on signal integrity? You’ve now lost the margin.

Srinivas: You can reduce the number of metal layers if the design is still routable, but the number of coupling capacitances increases. If you reduce the metal layers, you increase the number of neighbors, and that directly impacts signal integrity. The fewer the metal layers, the less ability you have to improve signal integrity, because you can’t move things away. At 10nm, though, we’re seeing more layers. I’ve seen up to 10 or 12 layers.
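The coupling argument here can be sketched numerically. With fewer routing layers a victim net is packed next to more aggressors, and each simultaneously switching neighbor contributes its coupling capacitance scaled by a Miller factor (worst case roughly 2 when the aggressor switches in the opposite direction). The capacitance values below are illustrative assumptions, not extracted data.

```python
# More neighbors -> more effective switched capacitance on the victim.
# Capacitance values are illustrative placeholders, not extracted data.

C_GROUND = 10e-15   # victim cap to ground/substrate, F (assumed)
C_COUPLE = 3e-15    # coupling cap per adjacent aggressor, F (assumed)

def effective_cap(n_aggressors: int, miller: float = 2.0) -> float:
    """Effective switched capacitance seen by the victim when all
    neighbors switch in the opposite direction: each coupling cap is
    scaled by the Miller factor (~2 in the worst case)."""
    return C_GROUND + n_aggressors * miller * C_COUPLE

crowded = effective_cap(4)  # fewer layers: 4 direct aggressors
roomy = effective_cap(2)    # more layers: nets spread out, 2 aggressors
```

The delay (and the noise bump) on the victim scales with this effective capacitance, which is why losing routing layers directly costs signal-integrity margin.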

Aitken: People always want to reduce the number of double-patterned layers. There is a lot of argument over whether you should have two, three or four of them. You can conjure up all these reasons why one is better than another, but it’s yet another problem where you have to consider all of these things simultaneously. From a routing perspective, this is what it means. This is how you get your power network in. This is how your clock network applies. But you also can look at it from a standard-cell perspective. It’s no longer the case that a 7.5-track library is automatically denser than a 9-track library. You may find it’s the reverse, because with reduced routing capability, the extra area of the 9-track library also gives the wires more of the space they need. And because the drive is better, you don’t have to double up on cells. The whole problem has to be viewed holistically.

SE: You’re building your own subsystem that way, right?

Aitken: Kind of. There are two or three things that happen with libraries. Depending on your pin access on cells, it can mean switching elements are right next to each other or further away. Also, the power network is a critical part. What does your power rail look like in your cell? If the power rail is small, how does a large power network feed that and keep the network running correctly? And how does that not interfere with everything around it? And the last piece that’s really important is electromigration. If you have inadequate power rails you’re going to see a lot of electromigration issues, especially because in such libraries you wind up building giant buffers all over the place.

Srinivas: We’ve seen electromigration slew limits for some of these libraries: for this kind of slew at this frequency, this is the maximum capacitance you can drive.

Aitken: Absolutely, and all of that is key. Sometimes we joke that the hardest part of designing a standard cell library is designing the power rail and the filler cell.

Murphy: Back on reducing metal layers, you’re hooking up to sensors. You need isolated power and ground. That doesn’t seem to help in reducing layers.
