Architecting For Optimal Interface IP Integration

Experts at the table, part one: Floorplanning is critical; trading off configurability and power; keeping pace with evolving technology; the impact of smaller process geometries.

Semiconductor Engineering sat down to discuss the design and integration of complex interface IP with Ty Garibay, VP of engineering at Altera; Brian Daellenbach, president of Northwest Logic; Frank Ferro, senior director of product management for memory and interface IP at Rambus; Saman Sadr, director of analog design at Semtech; and Navraj Nandra, senior director of marketing for analog/mixed signal IP and embedded memories at Synopsys. What follows are excerpts of that discussion.

SE: From an overall design perspective, what are some of the key things that the design team has to keep in mind when they are going into a design and considering the different interfaces that they need to include?

Garibay: We’re relatively unique in that we still design the vast majority of our own IP. We recently acquired one piece from Synopsys, but that was a rather unusual event. We start from the ground up, and one of the primary considerations becomes the floorplan: how do you get power to the high-speed I/O, with all the different types of power that each different analog circuit wants? And then how do you get the data out once it’s funneled down to the normal processing speed of whatever the rest of the chip is? A 28Gbps I/O might have 28 different types of wires coming out for each channel — how do we get those out with reasonable noise, and without other things we don’t want going onto the rest of the system? So floorplanning is really the biggest initial issue to start with.

Ferro: From an IP perspective, one of the things we’re doing — because of the difficulty these guys are having, especially in integrating the IP — is that before we even start to design the high-speed interfaces, we’re looking at the channels. We actually have channel models for the whole system, so we’re not just looking at the interface IP. We’re looking at what we think the system requirements are, and we have internal software tools that model that channel and give us a very good starting point for the design. After we do that, we go back and verify those designs against those models. We create what we call a solution space, so if the SoC designers stay within those parameters, they should have a very easy time bringing up the PHY or the DDR interfaces. We find that’s been very, very helpful. Some customers have even asked to buy that tool.
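The "solution space" Ferro describes can be thought of as a set of pre-verified channel parameter ranges that the SoC designer checks a candidate channel against. The sketch below illustrates the idea; the parameter names and numeric ranges are invented for illustration, not Rambus's actual tool.

```python
# Hypothetical "solution space" check: the IP vendor pre-verifies the PHY
# against channel models and publishes parameter ranges; the SoC designer
# checks a candidate channel against them. All names/numbers are illustrative.

# Pre-verified (min, max) ranges for a few channel parameters.
SOLUTION_SPACE = {
    "insertion_loss_db": (0.0, 25.0),   # loss at the Nyquist frequency
    "trace_length_mm":   (5.0, 300.0),
    "return_loss_db":    (10.0, 40.0),
}

def within_solution_space(channel):
    """Return the list of parameters that fall outside the verified ranges."""
    violations = []
    for name, (lo, hi) in SOLUTION_SPACE.items():
        value = channel.get(name)
        if value is None or not (lo <= value <= hi):
            violations.append(name)
    return violations

candidate = {"insertion_loss_db": 18.0, "trace_length_mm": 120.0,
             "return_loss_db": 8.0}
print(within_solution_space(candidate))  # ['return_loss_db']
```

A channel that stays inside every range returns an empty list, which is the "easy bring-up" case Ferro mentions.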

Daellenbach: It’s interesting that you asked that question because you’ve got some very divergent perspectives here. You’ve got somebody who makes chips, you’ve got somebody who makes PHYs, another person who makes PHYs, somebody who makes digital IP and PHYs, and then you’ve got a digital IP company. We’ve all got very unique perspectives on that question. The PHY guys have an attractive thing in that PHYs are somewhat common in functionality, so interfacing to them is not as much of an issue as whether they work with the right power, the right size, and on the right process — that’s the challenge these guys are dealing with day in, day out. From a digital IP perspective, the challenge is that we’re sitting between the PHY and what the customer is really trying to do. Customers want to do a lot of different things, so it’s hard to make one digital piece of IP meet everybody’s requirements. The challenge we run into in supporting customers is whether we can make it configurable enough, and then how do we control all that and make sure we achieve quality across all those configurations. Those are the challenges that we run into and focus on trying to solve.

Nandra: We’ve been doing IP for many, many years and have a fairly substantial customer base now, and what we’re seeing over the last few years is that customers are expecting more and more IP on their chip. It’s now an acceptable way to go to market by buying external blocks and integrating them. So they’re expecting more IP. The IP can be digital IP or mixed-signal PHY IP. The speeds of these IPs and interfaces are increasing. People are expecting the next generation, so PCI Express Gen 4 and DDR4 are the things people are talking about today. And then the process nodes are getting smaller. This certainly impacts people who provide physical IP or hard IP, because you have to redesign those IPs in technologies that aren’t particularly friendly to analog design in many cases. The other thing we’re seeing is that our customers are spending a lot of their budget on software development. From a PHY and a digital IP perspective, that doesn’t come into play so much until you start realizing that the digital IP needs to support some kind of middleware, some kind of firmware. So what we are starting to see — and what we’re actually doing now — is supporting software development as well, through virtual models for IP. We have transaction-level models of the digital IP. We have high-level models of the PHY IP. We’ve captured that in software development kits so customers can do hardware prototyping and software prototyping. Basically, they can go from the electrical interface all the way up to the SoC bus in an environment that will hopefully get them working IP on their SoC.

Sadr: One challenge we used to have with the IP — particularly the physical IP at the interfaces — is that the technologies are evolving and becoming more and more difficult to design with; they are more challenging. The time to market is certainly shrinking, so you want a methodology that yields first-time success. What we have done is put more and more programmability into the PHY, whether that is digital programmability, firmware, or software. The other challenge is that all this programmability costs power. How do you deal with that? They are two orthogonal challenges. On one side you are pressed for time: you want first-time success, you want a methodology that yields that first-time success, and that pushes you toward adding more and more programmability. On the other side, clearly, the challenge is power reduction, which even challenges the programmability you’re adding. This is what we’re working on addressing for the next generation, as you move to femtojoule- or picojoule-per-bit types of efficiencies right now.
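The picojoule-per-bit efficiency metric Sadr mentions is simply link power divided by data rate. A quick back-of-the-envelope check, with illustrative power and rate numbers (not figures quoted by the panelists):

```python
# Energy-per-bit efficiency: E = P / R, expressed in picojoules per bit.
# The 150mW / 28Gbps figures below are illustrative examples only.

def energy_per_bit_pj(power_watts, data_rate_bps):
    """Energy per bit in picojoules: power divided by data rate, scaled to pJ."""
    return power_watts / data_rate_bps * 1e12

# A hypothetical 28Gbps SERDES lane burning 150mW:
print(round(energy_per_bit_pj(0.150, 28e9), 2))  # 5.36 (pJ/bit)
```

This is why the programmability-versus-power tension bites: every always-on calibration or adaptation circuit adds to the numerator without adding to the data rate.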

Nandra: Customers want really high speed; they want it in really cheap packages; and they want extremely low power — all for nothing, yesterday.

SE: How do you do the tradeoffs between allowing the programmability but balancing that with the power?

Sadr: It starts with a bifurcation right off the bat. Traditionally we wanted to have one IP to address everything from the lowest data rate to the highest data rate. As we get to 28Gbps and 56Gbps on the high end, and you still have the low end of the market to address at 1Gbps or 3Gbps, you need to find the right data point. How do you address this? I believe it is clear to everyone in the industry that a DC-to-daylight type of solution doesn’t really address this. How you break that up is a very critical choice to make.

Garibay: That’s market segmentation for us. For us, it’s pretty easy: we have three tiers of families, and historically we’ve shared a lot of the SERDES technology throughout the families. As we go forward, we’ve actually evolved these families onto three different processes, so there are fundamentally three different implementations: high speed, a 32-56Gbps target for Stratix; a mid-range, more of a base station-type application, at 12 to 16Gbps; and then the low end at 6Gbps. Each of those is optimized very uniquely. For the top one, power is important, but it’s really about time to market and features. For the middle one, power is everything. The low end is about cost. Each of the market segments has matured, and the breadth has grown enough that we literally have to field three teams and rotate them.

Ferro: You’re a very enlightened customer, because a lot of companies still want the multi-protocol SERDES with that power efficiency across a wide speed range. Most companies are going to have to come to the realization that if they really want that power efficiency, they may have to deal with two different physical pieces of IP.

Daellenbach: As a digital IP provider, the good news is we don’t have quite as many of those issues, but those issues are still there in that the marketplace is very broad, and someone who is searching for performance versus power versus cost has different feature requirements.

Garibay: You have the same thing in terms of our software teams, right? They have DDR4 at 2+Gbps, and then we have DDR3 or something, and the customer wants it in one piece of IP for no apparent reason. So now you have this configurable thing that is all things to all people — and it’s not a very good density or power option. In your case, when would you break that up into two offerings?

Daellenbach: What we try to do is have a configurable solution where we can basically say we’ll deliver the features needed. If you want a one-size-fits-all, does-everything solution, we can deliver that. But if you want this highly optimized, only-does-this thing, creating a product that is configurable enough to address that DC-to-daylight sort of need is the challenge of the digital IP company. What goes along with that is that you have this configurable product, but how do you ensure that every single configuration is fully validated? Now there’s a huge expansion of the space of all the things it does. You’ve got three things, but this product is going to go into 50 different things. Those are the kinds of challenges that a digital IP company is constantly trying to wrestle with.
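The configuration-space explosion Daellenbach describes is multiplicative: each independent option multiplies the number of configurations that must be validated. A minimal sketch, with made-up option names and values:

```python
# Sketch of how a configurable IP's validation space grows: each independent
# option multiplies the count. Option names/values are invented examples.
from itertools import product

options = {
    "protocol":   ["DDR3", "DDR4"],
    "data_width": [16, 32, 64],
    "ecc":        [False, True],
    "interface":  ["AXI", "native"],
}

# Every combination of option values is one configuration to validate.
configs = list(product(*options.values()))
print(len(configs))  # 2 * 3 * 2 * 2 = 24 configurations
```

With only four options the count is already 24; adding a fifth three-way option would triple it, which is why exhaustive validation of a "DC-to-daylight" configurable product becomes the real cost.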

SE: What about the impact of smaller geometries? How is that impacting the interface aspect of designing IP?

Sadr: Smaller geometries always come with the advantage of higher speed. The most challenging item right now — particularly for analog/mixed-signal designers — is how to deliver and spread power from the local points on the chip to the outside world. Electromigration, IR drop and other non-idealities come into play when the path you want to take is causing extra local heat, and you want to take that heat and spread it. That’s the biggest challenge, and it is literally limiting the integration that you expect these finer geometries to deliver. You have to spread your design out, and that means your parasitics are higher, which takes away a little of the advantage of the smaller geometry. That is typically at the high end of the data rates.

Garibay: As we get into FinFETs for the first time, the self-heating aspects of the design were something new to us, and it has affected the local packing density we can achieve with analog. It makes it very hard to complete the design in analog because it’s trial and error — there are no good rules as of yet. I’ve actually been working with my teams to say let’s do as much as we can in super-high-speed digital, so we’re moving more and more of the analog into digital, because while it’s hard to close, we’re pretty sure we can close it. And we know how to lay it out. With the analog stuff, you’re not really sure it’s closed until the foundry accepts it and says it’s OK. We are seeing a lot of pressure to move things out of analog into high-speed digital.

Ferro: Architecturally, how do you re-architect to break the digital and the analog apart?

Garibay: It’s pushing the architecture in a new direction for a lot of the guys. It’s similar to what we did for power many years ago, for different reasons — the digital was so much lower power than the analog for Bluetooth and some other things, so much of the analog went digital. It’s the exact same thing at the interface: whatever absolutely has to be analog at the interface is analog, but it goes digital as soon as it can.

Nandra: I agree that some of the partitioning has moved more into the digital domain, but to address the question about the smaller nodes, we’ve seen two changes. First, the smaller nodes going to 20nm were still using standard bulk technology. What happened there was that at 20nm we saw a lot more variation in the devices and in the way the devices are modeled. Granted, the devices were faster, but the variation was such that you needed to make the devices bigger in order to keep the same type of matching. Leakage was horrible. In fact, if you look at the profile of a 20nm transistor, only 5nm of control is provided by the gate, and that’s why there is so much leakage. When the fabs transitioned to FinFETs, the first thing we saw was that the variation reduced, so we got better drive and better matching. Parasitics are higher, so doing RF types of circuits is a challenge, but if you are doing high-speed, large-signal circuits, like a DDR interface for example, it’s not a challenge. So the technology parameters are better controlled with FinFET technology. The other thing is that we took this opportunity to redesign the architectures completely to save on power. Going back to the earlier question about whether one IP can be one-size-fits-all for all the different markets, the answer is no. We’ve also segmented our portfolio to target different markets, and for FinFETs today there’s a market for consumer devices, so we redesigned a lot of the analog architecture to take advantage of some of the FinFET technology parameters to be able to achieve that. You are seeing some advantages with the new, smaller process technology, and there are different skill sets you need to adopt. The key one is being able to do a lot of the layout-type work much earlier in the design phase, so it’s not ‘do your schematic design and then hand it off to the layout engineer.’ The layout has to happen at the schematic stage because of what Saman was saying about electromigration, IR drop, well proximity, NBTI — these things should all be modeled at the schematic level.