IP And FinFETs At Advanced Nodes

Experts at the table, part 1: Power emerges as the primary driver to shrink features; heat and electromigration emerge as critical issues; integration becomes an enormous challenge.


Semiconductor Engineering sat down to discuss IP and finFETs at advanced nodes with Bernard Murphy, CTO of Atrenta; Warren Savage, president and CEO of IPextreme; Aveek Sarkar, vice president of engineering and product support at Ansys-Apache; and Randy Smith, vice president of marketing at Sonics. What follows are excerpts of that conversation.

SE: As we push into the next nodes, we’ve got a bunch of issues involving integration of IP, including power, characterization and proximity effects. How big a change is this?

Smith: Fundamentally, as a software provider, it hasn't changed that much. Even with finFETs, we're working with standard cell libraries that have all been characterized. The bigger issue for finFETs is designer intentions. Why did they go to finFETs in the first place? They're looking for improved power, less leakage, longer battery life. They're dealing with more integration. But people are already struggling to put this many devices on a die. The real reason to move forward is power.

SE: Is it dynamic power, leakage current, or power management across the chip?

Smith: Certainly leakage is a big driver—is it really off when you turn it off? We hear about power of all sorts, so power management and dynamic power matter. More granularity with clock trees is important. If one product can get double the battery life, that could be very significant. It's only going to become more important with wearables and energy harvesting.

Murphy: A lot of the finFET impact is really hidden from the logic part of the design, so you have to stretch a little bit to figure out what the impact really is, but there are some big impacts. The standard MCU guys, when they want to dial down power, don't want to mess with the architecture because that has ripple impacts on a lot of other areas. So they use back biasing, which is a great way to reduce leakage. But that doesn't work with finFETs, and if you still have a power problem with your MCU you have to change the architecture. You have to use clock gating and fine-grained DVFS.
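A minimal sketch of the tradeoff Murphy mentions: with back biasing off the table, dynamic power has to come from clock gating and DVFS, and the familiar P ≈ C·V²·f relation shows why voltage scaling pays off quadratically. All numbers below are invented for illustration, not from the discussion.

```python
# Illustrative only: why fine-grained DVFS is effective for dynamic power.
# Switching power scales as C_eff * V^2 * f, so lowering both voltage and
# frequency in a low-activity domain gives a better-than-linear reduction.

def dynamic_power(c_eff_nf, v_volts, f_mhz):
    """Dynamic switching power, P = C_eff * V^2 * f (arbitrary units)."""
    return c_eff_nf * v_volts**2 * f_mhz

nominal = dynamic_power(1.0, 0.8, 1000)   # assumed full-speed operating point
scaled  = dynamic_power(1.0, 0.6, 500)    # assumed lower DVFS operating point

print(f"power at scaled point: {scaled / nominal:.0%} of nominal")
# Halving frequency alone would give 50%; adding the voltage drop
# (0.6/0.8)^2 = 0.5625 brings it down to about 28%.
```
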

Savage: Anyone doing 16nm is doing a massive level of IP integration. The amount of integration that has to happen at that process node is enormous. That is still the Wild West. It’s an engineering problem from the standpoint of how you connect these things. It’s about performance, whether the clock needs to be shut off, but from a functionality standpoint it’s incredibly inefficient because there’s just so much of it. For the last couple years people have been talking about subsystem integration, where if you have very large pre-defined subsystems—audio and video, for example—those are completely shrink wrapped.

Murphy: A number of companies are building their own ARM subsystems. Some is ARM stuff, some is their own.

Savage: There are other companies using QorIQ stuff. That started out on the Power architecture and it's in automotive drivetrain management, and we're now seeing that whole subsystem architecture being dropped down and ported to the ARM architecture. They're re-using the subsystem.

Sarkar: As an industry, we have a lot of experience working with finFETs. There's the whole IP side, but there also are standard cells. With standard cells, people never worried about that in the past. But you do have to worry about finFET drive strengths; there is 25% more current. Even with the outputs, you're looking at 30% lower EM limits. Now we are seeing almost all IP vendors paying more attention to how they do their design, how they do their characterization, and how they create a model. We're starting to get questions about how they can create rule checks at the SoC level. When you consider leakage, how do you analyze something like this in the IP space? The memory compiler guys never cared about this. Now they do. We're seeing the same types of concerns with the PHY and the SerDes. FinFETs add the level of control and reliability they're looking for.
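Sarkar's two figures compound. A rough back-of-the-envelope check, using hypothetical baseline values (only the 25% and 30% figures come from the discussion):

```python
# Hypothetical back-of-the-envelope electromigration (EM) margin check.
# Sarkar's figures: finFET cells drive ~25% more current, while EM
# current limits at the new node are ~30% lower. Baselines are invented.

baseline_drive_ma = 1.0        # assumed planar-node drive current, mA
baseline_em_limit_ma = 2.0     # assumed planar-node EM limit, mA

finfet_drive_ma = baseline_drive_ma * 1.25        # 25% more current
finfet_em_limit_ma = baseline_em_limit_ma * 0.70  # 30% lower limit

planar_margin = baseline_em_limit_ma / baseline_drive_ma
finfet_margin = finfet_em_limit_ma / finfet_drive_ma

print(f"planar EM margin: {planar_margin:.2f}x")
print(f"finFET EM margin: {finfet_margin:.2f}x")
```

Whatever the baseline, the combined effect cuts EM headroom to 0.70/1.25 ≈ 56% of what it was, which is why IP vendors now bake EM rule checks into characterization.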

SE: When you’re designing IP for finFETs, do you have to change the design or is it a question of better characterization of what’s already created?

Murphy: We’re all used to multi-Vt IP. With finFET, I’m hearing there’s another dimension. You can have a different style of cell design that gives you different leakage performance. So now you can potentially have a bigger spectrum of options you can synthesize to. That becomes a more challenging problem.

Sarkar: We’ve been talking about chip-package design for a long time—seven years, in fact. This is the first year that people are discussing thermal issues. So is it just an enclosure problem for the system guys in terms of cooling? This also affects reliability. A lot of the fabless guys haven’t really thought about thermal in the past.

Smith: Architecturally we have to think about these things. We think that controlling power strictly at the software level is crazy because you’re burning the most expensive processor you’ve got over a few thousand cycles just to figure out if you can even turn it off. We’re working on bringing power down to the hardware level. We’ve got a couple customers trying to shift from a software focus on power management to a hardware focus, which hides all the details from the software. Some of it comes from handset manufacturers, where their QA covers 50,000 scenarios. Under all these different scenarios, how much power are you burning?

SE: And those are what, about 95% to 98% of the use cases?

Smith: At least. They’re adding to them all the time. So one generation was 20,000. The next was 50,000. Everybody has to do that because they need to understand how many pieces of the chip are going to be on. They can’t have it all on at once. These scenarios have to cover multiple processors because we’re talking about quad cores.
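The scenario testing Smith describes can be sketched as a simple budget check: each QA scenario lists which blocks are on, and their summed power must stay under the chip budget. Block names and power numbers here are invented for illustration.

```python
# Hypothetical sketch of scenario-based power checking: per-block power
# numbers and the chip budget are invented, not from the discussion.

BLOCK_POWER_MW = {"cpu0": 250, "cpu1": 250, "gpu": 400, "modem": 180, "audio": 30}
BUDGET_MW = 700

def scenario_power(active_blocks):
    """Total power for a scenario, given the set of active blocks."""
    return sum(BLOCK_POWER_MW[b] for b in active_blocks)

scenarios = {
    "music_playback": {"cpu0", "audio"},
    "video_call": {"cpu0", "cpu1", "modem", "audio"},
    "gaming": {"cpu0", "cpu1", "gpu"},
}

for name, blocks in scenarios.items():
    p = scenario_power(blocks)
    status = "OK" if p <= BUDGET_MW else "OVER BUDGET"
    print(f"{name}: {p} mW ({status})")
```

At 50,000 scenarios this becomes a coverage problem in its own right, which is the scale Smith's handset customers are dealing with: the check shows immediately which combinations of blocks can never be on together.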

Savage: From an integration standpoint, how do you model that? And how do you communicate all of that to the customer?

Smith: Our job is to provide a mechanism that allows them to solve the problem, rather than solving it for them. We’re giving them the knobs. Under what scenarios can you turn off certain things?

Savage: It’s like a race car. You need to know how to drive it.

Murphy: It’s looking at all the handshaking.

Sarkar: For CPU providers, when they ship to the end client they have their entire power delivery and it works just fine. But when you have the package connected to that, how do you share that information? The information has to be bi-directional. When you’re designing the IP, you have to be able to share that information in context.

Smith: There’s a big difference between soft IP and hardened IP. If you’re doing soft IP, you absolutely need more characterization information. But then they have the visibility, because they can see it all the way through for what they want to use. For hard IP it’s completely different.

Murphy: There’s a dirty little secret here. Everyone thinks of ARM cores as something you can drop into your design and not mess with. That’s not how it works in a lot of cases. People are messing with it. They’re pulling out fixed statements and adding their own for performance reasons. They’re tuning the performance. They’re finding ways to differentiate even though they’re still using the same ARM core that everyone else is using.

To view part 2 of this roundtable, click here.
To view part 3, click here.


