IP And FinFETs At Advanced Nodes

Experts at the table, part 2: FinFETs become more complex at each new node; stacked die IP challenges; including the package in the simulation; local versus global design concerns.


Semiconductor Engineering sat down to discuss IP and finFETs at advanced nodes with Warren Savage, president and CEO of IPextreme; Aveek Sarkar, vice president of engineering and product support at Ansys-Apache; Randy Smith, vice president of marketing at Sonics; and Bernard Murphy, CTO of Atrenta. What follows are excerpts of that conversation.

SE: What happens with the next revs of finFETs? Does it get easier to design them into SoCs?

Murphy: Not necessarily. One theorized problem with finFETs is the fault model. If you look at a multi-fin finFET, you potentially can have a whole bunch of new failure mechanisms. This is theoretical because no one really knows yet.

Sarkar: When we design standard cells, traditionally people never worried much about voltage drop. They expected a certain drop and they designed for it. That’s a lot different with a 700 mV supply. The IP providers are becoming much more sensitive to the environment their IP is going to operate in. In a soft IP environment they’re very concerned about how a standard cell is going to be used, how you’re going to simulate it, and what package you’re going to use.

SE: That leads to the next point. As we head into quadruple and even octa patterning, at least some of the chip will have to be done in a different architecture. How does that affect the IP?

Sarkar: If you add another layer with an interposer, there is concern about what happens with a voltage drop. And there is concern about heat.

Savage: The great thing about getting 3D to work in a commercial sense is that it really changes the analog IP game dramatically. Right now IP companies have to guess which processes they’re going to invest in. It could be a one-way ticket. If you get to the wrong node your company can die. With the advent of 3D, it allows some of these older, less-deep submicron nodes to have a life for that IP rather than forcing it into the digital IP domain where you design once and sell many.

Murphy: That’s 2.5D as well as 3D.

Savage: It could even be the high-performance stuff.

Smith: When you look at drive strengths, no one wants to slow down the clock, and we are facing increasing complexity. At the same time, we’re seeing some of the longest runs in silicon we’ve ever seen, at least logically. Extracting the network becomes really critical. You can’t do it manually. You’re connecting hundreds of blocks, and you can’t be worrying about all of these physical effects. You have to have a fabric architecture that can deal with that in a reasonable amount of time. You can’t take 25 iterations to figure out all the things you need to insert to make this work.

Sarkar: When we first started looking at dynamic voltage drop, we looked at it statistically. Power is a statistical problem, unlike timing. With finFETs, especially with multiple cores, we began looking at whether we can extend this statistical approach. In the past, if you ran 100 cycles of analysis you could cover 100% of the design, versus 50% of the design with 50 cycles. Traditionally that was manageable. They would do two or three simulations to cover the entire chip. But if you had time for just one simulation, could you cover 100%? When you look at the package on top of that, there has been a really significant amount of innovation there. All of a sudden there are different package architectures, so essentially you can have an extension of the chip inside the package.

SE: That affects the operation of all components, from the IP to the processor to the memory. Is that something you have to consider when you characterize the IP?

Sarkar: With soft IP you can work with that, but with hard IP you definitely need to be worried about it.

Smith: Soft IP has a level of configurability. It’s not a set of Verilog that doesn’t change. There’s a ton of it that is highly parameterized.

Savage: The architecture can change because the clocking has to change to reflect the different characteristics of the process itself. Are there really tools coming to help characterize the IP? I haven’t heard a lot about the tooling.

Sarkar: If you take the package design and you’re aware of how the customer will design the package, you can extract that portion of the package, push it down with that IP, and analyze it.

Savage: Then we can get to the point where the IP designer can design the constraints.

Sarkar: Exactly.

Savage: That can be pushed to the integrator.

Sarkar: And when you talk about IP models, they take the package into account and then write out some models. We don’t have a standard model, so when you plug it into the SoC model there are constraints. Are you sourcing more power in one part than another? That’s one aspect. A second is the TSV. We used to have one power supply. Now we have as many as 200. With finFETs they’re all diode-based, and they’re not very efficient, so they’re bigger. Now you have larger devices. And IP vendors have also started to worry about ESD analysis. They have always relied on the I/O guys to provide all the protection. They can’t do that anymore.

Murphy: I have not seen very aggressive combinations of functionality in 2.5D or 3D. I’ve seen DRAM. At some point analog will come, or MEMS. Some companies are saying you will mix and match and pathfind and have logic and memory all over the place. The question is whether it is real.

SE: Can we move to a platform-based integration architecture with 2.5D and 3D?

Sarkar: Some of these problems are global. Power is a global problem. You can’t partition it. Timing you can partition. EM is a local problem. Voltage drop is definitely not local. Neither is thermal. You need abstract models of this, and that model needs to be accurate enough to capture all of these effects. Platforms are definitely a direction people are heading in. We’re starting to see system-level analysis in the stacked die. The interposer guys are going to get models from everybody so they can do thermal analysis.

Smith: That only works if the model is granular enough, though. If you have a model where everyone assumes worst-case parameters for what they have, it leads to overdesign. It has to give you more accurate scenarios and use cases. If you use the model under different conditions to get the right value, that’s fine. But you can’t just assume worst-case thermal.

To view part one of this roundtable, click here.

To view part three, click here.
