More Than Moore

Experts at the table, part 2: The impact of process technology on choosing IP; where 2.5D chips will show up first; silicon photonics gains traction in cost-resilient markets.


Semiconductor Engineering sat down to discuss the value of feature shrinks and what comes next with Steve Eplett, design technology and automation manager at Open-Silicon; Patrick Soheili, vice president and general manager of IP Solutions at eSilicon; Brandon Wang, engineering group director at Cadence; John Ferguson, product manager for DRC applications at Mentor Graphics; and Kevin Kranen, director of strategic alliances at Synopsys. What follows are excerpts of that conversation.

SE: Is the starting point the foundry or the IP or the application?

Soheili: Choosing IP is a byproduct. The big issue is the implication of the process on the application itself. If you’re in mobility you can’t use a high-performance process. If you’re in a high-performance market, you can’t use a process that’s defined for mobility. And then you have capacity issues and reliability issues. There’s a lot of effort in the industry to do bit-cell compatible processes and technologies. We’ll see some of these problems being addressed with industry standards in 28nm. 14nm will be much more complicated. But the industry has a way of sorting this stuff out. So we may be able to port 14nm IP. It won’t be as easy as 250nm, but we’ll cross that bridge when it’s necessary.

Ferguson: Usually by the time that has happened, it's no longer the leading-edge node anymore.

Kranen: The early going is very expensive. As people grind down the corners, it becomes a mature process and suddenly it’s not as expensive.

SE: Do we have any real indication yet whether moving to 2.5D will be more expensive or less expensive?

Kranen: The only place we’ve seen real evidence, which is based on yields, is in the FPGA market. That’s been a really good application because it does let you build a really big die and it gives you the interconnect that you need.

Wang: There are five segments to look at with stacked die. One is FPGAs, where 2.5D is done because it's a very large die. Supercomputing is the second segment where this has begun. That's happening now. Another segment that's very structured is memory. We're also seeing the introduction of CMOS image sensing in 3D. So 2.5D and 3D are not just future approaches. They're in production now. It isn't as big of a challenge for the EDA industry because it's so structured. You can build a single structure and repeat it. The fourth segment is the mobile SoC. Mobile is all about low power and cost. The power numbers are as good as a single SoC, but stacked die need to have the same cost as chips being produced now, which are very competitive because of the volume and scalability. There are still some challenges on the cost side. I don't think 2.5D is a good candidate for mobile, but 3D is. The last segment is the IoT. That's where it gets very interesting. In the mobile space, the majority of applications are digital. In the IoT there are a lot of sensors. That's where alternative integration will play a role.

Kranen: Does the IoT need multiple die, or can they get by with older processes with analog?

Wang: Semiconductor companies like to put out a standard package and make the rest of the world an application layer. There are a bunch of startups writing code. That's what they want to see. The MCU companies want five platforms, but none of these platforms will be a good fit in terms of cost. Once the killer app comes out, the volume will increase because there are so many products that some of them can be integrated. The mobile chip is perfect for Moore's Law. IoT chips are MCUs shipping in volumes of 50K or 100K units. Those are very process-insensitive. They can be done at 130nm, and they can connect to image sensors, MEMS and all the rest of the parts that are better off at different processes.

Eplett: The first customers are going to be networking—those are the ones that have to go to 2.5D. Every customer that has been focused on cost has been a really tough sell. The guys who need high-performance processors but can’t afford to move to the next node—those are the guys we’re trying to work out a solution for right now.

Soheili: We’re working with manufacturers trying to future-proof technologies. We’re working through thermal, packaging, test, assembly—all of these have big question marks. There are also signal integrity questions for on-die, off-die, because there are lots of connections and peculiar materials that don’t have much history. On top of that, there are the known good die issues. That adds to reliability and test and cost. When you add it all together, the memory guys are deeply invested in coming up with a solution. On the other side, networking guys can see terabits of bandwidth. That’s a lot of consolidation, integration and data, and it has meaningful relevance. Cost isn’t the No. 1 issue for them. It’s throughput. Beyond that, mobile has a long way to go. We haven’t seen any traction there yet. In the IoT world, memory guys are looking at what we used to call tiles. So maybe you have a processor tile, an NVM tile and an analog tile. Can you assemble these in a way that makes sense? A lot of these are integration-driven. If you want to do a wearable, for example, you want to save a little space, so cost is not as much of an issue initially, but eventually it will be. Once the volume is there, we will find ways to make it more efficient. 2.5D is definitely effectiveness first, efficiency later.

SE: There are some other options, too, such as silicon photonics. How real is that?

Ferguson: It’s still very early. There are certain applications where photonics will continue for some time, but the jury is out as to when or if it will make it into the mainstream. There is always something else competing with it. Making dramatic changes in how we do things is a risk factor, and it will always be measured against whether there are ways of extending what we’re doing with a smaller risk. If we don’t find anything else, this is in our back pocket. But we may find something else first. We want to be there in case it is used, but we’re not going to bet the farm on it.

Soheili: Makers of chips for high-end applications, like HPC and high-end servers, are adamant they are going to use it, and they’re spending a lot of money on it.

Ferguson: Yes, there will be a common set of design applications.

Wang: Photonics goes together with 2.5D.

SE: Where is the starting point? Will it be the high-end applications?

Kranen: They’re the ones with the money to pull it through to the other side. And they can’t quite get what they want out of gigabit Ethernet.

Wang: Low power in the data center will be a driver. Data centers account for a significant percentage of electricity consumption.
