Experts At The Table: IP

Last of three parts: Managing risk with a disaggregated supply chain; why and where the IP stack is growing; the changing economics of stacked die; thermal, power and economic context.


By Ed Sperling
Low-Power Engineering sat down to talk about IP with John Goodenough, vice president of design technology and automation at ARM; Simon Butler, CEO of Methodics; Navraj Nandra, senior director of marketing for DesignWare analog and mixed-signal IP at Synopsys; and Neil Hand, product marketing group director at Cadence. What follows are excerpts of that discussion.

LPE: The supply chain needs to function almost like an extended IDM model, right?
Goodenough: Yes, but it’s not a new concept. All of this was done in the automotive industry 25 years ago. The semiconductor industry is transitioning there. People are trying to manage the risk of taking a product out, and they are dependent on a lot of moving parts. Their goal is to understand and manage the risks.
Butler: But it needs to come in a consistent way. You don’t want to be on a plane every week. You need an abstraction that gives you the visibility you need without requiring a VPN license.
Hand: You need to set up a hosted design chain for the customer. Everyone is working within that common collaborative environment so that when something goes wrong it can be quickly addressed. As there are new revisions, they automatically drop into that environment and the customer sees them. That’s a trend that’s happening now.
Butler: That might be true if you’re both working on parts of the SoC. But if you’re a systems house and you’re assembling, then it’s a different tool set.
Hand: That’s correct. But the trends in the microcosm of IP are beginning to move into that realm, as well. Once you get into a system context, the EDA/VIP world doesn’t really fit into a systems house’s environment and supply chain. That’s a challenge we have to resolve.

LPE: How does that affect growth of IP?
Hand: It affects everything up and down the stack. It goes down to integrating RF, RF-like technologies such as optical, data converters and analog into the SoC—all of that is starting to come as IP instead of standalone chips. The software and firmware stacks are more of an IP area. And once that gets solved, the next thing is how you build that into the system-level models and supply-chain models that are required. But we’re at such a low level on the IP side that there’s a lot of integration that has to happen.
Goodenough: I just came from Linaro. When we look at the new IP, it’s the software IP and analog IP. It’s the next logical thing to do, and it’s the up-and-coming area where people are looking to reduce cost. It’s no longer a real differentiator, so you just outsource it. But then you have to look at managing those software communities. It’s open-source software communities, and making sure the platform, the instruction set and the memory maps of the platform architecture are consistently reflected up to the software community and the operating system guy, so that when you plug those things together they work.
Hand: And in some cases that IP may become standalone and part of a 3D stack, in which case you have to manage that whole supply chain. How do you get that integrated onto the stack? In some cases, because of cost, risk or performance, you may not want to integrate some of this IP natively into the SoC.

LPE: Analog is a classic example of that, right? In 2.5D, you may want a whole separate chip at an older process node.
Nandra: Yes. We do build stuff that integrates into more nodes, but we also have customers that would like to put their analog into a 65nm power-management IC, including the rest of the interfaces, and then the rest of the SoC at 20nm or 14nm. At 65nm, power management is leading-edge technology. There are still some design challenges, although they’re not so difficult if you’ve worked at that node before. The point about stacking and packaging is quite interesting. From a signal-integrity perspective, a lot of these things become easier. You don’t have long wires or cables anymore. You just have some communication going through the silicon and a via. The challenge becomes thermal dissipation between the substrates. You have a substrate at 65nm and one at a smaller node with very different thermal characteristics. Someone has to figure out how to widen the memory lines so they don’t fuse together.
Goodenough: It’s a new context. They’re just wires, but they’re wires done in a different way.
Hand: The other challenge is a business one. Who owns the risk? If you have Wide I/O and a Wide I/O memory chip, the memory chipmaker says this is a known good die but it couldn’t be tested out on the landing pads. So it gets thinned out, stacked on seven other dies and then you finally do a test of the whole stack. It doesn’t work. Who owns the problem? You’ve got eight memory chips and an SoC, and it’s packaged and pinned out. Who owns the cost?
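As an illustrative aside (the figures here are hypothetical, not from the panel), the risk Hand describes compounds multiplicatively across the stack, which is why the question of who owns the cost matters so much:

```python
# Illustrative back-of-the-envelope stack-yield calculation.
# The 98% per-die figure and nine-die stack are hypothetical examples,
# not numbers cited in the discussion.

def stack_yield(per_die_yield: float, num_dies: int) -> float:
    """Probability that every die in the stack is good, assuming
    independent defects and no post-stack repair."""
    return per_die_yield ** num_dies

# Eight memory dies plus one SoC, each 98% likely to be good after
# whatever testing was possible before thinning and stacking.
dies = 9
y = stack_yield(0.98, dies)
print(f"Stack yield with {dies} dies at 98% each: {y:.1%}")  # ~83.4%
```

Even with individually high-confidence dies, roughly one stack in six fails after assembly in this example, and that scrap cost has to land somewhere in the supply chain.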
Nandra: From a practical packaging perspective, all of these 3D-IC and Wide I/O technologies are really expensive. We’ve had similar discussions on Wide I/O. It’s a breakthrough technology with significant performance, but you have to invest heavily in your package. When it comes down to high-volume parts, people aren’t going to pay the money for this.
Hand: It depends. It’s the overall cost of the system that’s important. If you can get the overall power down and performance up, companies will invest. We’ve got customers investing in this now because it’s a way of differentiating. If you’ve got smartphone SoC vendors and they can differentiate with better power and performance and win the socket, they’ll do what they have to.
Nandra: With Wide I/O, that roadmap has been pushed out as people try to make LPDDR meet that requirement. Today, JEDEC is looking at LPDDR4. That will push out Wide I/O further. From a technical perspective I totally get why big companies are looking at this technology. They’re also looking at fully depleted SOI, for example. But it’s not going to make it into a tablet or smartphone.
Goodenough: It’s a question of when the cost is right.
Hand: For many customers, LPDDR3 will solve their immediate problems. But if you look at the trend, this is already happening. To get terabit-per-second bandwidth out of memory you have to go to stacks. It’s not a question of whether the cost equation will work for this. It’s just a matter of whether it’s next year or the year after that.
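As a rough illustration of the bandwidth arithmetic behind that point (the interface widths and transfer rates below are representative examples, not figures cited by the panel):

```python
# Rough, illustrative memory-bandwidth arithmetic with example figures.

def bandwidth_gbps(bus_width_bits: int, transfer_rate_mtps: float) -> float:
    """Peak bandwidth in gigabits per second for a simple parallel interface."""
    return bus_width_bits * transfer_rate_mtps / 1000.0

lpddr3 = bandwidth_gbps(32, 1600)    # one x32 LPDDR3-style channel at 1600 MT/s
stacked = bandwidth_gbps(512, 200)   # one Wide I/O-style 512-bit stacked interface

print(f"LPDDR3 x32 channel:      {lpddr3:.0f} Gb/s")   # ~51 Gb/s
print(f"512-bit stacked channel: {stacked:.0f} Gb/s")  # ~102 Gb/s
print(f"Channels needed for 1 Tb/s: "
      f"{1000/lpddr3:.0f} vs {1000/stacked:.0f}")
```

Getting to a terabit per second with conventional off-chip channels means multiplying narrow interfaces and their packaging pins, which is what pushes the economics toward wide, stacked connections.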

LPE: We’re looking at a complete bifurcation of the market—those who do massive volume versus those who work in lower volumes.
Goodenough: It’s not so much volume as how much you can recover from your investment in how you make your silicon. Whether it’s a micro on a board, a processor in an FPGA or a custom chip, all that matters is how much profit you’re recovering. If you can only recover a wafer-thin margin, you’re not going to be investing in new technologies.
Hand: Then you need volumes of 50 million units a year to get your money back.
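The break-even arithmetic behind that figure is simple amortization; a hypothetical sketch (the NRE and per-unit margin numbers are illustrative assumptions, not the panel’s):

```python
# Illustrative NRE-amortization arithmetic with hypothetical figures.

def breakeven_units(nre_dollars: float, margin_per_unit: float) -> float:
    """Units needed before per-unit margin covers the up-front design cost."""
    return nre_dollars / margin_per_unit

# A wafer-thin margin of $1 per chip against $50M of design and mask NRE.
print(f"{breakeven_units(50e6, 1.0):,.0f} units to break even")  # 50,000,000
```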

LPE: But this does play into subsystems, because you can integrate all the pieces and achieve much greater volume, right?
Goodenough: It’s no different than boards. You’re seeing that happening in SoCs and FPGAs. Whether it’s going onto a board, a hard block in an FPGA, a soft fabric in an FPGA or a custom ASIC, they’re all basically different compile points that end up as a piece of silicon with a different price and a different energy envelope. And if you go to China with a standard part, you can probably turn a board around in two or three days. That’s a big difference from spending 2.5 years doing a custom IC.
Hand: You’ll have much of what was on the board in an SoC itself. Whether that’s integrated into an SoC or a stack or just integrated parts, it depends.
Goodenough: But if I’m a customer I don’t really care about that. I only care about how much power does it use, how much does it cost and what’s the form factor.
Hand: There have been many chips in the telecom world that make no sense to manufacture if you measure them by consumer SoC metrics, such as how many units have shipped. But the value you get out of each of those means they can make the economics work. Going back to context, there’s an economic context that people building a system are operating in. If you’re providing IP, you have to provide the right deliverables in the economic as well as the technical context. That’s what will drive subsystems more than anything else, too—the economic context.

LPE: And time to market is part of that economic equation?
Hand: Yes. If most of your chip is good enough to get the job done, and you can finish it in a few months by integrating a few extra pieces while assuming everything you didn’t touch still works, that’s a compelling argument.
Nandra: We see that with smartphones and tablets. In China, customers are starting to get into the tier-two markets. They’re all about derivatives. The idea is that you reduce the cost of the IP.
Goodenough: You have to maintain IP, though. The IP may be fixed but the context is evolving. You have to evolve your IP as that context changes from four-layer boards to two-layer boards, or 32nm to 14nm. IP has a long lifetime and you have to anticipate where it’s going to land.
Butler: What’s particularly interesting is the IP view of defect tracking. A defect in IP never really goes away. There’s always somebody using it somewhere, and you need to know. It’s different from software, where the lifetime is project-based. What we need is something that tracks bugs in IP within a system context, so you get all your dependencies.
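A minimal sketch of the kind of dependency-aware defect tracking Butler describes, assuming a hypothetical data structure (the names below are illustrations, not an actual tool or Methodics product):

```python
# Minimal sketch: propagate open IP defects up through a dependency tree.
# All block names, versions and defect IDs are hypothetical examples.
from dataclasses import dataclass, field

@dataclass
class IPBlock:
    name: str
    version: str
    defects: list[str] = field(default_factory=list)      # open defect IDs
    uses: list["IPBlock"] = field(default_factory=list)   # sub-IP dependencies

def open_defects(block: IPBlock) -> dict[str, list[str]]:
    """Walk the dependency tree and report every open defect that could
    affect this block, keyed by the IP that owns it."""
    report = {f"{block.name}@{block.version}": list(block.defects)}
    for dep in block.uses:
        report.update(open_defects(dep))
    return {k: v for k, v in report.items() if v}

# A chip that integrates a PHY, which in turn wraps an older controller.
ctrl = IPBlock("ddr_ctrl", "1.2", defects=["BUG-101"])
phy = IPBlock("ddr_phy", "2.0", uses=[ctrl])
soc = IPBlock("soc_top", "0.9", uses=[phy])
print(open_defects(soc))   # {'ddr_ctrl@1.2': ['BUG-101']}
```

The point is that the controller’s defect stays visible from every system that ever pulled it in, long after the original project that found it has closed.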


