Experts At The Table: Designing At 28nm And Beyond

Last of three parts: FPGA co-processors; virtualization; designs at 14nm; risks and guarantees for IP and subsystems; the value proposition for stacking die.

By Ed Sperling
System-Level Design sat down to talk about design at future process nodes with Naveed Sherwani, president and CEO of Open-Silicon; Charles Janac, chairman and CEO of Arteris; Frank Schirrmeister, group director of product marketing for Cadence’s System Development Suite; Behrooz Zahiri, vice president of marketing at Magma (and currently director of marketing at Synopsys); and Charlie Cheng, CEO of Kilopass.

SLD: SoCs have always been on the high end of the cost curve. Will that change as they become more mainstream?
Schirrmeister: There are FPGA SoCs, which may have a dual ARM Cortex-A9 subsystem on which you can run Linux, plus as much as 4 million gates of programmable fabric in the large version for your own RTL. That’s what’s approaching the ASIC side. You get the pre-defined subsystem and add your own components, in RTL, to the programmable fabric. And that’s already integrated at the system level with virtual platforms and high-level synthesis. Those are making these designs accessible to different markets.
Cheng: There will not be generic SoCs replacing ASSPs. Cost is very high at the system level, and every integration point costs something. It’s hard to argue that a generic SoC has a place. Every SoC I know of is targeted at a specific market.

SLD: We start making different tradeoffs as we move down the curve, right? Time to market is arguably worth paying an extra dollar per chip.
Janac: Mobility SoCs are selling in huge volumes. That will continue to grow. But they’re designed for a specific purpose. What we’re starting to see is people building FPGA co-processors that can manage the functionality of the SoC at a very low cost, because those SoCs ship in huge volumes and very big contracts drive their price down. They are using the co-processors to take the SoCs into markets they were never intended for. The FPGA business is getting very interesting.

SLD: Does software change, as well? Do we move away from a general-purpose operating system, or do we still have a big operating system and many little ones?
Janac: Virtualization allows you to run multiple operating systems at once, invisible to the user and the application software.
Schirrmeister: I was at an automotive conference recently and one executive was talking about a hypervisor to switch between different operating systems even in the car. They’re looking at things like that. But if you look at some classic semiconductor companies making application processors, they’re starting to differentiate in software. Android democratizes everything, and then you add other software. The differentiation can be in hardware and software.
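To make the hypervisor point concrete: on processors with hardware virtualization support, a host kernel can run guest code directly and trap only the privileged operations. Below is a minimal, heavily simplified sketch against Linux’s KVM interface that runs a few real-mode x86 instructions inside a guest. Error handling is omitted, and this illustrates hardware-assisted virtualization in general, not any product the panel discusses.

```c
/* Minimal KVM sketch: run a tiny real-mode guest (Linux, x86-64 host).
 * Error checks omitted for brevity. */
#include <fcntl.h>
#include <linux/kvm.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

int main(void) {
    int kvm = open("/dev/kvm", O_RDWR);
    int vm  = ioctl(kvm, KVM_CREATE_VM, 0);

    /* Guest code: write 42 to I/O port 0x10, then halt. */
    const uint8_t code[] = {
        0xba, 0x10, 0x00,  /* mov dx, 0x10 */
        0xb0, 0x2a,        /* mov al, 42   */
        0xee,              /* out dx, al   */
        0xf4,              /* hlt          */
    };
    void *mem = mmap(NULL, 0x1000, PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    memcpy(mem, code, sizeof(code));

    struct kvm_userspace_memory_region region = {
        .slot = 0, .guest_phys_addr = 0x1000,
        .memory_size = 0x1000, .userspace_addr = (uint64_t)(uintptr_t)mem,
    };
    ioctl(vm, KVM_SET_USER_MEMORY_REGION, &region);

    int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);
    int run_size = ioctl(kvm, KVM_GET_VCPU_MMAP_SIZE, 0);
    struct kvm_run *run = mmap(NULL, run_size, PROT_READ | PROT_WRITE,
                               MAP_SHARED, vcpu, 0);

    /* Put the vCPU in real mode, starting at our code's address. */
    struct kvm_sregs sregs;
    ioctl(vcpu, KVM_GET_SREGS, &sregs);
    sregs.cs.base = 0; sregs.cs.selector = 0;
    ioctl(vcpu, KVM_SET_SREGS, &sregs);
    struct kvm_regs regs = { .rip = 0x1000, .rflags = 2 };
    ioctl(vcpu, KVM_SET_REGS, &regs);

    /* Run the guest; privileged operations (here, OUT) exit back to us. */
    for (;;) {
        ioctl(vcpu, KVM_RUN, 0);
        if (run->exit_reason == KVM_EXIT_IO) {
            printf("guest wrote %u to port 0x%x\n",
                   *((uint8_t *)run + run->io.data_offset), run->io.port);
        } else if (run->exit_reason == KVM_EXIT_HLT) {
            puts("guest halted");
            return 0;
        }
    }
}
```

The same trap-and-resume loop, scaled up with device emulation and scheduling, is what lets a hypervisor time-slice several unmodified operating systems on one SoC.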
Zahiri: One area that hasn’t been tapped is software controlling power. Our high-end customers are doing 50 to 100 power domains, mostly for mobile processors, and we’re enabling people to design that way. And yet there’s no significant mechanism on the software side that says: if you’re dialing your phone, this other part of the chip should be shut down. Software has done nothing in a big way to control power.
Janac: It’s a cultural problem. The software people don’t understand the hardware and they don’t want to use the APIs that are available. There is a gap. Computer science graduates want to write Java.
Schirrmeister: There is one example where engineers writing an iPhone app ignored an API and the app sucked the battery dry within an hour. Those are the things that need to be validated, and they’re an integration problem of hardware and software. I have seen software controlling power domains in the wireless world, though. There are power-management ICs that can pull areas of the processor and certain power domains down to a lower voltage, and that is software controlled. From my perspective, integration is a huge issue there at several levels. There is hardware-software integration. There is verification that the application runs correctly on the chip, which has become such a big task at advanced technology nodes. Then there is subsystem integration and validation. It’s quite a challenge.
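The APIs the panel says software teams ignore mostly boil down to get/put reference counting on power domains. Here is a minimal sketch of that idea in plain C. The domain names and decision points are hypothetical; a real SoC would do the gating through vendor power-controller registers or a kernel framework such as Linux runtime PM, not a printf.

```c
/* Sketch of software-controlled power gating via reference counting.
 * Hypothetical domains; illustrative only. */
#include <stdio.h>

struct power_domain {
    const char *name;
    int refcount;   /* users currently holding the domain on */
    int powered;    /* 1 = on, 0 = gated off */
};

/* A caller takes a reference before using IP in the domain... */
static void pd_get(struct power_domain *pd) {
    if (pd->refcount++ == 0) {
        pd->powered = 1;  /* a real driver would write the power controller */
        printf("%s: powered on\n", pd->name);
    }
}

/* ...and drops it when done; the last user gates the domain off. */
static void pd_put(struct power_domain *pd) {
    if (--pd->refcount == 0) {
        pd->powered = 0;  /* a real driver would write the power controller */
        printf("%s: powered off\n", pd->name);
    }
}

int main(void) {
    struct power_domain modem = { "modem", 0, 0 };
    struct power_domain gpu   = { "gpu",   0, 0 };

    /* Dialing the phone: the dialer needs the modem; the UI
     * touches the GPU only briefly, so it gates off afterward. */
    pd_get(&modem);
    pd_get(&gpu);
    pd_put(&gpu);     /* last GPU user gone: domain shuts down */
    pd_put(&modem);   /* call ends: modem domain shuts down */
    return 0;
}
```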

SLD: And from a verification standpoint, you’re never really done, right?
Schirrmeister: At the end of the day, the designer has to be confident that taping out that chip won’t be the last thing he does in his career.

SLD: As we move to 14nm and beyond, will we be involved with the chip at the same level or will it be more an integration of platforms?
Sherwani: I don’t see that happening. We already have 14nm chips in development, and our customers are willing to pay the money required, even though there will be fewer customers who do. Even at 14nm there are still a whole bunch of companies doing chips, because there are applications that need it.
Janac: But you do need more and more volume. At 90nm you needed about 100,000 units to break even. At 65nm you needed probably 6 million units. At 40nm you needed between 10 million and 15 million units. At 28nm you need 50 million to 60 million. At 20nm you will need 100 million units. The markets that can support those volumes are fewer and fewer, which is why I expect SoCs and 3D silicon to be prevalent in markets that don’t justify those very complicated deep-submicron dies.
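Those thresholds follow from simple NRE amortization: break-even volume is the fixed design cost divided by the margin on each unit sold. A sketch with assumed figures is below; none of these numbers come from the panel, and they are chosen only so the output lands near the 28nm figure quoted above.

```c
/* Break-even volume = NRE / (selling price - unit cost).
 * All figures are assumptions for illustration only. */
#include <stdio.h>

int main(void) {
    double nre       = 40e6;   /* assumed 28nm design + mask NRE, dollars */
    double price     = 12.00;  /* assumed average selling price per chip  */
    double unit_cost = 11.20;  /* assumed wafer, package, and test cost   */

    double breakeven = nre / (price - unit_cost);
    printf("Break-even volume: %.1f million units\n", breakeven / 1e6);
    return 0;  /* prints 50.0 million with these inputs */
}
```

Since NRE climbs steeply with each node while per-unit margins stay thin, the break-even volume rises node over node, which is the pattern in Janac’s numbers.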

SLD: But you may have a 14nm known good die that is part of that chip, right?
Janac: Yes, you become an assembler.
Sherwani: The other problem we see is that IP companies are not willing to warranty their IP, and that’s one of the big drivers behind known good die. If I buy $5 million to $6 million in IP, one piece of IP out of 100 or 200 that doesn’t work can force me to re-spin the chip. Yet most IP vendors do not warranty their work. They are not willing to pay our re-spin cost even if we can prove their IP is the problem. That means I have to go to a model that protects me from that problem, because at the same time Open-Silicon has to warranty the chip. That’s why we want to go to known good die. Once IP is proven in a piece of silicon, I don’t want to re-integrate it again and again; every time I re-integrate it, there is a chance I have missed a problem. The world is moving toward 3D kinds of chips that will let us address smaller markets while still keeping chip volumes high, because the same chip will go into multiple 3D chips.

SLD: You basically define what a derivative chip is, right?
Sherwani: Yes.
Janac: About 25% of our revenue comes from being able to link die together inside a system-in-package. It becomes one of the key enablers. You don’t have to be part of the 28nm or 20nm problem, because some things, like analog and modems, don’t require it. You maintain Moore’s Law by stacking the die.


