True to its cyclical nature, the semiconductor industry is swinging back toward ASICs from more diversified approaches such as FPGAs.
This dynamic is evident at companies such as Apple. “At one point we thought Apple was being a contrarian,” said Drew Wingard, CTO at Sonics. “Everybody else on the systems side was shedding their silicon people. The easiest counterpoint to what Apple was doing was Nokia, which had sold off their ASIC group to ST at one point and then they divested their internal modem IP team to Renesas and completely got out of this whole chip thing.”
This happened at the same time Apple began designing its own application processors. The company transitioned away from the original iPod, which used what would now be called an application processor from PortalPlayer, to joint development with Samsung. These were originally relatively small deviations from what Samsung was already building as an application processor, but then Apple started taking on more responsibility, employing more of its own silicon designers (now reportedly up to 1,200) and evolving its relationship with Samsung into what appears to have been a foundry relationship. Sources say it now has a similar relationship with TSMC.
“We thought that was a contrarian move,” said Wingard. “At the time it seemed that way. But in the aftermath of all of that, what have we seen? We’ve seen that in the smartphone/tablet space, companies like Freescale, TI, ST, Toshiba—and now, it appears, Renesas—have completely abandoned that segment of the market after trying to win it. What’s the guidance they give the market? Pretty much what the execs at TI said: ‘This is going vertical. If you don’t win Apple or Samsung, there’s not enough business in the rest of the world.’”
He added that Apple and Samsung have very strong systems businesses that are also doing application processors for phones. “The question we ask ourselves is, ‘Is this simply a facet of the application processor for the smartphone and tablet space?’ That has been driving a large amount of the innovation in the SoC business for the last 10 years. We find out very quickly that the answer is no, it’s not the only place this has happened.”
Another business that TI effectively exited, much more quietly, is multicore DSPs for base station designs, Wingard said. “If you look at general-purpose networking and data networking, as well as the base station and head-end devices for cellular and cable networks that are being built right now, all of a sudden those have gone back to being ASICs. In fact, LSI, which essentially abandoned the ASIC business five or so years ago, is back to being a major supplier of ASICs. They are publicly talking about doing designs for a number of the major communications companies. Everybody knew they were still doing stuff for Cisco, but if we look at who they’ve talked about, you’ll find a who’s who list in the networking space.”
Wingard isn’t alone in seeing those kinds of shifts back to ASICs. Eran Briman, vice president of marketing at DSP maker CEVA, said wireless infrastructure is now at an inflection point due to the proliferation of heterogeneous networks, also known as HetNets.
“The infrastructure now has to adapt to different conditions, which includes different numbers of users, a mix of users—4G, 3G, WiFi—and they need a lot of flexibility to support tens or hundreds of users,” Briman said. “On top of that, you want a single architecture because you don’t want to be writing software for one architecture versus another, and the market is much more cost-sensitive than it used to be. So what’s happening now is OEMs are taking over and defining the SoC, then going out and looking for ASIC vendors who have been waiting for an opportunity. We’re actually seeing a resurgence of the ASIC model in that space.”
There’s a fundamental business shift behind this, as well, according to Wingard. “The reality is that the economics of building these highly integrated SoCs have gotten out of whack with the business model for selling them, and it’s the case that a $200 million development program to build one of these advanced SoCs doesn’t really make sense for a semiconductor company to pursue if what you’re going to end up selling the chip for is the number of square millimeters of silicon inside the package,” he asserted.
However, this is how OEMs are used to buying chips, and changing that behavior is not something they do willingly or overnight. As a result, companies like TI say, ‘I’m not going to do this anymore.’ So what happens? “The chips still need to be built, so now the pendulum swings back and the OEMs have to step up and try to design those chips themselves. Do they want to build a back-end silicon team and figure out how to procure wafers and package them? No, of course they don’t. They end up working with someone who is good at doing that. Hey, we call those ASIC vendors. There are more than a handful of very large base station-class designs being pursued by companies like LSI right now. As we look around we’re seeing this more and more,” Wingard added.
Chris Rowen, Cadence Fellow and co-founder of Tensilica, said there’s always a bit of an ebb and flow in terms of what people choose for an implementation style.
“It reflects two opposing forces that have been at work over 15 or 20 years in the business, and that is, it’s sort of economics versus Moore’s Law integration. On one side of it, of course, it is expensive to design things with 100 million or 1 billion or 10 billion transistors in them, and not just because of the hardware costs but also because of all of the uncertainties and effort associated with integrating those pieces and getting the baseline software together. People would love more and more to be on standard silicon, be it an FPGA or some application-directed platform, if they could get their job done, because it is clearly cheaper in risk, in out-of-pocket dollars, and in time to do it that way,” he said.
At the same time Moore’s Law is a “jealous mistress,” Rowen said. “Moore’s Law fundamentally says integrate or die. Therefore, the natural number of chips that go into the average system is going down. More and more it is single-chip cell phones, single-chip televisions, single-chip network switches, single-chip PCs, because you can do it and that clearly is going to be the way you get to the lowest cost, the lowest power, the greatest battery life and the smallest form factor. But people want a lot of different systems. Not only do consumers want to buy different things, but chip vendors want to sell different things because that’s how they differentiate. [As such], there is an absolutely compelling motivation for them to build single-chip systems with different features than what the other guy has got, and it can’t all be software. Therefore there are compelling reasons at every level of the product pyramid from the most sophisticated to the most mass-market to find ways to differentiate.”
FPGAs: The perpetual bridesmaids
It is also interesting to look at the evolution of the story around FPGAs, he said, because the simple version of the FPGA story that emerged 15 or 20 years ago was, ‘Gosh, we’re so good at riding Moore’s Law that you’ll be able to do more and more in an off-the-shelf FPGA, it has great tools, and it’s going to replace the ASIC.’ “To an extent it is true. However, in any given application, the ratio of power or cost or silicon size between an FPGA doing a given system function and a much more dedicated, optimized ASIC doing that same function is big, on the order of 5X or 10X, and that ratio seems to be more or less constant. The reason the FPGA vendors have grown is all of the different systems that are built. [But] what fraction of all the systems are in fact things that fit within that envelope, where the cost and power and size of an FPGA are adequate?”
This is weighted by the number of system designs, he said. Looking at volume designs, while some FPGAs get used there, “the more the volume is, the less likely you are to be able to tolerate that 5X or 10X,” Rowen pointed out.
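The volume argument above can be made concrete with a back-of-envelope break-even model: an ASIC carries a large one-time development (NRE) cost but a much lower unit cost than an FPGA with a 5X to 10X per-unit penalty. All of the specific dollar figures below are illustrative assumptions, not numbers from the article (only the $200 million program figure and the 5X–10X ratio appear above).

```python
def total_cost(nre, unit_cost, volume):
    """Total program cost: one-time NRE plus per-unit silicon cost."""
    return nre + unit_cost * volume

def breakeven_volume(asic_nre, asic_unit, fpga_unit):
    """Volume at which the ASIC's NRE is paid back by its lower unit cost."""
    return asic_nre / (fpga_unit - asic_unit)

# Assumptions: a $200M ASIC program (the figure quoted above), a
# hypothetical $20 ASIC unit cost, and the FPGA ratio taken at 10X.
ASIC_NRE = 200_000_000
ASIC_UNIT = 20.0
FPGA_UNIT = 10 * ASIC_UNIT

v = breakeven_volume(ASIC_NRE, ASIC_UNIT, FPGA_UNIT)
print(f"Break-even volume: {v:,.0f} units")  # roughly 1.1M units here
```

Below the break-even volume the FPGA's zero NRE wins; above it, the 5X–10X unit-cost penalty dominates, which is why high-volume segments like handsets tip back toward ASICs.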
He believes the trend away from FPGAs is happening in the volume markets, with pressure to have more application-specific SoCs. “The vendors may call them standard products, so it’s a little bit different from the world of the ASIC business of 10 or 15 years ago, when there were lots of vendors and people who would hang out a shingle and say they would design a chip just for you, just for your one application. Even the big system companies now think very much more in terms of platforms than ASICs.”
Further, Pranav Ashar, chief technology officer at Real Intent, pointed out that FPGAs are not going to be able to service the mobility market because they have hard limits on how big a design they can capture. “It runs hot and it’s going to be big, so it’s not going to be able to service the mobile market. It’s just not possible at this point. In ARM-based ASICs like the A7 we’re talking about hundreds of millions of gates, and FPGAs are not there. It’s going to take many generations of technology-node iterations before they get there, and even then the ASICs are going to be orders of magnitude ahead.”
In addition, the functionality that these devices include is increasing by the day. For example, he said, “the Apple A7 is a 64-bit ARM-based processor, and in the iPhone 5s there is a second chip, the M7, which is solely for the purpose of sensing: motion detection, audio, etc. Today it is a discrete chip. But going forward, as you move into the next technology node, or let’s say you want to put all of this into a smart watch, the next goal at Apple and at other companies is going to be a single-chip solution for the A7 plus the M7. If you look ahead three or four technology nodes, bringing all of that into a single chip is going to be the goal, and FPGAs are not going to be able to do that.”
What has also facilitated this marginalization of FPGAs is the fact that as the networking and telecom domains, or the mobility domain, mature, the community basically figures out the components that will be needed on those SoCs, and it becomes more like a virtual platform, Ashar said. “One of the knocks against ASICs has been the latency it takes to get an ASIC from scratch to finish, and FPGAs basically shorten that cycle. You can do the verification in situ, easily change things if they don’t work out in the FPGA context, and so on. That problem has been mitigated to some extent because SoCs today are more like application-specific platforms, so a lot of the architectural decision making, the connectivity, and the underlying backplane-type things don’t need to be invented from iteration to iteration. All of those things are legacy items as far as architecture is concerned. That has mitigated some of the lead time it takes to bring an ASIC out, and the unpredictability for a company like Apple or Samsung in bringing an ASIC out, so they can get to market on time. The planning and implementation of these complex SoCs is now an established process.”
At the end of the day, the evidence points to the platform-based ASIC taking hold (again), this time through the mobility domain. “Volumes are mind-boggling, and that whole thing has created an opening for the proliferation of a platform-based ASIC type of business model for a number of other companies. So while the cost of masks and such is going to go up over the next few technology nodes, as a lot of these architectural and design processes become better understood it’s going to be very feasible for companies to base their business models on application-specific or domain-specific ASICs,” he concluded.