How Much Will That Chip Cost?

From the leading edge of design to older process nodes, development costs are being contained much better than the initial reports would indicate. But not always for the obvious reasons.


From the most advanced process nodes to the trailing edge of design, there is talk about the skyrocketing cost of developing increasingly complex SoCs. At 16/14nm it’s a combination of multi-patterning, multiple power domains, and factoring in physical and proximity effects. At older nodes, it’s the shift to more sophisticated versions of the processes and new tools to work within those processes.

Despite an industry-wide sense of despair, though, progress is being made on all fronts—with tools, methodologies, or platform approaches that rely heavily on reuse. While projections show it will cost as much as $300 million to develop new SoCs at the leading edge (see chart below), the real numbers are usually much lower—generally between $20 million and $50 million, provided there is plenty of reusable IP. And at older nodes, process improvements can allow chipmakers to stay exactly where they are and still eke one or two generations of devices out of the same node. It’s not all good news — it’s still expensive, difficult and often frustrating — but neither is it all bad news.

Source: IBS

“There’s huge variation in the top-line numbers,” said Steve Carlson, group marketing director in Cadence’s Office of Chief Strategy. “If you develop a new product from scratch it can be hugely expensive, but most folks use existing IP and software and a lot of infrastructure that comes off the tab. We’ve seen where it can come down to $20 million. So while the trend is that all things get bigger and the numbers at the top of the curve vary from company to company, the shape of the curve is the same for most of them.”

Economic tradeoffs
As SoCs have become more complex, so too have the economics surrounding them. That includes everything from which features and IP to integrate, how quickly to get to market and with what specs on power and performance, how many layers of metal, how memories are configured and how many memories are used, and which markets will be targeted. Each of these carries a price, and they can add up to some very large numbers.

“There are two models that can work,” said Drew Wingard, chief technology officer at Sonics. “One is the single-category, single-company model, where only one company makes everything. The second is a category killer—a superchip—where you don’t know exactly what the chip will be used for. But there is no way a semiconductor company can get a return on a $500 million investment for a chip, while for a system company that price tag might look cheap if it provides an extra 10% in sales on a $600 appliance. A systems company also can do it more cheaply than a company that only makes chips because they would have to overdesign it.”
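Wingard's comparison can be made concrete with some simple break-even arithmetic. The $500 million investment, $600 appliance price, and 10% sales lift come from the quote above; the per-chip margin and appliance margin figures below are hypothetical assumptions added for illustration.

```python
# Illustrative break-even arithmetic for the "superchip" example above.
# CHIP_MARGIN and APPLIANCE_MARGIN are assumed values, not from the article.

def breakeven_units(investment: float, margin_per_unit: float) -> float:
    """Units that must be sold to recoup a fixed design investment."""
    return investment / margin_per_unit

CHIP_INVESTMENT = 500e6      # $500M development cost (from the article)
CHIP_MARGIN = 10.0           # assumed gross margin per chip sold

APPLIANCE_PRICE = 600.0      # $600 appliance (from the article)
APPLIANCE_MARGIN = 0.30      # assumed 30% margin on each incremental unit

# A chip vendor earning $10 per chip needs 50 million units to break even.
chip_units = breakeven_units(CHIP_INVESTMENT, CHIP_MARGIN)

# A systems company earning ~$180 per incremental appliance needs far fewer
# extra sales to cover the same investment, which is Wingard's point.
appliance_units = breakeven_units(
    CHIP_INVESTMENT, APPLIANCE_PRICE * APPLIANCE_MARGIN
)

print(f"Chip vendor break-even: {chip_units:,.0f} units")
print(f"System company break-even: {appliance_units:,.0f} incremental appliances")
```

Under these assumed margins, the systems company recoups the investment with roughly 2.8 million incremental appliance sales versus 50 million chips, which is why a price tag that sinks a chip vendor can look cheap to a system maker.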

In addition, it’s possible to amortize development costs by reusing what’s been developed as a platform across multiple markets.

“The most successful companies are the ones with a platform that involves one base chip design,” said Kurt Shuler, vice president of marketing at Arteris. “Then you can take a derivative of that for a different business unit. Big companies do that, and even those with a chip design in one business unit share it with another business unit, as long as they’re in close communication.”

Shuler noted that Chinese companies have become particularly adept at extending those platform designs by continually refining them through a series of derivatives, each of which improves on the previous generation.

Choosing nodes
Much attention has been showered on the leading edge of design where some of the toughest — and most interesting — problems are being solved. There are several high-profile reasons for angst at that level. First, extreme ultraviolet (EUV) lithography is late, which means that double patterning is required at 16/14nm, with triple or even quadruple patterning at 10nm. Second, while finFETs reduce leakage they are still more difficult to design because of higher thermal density and physical effects such as electromigration, electrostatic discharge and electromagnetic interference. And third, there are other effects such as process variation that continue to grow at each node after 28nm.

“We’re seeing a whole bunch of companies retrenching to embedded flash and MEMS sensors for the Internet of Things,” said John Koeter, vice president of marketing for the Solutions Group at Synopsys. “We’ve seen a lot of companies, particularly in Europe, ditch applications processors and focus instead on 65/55nm.”

Koeter said at mainstream nodes—40nm to 65nm—the price of a new chip is roughly $40 million to $50 million if it’s built from scratch. But yield is high at those nodes, and the software development cost is lower because those chips are not at the leading edge of functionality.

“The design may not push the gigahertz range, but that doesn’t mean it’s not a sophisticated design,” Koeter said. “Instead of pushing forward, the goal may be to minimize the number of metal layers.”

Trading off performance and functionality against cost is becoming more commonplace. “The high-performance processing that is possible at 16/14nm is not required for most applications,” said Taher Madraswala, chief operating officer at Open-Silicon. “A lot of even the most advanced designs can live at 28nm for a long time. Architects will have to think harder about parallel execution again, or they will have to find new materials like graphene or other materials. And we will begin to stack die vertically or horizontally. If you look at the problem statements solved by big companies at advanced nodes, the big thing was high-resolution video. But only a small portion of the algorithms really need to run fast, which is where they will need 16/14nm. The rest can remain at older nodes.”
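Madraswala's partitioning argument echoes Amdahl's law: if only a small fraction of the algorithms benefits from the faster advanced-node logic, the payoff from moving the whole design to 16/14nm is limited. The fraction and speedup numbers below are illustrative assumptions, not figures from the article.

```python
# Amdahl-style sketch of the partitioning argument above. The 20% fraction
# and 3x speedup are assumed values chosen only to illustrate the point.

def overall_speedup(fast_fraction: float, fast_speedup: float) -> float:
    """Amdahl's law: overall speedup when only part of the work is accelerated."""
    return 1.0 / ((1.0 - fast_fraction) + fast_fraction / fast_speedup)

# Suppose 20% of the workload (e.g. the high-resolution video pipeline)
# runs 3x faster at 16/14nm, while the other 80% sees no benefit.
print(f"Overall speedup: {overall_speedup(0.20, 3.0):.2f}x")
```

With those assumptions the whole-chip speedup is only about 1.15x, which supports keeping the bulk of the design at an older node and accelerating just the hot portion.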

2.5D sightings
That explains why reports are beginning to trickle in to toolmakers of 2.5D chips moving into production in less price-sensitive categories, such as networking chips. And while it may take time before this technology hits the mainstream, the market has been waiting on the sidelines for volume production for the past few process nodes.

“The market likes diversity and innovation, and the only way to achieve that is by decreasing cost and lowering risk,” said Mike Gianfagna, vice president of marketing at eSilicon. “That requires better design tools and simplified methodologies. For 2.5D to be real and widespread, it has to allow the leading edge to become standard—so maybe you have two or three choices—and then use an interposer to hook up a display technology.”

Most industry experts believe the move is inevitable, with full 3D-ICs using through-silicon vias likely within the next five years. Even for companies that consider the cost of complex chips at advanced nodes manageable, time-to-market pressures, plus new architectural options such as mounting memory directly above or below processors to relieve congestion, offer big opportunities for improving performance, lowering power and reducing area.

“2.5D will become particularly important with the Internet of Things,” predicts Sonics’ Wingard. “The trouble with the IoT is that it demands premature integration. You have to know what the end application needs, and right now the system guys—the ones who define that—aren’t sure, and the chipmakers aren’t close enough to the customer to determine that. So the pendulum will swing back hard and fast. 2.5D will allow silicon producers to produce parts of that and use an interposer to stitch it together. Right now the main reason you do integration is cost, with power and performance No. 2 and 3. But with the IoT, the apps are limited by no one knowing what to build, and 2.5D can handle everything but the cost side.”


cd says:

Hi Ed! Nice article!
It appears that for some chip makers the transition cost per wafer goes in a different direction than the cost of developing new products for the 16/14nm node. Micron’s transition cost per wafer from 20nm to 16nm is only ¼ of the transition cost from 25nm to 20nm. I think Intel also said they can reuse about 80% of the equipment for 14nm. After that, the transition cost to 3D will skyrocket. We should see this trend in the capex for these companies.
