Semiconductor R&D Crisis Ahead?

Too many choices and uncertainty turn ROI for new chip architectures into riskier gambles—and force a rethinking of what’s next.


Listen to engineering management at chipmakers these days and a consistent theme emerges: They’re all petrified about where to place their next technology bets. Do they move to 14/16nm finFETs with plans to shrink to 10nm, 7nm and maybe even 5nm? Do they invest in 2.5D and 3D stacked die? Or do they eke more from existing process nodes using new process technologies, more compact designs and improved architectures?

These are wager-your-company choices, but there are too many of them, and too many uncertainties tied to each, for anyone to be comfortable they’ve made the right ones. The economics of Moore’s Law are slipping from the semiconductor industry’s primary guideline into folklore: at 16/14nm the cost per transistor no longer drops from one node to the next, even though scaling remains technically possible for at least several more nodes. At the same time, the number of options is increasing along with the required investments. One bad investment can do more than kill a chip. It can destroy a company.
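To see why that cost curve matters, here is a minimal sketch of the cost-per-transistor comparison behind these decisions. The wafer costs, die counts, yields and transistor counts below are hypothetical placeholders chosen only for illustration, not figures from this article.

```python
# Rough sketch of the cost-per-transistor comparison driving this decision.
# All figures below are hypothetical placeholders, not data from the article.

def cost_per_transistor(wafer_cost, gross_dies_per_wafer, yield_fraction,
                        transistors_per_die):
    """Cost per good transistor = wafer cost / (good dies * transistors per die)."""
    good_dies = gross_dies_per_wafer * yield_fraction
    return wafer_cost / (good_dies * transistors_per_die)

# Hypothetical 28nm case: cheaper wafers, fewer transistors per die, mature yield.
c28 = cost_per_transistor(wafer_cost=3_000, gross_dies_per_wafer=500,
                          yield_fraction=0.90, transistors_per_die=1.0e9)

# Hypothetical 16/14nm case: pricier wafers (multi-patterning, finFETs),
# more transistors per die, lower early yield.
c14 = cost_per_transistor(wafer_cost=7_000, gross_dies_per_wafer=500,
                          yield_fraction=0.70, transistors_per_die=2.2e9)

print(f"28nm:    {c28:.2e} $/transistor")
print(f"16/14nm: {c14:.2e} $/transistor")
# With these placeholder inputs the newer node is not cheaper per transistor,
# which is the break from the historical Moore's Law cost curve.
```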

What’s more, even strategies for trailing the leading edge are becoming harder to pin down. Companies that move between leading the market and playing so-called “fast follower” told Semiconductor Engineering that rising complexity sometimes paralyzes their decision-making, and with it their willingness to invest in new R&D, because they don’t know which way to go next. R&D requires critical mass from a number of sectors working in sync, an entire ecosystem, to be effective. As the market splinters or pauses, reaching that critical mass is much more difficult without multiple deep-pocketed industry consortia or government funding. And even when billions of dollars are poured into a technology, the complexity is now so great that success is not guaranteed, as next-generation lithography has shown.

“There’s a new reality of how hard and how bewildering the choices have become,” said Mike Gianfagna, vice president of marketing at eSilicon. “We see it in our customer base. It’s a bet-your-company decision. Do you go finFET or 28nm FD-SOI? Or do you use nine levels of metal instead of eight?”

These kinds of decisions affect the entire supply chain, from chipmakers to EDA tools. “The EDA guys are working hard to produce a flow and generate IP that are optimized for a given recipe, but the problem is that the recipes are too varied. There is a bewildering set of selections, and the only way to deal with it is to get very close to the foundries and try out a number of implementations. From a business standpoint that’s risky and time-consuming,” Gianfagna said.

Those choices are exploding everywhere, including at existing process nodes. Steve Carlson, group marketing director in Cadence’s Office of Chief Strategy, said there are about 10 active process nodes, and for each of those nodes there are two to three options based on performance, power, or embedded capabilities.

“It’s a lot harder to do the pathfinding process,” Carlson said. “And then for the system there’s the question of what else can be integrated. You’ve got sensors and MEMS being integrated at 65nm, and then you’ve got different flavors at 28nm. On top of that there are 2.5D and 3D integration options, which include everything from glass interposers to a continuum of alternatives in the package. The whole packaging realm has fragmented. And with the Internet of Things, you’ll see lower-cost, older nodes being used for edge devices and more sophisticated technologies further up in the cloud.”

This has prompted some soul-searching among EDA companies, as well, which have to be at the most advanced nodes to work with leading-edge companies—but not necessarily on all the processes available for those nodes.

“There is a divide between emerging, advanced and established nodes,” said Saleem Haider, senior director of marketing for physical design and DFM at Synopsys. “But it’s also not as clean as you would like it to be. At the very ends of the spectrum those differences are very pronounced. But it’s a continuum. People continue to innovate at advanced nodes, but there also is more emphasis on design at established nodes. Silicon will continue to scale, but the question is whether that is economically viable for many companies.”

In fact, there are looming questions about whether shrinking features is going to be commercially viable for more than a handful of chipmakers beyond 10nm. Industry insiders say the teams responsible for shrinking features have consistently earned more than those designing new chips, but at 20nm that ratio inverted as more emphasis shifted to new architectures and designs and away from shrinking geometries. The good news is that architects now stay closely aligned with designs well beyond the initial phase. That can be bad news, too, if something goes wrong along the way.

In many cases, simpler is better
For chipmakers, the key is staying focused on what a chip really needs to accomplish and figuring out the best way to get there at the lowest cost. This is something of a radical shift in thinking for an industry that has been racing to the next process technology since the mid-1960s. But two things are changing. First is the cost of getting to the next node. It’s no longer as simple as creating derivatives from the same design using the next process technology. Massive amounts of R&D need to be done on interconnects, process technology, lithography, high-mobility materials (electrons don’t move as fast at 10nm), new dielectrics, finFET structures, packaging, and even test.

The second is the enormous amount of interconnectivity, loosely and often erroneously labeled the Internet of Things because much of it involves interaction between people and things. It allows computing to be done within a device, at the edge of a network (the new term is fog servers), and in various types of cloud servers, where performance is critical and bandwidth may be a limiting factor but battery life is not an issue.

“These new process geometries are fantastic as a driver of innovation and they typically represent where the mainstream will be in just a few years,” said Taher Madraswala, president of Open-Silicon. “However, when new nodes are created there is always a lag between their availability and their broad use.”

He said this is driven by a number of factors, including whether the right set of IP has been ported, tested and is available, as well as the mask and wafer costs at new nodes, the effect of all of this on chip design costs and whether there are other alternatives to achieve desired performance goals.

“Lately, this lag time has increased. For someone building an ultra-high volume and highly competitive product like cell phone chips, even a small reduction in part cost or power can offer enough motivation for ‘early adoption’ of a new node. However, for many mid- to high-volume ASIC applications, there are other alternatives to the higher design costs of the latest process nodes. Over the past several years, we have seen an overall industry reduction in the rate of transition to the newest nodes enabled by other alternatives. In fact, it’s broadly speculated that 28nm may very well be the longest-lived process node we’ve yet to see — and we are seeing strong interest in 28nm from our customers, but less so for deeper nodes. Our current ASIC projects reflect this, as well.”

That seems to be a common theme these days. As Satish Bagalkotkar, president and CEO of Synapse Design, routinely asks his customers: “There are a lot of technologies available, but is 28nm enough for this application? Or does it even need to be at 28nm? Can it be done at 65nm? The 20nm process (including 16/14nm finFETs) is only for people on the network or the server. The outside world is getting more diverse and simpler, which is why you’re going to see 28nm survive for the next six years or more. But if you can do it at 65nm, the cost goes down and the chance of being successful with a chip goes up. A 28nm chip costs $30 million to create from spec. At 14nm, it’s an order of magnitude higher. The days of more horsepower are gone. We have more than enough compute power. That means you need a very specific reason to go to 14nm.”

That reason is typically the performance needed on the network or the server, or for highly compute-intensive applications that need to be done locally. For others, he said 2.5D is looking more and more attractive, and that companies are beginning to migrate to that architecture.
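The node-selection arithmetic behind Bagalkotkar’s advice can be sketched in a few lines: total program cost is design (NRE) cost plus volume times unit cost, and the cheapest node depends on how many parts you expect to ship. The 28nm and 14nm NRE figures below loosely echo his numbers ($30 million, and roughly an order of magnitude more); the 65nm NRE and all unit costs are hypothetical placeholders.

```python
# A minimal sketch of the node-selection arithmetic:
# total program cost = design (NRE) cost + volume * unit cost.
# The 28nm and 14nm NRE figures loosely echo the quote above; the 65nm NRE and
# all unit costs are hypothetical placeholders.

nodes = {
    "65nm": {"nre": 5e6,   "unit_cost": 4.00},
    "28nm": {"nre": 30e6,  "unit_cost": 2.50},
    "14nm": {"nre": 300e6, "unit_cost": 1.80},
}

def total_cost(node, volume):
    """Up-front design cost plus the cost of building 'volume' parts."""
    n = nodes[node]
    return n["nre"] + volume * n["unit_cost"]

for volume in (1e6, 20e6, 500e6):
    best = min(nodes, key=lambda name: total_cost(name, volume))
    print(f"{volume:>12,.0f} units -> cheapest node: {best}")

# Low and mid volumes favor the older nodes; only very high volumes can
# amortize a leading-edge NRE, which is the point being made above.
```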

“R&D future is all in the architecture,” Bagalkotkar said. “It’s about making things relevant, and what you are trying to do defines the architecture.”

But just as the choices are multiplying, so are the opinions, and they don’t all coincide.

“The investment and returns in chip R&D for advanced nodes are still pursued by most leading-edge companies for higher performance, greater functionality and lower power reasons,” said Pravin Madhani, general manager of the place and route group at Mentor Graphics. “We are already seeing tapeouts at 14nm and design planning at 10nm. All these are because the end customers want better power, more functionality and higher performance in the same die size. The only way to achieve these is to go to a lower node. Initially, the investment and cost may be high but that is easily outweighed by the ability to win or retain a lucrative IC slot in a huge volume device. Most of the top 20 semi companies may do fewer chips at leading-edge nodes, but they are certainly moving to leading-edge nodes to stay competitive.”

Integration becomes even more important
No matter what node they’re working at, the entire process from initial concept all the way through to silicon has to be more efficient. That means shaving costs wherever possible, often by simplifying the integration process.

“You have to start looking at architectures and other design innovations,” said Charlie Janac, chairman and CEO of Arteris. “You have to interconnect more efficiently than before, and the interconnect has to do more. And you have to make sure it all works together, which is a challenge because in an SoC no one owns all the IP. You’ve got Synopsys PHY and I/O, an Imagination GPU, an ARM processor, a CEVA DSP and Tensilica (Cadence) configurable processors. Someone has to stitch this stuff together.”

This has created significant interest in architectures and approaches that can pull things together quickly, including multiple standardized platforms for 2.5D and 3D packages, on-chip networks such as those made by Arteris and Sonics, and third-party IP that has been well characterized and tested across many designs at many different process geometries.

“There are huge opportunities here,” said Janac. “But a lot of the success of the industry comes down to the mentality of the people involved in it. If you look at China, this is what’s going on now. There’s a can-do attitude and optimism that resembles Silicon Valley in the 1980s. They’re willing to try new ideas, kill bad ideas as fast as possible, and experiment to reduce costs.”

All of this has some interesting implications for R&D, as well. Rather than working on monolithic designs, the emerging requirement is to develop smaller pieces that can function independently and then be integrated. The divide-and-conquer approach has moved beyond verification into the development of pieces that can stand on their own, be integrated quickly, and do all of that without destroying the power budget.

“Process scaling is one important factor, but the methodologies to build large SoCs are not viable for even the highest-volume applications,” said Drew Wingard, CTO at Sonics. “The market opportunity is there at the most advanced nodes, but the number of chips needed to recoup NRE is 200 million to 300 million per platform. If you miss by just a little bit, you could lose the whole company.”
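The break-even arithmetic Wingard describes reduces to a single division: chips needed equals NRE divided by the margin each chip contributes. The NRE and per-chip margin below are hypothetical placeholders, chosen only so the result lands in the 200 million to 300 million range he cites.

```python
# Back-of-envelope version of the break-even volume Wingard describes:
# chips needed = NRE / margin contributed per chip.
# The NRE and per-chip margin are hypothetical placeholders chosen only so the
# result lands in the 200 million to 300 million range he cites.

def breakeven_volume(nre_dollars, margin_per_chip):
    """Number of chips whose margin covers the up-front development cost."""
    return nre_dollars / margin_per_chip

nre = 250e6      # hypothetical advanced-node platform development cost
margin = 1.00    # hypothetical margin per chip, in dollars

print(f"Break-even volume: {breakeven_volume(nre, margin):,.0f} chips")
# Roughly 250 million chips; miss the volume forecast "by just a little bit"
# and the platform never pays back its development cost.
```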

For systems companies, this formula may still make sense because they can absorb the cost of chip development in the cost of an entire system. This is the same formula IBM used to apply to software and services back in the 1960s and 1970s, when it leveraged one to pay for the other. But as the industry splintered into chipmakers, IP developers and EDA companies in order to improve efficiency at every level, in accordance with Moore’s Law, that kind of thinking disappeared from the semiconductor industry. With increasing complexity and new vendors jumping into the semiconductor market—Apple, Samsung, Google, Facebook and Amazon—the economics have shifted again. But big chipmakers may not head down that same path because it’s harder to compete against system makers developing exactly what they need for a specific device with very specific functionality and connectivity.

“The implications of big IDMs backing away from the bleeding edge are profound,” said Wingard. “The amount of money that companies can hide in methodologies is significant, and that’s how big companies pilot new methodologies. That leaves a lot of the midsize companies, and there is not as much advancement, which makes it more and more challenging to do R&D. How is that R&D going to be paid for?”

He said fast followers traditionally have trailed the leading-edge companies by six to nine months. That appears to be headed for change. “At the time of launch, the cost per transistor for a given node is always higher. Eventually 14nm will be cheaper. But it won’t happen in six to nine months like it did at previous nodes.”

And the risk of all of this is that as companies waffle on what to do next, they pull back the reins on advanced research. How that will affect future designs at all nodes is uncertain, but big chipmakers say privately they are very concerned.

Related Stories:

Billions And Billions Invested

Are Processors Running Out Of Steam?

Atomic Layer Etch Finally Emerges

The Bumpy Road To FinFETs

IP To Meet 2.5D Requirements


