Inflection Points Ahead

The amount of uncertainty across multiple segments of the IC industry is unprecedented, and disruption is almost guaranteed.

By Ed Sperling
Engineering challenges have existed at every process node in semiconductor designs, but at 20nm and beyond, engineers and executives on all sides of the industry are talking about inflection points.

An inflection point is literally the place where a curve on a graph changes the direction of its curvature, but in the semiconductor industry it’s usually associated with the point at which a progression of any sort shifts direction. Those shifts are occurring now in hardware, in software and on the process side, and each of them has significant business ramifications. Taken individually they are a huge challenge. Taken together, they have the ability to disrupt the entire ecosystem, reshaping who calls the shots, who reaps the lion’s share of the profits, and potentially even who survives.

Some industry executives say this is more of the same, and that inflection points have been with us for some time.

“I’m not the most zeitgeist-sensitive person, but I have noticed that ‘inflection point’ is the season’s new manager buzzphrase of choice,” said one system architect at a major chipmaker, who asked not to be named. “Is CMOS running out of steam? Possibly. But I would submit that if 20nm is barely better than 28/32nm, that’s because we are tending asymptotically toward some limit. The inflection point, if there was one, was years ago when we stopped accelerating—perhaps between 90nm and 65nm, and maybe earlier than that.”

Tom Ferry, senior director of marketing at Synopsys, takes a similar stance: “If you look at 20nm and compare it to other nodes it’s hard, but not horribly different. Double patterning and the limits of CMOS are big concerns, and moving lithography to 14nm extends it three to four generations beyond what it was designed for. But the litmus test is, does it affect the designer?”

Others insist that there are more factors conspiring now than at any time in the past to create problems that can’t be solved by the usual methods, tools or partnerships.

Process
Process technology is at the root of this change in thinking. The difficulty and repeated delays in developing extreme ultraviolet lithography to replace 193nm immersion technology have forced the semiconductor industry to grapple with the reality of double patterning at 20nm, and possibly multi-patterning at 14nm. This adds cost, complexity and variability into designs—as well as a huge incentive for stacking die that incorporate chips or IP developed at older process nodes.

“At 20nm, we have invested in R&D efforts jointly with EDA companies to develop tools and flows capable of coping with double patterning,” said Indavong Vongsavady, CAD director of technology and research at STMicroelectronics. “We already have exercised a double patterning-aware place and route flow on a few internal examples and test vehicles. We should be able soon to measure the impact on a larger designer community as we are currently ramping up more designs.”
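To see what a double patterning-aware tool has to reason about, decomposition is often framed as a coloring problem: shapes spaced more tightly than a single exposure can resolve must be assigned to different masks. The sketch below is a simplified illustration of that framing, not ST’s or any EDA vendor’s actual flow; the polygon names, the conflict list and the two-mask model are hypothetical.

```python
# Minimal sketch: double-patterning decomposition modeled as 2-coloring of a
# "conflict graph". Nodes are polygons; an edge means two polygons sit closer
# than the single-mask spacing limit and must go on different masks. An odd
# cycle means no legal two-mask assignment exists -- a DP conflict that forces
# a layout change.
from collections import deque

def decompose(polygons, conflicts):
    """polygons: iterable of polygon ids; conflicts: list of (a, b) pairs."""
    adj = {p: [] for p in polygons}
    for a, b in conflicts:
        adj[a].append(b)
        adj[b].append(a)

    mask = {}                      # polygon id -> 0 (mask A) or 1 (mask B)
    for start in polygons:
        if start in mask:
            continue
        mask[start] = 0
        queue = deque([start])
        while queue:
            p = queue.popleft()
            for q in adj[p]:
                if q not in mask:
                    mask[q] = 1 - mask[p]
                    queue.append(q)
                elif mask[q] == mask[p]:
                    raise ValueError(f"DP conflict: odd cycle through {p} and {q}")
    return mask

# Hypothetical layout: p1-p2-p3 form a chain of sub-threshold spacings.
print(decompose(["p1", "p2", "p3"], [("p1", "p2"), ("p2", "p3")]))
# {'p1': 0, 'p2': 1, 'p3': 0}
```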

And that community already is bracing for the worst.

“We’re going to have polygons on the same layer now on different masks,” said Carey Robertson, director of product marketing at Mentor Graphics. “As masks shift in the x, y and z direction you get variation. But now they’re going to be on different layers, so already you start with more corners. You have four to six mask variations per layer, which means you’re already at 28 to 42 more corners than in the past. Then you have to grapple with temperature variation, which is going through the roof, along with distribution of capacitance and resistance.”
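As a rough illustration of how those corner counts multiply, assume a baseline of seven extraction corners (an assumption for the sketch, not a figure from the article); fanning each of them out across four to six mask variants per layer lands in the quoted range.

```python
# Back-of-envelope corner count. The baseline of 7 corners is an assumption;
# the 4-6 mask variants per layer come from the quote above.
baseline_corners = 7
for mask_variants in (4, 5, 6):
    print(mask_variants, "variants ->", baseline_corners * mask_variants, "corners")
# 4 variants -> 28 corners ... 6 variants -> 42 corners
```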

In simple terms, what used to be a fixed number has been replaced by a distribution. So if an analog designer wants two identical circuits, he now has to grapple with two distributions instead and account for the variation between them. That’s assuming, of course, that the circuits are on the same mask. If they’re on different masks it becomes more challenging. And to make matters worse, not all foundries are willing to share that kind of information, so design teams may not even know which mask their circuits end up on.
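A small Monte Carlo sketch, using purely illustrative sigma values, shows why the mask assignment matters for matched devices: devices on the same mask share the mask-shift component of variation, while devices on different masks do not.

```python
# Sketch of mask assignment and device matching. All numbers are illustrative
# assumptions, not foundry data. Each device's value is a nominal value plus a
# mask-shift component and an independent local component. Devices on the same
# mask share the mask-shift term; on different masks they don't.
import random, statistics

def mismatch_sigma(same_mask, n=20000, nominal=1.0,
                   mask_sigma=0.03, local_sigma=0.01):
    diffs = []
    for _ in range(n):
        shift_a = random.gauss(0, mask_sigma)
        shift_b = shift_a if same_mask else random.gauss(0, mask_sigma)
        a = nominal + shift_a + random.gauss(0, local_sigma)
        b = nominal + shift_b + random.gauss(0, local_sigma)
        diffs.append(a - b)
    return statistics.stdev(diffs)

print("sigma(mismatch), same mask:      ", round(mismatch_sigma(True), 4))
print("sigma(mismatch), different masks:", round(mismatch_sigma(False), 4))
# The different-mask case is noticeably wider; roughly sqrt(2)*mask_sigma dominates.
```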

Hardware
An obvious alternative to dealing with these kinds of problems is to build upward.

“To take the entire chip to 14nm is unnecessary,” said Naveed Sherwani, president and CEO of Open-Silicon. “But so far, the methodology, the tools, the teams and the processes are not there. Still, this is an inflection point. It will affect time to market and it will affect the overall cost. If IP is available as a known good die you will not pay NRE costs. That is a major change.”

Sherwani said that the amount of content that changes when chips move from one node to the next is generally less than 20%, and that between 60% and 90% of the total content is non-differentiating.

“If people could buy an SoC platform they would buy it, but so far nobody offers that at the SoC level,” he said. “What’s changed is that in the past there was no good way of connecting die. With 2.5D and microbumps it’s now possible to do that, so you can move from a planar SoC to an interposer model. If you keep putting more stuff in a planar SoC the die cost will not justify the return.”

He’s not alone in that assessment. Rahul Deokar, product marketing director for the Encounter IC digital group at Cadence, calls Wide I/O between different die “one of the killer apps.”

“You connect through faster, the wires are shorter, the delays are less, resistance decreases and power goes down,” he said. “Plus it’s getting harder and harder to migrate to lower process nodes. At 14nm, cost and yield are looking like a pyramid, where only the biggest IDMs are at the top. And with double, triple and quadruple patterning, we’re seeing that even some of the IDMs that used to do their own manufacturing are increasingly hesitating.”
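A back-of-envelope sketch shows the direction of that argument. Every capacitance, voltage, frequency and pin-count figure below is an assumption chosen for illustration, not a measured Wide I/O or DDR specification.

```python
# Illustrative comparison of long off-package I/O vs. short interposer wires.
# Switching power per pin is approximated as C * V^2 * f (activity = 1);
# delay is a lumped RC estimate. All parameter values are assumptions.
def io_power_watts(c_load_f, v_swing, freq_hz, pins):
    return c_load_f * v_swing**2 * freq_hz * pins

# Assumed: long board trace (~5 pF load, 1.5 V, 800 MHz, 32 pins) vs.
# interposer microbump wire (~0.2 pF, 1.2 V, 200 MHz, 512 pins).
planar  = io_power_watts(5e-12, 1.5, 800e6, 32)
wide_io = io_power_watts(0.2e-12, 1.2, 200e6, 512)
print(f"off-package I/O power: {planar * 1e3:.1f} mW")
print(f"wide I/O power:        {wide_io * 1e3:.1f} mW")   # far lower, despite 16x pins

# Lumped RC delay (0.69 * R * C) with an assumed 20-ohm driver:
for name, c in (("long trace", 5e-12), ("interposer wire", 0.2e-12)):
    print(name, "delay ~", round(0.69 * 20 * c * 1e12, 2), "ps")
```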

Software
One factor that has changed significantly is the cost of software development—drivers, RTOSes and other embedded code needed to run on chips and to provide a programmable workaround when something breaks. But what’s also changing is who’s calling the shots in design.

The most obvious example is Microsoft, which has turned the starting point for an IDM on its head with its Windows 8 operating system and its decision to build hardware that can run it. Google also is dictating how the hardware should be developed, although hardware vendors such as ST and MIPS are working with Google on a more optimized, lower-power version of Android.

“The real inflection point is not the move from 28nm to 20nm,” said Charlie Janac, chairman and CEO of Arteris. “It’s the shift from putting out a chip and then writing a program to run on it, to creating a software use case and finding the hardware to run it. Microsoft has been wanting to get into the mobility space with Windows 8, so it built the software architecture for ARM and ARM rewrote its own code. The system houses are now driving hardware—Microsoft, Google and maybe even Facebook. This change is huge. It makes it possible to move things around with only a few incremental changes so you can get to market much more quickly.”

Architecture
Less obvious, but equally important, are some of the changes that are going on under the hood. Just shrinking feature sizes no longer results in increased performance or reduced power. In fact, the opposite is often true. At 28nm, and particularly at 20nm, gains in performance and a reduction in area and power must be driven by architectural or material changes rather than simply shrinking the size of the transistors.

Whether these are inflection points in their own right, or the result of other inflection points, is debatable. But it’s clear that big changes are required, notably in the area of coherency. Multicore heterogeneous processors, GPUs that share some of the processing, and a large memory footprint have forced a rethinking of how to take advantage of all of these resources. At issue is the fact that most software cannot scale symmetrically, and clock frequencies cannot increase significantly without cooking the chip.
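That scaling limit can be sketched with Amdahl’s law: any serial fraction in the software caps the benefit of adding identical cores, which is part of what pushes architects toward heterogeneous parallelism. The serial fractions below are illustrative assumptions.

```python
# Amdahl's-law sketch of why "most software cannot scale symmetrically":
# even a modest serial fraction saturates the gain from piling on cores.
def speedup(serial_fraction, cores):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for serial in (0.05, 0.20):                      # assumed serial fractions
    row = [round(speedup(serial, n), 1) for n in (2, 4, 8, 16, 64)]
    print(f"serial={serial:.0%}: speedup at 2/4/8/16/64 cores ->", row)
# serial=5%:  [1.9, 3.5, 5.9, 9.1, 15.4]
# serial=20%: [1.7, 2.5, 3.3, 4.0, 4.7]
```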

That has raised a huge interest in coherency and the best way to achieve it. “What’s changed is that things like coherence and sharing have moved from ‘nice to have’ to ‘must have,’” said Drew Wingard, CTO at Sonics. “You must abstract away the memory system so the operating system doesn’t have to worry about that. This is a tough challenge, though, and it raises a lot of thorny issues. You basically have to teach other technologies about how memory is organized.”

Coherence used to be something that was considered at the processor core level. It is now the focus of attention at the system level because of the number of devices that share memory. That also means that when processing is split between the GPU and the CPU, not all of it has to be coherent.
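A toy model helps make “coherence at the system level” concrete. The sketch below is an assumption-laden illustration, not any vendor’s interconnect or directory protocol: a directory tracks which agents hold a line, and address ranges declared non-coherent (for example, GPU-private buffers) generate no snoop traffic at all.

```python
# Minimal sketch of a system-level coherence directory for heterogeneous
# agents. Agent names, address ranges and the protocol itself are hypothetical.
class Directory:
    def __init__(self, noncoherent_ranges):
        self.sharers = {}                      # line address -> set of agent names
        self.noncoherent = noncoherent_ranges  # list of (start, end) ranges

    def _is_coherent(self, addr):
        return not any(lo <= addr < hi for lo, hi in self.noncoherent)

    def read(self, agent, addr):
        if self._is_coherent(addr):
            self.sharers.setdefault(addr, set()).add(agent)

    def write(self, agent, addr):
        if not self._is_coherent(addr):
            return []                          # no snoop traffic for private data
        others = self.sharers.get(addr, set()) - {agent}
        self.sharers[addr] = {agent}           # writer becomes the sole owner
        return [f"invalidate {hex(addr)} at {a}" for a in sorted(others)]

d = Directory(noncoherent_ranges=[(0x8000_0000, 0x9000_0000)])  # GPU scratch
d.read("cpu0", 0x1000); d.read("gpu", 0x1000)
print(d.write("cpu0", 0x1000))        # ['invalidate 0x1000 at gpu']
print(d.write("gpu", 0x8000_0040))    # [] -- non-coherent region, no snoops
```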

“What we’re heading toward is architectural changes driven by software’s need to use larger parallelism and heterogeneous parallelism,” Wingard said. “The meeting point is the chip.”

Business
Taken individually, each of these changes is significant. But taken together, they are potentially disruptive. There are more choices, more uncertainty about what works, and more variables to work with.

“We’re all used to variables in the semiconductor industry, but the magnitude and number of variables is something we’ve never seen before,” said Jack Harding, chairman and CEO of eSilicon. “TSMC has about a dozen flavors of process technology at 28nm and 24 different libraries. That’s 480 different possibilities. Add in similar options from GlobalFoundries, Samsung and UMC, as well as variations in voltage and temperature and suddenly you’ve got tens of thousands of variations. And you’ve got smart architects with spreadsheets working on $200 million chips. They may be able to select a chip that works, but for that amount it can’t just work. It has to be optimized.”
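Using the process-flavor and library counts from Harding’s quote, plus assumed voltage and temperature corner counts (the corner counts are illustrative, not from the article), the combinatorics reach “tens of thousands” quickly.

```python
# Back-of-envelope count of design-kit combinations. The first two figures
# come from the quote above; the corner counts are assumptions.
process_flavors = 12     # "about a dozen flavors of process technology at 28nm"
libraries = 24           # "24 different libraries"
voltage_corners = 5      # assumption
temp_corners = 4         # assumption

per_foundry = process_flavors * libraries * voltage_corners * temp_corners
print("per foundry:", per_foundry)         # 5,760
print("four foundries:", 4 * per_foundry)  # ~23,000 -- "tens of thousands"
```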

Harding says there is a value inversion under way, in which complexity has outstripped the capabilities of any individual member of the supply chain. Some of this will be handled by stacked die, in which IP is hardened into die and assembled for re-use.

“The driving force from the commercial side is to make sure the chip is perfect and to make fewer chips,” he said. “This will drive a power shift in the industry. But for that power shift to work, everyone has to be performing at top levels.”

Cadence’s Deokar predicts these changes will unfold in two phases. The first will be incremental improvements to existing tools for vias, timing and analysis of everything from IP to hot spots. The second phase, he said, will be when there are no restrictions or guidelines and each chip is a blank slate that draws on a huge stock of available, pre-made components.

“That will require a major paradigm shift in EDA,” he said.

Who actually benefits from all of these changes remains to be seen. But one thing is clear—these shifts are numerous, they are fundamental, and they affect every part of the ecosystem. And for the foreseeable future they will continue to drive widespread uncertainty, force companies to reposition themselves, and invite rampant speculation that will keep everyone on their toes.


