Why Is My Device Better Than Yours?

As the industry runs out of comparative metrics, differentiation becomes a serious issue.

Differentiation is becoming a big problem in the semiconductor industry with far-reaching implications that extend well beyond just chips.

The debate over the future of Moore’s Law is well known, but it’s just one element in a growing list that will make it much harder for chip companies, IP vendors and even software developers to stand out from the pack. And without differentiation, there are only two options—change the way things are done, or wage war in the market over price.

The basis of differentiation in technology has always been metrics, whether they’re measured in MIPS, MHz or process geometry. But the metrics that have worked for many years are becoming less relevant, non-applicable, or more difficult to define.

Which one is better? Source: Android.com

“The path forward is metrics that apply to systems,” said Antun Domic, executive vice president and general manager of the Design Group at Synopsys. “There will still be basic measurements at the chip level and the semiconductor level. But what do we measure with the performance of a system? In an SoC you may have 10nm with certain clocks connected to a 40nm interface. If you go back 20 years ago, the standard measure we used was Dhrystones. We need more realistic assessments of what the user sees.”
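Dhrystone itself shows how crude those older metrics were. The conventional score, DMIPS, simply normalizes a raw Dhrystone loop count against the roughly 1,757 Dhrystones per second of a VAX 11/780, the traditional baseline. A minimal sketch (the core score used here is a made-up illustration):

```python
# DMIPS ("Dhrystone MIPS") normalizes a raw Dhrystone score against the
# conventional VAX 11/780 baseline of 1757 Dhrystones per second.
VAX_BASELINE = 1757  # Dhrystones/second on a VAX 11/780

def dmips(dhrystones_per_second):
    """Convert a raw Dhrystone score into DMIPS."""
    return dhrystones_per_second / VAX_BASELINE

def dmips_per_mhz(dhrystones_per_second, clock_mhz):
    # The figure vendors usually quote, since it factors out clock speed.
    return dmips(dhrystones_per_second) / clock_mhz

# A hypothetical core scoring 3,514,000 Dhrystones/s at 1000 MHz:
print(dmips(3_514_000))                 # 2000.0 DMIPS
print(dmips_per_mhz(3_514_000, 1000))   # 2.0 DMIPS/MHz
```

A single synthetic loop score like this says nothing about memory, I/O, or what a user actually experiences, which is exactly Domic's point.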

Manufacturing metrics
Lithography was the first hint that a big shift in metrics was approaching. With 193nm immersion lithography incapable of a single-pass exposure after 22nm, the only options are multi-patterning and different light sources. EUV has been the option of choice, but it is so late that it is unlikely to be commercially viable until 7nm at the earliest, and maybe not until 5nm, at which point even EUV will require multi-patterning.

Since the introduction of Moore’s Law in 1965, newer was always considered better. With classical scaling, the next process node always offered improvements in performance, area and power. But classical scaling ended at 90nm, making it more difficult to eke out improvements in each of those areas. At 20nm, the introduction point for multi-patterning (and the reason Intel built its first finFETs at 22nm), that became even harder.

The introduction of finFETs at 16/14nm by commercial foundries does provide a way to reduce leakage current and improve performance, but cramming more finFETs together also increases dynamic current density. That has led some companies, STMicroelectronics and Samsung among them, to offer alternatives at 28nm using fully depleted silicon on insulator. It also has raised interest in new approaches to packaging, adding fuel to 2.5D and 3D die stacking rather than shrinking features.

For those companies that do continue shrinking features, at 10nm there are other issues to contend with. The inability of electrons to move through smaller wires at the same speed and over longer distances requires the introduction of high-mobility materials, which are largely untested at this point and have unknown effects. And physical problems such as electromigration, electrostatic discharge and increased damage from single-event upsets, not to mention process variability, increased defect density and inflection points in metrology, all raise questions about the benefits of moving to the next process geometry.

So what exactly is the last process node? The answer is unknown. But more importantly, by the time that question is answered it may not matter because many SoCs will be a combination of technologies.

“The calling card is no longer feature size or even clock rate because you might have a 16/14/10nm chip surrounded by other stuff at different process nodes,” said Mike Gianfagna, vice president of marketing at eSilicon. “How you measure and categorize that is a problem. So at one level you’re talking about how to segment the market, whether it’s a monolithic chip or 2.5D or 3D. And there will be more flavors in between those, whether it’s on a single substrate, on a chip, or in a stack. Those are implementations. But when you talk about the end application, the market could get very complicated.”

There is growing agreement that markets matter more than process geometries.

“You may have a chip with an N node, whether that’s 14nm or 28nm, and N-1, N-2 and N-3,” said Asim Salim, vice president of manufacturing operations at Open-Silicon. “It’s hard to have a generic answer about what’s different because that varies for different market segments. But you do need a lower barrier of entry for all of these markets, which is still not an easy task because the infrastructure is not available yet.”

Architectural shifts
One of the tricks that companies such as ARM and Intel have used since 90nm is adding more cores. Simply turning up the clock speed no longer works because it generates too much heat given the increased density on chips. In fact, clock speeds have been static for the past decade.

Rather than cook the chip, or require massive heat sinks or fans that can eat through battery life, they have added more processor cores. There are consumer devices with 8 and 16 cores on the market today, but unless they are being used for embarrassingly parallel applications, such as image or video rendering, most of those cores remain dark. In spite of big advances in parallel programming, very few applications can effectively use even four cores, and most don’t require more than two.
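The distinction can be sketched in a few lines. In an embarrassingly parallel task, each unit of work is independent, so it divides cleanly across cores; the per-pixel `brighten` function below is a hypothetical stand-in for that kind of workload:

```python
# A minimal sketch of an embarrassingly parallel workload: each pixel
# transform is independent (no shared state, no ordering), so the work
# splits cleanly across cores. `brighten` is an illustrative stand-in.
from multiprocessing import Pool

def brighten(pixel):
    # Independent per-pixel work; results can be computed in any order.
    return min(pixel + 40, 255)

if __name__ == "__main__":
    pixels = list(range(0, 256, 16))        # stand-in for image data
    with Pool(4) as pool:                   # four worker processes
        result = pool.map(brighten, pixels) # fans out across cores
    print(result)
```

Typical application logic, by contrast, is a chain of steps where each depends on the last, which is why extra cores sit dark: there is nothing independent to fan out.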

That has prompted a couple of unique approaches to using cores more effectively. ARM’s big.LITTLE heterogeneous cores, for example, allow systems vendors to utilize the core that works best for a particular task. Intel’s burst mode processing, meanwhile, allows multiple cores to tackle a compute-intensive task for short periods of time. But there are limits to both approaches.

“People always have worked to oversimplify metrics,” said Chris Rowen, a Cadence fellow. “You buy a PC with the most megahertz or the biggest RAM. But when you don’t have that kind of correlation, people will make up other metrics—some of them simple, some of them nonsensical—or they will pay attention to other things, such as battery life, how responsive a device is, or how it will impact your wallet. Most cores in devices are completely useless and essentially never get turned on.”

He said this is particularly confusing in the post-PC world, where the smart phone is now the PC replacement. Trying to apply the old metrics to the new realities doesn’t work.

Boundary issues
This is particularly apparent when it comes to the Internet of Things. The new wrinkle is connectivity. While bandwidth is a well-known metric, it typically has been “in addition to” rather than “part of” the performance metrics for a device. That no longer holds true as the IoT changes the formula for where data resides.

“What we’re seeing now is more fluid boundaries that define what a system is,” said Drew Wingard, chief technology officer at Sonics. “We happily jumped from PCs to smart phones, but it also colors the way we think about this. As we see an explosion in smart appliances and wearables in the IoT, the metrics will be based on what you are trying to build. And if chips for smart phones are commoditized, you will be forced to look at other issues. There is no single metric to track, though. A system is a multidimensional design problem. This is the end of the ‘CPU is king’ era.”

It’s also the end of seeing the world through the lens of a processor. What makes a chip different may not be one processor. It may be a bunch of them working together or separately.

Some metrics still apply
Metrics that do still work—sometimes—involve battery life in mobile devices, and operating current in data centers. These are real numbers, but they’re not necessarily the same everywhere. A smart phone battery will last longer when it’s connected to a strong WiFi signal than when it’s constantly searching for a signal, for example. And use cases vary greatly from one person to the next, one region to the next, and one company to the next, making it difficult to arrive at averages.

Moreover, while battery life may be a critical marketing approach in wearable electronics, and efficiency is critical in hard-to-reach places, such as a sensor in outer space or an implantable medical device, the metric is far less important in most automotive applications, home electronics, or anything with a plug. An LED bulb that costs $1 more per year to run isn’t going to register on most consumers’ radar.

“Power is one aspect that will become more important in many markets,” said Charlie Janac, chairman and CEO of Arteris. “Some chips will be differentiated based on superior architecture, and others will be based on time to market. There are markets that are relatively slow, but in some of them time is critical. In those markets, chips will be differentiated based on availability. If it takes you two years to get a chip done and someone else can do it in 12 months, they’re going to win. The only way to do that is with a platform strategy.”

That platform approach also allows companies to target narrow market slices that they couldn’t with more general-purpose chips. “You can add and subtract blocks and software for specific markets,” said Janac.

But all of this is very market-dependent. Open-Silicon’s Salim noted that in some markets the differentiator is cost per quote. “They may not care if it’s 16nm or 28nm, as long as it meets the specs,” he said. “In other markets that doesn’t work.”

Also important are metrics such as address space, meaning the number of bits used for memory addresses in software. When Apple stunned the smart phone world by moving iOS to a 64-bit address space, it opened the door to better memory performance.

“The implications for a software migration path are how much memory you can reasonably put in a device,” said Cadence’s Rowen. “To get past 4 gigabytes of flash, you need to use 64 bit addresses.”
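The arithmetic behind that 4-gigabyte ceiling is straightforward: an N-bit address can distinguish 2^N distinct bytes, and 2^32 bytes is exactly 4 GiB. A minimal illustration:

```python
# Illustrative only: the maximum byte-addressable memory for a given
# pointer width. An N-bit address can name 2**N distinct byte locations.
def max_addressable_bytes(bits):
    return 2 ** bits

GIB = 2 ** 30  # one gibibyte

print(max_addressable_bytes(32) / GIB)  # 4.0 -> 32-bit addressing tops out at 4 GiB
print(max_addressable_bytes(64) / GIB)  # 17179869184.0 GiB (16 EiB)
```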

Another metric that matters is quality, although that’s generally a combination of reliability and usability over time. There are cases, though, where quality is apparent immediately. Synopsys’ Domic said that while more functionality is being built into smart phones, the quality of the voice call hasn’t improved. “You have to remember that you are building a phone,” he said.