Plotting The Next Semiconductor Road Map

Industry leaders examine enablers, implications, and perspectives for a changing technology ecosystem.


The semiconductor industry is retrenching around new technologies and markets as Moore’s Law becomes harder to sustain and growth rates in smart phones continue to flatten.

In the past, it was a sure bet that pushing to the next process node would provide improvements in power, performance and cost. But after 22nm, the economics change due to the need for multi-patterning and finFETs, and they keep changing at each new node along the ITRS roadmap. While finFETs made static leakage current a non-issue at 16/14nm, the leakage problem resurfaces at 10nm—and grows worse at 7nm. More colors and pattern layers are required, and it takes longer to design and manufacture chips. In addition, thermal issues caused by dynamic power density as well as static leakage become increasingly problematic and harder to predict and verify at each new node. So from design through manufacturing, costs rise per design and per transistor, increasing the sales volume required to recoup that investment.

Economics are changing on a macro level, as well. The migration from mainframe to minicomputer to desktop PC, and then from notebook PC to smartphone and tablet, can be charted almost as a straight line. After smartphones, however, there is no single compute device. Instead, there are many customized solutions for vertical markets, each with its own unique requirements and demands. The upshot is that no design works across hundreds of millions or billions of devices anymore, and there is less technology reuse and commonality from one market to another. Moreover, computing is becoming far more distributed within these markets. Volumes are lower per design even though designs are becoming more complex and the overall number of chips is increasing.

Viewed from a high level of abstraction, inflection points are appearing in both technology and business, and in the synergies and interactions between them. And while some of the pieces are still being developed, it’s becoming clear that technology and business have advanced enough to change what we can do with that technology, how we interface with it, and where the future opportunities and challenges will be.

“There is no question there is an enormous inflection,” said Aart de Geus, chairman and co-CEO of Synopsys. “The semiconductor industry has had only two major phases. The first one was computation, and the killer app was the PC. But after that it continued with the Internet, servers, the cloud, which are all about computation. That is still continuing, and with big data that will continue massively, but it has flattened in terms of economics. The second phase, after a fairly major semiconductor break in terms of growth rates, was mobility. The killer app was the smart phone, which was the platform for enormous numbers of applications.”

De Geus said both phases had enormous impacts on other technologies, particularly software. But while these phases are far from over, they are no longer exhibiting the same kind of exuberant growth as in the past. (See Fig. 1 below.)

Fig. 1. Smart phone sales over time. Source: Statista.

“The relationship between the hardware platform and the software is changing toward an opportunity, for the first time, to look at truly digital intelligence as now becoming practical, possible, cost-effective and clearly as big a transformation as we’ve seen in the first two,” he said. “So why does it not feel like it’s up and to the right yet? Because multiple things are coming together at the same time, which are not here yet in large volume from a semiconductor perspective.”

He predicts it will take several more years before the economic benefits of this shift are apparent in the semiconductor industry.

Wally Rhines, chairman and CEO of Mentor Graphics, agrees. But as he notes, there are winners and losers in technology and business disruptions. “It’s a more disruptive period than we normally have because there are more things changing. The good news about all of this disruption is that’s the only way we grow our business. Established design tools have fairly flat markets—simulation, place and route, and PCB. These markets don’t grow particularly. What grows are new capabilities. While the PCB design market hasn’t grown, all of the signal integrity products that go with PCB design have grown. And while the simulation market hasn’t grown, the things you add onto simulation to handle new problems have grown. Emulation has grown significantly. So with every generation there is something new. At every node, 10nm, 7nm, 5nm, there will be new tools because there will be new problems.”

The enablers
Alongside these macro shifts, semiconductor technology itself is in transition. There are new tools, materials, methodologies, and standards, as well as new approaches to building and packaging chips.

One of the foundation pieces in this shift is 5G, the fifth-generation wireless specification developed by the Next Generation Mobile Networks group. It will increase Internet download speeds to at least 10 gigabits per second with latency of less than one millisecond. Intel says it will open the door to network functions virtualization (NFV) and software-defined networking (SDN). Qualcomm believes it will enable a raft of new services, connect new industries and devices, and change the user experience.

“There is still a tremendous gap between the level of data people want and what they get,” said Steve Mollenkopf, CEO of Qualcomm. “5G will enable a lot of industries to take advantage of mobile.”

Fig. 2. 5G improves scalability. Source: Qualcomm.

This isn’t just another baby step forward, though. “5G will be as disruptive to the data industry as data was to the wireless industry,” said Sanjay Jha, CEO of GlobalFoundries.

Moving data is just one piece of the puzzle. Storing it is another, and memory scaling has run its course. This is what’s behind the move to 3D NAND—arguably the most significant shift in direction in memory in years—along with high-bandwidth memory (HBM), the Hybrid Memory Cube, and new entrants such as 3D-XPoint.

“3D NAND is a huge one, and we’re right in the middle of that now,” said Dave Hemker, CTO of Lam Research. “We’re at 32 layers now. People are talking about going to 48. In the labs, they’re looking at 60-plus or 90-plus. What’s fun about being at this stage is we don’t know, and the answer always is more surprising than you would expect. You do something and you think it’s amazing, and five years later you’re still doing it and improving it.”

And it may go up from there. “The way I look at it, from an equipment point of view, is that you have knobs you can play with,” Hemker said. “The number of layers is the obvious one, but there also is the thickness of the layers and the actual pitch of each of the cells. There is also talk about taking whatever number of layers you’re comfortable making and stacking them. So you do it once, then repeat it. That raises questions about how you’re going to do overlay. It’s complicated, like everything else. There are ways you can play around with the storage device itself in terms of being able to get better retention and multi-level cells. The scaling on that has room. We’re just at the very beginning. It’s not a one- or two-generation thing.”
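Hemker’s “knobs” compound multiplicatively: layer count, bits per cell, and deck stacking each scale die capacity independently. The sketch below illustrates that interaction. It is a simplified model with assumed values (a hypothetical 32-layer, 2-bit-per-cell baseline), not vendor data.

```python
# Illustrative only: relative 3D NAND die capacity as a function of the
# "knobs" Hemker describes -- layer count, bits per cell, and stacked decks.
# The baseline (32 layers, 2 bits/cell, 1 deck) and the linear scaling
# model are simplifying assumptions, not actual product figures.

def relative_capacity(layers, bits_per_cell, decks=1,
                      base_layers=32, base_bits=2):
    """Capacity relative to a hypothetical 32-layer, 2-bit/cell baseline."""
    return (layers / base_layers) * (bits_per_cell / base_bits) * decks

for layers, bits, decks in [(32, 2, 1), (48, 2, 1), (48, 3, 1), (48, 3, 2)]:
    print(f"{layers} layers, {bits} bits/cell, {decks} deck(s): "
          f"{relative_capacity(layers, bits, decks):.2f}x")
```

The point of the model is that stacking decks (doing the layer process once, then repeating it, as Hemker describes) multiplies whatever the other knobs achieve, which is why the scaling is “not a one- or two-generation thing.”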

Another piece of the puzzle involves packaging. Wires and interconnects have shrunk to the point where resistance and capacitance are serious concerns. Driving electrons through increasingly narrow wires increases heat, which in turn reduces reliability over time. On top of that, the benefits of shrinking analog components have been debated for at least the past decade. And while finFETs and gate-all-around FETs are viable ways to control leakage and improve performance, they are more difficult and expensive to design and manufacture.

The alternative, and one that has been gaining traction in markets where performance is a critical issue, is advanced packaging, whether that involves 2.5D or fan-outs or something in between. There is work underway across the semiconductor world to create the ideal packaging solution, but exactly what that is remains uncertain. There are myriad ways to package solutions. (See three examples below.)

Fig. 3: AMD’s Fiji solution. Source: TechSearch International/AMD

Fig. 4: Intel’s Embedded Multi-die Interconnect Bridge (EMIB). Source: Intel

Fig. 5: SPIL’s fan-out package on package (FO-PoP). Source: SPIL

At this point there is too little data to conclude which is the best approach for a particular application. But the amount of work being done across the industry, from universities to EDA companies to chipmakers to foundries and OSATs, is significant and growing.

The implications
Once the infrastructure is built and all the pieces are in place, there is no limit to where technology can go. For instance, there are plenty of studies showing that automobiles are an expensive but poorly utilized asset. A person may drive a car for an hour or two a day, leaving it idle the rest of the time.

“There are 250 million cars in the United States, which are used an average of 7% of the time,” said GlobalFoundries’ Jha. “So 93% of the time the invested capital is not doing much at all. Seventeen million cars in the United States could fulfill all of our needs. With scheduling, that’s probably more like 50 million cars. So you can decrease the output by 80% and fulfill everyone’s needs.”
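Jha’s numbers can be checked with back-of-the-envelope arithmetic: 7% utilization of 250 million cars works out to roughly 17.5 million cars’ worth of continuous capacity, and a 50-million-car shared fleet is an 80% reduction from today’s total. A minimal sketch of that calculation:

```python
# Back-of-the-envelope check of Jha's figures: 250M cars used 7% of the
# time implies ~17.5M cars of continuous-use capacity; his scheduling-
# adjusted fleet of 50M is an 80% reduction from today's 250M.

total_cars = 250_000_000
utilization = 0.07

continuous_equivalent = total_cars * utilization   # ~17.5M cars
shared_fleet = 50_000_000                          # Jha's estimate with scheduling
reduction = 1 - shared_fleet / total_cars          # 0.80

print(f"Continuous-use equivalent: {continuous_equivalent / 1e6:.1f}M cars")
print(f"Fleet reduction: {reduction:.0%}")
```

The gap between the 17.5 million theoretical minimum and the 50 million practical fleet reflects scheduling: demand peaks at the same hours for everyone, so the fleet must be sized well above average utilization.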

Jha said that’s just the beginning of the changes wrought by this shift. The real estate taken up by parking garages and parking spaces in cities could shrink substantially, because as many as 50% of the cars on city streets at any given time are looking for a parking space. “The United States is losing billions of dollars to congestion in city centers.” Moreover, if cars can be called up within five minutes, he said it’s possible for people to live farther from the city center, which ripples out to other markets such as real estate and retail.

Another important shift will come once there is enough intelligence built into voice technology to be able to use natural language as an interface. Rather than asking a device several times to complete a function, the ability to just talk to it and have it recognize what you are saying will spawn dramatic changes.

“What we’re anticipating is the intelligence in devices will go up,” said Simon Segars, ARM’s CEO. “If you take your phone today, you’re still typing on a little keyboard and maybe shouting into it. The voice recognition in my car works usually, but there are times where it will dial the wrong person—quite embarrassingly. There’s an inflection point coming in the amount of intelligence you can put into embedded devices, particularly phones and cars, where the user interface changes. As voice, gesture and image recognition get better, there are technologies coming that mean the way we interact with computing changes. Even when you’re interacting with a computer today you realize that’s happening. That will change. We will get to the point where people get the benefit of technology without worrying about the technology.”

Segars said one obvious change will be the ability to get more consistent responses, regardless of whether commands are given in a quiet room or a noisy environment. “That radically changes the way people interact with technology,” he said. “If you look at most things in your house, the interface hasn’t changed very much. I have a TV that will do voice recognition, and we end up switching it off because something will happen on the TV and it thinks it’s a voice command. With improvements in that, the way you interact will become more natural.”

Other markets are beginning to benefit from these kinds of advances, as well. Medical, which has long been viewed more for future potential than near-term profit, is getting to the point where life-saving benefits are being documented as a direct result of new technology.

“Using the semiconductor process, Berkeley Lights is working with one professor to create a microfluidic optical process for single-cell development and to show cancer,” said Lip-Bu Tan, president and CEO of Cadence. “We have an announcement with UCSF. We are using massively parallel processing on a real tumor. You can see the cancer cell jumping up and down in the machine. This allows you to take a good cell and transfer it back rather than using chemo, which makes your whole body weak. That’s really exciting. It’s a huge sequence you have to analyze, and there is a lot of data. The big guys are spending a lot of money to own that space. It’s not just cancer, either. There are a lot of diseases that can be identified earlier, along with the drugs to treat them. You can have a lot of machine learning and intelligence to make the right decision. The whole medical field is interesting, and the margins are much better.”

Machine learning, deep learning, and neural networking all loosely fit under the heading of artificial intelligence. There are sub-categories of each of these, but the fundamentals from a semiconductor perspective are clear. Processing speeds at the right price points with low enough power provide a path for connecting more devices across more markets, and one that computing has never achieved before. Ultimately, they all will change how technology is used.

Demands at the semiconductor and system level will continue to grow, and security will be required as a horizontal and vertical component in every design. But what is changing at the most fundamental level is that as all of these capabilities improve, computation is no longer the end goal. And increasingly it will be distributed across machines working together, whether that is asynchronous or synchronous, rather than confined to a single device. Computation is an enabler, and enough other pieces are progressing to the point where computation can be handed off from one device to the next, analyzed across networks and, more importantly, across markets and in different locations.

In the past, it was the capability of the pieces that drove design, from semiconductors to software to networking. In the future, it will be end market needs that drive technology design. As technology progresses, the fundamental interaction between people and machines is shifting. To paraphrase the late Steve Jobs, it’s no longer just about how to use machines. It’s now a question of what can they do—and perhaps, what else can they do.

Related Stories
Rethinking Processor Architectures
General-purpose metrics no longer apply as semiconductor industry makes a fundamental shift toward application-specific solutions.
IC Industry Waking Up To Security
More companies recognize cybersecurity needs to be built-in from the beginning.
What Will China Do Next?
IC M&A activity hangs in the balance between the yuan devaluation and U.S. interest rate hikes.
