Why better performance is back in vogue.
By Ed Sperling and Jeff Dorsch
An explosion in IoT sensor data, the onset of deep learning and AI, and the commercial rollout of augmented and virtual reality are driving a renewed interest in performance as the key metric for semiconductor design.
Throughout the past decade, in which mobility and smartphones dominated chip design, power replaced performance as the top driver. Processors had sufficient performance for tasks such as internet surfing and video playback, while power budgets were limited by the size and weight of the battery. Power is still important, but performance increasingly is the metric companies focus on as the amount of computing that needs to be done continues to grow exponentially.
“The minute these applications become viable, the industry says, ‘What about something smarter?’” said Aart de Geus, chairman and co-CEO of Synopsys. “A lot smarter is 10X, 100X in computation. And we’ll find ways to deliver it, banking on sophistication of the silicon, which will still be at a premium.”
But achieving that computational performance is becoming more difficult. At the leading edge of process technology, performance no longer improves automatically just by moving to the next node. And shrinking features adds a slew of challenges that need to be addressed, such as RC delay in thin wires, contact resistance issues, and memory bottlenecks—all of which can affect performance. Add to that list software, which still isn’t being written to take full advantage of the hardware, and the problem begins looking more like a hydra-headed beast than a neatly defined challenge.
So rather than fixing everything with a single step, such as migrating to the next process node, there are now many smaller steps that collectively can add up to better performance.
“It’s not just one thing,” said Gary Patton, chief technology officer at GlobalFoundries. “There are process knobs, where you improve drive current. These future nodes are becoming dominated by middle-of-line and back-end-of-line resistance/capacitance. But the worst-case corner is going to limit what can be achieved from a performance perspective. There is much more focus on how we control variability. Local layout effects have been a huge issue at 10nm, and the reason people have not achieved the performance gains they were hoping for. Even simple things like random fluctuations can cause problems—one device that is different from the rest can end up gating the performance of the entire chip.”
Process
Moving to the next node provides greater density, which means more transistors to throw at a compute problem. While that is comparatively straightforward for processors, it’s much harder with SoCs, particularly at the most advanced nodes, because there are issues such as contention for memory, increased dynamic power density that can affect signal integrity, and thermal hotspots that can cause short-term functional and long-term reliability problems.
“As we scale down, the incremental gain in performance is more difficult to achieve,” said Kelvin Low, senior director of foundry marketing at Samsung. “This is why there is a push to different device architectures like nanowires and nanosheets. In both cases, the surface area increases. With finFET, you have a 3D gate structure. With nanowires and nanosheets, the gate is all around, and performance will scale per unit area. But it’s a lot more complex. Manufacturing complexity is increasing. As you saw at Semicon West, this is not business as usual.”
High-mobility materials can help in this area, but there are tradeoffs anytime new materials are introduced in terms of cost, defectivity, ease of working with the material, reliability and availability.
Not everyone is at the leading edge, of course. In fact, most chipmakers are still working at older nodes, so for them advancing to the next process node still pays dividends in terms of performance. But even there, the speed at which they are moving from one node to the next is on the rise.
“The migration pace is increasing,” said Walter Ng, vice president of business management at UMC. “But not everything needs to move to 3nm. There also is a focus on the right performance for less money. Packaging and back-end improvements are fertile ground for that. Taking a lot of processing capabilities and making them more widespread may help, too.”
Packaging
One of the very big knobs to turn when it comes to performance involves packaging, as well as chip architectures that can take advantage of advanced packaging.
“With system architectures you can look at the problem in a different way and not just rely on silicon technology,” said Samsung’s Low. “So you can partition a system to achieve system-level performance scaling. We’re doing that with 2.5D and HBM and HBM-2. You get a system-level performance increase and your process technology costs do not shoot up as much. That becomes a partition problem. It’s a distributed processing approach and it is an important enabler. But you also have to look beyond the package to communication between chips. High-speed SerDes IP is now at 28G and 56G, going up to 100G in the future.”
At this point, there are three basic approaches to packaging. One is a fan-out, which is essentially a denser version of a PCB in a package. This approach takes advantage of shorter distances and faster interconnects. TSMC commercially pioneered fan-outs with its Integrated Fan-Out (InFO), but many other foundries and OSATs are working on similar approaches, with an emphasis on even higher density.
Rezwan Lateef, vice president and general manager of litho products at Ultratech, said wafer-level fan-out packaging can provide 20% better signal integrity, 10% smaller packages, and 10% better thermal performance. One key driver is that a logic die can have more than 1,000 I/O points, which makes routing all of those connections a challenge.
“As memory moves to quicker baud rates, advanced packaging will be needed,” he said.
A second approach is 2.5D, where chips are connected using an interposer or some sort of silicon bridge. The big sticking point there has been the cost of the interposer, but most foundries are confident the price will drop significantly over the next couple years.
“There are other directions in bonding and interconnects that also could mature in the next couple of years,” said UMC’s Ng. “We believe that over the next few years, 2.5D and other solutions will become much more mainstream. Right now, we have a number of customers that are very interested, but cost points prohibit adoption across a wider swath of products right now. It’s in use at the high end, but the next step will be in midrange systems where there is a lot of volume.”
A third approach is full 3D, and there are several ways to do this. One is to stack die on top of each other and connect them with through-silicon vias. A second is to monolithically build the chip using TSVs. And a third involves turning one chip upside down on top of another and heating it just enough so that the copper in one metal layer fuses to the copper in the other.
Steve Teig, chief technology officer at Tessera, calls the latter approach “physical 3D,” and he said the interconnect density is massively higher than by using TSVs. “This way you can have ultra-high bandwidth throughput at low power and your pipe can be as big as a sensor. We’ve been doing this with image sensors and each pixel has access to computing. So you take chip A, which you build up to the interconnect level, and then take chip B, which you have built up to say metal level 7, and then you fold one on top of another. So you have copper on one chip connected to copper on the other, and then you heat the chip just a little to fuse the copper wires together. So instead of having two seven-layer chips you have a 14-layer gadget, and you get ridiculously high throughput.”
Currently, advanced packaging accounts for about 0.1% of all semiconductor units, according to William Chen, an ASE fellow. “Smart things” for the home and other applications “will grow through system-in-package and heterogeneous applications,” he predicts.
“Wafer-level packaging (WLP) is a very, very important part of SiP, of heterogeneous integration,” Chen said. “It’s important for the three cornerstones of the connected world: wearables, smart homes, and intelligent robotics.”
Heterogeneous integration will be a market driver through 2020, particularly with the rollout of the IoT, he noted. “We are working across the whole community. WLP is becoming mainstream.”
Business issues
The cost of developing chips, from design through manufacturing and yield, has always been a consideration, but with huge volumes much of that cost has been relatively easy to amortize over time.
As the mobility market continues to flatten and new markets begin demanding optimal performance, solving this problem using traditional scaling becomes riskier.
Wafer prices are skyrocketing for advanced fabrication nodes. A wafer with devices that have 28-nanometer features costs about $4,000, while a wafer full of 7nm chips will go for about $12,000, according to Gartner analyst Sam Wang. “Die size is increasing. Die cost will continue to go up.”
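Wang’s wafer prices can be turned into a rough per-die comparison. The sketch below is a back-of-envelope estimate only; the wafer prices come from the article, while the 300mm wafer size, 100 mm² die area, and yield figures are illustrative assumptions, not reported numbers:

```python
import math

# Back-of-envelope die-cost comparison using the wafer prices quoted
# by Gartner's Sam Wang ($4,000 at 28nm, $12,000 at 7nm).
# ASSUMPTIONS (not from the article): 300mm wafers, a 100 mm^2 die,
# and illustrative yield fractions.

def gross_dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Standard approximation: wafer area over die area, minus edge loss."""
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def cost_per_good_die(wafer_price_usd: float, wafer_diameter_mm: float,
                      die_area_mm2: float, yield_frac: float) -> float:
    """Wafer price spread over the yielded (good) dies."""
    dies = gross_dies_per_wafer(wafer_diameter_mm, die_area_mm2)
    return wafer_price_usd / (dies * yield_frac)

# 640 gross dies fit on a 300mm wafer at 100 mm^2 per die
print(round(cost_per_good_die(4000, 300, 100, 0.80), 2))   # 28nm, 80% yield -> 7.81
print(round(cost_per_good_die(12000, 300, 100, 0.60), 2))  # 7nm, 60% yield -> 31.25
```

Even at the same die size, the good-die cost roughly quadruples at 7nm in this example; a larger die or lower yield widens the gap further, which is the dynamic Wang describes.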
So how to improve performance while also controlling cost is becoming a much more application-specific decision. Aram Sarkissian, general manager of EAG Laboratories, called it “a bifurcation of the industry,” with some companies designing small chips for the Internet of Things, using chip-scale packages, while others are developing large chips for custom form factors with stacked die. He noted that multicore processors are often found on huge die, with all of the associated power consumption and heat dissipation issues.
“We’ve got a lot of experience in how to control the power and heat dissipation,” Sarkissian said. “But we see more challenges in smaller packages. Engineers are experts at assembling advanced packaging, but they’re not so good at taking packages apart.”
Which route to take is a matter of end market, return on investment, and in some cases a lot of guesswork.
“The issue is economics—the cost and return on doing a 5nm chip,” said GlobalFoundries’ Patton. “There won’t be a return for everybody. That’s why we placed a bet on 22FD and our next FD road map. We think that people will look at the cost of doing design for these fast nodes. There are a ton of people still back at 40nm and 28nm who haven’t made a decision about where they’re going to go. They could go with finFETs, which do offer good performance. But they’re locking themselves into something with high design costs and high complexity. Or they can take the FD-SOI route. That’s much easier to design in at a lower cost point. You can do forward and reverse body bias on transistors. You can get to very low voltages. We’ve demonstrated 0.4 volts. We’ve also demonstrated very low leakage. We can get down to 1 picoamp per micron.”
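Patton’s leakage figure can be put in perspective with some quick arithmetic. In the sketch below, the 1 pA/µm off-state leakage and 0.4V supply come from the quote above; the transistor count and average gate width are hypothetical, chosen only to give a sense of scale:

```python
# Rough standby-leakage estimate from the FD-SOI figures Patton cites:
# 1 pA per micron of gate width, and 0.4V operation.
# ASSUMPTIONS (not from the article): the transistor count and average
# gate width below are hypothetical, for scale only.
LEAKAGE_PA_PER_UM = 1.0       # pA/um of gate width, from the quote
VDD_V = 0.4                   # volts, demonstrated low-voltage operation
NUM_TRANSISTORS = 100e6       # assumed: a 100M-transistor design
AVG_WIDTH_UM = 0.5            # assumed average gate width per transistor

total_width_um = NUM_TRANSISTORS * AVG_WIDTH_UM
leakage_a = total_width_um * LEAKAGE_PA_PER_UM * 1e-12   # pA -> A
standby_power_w = leakage_a * VDD_V

# ~50 uA of leakage and ~20 uW of standby power for the whole chip
print(f"{leakage_a * 1e6:.1f} uA, {standby_power_w * 1e6:.1f} uW")
```

At those assumed numbers, the entire chip leaks on the order of tens of microwatts in standby, which is why figures like 1 pA/µm matter for always-on IoT designs.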
My first job in the chip business was with Westinghouse in Youngwood, PA, and our group was sent to Baltimore to build the “first dedicated IC plant in the world” at Elkridge. Wafer size was 1.3 inches, for some reason. It was all negative resist, and the metallization was evaporated from pure aluminum until IBM told us to add silicon to the aluminum. Seems like yesterday.
Wafer size changed often from 1960 to 2000 but slowly stabilized, so that now the 450mm transition seems mostly a liar’s poker game. Any news on that front?
450 seems to have fallen off the map, but there is talk about panel-level packaging. How big those panels will be isn’t clear at this point. This has come up in several sessions at conferences in late 2015 and earlier this year.
Thanks, bad news for early 450 mm equipment developers.
28nm wafer prices were at $3,000 a year ago… I am quite sure they are lower now. The $4,000 figure looks highly optimistic.
One of the main obstacles to performance and low power in any system is the connecting wires, and in particular, as the article mentions, copper connections. A better alternative could be a migration toward NoCs (networks-on-chip) or the use of optical interconnect technology.