Quantum Shifts


By Ed Sperling
Intel, STMicroelectronics and some of the leading memory providers already are working on 10nm process technology, and advanced researchers in universities and industry-leading companies are looking at 7nm, 5nm and even beyond.

Those who have glimpsed this technological future have similar observations. There is no single technology problem that has to be solved at these nodes. Instead, there are groups of problems, ranging from new transistor structures, new materials and RC delay caused by thin wires to quantum effects such as charge-release delay from memory and telegraph or burst noise, which involves step-like voltage transitions at random intervals. There are lithography issues such as double, triple and quadruple patterning and directed self-assembly. And beneath all of this there is a cost equation that indicates the majority of companies will never reach the most advanced nodes because it will be too expensive—at least for the foreseeable future.

So how far does the current roadmap extend? The path appears to be solid until 10nm. After that, the semiconductor industry will move squarely into the atomic realm—assuming it continues shrinking features. And considering companies are now developing test chips at 14nm, this reality is approaching very quickly.

Just to put it in perspective, the atomic physical limit of Moore’s Law involves wires that are one to two atoms thick, and transistors that are just one atom, based on Coulomb diamonds, according to Gerhard Klimeck, director of the Network for Computational Nanotechnology and a professor of electrical and computer engineering at Purdue University. So what exactly can we expect to unfold? What follows are some of the best guesses from leading companies and researchers.

Structures, materials and lithography
While Intel made headlines with its TriGate, or finFET, transistors at 22nm because they reduce current leakage, the finFET isn't a permanent fix. Beyond 10nm, it may be replaced by carbon nanotube FETs, or by tunnel FETs at 7nm or 5nm.

“You’re going to see less fin and more nanowire,” said Greg Yeric, senior principal design engineer in ARM’s R&D group. “It also may end up being in combination with SOI (silicon on insulator).”

At least part of the uncertainty going forward involves the lithography used to create the masks to build these devices. Extreme ultraviolet (EUV) lithography was supposed to hit the market at 45nm. It now appears it will miss the introduction of the 14nm process node because the power source isn't strong enough to be commercially viable. If that happens, semiconductor manufacturers may be forced into triple or quadruple patterning. Even putting cost aside for a moment, it's almost impossible to design transistors that require very tight pitches using four mask sets.

“There are a bunch of second-order effects that will start showing up with triple patterning,” said Yeric. “If you have to tune finFETs in pairs, you’re going to start leaving power on the table because you’re only doing half the tuning. On top of that, you’ve got some rectangles on the same mask and some on different masks. That doesn’t scale well.”

Rectangles aren’t the only things that don’t scale well. While it’s theoretically possible to shrink transistors for many more process nodes, wires are another matter. Thinning out wires increases resistivity, reduces performance, increases heat and increases electromigration.
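The resistance penalty follows directly from geometry: R = ρL/A, so shrinking a wire's cross-section drives resistance up even before nanoscale scattering effects raise the effective resistivity further. A rough back-of-the-envelope sketch, using round hypothetical dimensions rather than any foundry's actual numbers:

```python
# Illustrative sketch only (not a process model): how interconnect
# resistance grows as wire cross-sections shrink. R = rho * L / (W * H).
# All dimensions below are hypothetical round numbers.

RHO_CU = 1.7e-8  # bulk copper resistivity in ohm-meters; the effective
                 # value rises at nanometer widths due to electron
                 # scattering at surfaces and grain boundaries

def wire_resistance(length_m, width_m, height_m, rho=RHO_CU):
    """Resistance of a rectangular wire segment."""
    return rho * length_m / (width_m * height_m)

# Halving both width and height quadruples resistance for the same length.
r_wide = wire_resistance(1e-6, 40e-9, 80e-9)   # 1 micron of "fat" wire
r_thin = wire_resistance(1e-6, 20e-9, 40e-9)   # same length, half the pitch
assert abs(r_thin / r_wide - 4.0) < 1e-9
```

In practice the penalty is worse than this 4x geometric factor, because resistivity itself is no longer constant at these dimensions, which is why the copper-replacement discussion below matters.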

This is true of the wires on a chip as well as the interconnects. Applied Materials is working on ways to extend copper down to 5nm, using a combination of different dielectric materials at different thicknesses, along with processes that preserve the integrity of the channels, and new ways of manufacturing the copper to reduce electron scattering.

Mehul Naik, distinguished member of the technical staff at Applied Materials, said copper should be usable at least down to 7nm. At 5nm there may be a need for a material change, which will be the subject of much discussion at the upcoming IEDM conference in December. Among the candidates is graphene, provided electron scattering can be contained.

New approaches
Most of these problems involve continued shrinking of features, however. Not everything is moving linearly. Even at 14nm, not all of the pieces in a chip will use 14nm process technology—and that’s using existing manufacturing processes on planar devices. In addition, there are other alternatives available that can greatly improve performance and reduce power, and ultimately the cost of designing chips.

One such approach is to use more transistors that are individually less accurate, and potentially more chips. Jan Rabaey, professor of electrical engineering at the University of California at Berkeley, calls it “neuro-inspired computing,” which makes up for accuracy with redundancy.

“We can keep scaling to atomic dimensions,” said Rabaey. “Or we can do things differently.” Differently in this case means a statistical plot instead of exact numbers, which works well for things such as gaming, video and audio, which are the most power-hungry and performance-intensive functions in mobile devices these days.
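Rabaey's redundancy idea can be sketched with a toy model: individually unreliable elements, combined by a majority vote, yield a far more reliable result. The error rate and vote size below are arbitrary illustrations, not anything from his work:

```python
# Toy sketch of trading per-element accuracy for redundancy.
# Error rates, vote sizes, and the seed are all invented for illustration.
import random

def noisy_bit(true_bit, error_rate, rng):
    """One unreliable element: flips the bit with probability error_rate."""
    return true_bit ^ (rng.random() < error_rate)

def vote(true_bit, n_copies, error_rate, rng):
    """Majority vote across n redundant noisy copies of the element."""
    ones = sum(noisy_bit(true_bit, error_rate, rng) for _ in range(n_copies))
    return int(ones > n_copies / 2)

rng = random.Random(42)
trials = 10_000
# A single 20%-error element is right about 80% of the time...
single = sum(noisy_bit(1, 0.2, rng) for _ in range(trials)) / trials
# ...but a 9-way majority vote over such elements is right roughly 98%
# of the time, despite every individual element being unreliable.
voted = sum(vote(1, 9, 0.2, rng) for _ in range(trials)) / trials
```

The statistical answer is never guaranteed, which is exactly why this style of computing suits perceptual workloads like video and audio rather than, say, address arithmetic.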

A second approach is to mix and match components, either with stacked die or around a high-speed network—or both. What makes this approach so compelling isn’t just the performance or the power reduction, both of which are possible when the size of the signal pipes is increased using either interposers or through-silicon vias. More important is the ability to quickly churn out derivative chips based upon pre-verified platforms that are customized for specific market segments.

This business approach is potentially as revolutionary as Moore’s Law because it allows logic platforms and memory to be created at the most advanced nodes but time-consuming analog pieces to run at older process nodes.

“If you look at Broadcom’s and TI’s offerings for WiFi connectivity, they have an entire catalog of radio-based subsystems,” said Jack Browne, senior vice president of marketing at Sonics. “This allows the business development guy to look at 10 radios, sit down with the customer and determine that this is the combination they want, and then quote a price at the first point of contact. And then they can put the chip together in 90 days.”

This network-on-chip approach has steadily been gaining ground as more and more cores and IP are added into SoCs because it makes it much simpler to hook everything together, and then reconfigure the pieces quickly—just like the nodes on a data network. And just like the larger data networks, these on-chip networks can be robust, blazing fast and allow load-balancing for software.

“From a software point of view, the interconnect becomes the memory map because all the registers are consistent,” Browne said. “So if you’re a PC maker, you can develop one driver and only turn on the right pieces for a particular model.”
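Browne's “one driver” point can be illustrated, very loosely, with a toy register map. Because every model keeps the same registers at the same offsets, a single driver only needs to know which blocks a given model contains; every offset, block name, and enable value below is invented for illustration:

```python
# Hypothetical sketch of a consistent memory map shared across models.
# All offsets, block names, and register values are invented.

REG_OFFSETS = {"wifi": 0x000, "audio": 0x100, "video": 0x200}

PRESENT_BLOCKS = {               # per-model feature set (hypothetical)
    "model_a": {"wifi", "audio"},
    "model_b": {"wifi", "audio", "video"},
}

def init_driver(model, mmio):
    """One driver for every model: write a hypothetical enable value
    only at the offsets of the blocks this model actually contains."""
    for block in PRESENT_BLOCKS[model]:
        mmio[REG_OFFSETS[block]] = 0x1

mmio = {}                        # stand-in for the memory-mapped registers
init_driver("model_a", mmio)
assert 0x200 not in mmio         # video block never touched on model_a
```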

The other big company in this space, Arteris, has a similar outlook for the NoC fabric. “It’s the glue,” said Kurt Shuler, Arteris’ vice president of marketing. “When you have two chips that are designed independently—which is what will happen with stacked die—then you may have one chip designed last year and one designed this year.”

He said there will still have to be new standards in connectivity for stacking of die. “Initially they will be tightly coupled by one company, but over time they won’t be as tightly coupled. In this scenario NoC technology is a given because it adds the flexibility. But we will still need standards for physical connections and how traffic moves back and forth.”
