New Approaches To Better Performance And Lower Power

The classic approach of just shrinking features doesn’t work anymore, but there are still a lot of options on the table.

By Ed Sperling
Until 90nm, every feature shrink and rev of Moore’s Law included a side benefit of better power and performance. After that, improvements involved everything from different back-end processes to copper interconnects and transistor structures. But from 20nm onward, the future will rest with a combination of new materials, new architectures and new packaging approaches—and sometimes all three.

There is no indication that feature shrinks will end. Cost will force some companies to continue along the same path they have followed since Moore’s Law was first devised in 1965. But it won’t be the only path forward. In fact, many of the improvements in performance and power may show up at 20nm or even older nodes, particularly as the Internet of Things drives volume production of chips at 40nm and above. And with 28nm expected to be a very long-lived process node, there is talk that even some of the most advanced changes will occur there, as well.

New materials
Mike Splinter, Applied Materials chairman and CEO, said future improvements in power and performance will rely on materials engineering, not feature shrinks. And he said the key driver for those changes will be the hotly contested mobile electronics market, where battery life is critical and performance is a differentiator.

“The issues are about power and performance and optimizing those two,” Splinter said. “Scaling does not change the performance of a device any longer.”

The numbers presented by other Applied executives back up those statements. Kathryn Ta, managing director for strategic marketing in Applied’s Silicon Systems Group, said that in 2000, 15% of the boost in performance was due to materials. She said that number is now closer to 90%, and the number of new materials is increasing at each node.

Moreover, the semiconductor industry has only scratched the surface here. The future of the fins on finFETs will likely come from III-V compound semiconductors—gallium arsenide, indium arsenide and indium antimonide, for example.

The economics of materials are changing, as well. For example, FD-SOI has a reputation for being a more expensive approach than bulk silicon, but the total cost of design and development at 28nm using FD-SOI is actually lower than with bulk CMOS at 20nm, which requires double patterning, or with 16nm and 14nm finFETs. STMicroelectronics has been touting results that show FD-SOI leakage and performance are comparable to finFETs, particularly when back biasing is used.

Other materials are finding their way into designs for very specific uses, as well. Consider man-made diamond substrates, for example. Diamond will add about $10 to the cost of a chip, but it is incredibly efficient at getting rid of heat—something that has been a big problem in very densely packed devices and stacked die. A copper-tungsten heat sink, in comparison, runs about 30 cents, but diamond’s effectiveness is significantly higher—something that is tough to put a price on in complex designs.

Diamond has been popular in high-performance networking backbones and military equipment for just that reason. “If your chip is running at 120 degrees, you may be able to drop the temperature by 60 degrees using a heat sink,” said Adrian Wilson, head of the technologies division at Element Six, which is owned by De Beers. “But with a diamond, you can lower that by about 115 degrees.”

Even better, the heat can be channeled in a specific direction so it can be cooled using a variety of techniques ranging from microfluidics to a heat sink located away from a critical signal path.
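
Why diamond is so effective becomes clearer with a rough spreading-resistance estimate. The sketch below is only a back-of-the-envelope model, assuming a small circular hotspot conducting into a thick spreader and using the textbook constriction-resistance formula R ≈ 1/(4·k·a); the power level, hotspot size and conductivity values are generic assumptions, not figures from Element Six or any vendor quoted here.

```python
# Back-of-the-envelope spreading resistance for a small hotspot on a thick
# heat spreader: R_spread ~= 1 / (4 * k * a) for a circular source of radius a
# conducting into a half-space of thermal conductivity k.
# All numbers below are illustrative assumptions, not vendor data.

def spreading_resistance(k_w_per_mk, radius_m):
    """Constriction/spreading resistance of a circular hotspot, in K/W."""
    return 1.0 / (4.0 * k_w_per_mk * radius_m)

hotspot_power = 50.0     # W concentrated in the hotspot (assumed)
hotspot_radius = 1e-3    # 1 mm radius (assumed)

materials = {
    "copper-tungsten": 200.0,   # W/(m*K), typical CuW composite
    "CVD diamond": 1500.0,      # W/(m*K), conservative value for synthetic diamond
}

for name, k in materials.items():
    delta_t = hotspot_power * spreading_resistance(k, hotspot_radius)
    print(f"{name:15s}: hotspot rises ~{delta_t:5.1f} K above the spreader base")
```

Under these assumptions the diamond spreader cuts the hotspot temperature rise by roughly an order of magnitude. The exact figures depend heavily on geometry, but the direction matches the numbers quoted above.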

New architectures
Still, materials are only part of the answer to improving performance and power. Bottlenecks in the architecture itself are a big problem, particularly with thin wires wrapped like a tangled knot around memories. The thin wires themselves are a problem because they increase resistance. But so is the length of the wire, which increases the distance a signal has to travel and the amount of energy required to drive that signal.
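
To make the wire problem concrete, a first-order model is enough: resistance scales as R = ρL/(W·T), capacitance roughly with length, delay with the distributed RC product, and switching energy as C·V². The sketch below is a generic illustration using assumed dimensions and material values, not data from any company mentioned in this article.

```python
# Why thin, long wires hurt: R = rho * L / (W * T), C ~= c_per_um * L,
# distributed-RC (Elmore-style) delay ~ 0.4 * R * C, switching energy = C * V^2.
# Geometry and material values are generic assumptions for illustration.

RHO_CU = 2.2e-8      # ohm*m, effective copper resistivity (higher in narrow wires)
C_PER_UM = 0.2e-15   # F per um of wire, rough capacitance per unit length
VDD = 0.9            # V, assumed supply voltage

def wire_metrics(length_um, width_nm, thickness_nm):
    """Return (resistance_ohm, delay_ps, energy_fj) for a single wire."""
    length_m = length_um * 1e-6
    cross_section_m2 = (width_nm * 1e-9) * (thickness_nm * 1e-9)
    r = RHO_CU * length_m / cross_section_m2   # total wire resistance
    c = C_PER_UM * length_um                   # total wire capacitance
    delay_ps = 0.4 * r * c * 1e12
    energy_fj = c * VDD ** 2 * 1e15
    return r, delay_ps, energy_fj

for length_um in (100, 500, 1000):
    r, delay, energy = wire_metrics(length_um, width_nm=40, thickness_nm=80)
    print(f"{length_um:5d} um wire: R = {r / 1e3:5.1f} kohm, "
          f"delay ~ {delay:6.1f} ps, energy ~ {energy:6.1f} fJ per toggle")
```

Because resistance and capacitance both grow with length, delay rises roughly with the square of wire length while switching energy rises linearly, which is why shorter routes pay off in both performance and power.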

Xilinx recognized this problem—and addressed it with a new architecture it introduced this week. The new UltraScale architecture is built around an interposer, with a distributed, hierarchical clock structure to further reduce congestion.

“We saw the problem as interconnect bottlenecks,” said David Myron, senior director of FPGA product management and marketing at Xilinx. “With the new architecture we’ve been able to utilize silicon at 90% and improve static power by 35%. What’s behind this concept is that we re-thought routing. As a result we get better packing density, lower power because of shorter wires, and a 2x improvement in performance or a 50% decrease in power.”
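
The “2x performance or 50% power” framing follows from the standard first-order dynamic power relation P = α·C·V²·f. The sketch below is a generic illustration of that trade-off, not Xilinx’s own analysis; the activity factor, capacitance, voltage and clock values are assumed.

```python
# First-order dynamic power: P = alpha * C * V^2 * f.
# Generic illustration (not Xilinx's analysis): halving the switched
# capacitance, e.g. through shorter routing, can be spent either on lower
# power at the same clock or on a higher clock at the same power.

def dynamic_power(alpha, c_switched_f, vdd_v, freq_hz):
    return alpha * c_switched_f * vdd_v ** 2 * freq_hz

ALPHA = 0.15      # assumed average switching activity
VDD = 0.9         # V, assumed supply
C_BASE = 5e-9     # F of switched capacitance in the baseline design (assumed)
F_BASE = 500e6    # Hz baseline clock (assumed)

baseline = dynamic_power(ALPHA, C_BASE, VDD, F_BASE)
half_c_same_clock = dynamic_power(ALPHA, C_BASE / 2, VDD, F_BASE)
half_c_double_clock = dynamic_power(ALPHA, C_BASE / 2, VDD, 2 * F_BASE)

print(f"baseline:              {baseline:.2f} W")
print(f"half C, same clock:    {half_c_same_clock:.2f} W  (~50% lower power)")
print(f"half C, double clock:  {half_c_double_clock:.2f} W  (2x clock at the same power)")
```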

How this approach fares in the market remains to be seen. Stacking die—both vertically and in a planar configuration—remains a major option for improving energy efficiency and/or performance. It’s also a logical next step, particularly for integration of analog IP, which is why other companies are working on their own versions of 2.5D and 3D stacks. Some of those are being done in conjunction with finFETs, FD-SOI, and other new materials. But because of the performance and power benefits, Xilinx has started billing its new architecture as the first programmable 3D-IC and all-programmable SoC.

Packaging and integration
Possibly the furthest along on this path is the Hybrid Memory Cube consortium, which has begun stacking memory on a logic platform with through-silicon vias to connect the different layers. Micron, which has been talking about the architecture for the past couple of years, is pushing beyond a laminate package into a fully integrated “chip scale package.”

“The big need now is for a low-cost interposer,” said Scott Graham, general manager of Hybrid Memory Cube. “Maybe it will be made out of a different material than 300mm silicon.”

SK Hynix also is working on thinner packaging to address low power consumption, form factor and stackability. Woong-Sun Lee, 3D project manager at Hynix, said higher-density packages will begin showing up next year, with current research focused on fan-in and fan-out packaging, stacked die using TSVs, and embedded die within a substrate.

Conclusions
All of these options are under consideration, but all of them present challenges. At the top of the list is cost, which is why so many test chips are being created these days and so few actual production chips. Even with finFETs, the only volume production is at 22nm by Intel, and that doesn’t require double patterning.

Having a number of good options to weigh is better than having too few. Still, from a tools perspective, this makes it difficult to bet on the winning approach, so most of the tools vendors are holding back from committing to anything except finFETs at 16nm and 14nm. Nevertheless, they’re all watching closely, because with this many options the number of viable choices has to narrow.


