The Growing Legacy Of Moore’s Law

The impact of transistor density now stretches from manufacturing equipment to water-cooled mainframes.

By Ed Sperling
Moore’s Law has defined semiconductor design since it was introduced in 1965, but increasingly it also is defining the manufacturing equipment, the cooling required by end devices, and both the heat and the performance of complete systems.

In the equipment sector the big problem has been the delay in rolling out extreme ultraviolet (EUV) lithography. At 22nm and beyond, Moore’s Law requires tighter spacing than a 193nm-wavelength laser can pattern in a single exposure, and at present the only alternative is double patterning. As the name implies, double patterning requires two passes through lithography, along with a much more complex mask set and significantly more time and expense per wafer. Moving NAND flash from 40nm to 22nm will require six extra steps. For logic and DRAM, the same shift will require an extra 10 steps each.
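
To get a feel for the arithmetic, the sketch below tallies the per-wafer impact of those extra steps. The 6- and 10-step figures come from the paragraph above; the baseline step count and per-step cost are illustrative assumptions, not fab data.

    # Back-of-envelope sketch of how extra patterning steps inflate per-wafer
    # cost. The 6- and 10-step figures come from the article; the baseline
    # step count and per-step cost are illustrative assumptions only.

    BASELINE_STEPS = 500      # assumed total process steps at 40nm
    COST_PER_STEP = 3.0       # assumed average cost per step, in dollars

    def wafer_cost(extra_steps):
        # Total per-wafer cost once double patterning adds steps.
        return (BASELINE_STEPS + extra_steps) * COST_PER_STEP

    for label, extra in [("40nm baseline", 0),
                         ("22nm NAND flash (double patterning)", 6),
                         ("22nm logic/DRAM (double patterning)", 10)]:
        added = wafer_cost(extra) - wafer_cost(0)
        print(f"{label}: ${wafer_cost(extra):,.0f} per wafer (+${added:,.0f})")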

Moving to 3D die stacking will alleviate some of this problem, because the analog content and at least some of the IP can be manufactured using older process technology. But for the memory, the logic and the processors, being able to etch more efficiently and quickly is a requirement.

Applied Materials, for one, is well aware of this trend. The company’s rollout this week in Japan of a new etch machine, the Centris Advantedge Mesa, raises the number of process chambers from four to eight. That roughly doubles etch throughput, which neutralizes the throughput penalty of double patterning. The company also says the machine cuts energy consumption in the etch process by 35%, uses less water and reduces carbon dioxide emissions.
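
The throughput argument works out roughly as follows. The sketch assumes a hypothetical per-chamber etch rate (not an Applied Materials specification) and shows why eight parallel chambers can restore the wafers per hour that a second patterning pass takes away.

    # Rough throughput sketch: with parallel process chambers, wafers per hour
    # scale with chamber count and drop with the number of patterning passes.
    # The per-chamber rate is an illustrative assumption, not an Applied
    # Materials specification.

    WAFERS_PER_CHAMBER_PER_HOUR = 15   # assumed single-chamber etch rate

    def module_throughput(chambers, passes_per_wafer):
        # Completed wafers per hour when each wafer needs several etch passes.
        return chambers * WAFERS_PER_CHAMBER_PER_HOUR / passes_per_wafer

    print(module_throughput(4, 1))   # single patterning, 4 chambers -> 60.0
    print(module_throughput(4, 2))   # double patterning, 4 chambers -> 30.0
    print(module_throughput(8, 2))   # double patterning, 8 chambers -> 60.0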

“This is a Moore’s Law machine,” said Thorsten Lill, vice president in Applied’s etch business group. “The goal is to decrease the cost by 30%.”

In addition to increasing the number of chambers, Lill said one of the advantages of this approach is a decrease in the number of defects. He said that when EUV finally does become commercially viable, it will complement double patterning, which by then will be a proven technology.

This same focus on density is forcing changes across the rest of the supply chain, as well. Large data centers have started to add water cooling (something they eliminated with the advent of client/server computing in the 1990s) because the density of the semiconductors, and of the blade servers packed into each cabinet, makes it almost impossible to cool the uppermost servers in a closed server cabinet. Coupled with the higher utilization that comes with virtualization and cloud computing, the amount of heat being generated is moving beyond the capabilities of forced-air cooling.
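
A back-of-envelope calculation shows why forced air runs out of headroom. The air properties below are standard values; the airflow, temperature rise and rack power figures are illustrative assumptions, not measurements from any particular data center.

    # How much heat can forced air actually remove from one server cabinet?
    # Air properties are standard values; airflow, temperature rise and rack
    # power figures are illustrative assumptions.

    AIR_DENSITY = 1.2          # kg/m^3, near room temperature
    AIR_SPECIFIC_HEAT = 1005   # J/(kg*K)

    def removable_heat_kw(airflow_m3_per_s, delta_t_c):
        # Heat an airstream carries away: Q = mass flow * c_p * temperature rise
        return airflow_m3_per_s * AIR_DENSITY * AIR_SPECIFIC_HEAT * delta_t_c / 1000

    # Roughly 1,500 CFM of airflow (about 0.71 m^3/s) with a 15 C rise across
    # the cabinet carries away on the order of 13 kW...
    print(removable_heat_kw(0.71, 15))   # ~12.8 kW

    # ...while a cabinet packed with dense blade servers can dissipate
    # 20-30 kW, which is why the topmost servers are so hard to keep cool.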

In response, IBM has begun offering a water-cooling option for its new mainframes, which the company says will boost performance and reduce cooling costs. “We’ve reached the heat transfer limit of air,” said Jack Glass, director of data center planning at Citigroup. “We’ve also reached the acoustic limits. The machines are getting too noisy.”

For related reasons, ARM is starting to gain a toehold in the enterprise server and networking markets, where it has had almost no presence for decades. “One of the main reasons why people are looking at ARM in servers is power,” said Pete Hutton, vice president of technology and systems for ARM’s advanced product development. “We’ve already shown that in Linux implementations we can improve power by two to three times, and that’s even without us doing software optimization.”

In a data center with thousands of servers and server racks, the cooling-cost savings from that kind of power reduction can amount to millions of dollars a year, which is enough to warrant serious consideration for Linux-based applications.
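
The “millions of dollars” figure is easy to sanity-check. Every input in the sketch below, from server count to energy price to cooling overhead, is an illustrative assumption rather than a number from ARM or any particular data center.

    # Sanity check on "millions of dollars a year." Every input here is an
    # illustrative assumption: server count, watts saved per server, cooling
    # overhead and electricity price.

    SERVERS = 50_000
    WATTS_SAVED_PER_SERVER = 150   # assumed savings from a lower-power processor
    COOLING_OVERHEAD = 0.5         # assumed watts of cooling per watt of IT load
    PRICE_PER_KWH = 0.10           # dollars
    HOURS_PER_YEAR = 8760

    it_kw_saved = SERVERS * WATTS_SAVED_PER_SERVER / 1000
    cooling_kw_saved = it_kw_saved * COOLING_OVERHEAD
    annual_cooling_savings = cooling_kw_saved * HOURS_PER_YEAR * PRICE_PER_KWH

    print(f"IT power avoided:       {it_kw_saved:,.0f} kW")         # 7,500 kW
    print(f"Cooling power avoided:  {cooling_kw_saved:,.0f} kW")    # 3,750 kW
    print(f"Annual cooling savings: ${annual_cooling_savings:,.0f}")  # ~$3.3M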

In the consumer space, this same density is forcing even more radical changes in design. Multicore processors are giving way to many-core processors; SoCs are using multiple voltage rails, sleep states and a growing number of power islands; and designs typically run at lower voltages than in the past.

This has made it extremely difficult to create designs from scratch, pushing much of the design industry toward 3D stacking. It also has increased the focus on power management across these chips, because thermal issues become far more important when silicon dies are layered on top of each other.

“The big issue is how you get the heat out so that non-volatile memory doesn’t fail,” said Hutton. “We’ve been looking at how you can aggressively turn off parts of the SoC and have more active control of the chip.”
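
A simple steady-state estimate illustrates why stacking raises the stakes. The thermal resistances and power figures below are assumptions chosen only to show the trend, not measured values for any real package.

    # Illustrative steady-state junction-temperature estimate for a die that
    # is buried under a stacked memory die versus one sitting directly under
    # the heat spreader. All resistances and power numbers are assumptions.

    AMBIENT_C = 45.0   # assumed air temperature inside the device enclosure

    def junction_temp_c(power_w, theta_ja_c_per_w):
        # Tj = Ta + P * theta_ja (junction-to-ambient thermal resistance)
        return AMBIENT_C + power_w * theta_ja_c_per_w

    print(junction_temp_c(3.0, 10.0))   # exposed die: 75.0 C
    print(junction_temp_c(3.0, 18.0))   # buried die:  99.0 C, near typical limits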

Perhaps even more troublesome is the other part of the Moore’s Law economic equation, one that often stays hidden from the rest of the world: how to reduce the cost of everything around the silicon in step with the reductions in the silicon itself. Aveek Sarkar, vice president of product engineering and support at Apache Design Solutions, said pressure will continue to cut costs on every side of an SoC, including the package.

“We’re going to see demand for one less layer of package, which means you have to do more with less,” said Sarkar. “You will be forced to squeeze a design into an available number of layers that is far less regular in terms of its structure. That will require a much more detailed power signal integrity analysis than in the past.”


