Different Tradeoffs

Changing priorities, different communications approaches, and new computing and storage schemes are altering some of the basics in IC design.

By Ed Sperling
The push to “smaller, faster and cheaper” hasn’t changed since ICs were first introduced, but the context for those requirements is beginning to shift—with enormous consequences.

What was once done on multiple chips continues to migrate to a single chip or package because of cost, but in some cases the decisions about what goes where extend well beyond an individual device to include a network of systems. Power and heat have forced some of those decisions. Others are being driven by shorter market windows that affect business decisions about exactly when to move to smaller, faster and cheaper, and whether to keep a design in two dimensions or move to three. In some cases, it even has evolved into a tradeoff about sharing resources to make up for additional costs elsewhere in a design.

“Form factor is everything in a lot of these cases, and you’re being forced to make tradeoffs involving a lot of different pieces,” said Mike Gianfagna, vice president of marketing at Atrenta. “But that requires you to know exactly what you’re doing. A lot of times you don’t. What happens when you reduce the number of layers? Do you know the impact on the system? You may not. But competitive pressure is also forcing you to rethink everything.”

Rethinking designs
Some of these changes are as fundamental as where the processing gets done. While the concept of cloud computing has been around since the days of time sharing on mainframe computers in the 1960s, the ability to offload processing and storage on the fly—and to load balance across compute farms around the globe—adds a modern twist to it all.

The result is a handheld device with the performance capabilities of a compute farm—but with the design focused far less on local processing and storage and more on communication and battery life.

This is evident with a number of upcoming communications schemes and protocols in the handheld market. LTE Advanced, for example, which is expected to find its way into smart phones and base stations over the next four years, focuses on reducing power while increasing performance. One of the best ways to do that is by shifting what processing is done where.

“One of the key decisions is how much processing and intelligence is in the cell phone versus the cloud,” said Graham Wilson, a product marketing manager at Tensilica. “You also have to understand deeply what cores are being used for. There is no room for fat. We’re also going to see a big shift in infrastructure from homogeneous to heterogeneous.”

That means rather than a giant cell tower on the highest hill or building, smaller boxes will be mounted on houses and strung together in a mesh network. “Every house will have its own femto cell or pico cell box so they’re less reliant on the macro cell and they work off each other,” Wilson said.

That changes what resources can be committed within a design to processing, to communication, to storage, and where it can be done best—whether it’s a central processing unit or lots of smaller processors for individual uses. It also boosts the ability to cut some costs in different places than just by shrinking the process geometries in a design.

The Low-Latency Interface working group of the MIPI Alliance, for example, is currently working on a new standard that allows DRAM memory to be shared between two chips. NoC technology vendors, in particular, have felt this push because sharing memory requires a highly efficient network-on-chip infrastructure.

“The big advantage is that it allows you to get rid of an entire memory chip,” said Kurt Shuler, vice president of marketing at Arteris. “The modem and the application processor are sharing the same memory. You also reduce the number of pins, which is important because it allows you to use those pins for other things.”

He notes there is a very slight performance hit. But the ability to eliminate an entire memory chip can save a couple of dollars in a design. Multiply that by millions of units and the savings are huge—far greater than what comes from just shrinking the features on a die.
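The scale of those savings is easy to sketch with a back-of-envelope calculation. The unit cost and volume figures below are illustrative assumptions, not numbers from the article:

```python
# Back-of-envelope savings from eliminating a discrete DRAM chip
# by sharing memory between the modem and application processor.
# Both figures below are assumed for illustration.

dram_unit_cost = 2.00       # assumed cost of the eliminated DRAM, in dollars
units_shipped = 50_000_000  # assumed production volume

total_savings = dram_unit_cost * units_shipped
print(f"Savings at volume: ${total_savings:,.0f}")  # Savings at volume: $100,000,000
```

Even at these conservative assumptions, the savings dwarf the incremental per-die benefit of a process shrink, which is the point being made.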

Rethinking packaging
Stacking die offers another route to improving performance and time to market, but the tradeoff will be in cost unless additional components can be eliminated. Adding an interposer layer or TSVs will be expensive—at least initially—even though 2.5D and full 3D stacking hold the promise of dramatically improving performance through shorter distances, bigger pipes for data, and lower power because signals will not have to be driven as far.

While this packaging approach is still under development, foundries report that chips using it are already rolling out. “This is already happening,” said Luigi Capodieci, R&D Fellow at GlobalFoundries. “It’s mostly a decision of which design processes to use in the chip, and that decision will have to be made by the chip designers.”

Stacked die also allow IP developed at older nodes—particularly analog—to be attached through Wide I/O to other chips developed at more advanced processes. That, at least in theory, substantially reduces the time it takes to design a chip because much of it can be based on what has been previously developed.

“Re-use leads to a reduction in time to market,” said Shrikrishna Gokhale, COO and managing director of Open-Silicon’s India unit. “This opens up the lifecycle of different IP and puts the emphasis on packaging and re-use.”

It also puts greater emphasis on software-hardware co-design, he said, and requires more emphasis on defining partitioning earlier in the architecture phase. In addition, it requires a rethinking of what gets done where. Some portions of the design that used to be in separate locations now have to be co-located in the same place because of the constant need to update models and data for both hardware and software teams.

“The logic front-end design needs to be done at the same location as the software,” he said. “That’s less important at the back end, which is the physical implementation.”

Other tradeoffs are less obvious, though, particularly to design engineers. One involves weight.

“Half the weight of a tablet is the battery,” said Drew Wingard, CTO of Sonics. “You can’t afford to add a bigger battery so you have to do an increasing amount of computation with lower power. That means you look at more efficient ways of doing that computing. One is using the GPU as a general-purpose CPU, which allows you to get a lot of performance at low energy.”

He noted that utilizing the GPU requires it to be easily accessible to software developers. It also demands much better management of clock domains, voltages and on-off functionality within an acceptable power budget. And to be truly energy-efficient, users need to be able to easily input their own usage models.

Rethinking manufacturing
Some of the changes that are under way are forcing a major shift in manufacturing, too. Staying on the Moore’s Law road map has always been a given for high-volume digital designs, but with double patterning required at 14nm and the delay in extreme ultraviolet lithography, alternatives are being considered that could have ramifications throughout IC design.

“Double patterning is the biggest issue we’re dealing with right now,” said Jean-Marie Brunet, director of product marketing for model-based DFM and place and route integration at Mentor Graphics. “We’re even looking at triple patterning, but there is no way to have density balance between the layers when you do that.”

Lars Liebmann, an IBM distinguished engineer, said his company has been working on commercializing self-assembly for finFETs because even multi-patterning isn’t sufficient beyond 14nm. That has implications throughout the design chain. For one thing, it can increase the density on existing process nodes. For another, many of the tools for automating design, particularly on the DFM side, will need to be rewritten.

Conclusion
Area, power and performance have always been the standard metrics for tradeoff in any IC design. What’s changing significantly is why those tradeoffs are being made and where the benefits will show up. Changes targeted at an individual chip in the past, or even a block or subsystem, may now be aimed at a much broader level.

The good news is that infrastructure changes—everything from manufacturing approaches to communications networks—evolve much more slowly and deliberately than those made in the individual device or chip. The bad news is that those changes sometimes move so slowly that they constrain what can be done elsewhere in this much broader system. But some change is under way at every level, and managing that change—and the tradeoffs it will demand—will be much more challenging in the future.
