What Else Can You Cram On A Chip?

As the number of transistors goes up, so does the number of engineers.

Gordon Moore should be proud. At every process node the number of transistors goes up, but so does the number of engineers you need to develop a chip.

This may not be immediately obvious to anyone who’s actually working on a new chip. You’re probably part of a team that uses fewer engineers than it did several years ago, buys off-the-shelf IP blocks, and leans heavily on design automation tools to make sense of this unbelievably complex project. But when you consider just how difficult it’s getting to create a new chip—multiple power domains, timing complicated by multiple cores and shared buses, more and more functionality, low-power requirements, verification and debugging—and how many people are working to make the pieces work together across these new flows, that’s more engineers than anyone would have dreamed of when Moore’s Law was first put to paper in 1965. Lots more.

In fact, when you really dig down into Moore’s Law, the economic formula isn’t quite as simple and clear-cut as it sounds. On paper, the number of transistors does double every couple of years, more or less. The number and the formula have been rewritten several times since they were first introduced, but the general scenario is the same whether it’s every 18 months or every 24 months. What isn’t so clear is exactly who is realizing the savings.
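
Just to put rough numbers on the doubling math (a back-of-the-envelope sketch, with a baseline chosen only for illustration), the gap between an 18-month and a 24-month doubling period compounds quickly:

# Rough sketch of compound transistor growth under two commonly cited doubling periods.
def transistor_count(initial, years, doubling_period_years):
    """Projected count if the transistor count doubles every `doubling_period_years` years."""
    return initial * 2 ** (years / doubling_period_years)

baseline = 2_300  # roughly the Intel 4004 (1971), used here only as a convenient starting point
for period in (1.5, 2.0):
    projected = transistor_count(baseline, years=10, doubling_period_years=period)
    print(f"Doubling every {period} years: about {projected:,.0f} transistors after a decade")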

Yes, it does cost less to manufacture a wafer with reduced line widths, if the yield is high enough. But that’s a big if. Defect density is higher at every process node, which is something most foundries and IDMs don’t like to talk about. A defect at 130nm may go unnoticed if it doesn’t interfere with a chip’s performance. That same defect at 65nm may have a completely different impact, and at 32nm it may wipe out multiple chips. That costs money.
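
To see how quickly that adds up, here is a textbook Poisson yield model with purely hypothetical numbers (nothing here comes from an actual foundry): even when a newer node packs more die onto a wafer, a higher defect density can make each good die more expensive.

import math

# Poisson yield model: the fraction of good die falls exponentially with die area x defect density.
def die_yield(die_area_cm2, defects_per_cm2):
    return math.exp(-die_area_cm2 * defects_per_cm2)

def cost_per_good_die(wafer_cost, dies_per_wafer, good_fraction):
    return wafer_cost / (dies_per_wafer * good_fraction)

# Hypothetical numbers, chosen only to show the trend.
mature = cost_per_good_die(wafer_cost=3000, dies_per_wafer=500, good_fraction=die_yield(1.0, 0.1))
newer = cost_per_good_die(wafer_cost=3000, dies_per_wafer=900, good_fraction=die_yield(1.0, 0.9))
print(f"Mature node: ${mature:.2f} per good die; newer node: ${newer:.2f} per good die")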

Tools are also more complex than they were in the past. Standards like TLM 2.0 do make it possible to re-use functional, power and timing models, but you have to create those models in the first place. It takes ongoing training and retooling for engineers to make that happen, and that takes even more money.

Nowhere is that more evident than in multicore chips. It can be argued that Moore’s Law really ended after 90nm, because technically multiple cores are multiple chips on a single piece of silicon. This is an argument you probably don’t want to get into, though. It’s like taking a definitive position on whether yeast is alive. Yes, it produces alcohol, but what else does it do? After 90nm, classical scaling ended: chips ran too hot, and the only way to solve the problem was to combine multiple chips on a single die and lower the clock speed. Is that one chip? Well, maybe.

One thing we know for certain, though. It’s a nightmare to write code that can scale on multiple processors. It takes a lot of software engineers a lot of time to do the same thing one engineer could do on a single core. And that costs even more money.
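
Amdahl’s law (a standard result, not something spelled out here) puts a number on that pain: even a small serial fraction caps how far code can scale, no matter how many cores you throw at it.

# Amdahl's law: overall speedup on n cores when a fraction p of the work parallelizes perfectly.
def amdahl_speedup(parallel_fraction, cores):
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# Code that is 90% parallel still tops out far below the core count.
for cores in (2, 4, 8, 16):
    print(f"{cores} cores: {amdahl_speedup(0.90, cores):.2f}x speedup")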

No one has ever done a full analysis of the costs behind Moore’s Law, but the numbers certainly aren’t as clean as the proponents of this equation would have you believe. As the front and back ends of chip development continue to merge, driven by things like restrictive design rules and design for manufacturing, it may be high time someone really looked at the total economic picture.

What do you think?

