Getting chips out the door more quickly will require proven third-party IP, a mature ecosystem and a new architectural approach.
Speeding up production has been a mantra throughout recorded history, and presumably well before that. It's what technology was created for: roads, bridges, aqueducts, computers, the Internet, and everything that connects the real to the virtual world.
A speech by Nvidia chief scientist Bill Dally at DAC, arguing that two guys in a garage should be able to design a complex SoC in a couple of weeks and that there should be a market for hard IP, may seem a bit self-serving. But he does have a point, at least for some markets. You probably don't want a chip for your molecular modeling workstation created in two weeks, or one slapped together in a fortnight for your latest smartphone. But you might not notice much difference if the SoC is in your refrigerator, and maybe not even if it's the one in your DVR.
It's not that SoCs shouldn't be differentiated at all levels. It's that all the stuff inside that SoC doesn't need to be built from scratch. Whether that means soft IP that's proven in silicon and extensively characterized, or hard IP, is a matter of preference and debate. But the reality is that it takes too long to put the basic pieces together, even at older process nodes, and longer still to verify that it all works.
Clearly, this is getting some serious attention these days. The IP business is exploding, and the amount of third-party IP finding its way into complex SoCs is increasing rapidly. Two years ago that share was less than 50%; some predictions put it as high as 90% in the future. And the IP blocks are getting bigger and more integrated, first as subsystems and next as platforms.
This doesn't mean there will be fewer tools sold or less work for engineers. The opposite will likely be the case. But it may mean that an engineer who builds internal IP at a chip company either needs to look for a job at a commercial IP vendor, or expand his or her skill set to integrating third-party IP. In the future, IP will be created in a way that leverages economies of scale, with enough flexibility built into the platform, the network IP connecting it, and the IP itself to allow each to be optimized for unique purposes.
The emphasis on speed of development is real. Time does cost money. It shows up in NRE, multiple respins, poor yield that has to be fixed, and reliability issues that have to be addressed before and after production. And it shows up in terms of market windows that are caught or missed.
This may explain why so many inquiries have surfaced recently about stacked die, and why new approaches to die stacking, combining planar and vertical integration, are starting to appear. Rather than being reserved for the most advanced chips, where performance and power are the drivers, stacking is now being aimed at time-to-market issues such as running analog IP at older process nodes, reducing congestion and improving throughput.
The big challenge here is more of a supply chain issue than a technology issue. Who's responsible for ensuring that parts work together has never been fully worked out, but at least now there appears to be more impetus to solve those problems. No single company can turn out a design in two weeks, but a collection of companies working together conceivably could make that possible.
Dally may have sounded a trumpet, but it will require an entire ecosystem—including buy-in from Nvidia and many others—to bring it to fruition.
—Ed Sperling