Time to market and standard IP make it tougher to differentiate the platform, but there are viable options.
Time-to-market pressures and complexity have put the squeeze on design teams. They have to bring incredibly complex SoCs to market on time, make sure those chips are functionally correct and work within a tight power budget, and do it all at or under cost.
Amazingly, they’re still able to accomplish this, thanks to some heroic efforts on the part of engineers and some incredible advances in automation tools and methodologies. But the other part of the equation that is getting squeezed is differentiation. If you’re trying to get a chip out the door quickly, you have no choice but to gravitate to more standard parts and IP. It’s pre-tested and verified, even though integration isn’t always perfect, and it’s certainly better characterized than internally sourced IP.
So how exactly do you differentiate a chip? There are three distinct ways emerging, and all of them are viable.
1. Software. This is almost a universal answer when executives are asked the best way to differentiate. But there's a tradeoff with software in terms of reliability and complexity. The more complex the software, the more likely it is to fail at some point, and the more it will need constant updates and workarounds. While that may help with time to market, and in some cases overall cost, it's not always easy to maintain backward compatibility once the bug fixes start piling up.
The classic example of what can go wrong here involves assembly code written for government mainframe computers back in the 1960s. It needed so many fixes for so many years that it eventually had to be replaced because the people who fixed it in the first place had retired. This may sound absurd in light of two-year mobile device cycles, but many data centers are still running decades-old code. NetWare 386 still has a market.
The simpler the code, the more likely it is to work. Real-time OSes and embedded executable code are the best examples of how to make sure it does. Asking the software to do too much without breaking it into separate units (the divide-and-conquer strategy that has made verification possible) can quickly turn into a nightmare. And if you look at what's going on with centralized software management these days, think of Freddy Krueger as chief programmer.
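The divide-and-conquer point above can be sketched in a few lines. This is a minimal, hypothetical illustration (the sensor-pipeline task and all function names are invented for this sketch, not taken from any real design): each small unit can be verified in isolation, while a monolithic routine doing the same work would have to be tested as one opaque whole.

```python
# Hypothetical sensor pipeline: each stage is a small, separately
# verifiable unit rather than one monolithic function.

def read_raw(samples):
    """Validate input; a separately testable unit."""
    if not samples:
        raise ValueError("no samples")
    return list(samples)

def filter_noise(samples, threshold=100):
    """Drop out-of-range values; testable in isolation."""
    return [s for s in samples if abs(s) <= threshold]

def average(samples):
    """Reduce to one number; trivially verifiable."""
    return sum(samples) / len(samples) if samples else 0.0

def process(samples):
    """Compose the already-verified units instead of one big routine."""
    return average(filter_noise(read_raw(samples)))
```

Each stage can be exhaustively checked on its own, which is exactly why verification scales with this structure and collapses without it.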
2. FPGAs. They’re big, they’re expensive, and they’re slower than an ASIC. But the nice thing about FPGAs is they’re programmable, which makes them cheaper to develop if you don’t have the volume to support a $100 million ASIC/ASSP/SoC design.
FPGA prototypes have been a top choice for complex designs for years, and their popularity is growing as chipmakers are being forced by foundries to assume more of the risk for bad yields, a problem that will only grow worse as the industry shifts from today's 300mm wafers to 450mm. While the goal initially was to turn these prototypes into ASICs, the reality is that many will live on as FPGAs because they can be modified for specific markets much more easily.
The big question for FPGAs is whether they will ever become a standard platform for stacked die. One way to differentiate these packages is to program the firmware into the logic rather than relying entirely on software or different analog IP. The other is simply to streamline the logic into a simple digital platform and move the programming upstream. It's impossible to say which route will prevail, because this market is still in the test-chip phase, with too little data for a full tradeoff analysis. But it's certainly an interesting possibility.
3. Stacked die. That leads, of course, to the last option: modifying packages with pre-tested IP from older process nodes, stacking the die almost like high-rises on a base logic platform.
What's attractive about this approach is that it greatly extends the life of analog IP developed at older process nodes, and it can be used to target very specific market segments quickly from a well-tested base platform of logic, memory, I/O, power management and system software. What's less attractive, at least so far, is the lack of experience in this market, the added cost of an interposer or TSV engineering, and the need to realign the supply chain to work closely together on managing risk.
Still, these are three viable options for differentiation, and ultimately they all could find their way into a single design. Differentiation is far from dead. It’s just…well…different.
—Ed Sperling