Is 2.5D Cheaper?

We may never know. But even if it is, companies may not recognize it.

For the past several years, as 2.5D was being tested, the most common response from chipmakers and tool vendors was that the interposer used to connect multiple die in a package was far too expensive. It was essentially the same argument that mask costs are rising too high to keep building complex planar SoCs at 16/14nm, or that FD-SOI is more expensive than bulk silicon at 28nm.

The criticism in all of these cases may be misplaced. There are certainly increased costs with each new generation of complex SoCs, but there also are some savings. In some cases, those savings can be significant.

With the second generation of high-bandwidth memory (HBM-2), price and power are lower than with standard DRAM, performance is higher, and the form factor is significantly smaller. Kevin Tran, senior manager of technical marketing at SK Hynix, said during a presentation this week that a four-chip stack of HBM-2 memory is roughly equivalent to 160 DDR4 modules. On top of that come increased bandwidth, lower latency, significantly lower power and a much smaller footprint.

And that, he said, is just the beginning: HBM will follow the same downward price curve as DRAM. “When this reaches mass production we will be able to fine-tune the industry,” he said. “Right now HBM is becoming a new layer between main memory and cache.”
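The kind of multi-axis comparison being made here can be sketched in a few lines. The figures below are placeholder assumptions chosen purely for illustration, not SK Hynix's numbers or any vendor's specifications:

```python
# Illustrative comparison of two memory configurations on bandwidth,
# power and board/package area. All figures are assumed placeholder
# values, not vendor specifications.

def advantage_of_b_over_a(a, b):
    """Per-metric ratios where >1 favors configuration b.

    For power and area, lower is better, so the ratio is inverted.
    """
    return {k: (a[k] / b[k]) if k in ("power_w", "area_mm2") else (b[k] / a[k])
            for k in a}

# Hypothetical numbers for one DDR4 channel vs. one HBM2 stack.
ddr4_channel = {"bandwidth_gbs": 19.2,  "power_w": 5.0, "area_mm2": 800.0}
hbm2_stack   = {"bandwidth_gbs": 256.0, "power_w": 7.0, "area_mm2": 100.0}

advantage = advantage_of_b_over_a(ddr4_channel, hbm2_stack)
print(advantage)

# Absolute watts per stack can be higher; the win shows up in
# bandwidth per watt, not raw power draw.
bw_per_watt_gain = ((hbm2_stack["bandwidth_gbs"] / hbm2_stack["power_w"])
                    / (ddr4_channel["bandwidth_gbs"] / ddr4_channel["power_w"]))
print("bandwidth per watt gain:", bw_per_watt_gain)
```

The point of structuring it this way is that "better" depends on which ratio you look at, which is exactly why single-number comparisons between memory types mislead.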

There are two issues here that obscure any benefits. One is that it is increasingly difficult to compare technologies, because the speeds and feeds don’t necessarily line up. It’s fairly straightforward to say DDR3 is slower than DDR4 on paper, but throughput in and out of memory has proved to be a big bottleneck, which is why it has taken so long to even consider moving to DDR4 in the server world. In some architectures, DDR4 performs no better than DDR3. And how either compares to HBM, which uses an entirely different conduit for signals and potentially a different architecture, isn’t as simple as lining up the specs.

Second, this isn’t just about the cost of the memory or the interposer. It’s also about the way costs are apportioned to various departments inside chipmakers. There has been a concerted effort to break down engineering silos, mixing engineers with skills in software, front-end and back-end hardware, analog/mixed signal and power throughout the design flow. Far less effort has gone into understanding how the more fundamental business processes need to be adjusted.

This is certainly the case with 2.5D and fan-outs. Smaller die yield significantly better than large ones, even if the cost of putting them together is higher, which is the reason Xilinx and Altera began building 2.5D chips in the first place. And if those die can be hooked up relatively efficiently, with less effort spent debugging power and thermal issues, then the overall cost of designing, building and manufacturing these chips should continue to drop. Just as designs increasingly need to be looked at from a system level, so does the cost structure for designing and manufacturing them.
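The yield argument can be made concrete with the standard Poisson yield model, in which the probability of a defect-free die falls exponentially with area. The defect density and costs below are assumed for illustration, not foundry data:

```python
# Why several small die can cost less than one big one, even after
# paying for an interposer and assembly: a Poisson yield sketch.
# Defect density and all costs are illustrative assumptions.
import math

def die_yield(area_cm2, defects_per_cm2):
    # Poisson model: probability a die has zero killer defects.
    return math.exp(-defects_per_cm2 * area_cm2)

def cost_per_good_die(die_cost, area_cm2, d0):
    return die_cost / die_yield(area_cm2, d0)

D0 = 0.5  # assumed defect density, defects per cm^2

# One 4 cm^2 monolithic die vs. four 1 cm^2 die plus assembly.
monolithic = cost_per_good_die(die_cost=40.0, area_cm2=4.0, d0=D0)
partitioned = (4 * cost_per_good_die(die_cost=10.0, area_cm2=1.0, d0=D0)
               + 15.0)  # assumed interposer + assembly cost

print(round(monolithic, 2), round(partitioned, 2))
```

With these numbers the partitioned design wins by a wide margin despite the assembly overhead, because yield loss on the big die compounds exponentially; the crossover point depends entirely on defect density and assembly cost, which is why the accounting question in the surrounding text matters.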

This isn’t helped by rampant consolidation across the semiconductor industry over the past couple of years. It’s hard enough to integrate technology and engineering resources. Fusing together the business side can be even tougher.

Business processes within a big systems company can be every bit as complex as the products it creates, but those processes typically change far more slowly than the technology does. That gap matters. Business needs to move in sync with its products, and the business of technology needs to move at a much faster pace than most other businesses to remain competitive.

Related Stories
Will 3D-IC Work?
Racing To Design Chips Faster