The Turning Point

Why MCMs are becoming less expensive than single-die solutions.


By Javier DeLaCruz

In the epic battle of cost and performance, MCMs (multi-chip modules) had generally lost to SoCs (systems on chip) due to higher package-assembly costs and lower performance. The tides are turning.

Four factors have come into play recently:

  1. Package assembly costs of MCMs have been dropping in recent years.
  2. MCM package technologies are becoming commonplace instead of being relegated to certain applications such as memory or image sensors.
  3. Emerging technologies are eliminating performance limits on MCMs.
  4. Tapeout costs are increasing exponentially as wafer technology nodes shrink.

Tapeouts that cost only about $200k a few nodes ago now run into the millions of dollars at 28nm. This means companies will need fewer tapeouts to support their product lines. The risk with each tapeout is higher, and there is little room for adding different flavors of a product unless there is a considerable market for each individual one.

MCMs may serve as a solution for this issue. One approach for MCMs is to create a base chip that can interface with several different devices to make a family of products. The base chip can remain the same, but it can be paired with complementary devices and potentially different packages to gain the variety of interfaces or functionality needed to serve multiple markets or customer requirements.
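
To make the economics concrete, here is a minimal back-of-the-envelope sketch (in Python) of the base-chip approach. Every figure in it is an illustrative assumption rather than a quoted cost, and the break-even point will shift with the real numbers for any given program.

```python
# Illustrative-only sketch of the NRE argument above. All dollar figures are
# hypothetical assumptions, not quotes: an advanced-node tapeout in the low
# millions, a reused or older-node companion die, and per-variant MCM
# packaging NRE.

ADVANCED_TAPEOUT_NRE = 3_000_000   # assumed cost of one 28nm tapeout
COMPANION_DIE_NRE    = 400_000     # assumed cost of a reused/older-node companion die
MCM_PACKAGE_NRE      = 250_000     # assumed substrate/assembly NRE per package variant
N_VARIANTS           = 4           # product flavors needed to cover a market

# Option A: a dedicated SoC tapeout for every product flavor.
soc_family_nre = N_VARIANTS * ADVANCED_TAPEOUT_NRE

# Option B: one base-chip tapeout, paired with different companion die and
# packages to create the same number of flavors.
mcm_family_nre = (ADVANCED_TAPEOUT_NRE
                  + N_VARIANTS * (COMPANION_DIE_NRE + MCM_PACKAGE_NRE))

print(f"SoC family NRE: ${soc_family_nre:,}")
print(f"MCM family NRE: ${mcm_family_nre:,}")
print(f"NRE avoided:    ${soc_family_nre - mcm_family_nre:,}")
```

With these placeholder numbers the MCM family avoids more than half of the advanced-node NRE; the point is the structure of the comparison, not the specific values.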

What drives the cost?
The interconnect between these chips is the main cost factor. Direct die-to-die wirebonding can be the least expensive, assuming the die are planned for MCM integration. That means there are no funky wirebond angles or dense traces on a laminate acting as jumpers to move a signal from one side of the die to the other. Mixing interconnect methodologies such as flipchip and wirebond can drive up costs, but even this option is becoming mainstream.

The single most expensive cost driver is poor planning. If two die are designed in isolation, integrating them into a low-cost MCM will be more expensive (possibly much more so) than if at least one of the die had been designed with an awareness of how it would interconnect with the other die in the MCM package.

Area Benefit
Package-on-Package (PoP) technologies have been mainstream for some time. The A4 processor in the latest Apple iPhone used a flipchip die in the bottom package and two wire-bonded stacked memories in the upper package. The benefit here was driven mainly by board area, but this high-volume application is also helping to drive down the cost of this approach across the industry.

Applications such as high-end network processors are not as cost sensitive. For these applications, the adoption of MCM packaging has really been driven by area reduction and external pin-count reduction. By bringing the memory devices into the package, the number of balls needed to connect to the PCB is considerably reduced. While this may not initially appear to lower the cost of the package itself, it does reduce the overall system cost. Depending upon how it is executed, bringing external memory (bare die or packaged memory) into the MCM may be a much less expensive option under some circumstances.
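
As a rough illustration of the ball-count argument, the sketch below tallies the package balls freed up when external memory interfaces move inside the MCM. The signal counts and power/ground overhead are assumed, order-of-magnitude placeholders, not the pinout of any particular memory standard.

```python
# Rough, assumption-only illustration of external ball-count reduction when
# memory interfaces are brought into the MCM. Signal counts are placeholders,
# not the pinout of any specific memory standard.

SIGNALS_PER_MEM_CHANNEL = 60   # assumed data + address + control signals per channel
PWR_GND_OVERHEAD        = 0.5  # assumed extra power/ground balls per signal ball
N_CHANNELS              = 2    # external memory channels moved into the package

balls_per_channel = SIGNALS_PER_MEM_CHANNEL * (1 + PWR_GND_OVERHEAD)
balls_saved = int(N_CHANNELS * balls_per_channel)

print(f"Package balls no longer needed on the PCB side: ~{balls_saved}")
# Fewer balls can mean a smaller BGA body or fewer PCB layers for escape
# routing, which is where the system-level cost saving comes from.
```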

What will drive MCM adoption?
Cost and area savings are clear and valid reasons for moving toward MCMs, but a drive toward flexibility may be the next large factor. The ever-increasing cost of a tapeout means it may no longer be practical to tape out a separate part for each segment of a market. Instead, a single base tapeout that integrates other devices to add flexibility can yield a whole family of parts.

This is a new angle on design re-use. Re-use was a cornerstone of SoC IP, but now the re-use may come from die that are shared across multiple packages to mix and match interfaces or add functionality. We have already seen many of these strategies in the market, as well as in our own activity. This driver may not tackle the unit-cost challenge, but it certainly addresses the issue of rising NRE costs for the most advanced nodes. The difficulty here is changing the mindset of ASIC design teams so they start with this end-game strategy in mind.

This MCM activity requires far more concurrency with what were once considered downstream activities such as packaging, thermal management, and signal integrity. Some companies will understand this and take advantage of it sooner than others. Once they have established a critical mass of complementary die for adding flexibility to their ASICs, they will have an advantage over those who underestimated the extent of the potential benefits of MCM adoption.

–Javier DeLaCruz is eSilicon’s Semiconductor Packaging Director.


