
Architecting Interposers

It’s not easy to include interposers in a design today, but as the wrinkles get ironed out, new tools, methodologies, and standards will enable it for the masses.


An interposer performs much the same function as a printed circuit board (PCB), but when the interposer is moved inside a package, the impact on how the system is designed and analyzed is significant.

Neither legacy PCB nor IC design tools can fully perform the necessary design and analysis tasks. But perhaps even more important, adding an interposer to a design may require organizational changes. Today, leading-edge companies have shown that interposers can work, enabling them to build bigger and more capable systems than if they were to rely on technology scaling alone.

The interposer is a foundational element of 2.5D and 3D packaging technologies and in the future will enable a chiplet market to develop. “We can define 2.5D packaging as a multi-chip(let) design that uses an interposer as an intermediate level of interconnect between the chip(lets) and the organic BGA/LGA package,” says John Park, product management group director for IC packaging and cross-platform solutions at Cadence. “The interposer material can be silicon with through-silicon vias (TSVs), organic redistribution layer (RDL), or glass. In the case of silicon interposers, they can be passive or active (device layers).”

The task the interposer performs is to electrically connect signals in different chips/chiplets. “Typically, interposers provide a bridge highway between the ASICs or chiplets,” says Yun Chase, senior staff, signal and power integrity applications engineer for Siemens EDA. “It requires that high-speed data must be communicated by the PHY bridges, so minimizing the signal layer transition between the PHY is critical.”

The interposer reduces the number of signals going off package. “Things are closer, and there are fewer parasitics, so you may think that design and analysis would be simpler,” says Manuel Mota, product marketing manager at Synopsys. “But think about high-bandwidth memory (HBM). This uses parallel interfaces for die-to-die (D2D) communications. The number of signals and the number of bumps that cross from one side to the other is much larger than when you were going outside of the package using some sort of serialization. A DDR memory interface typically has fewer pins than you will find on an HBM type of memory. So there is a different aspect of complexity when you get into this very crowded interposer and dies on top of it.”
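
The scale difference Mota describes can be illustrated with a rough back-of-envelope comparison. The figures below (a 1024-bit HBM-style bus, ~50% overhead for clocks, command/address, and spare lanes, and 32 Gbps serialized lanes) are illustrative assumptions, not vendor specifications:

```python
# Back-of-envelope: bump count for a wide parallel (HBM-style) interface
# vs. a serialized off-package link carrying comparable bandwidth.
# All figures are illustrative assumptions.

def parallel_signals(data_width, overhead_per_bit=0.5):
    """Data bits plus an assumed ~50% overhead for clocks, strobes,
    command/address, and redundancy lanes."""
    return int(data_width * (1 + overhead_per_bit))

def serialized_pins(bandwidth_gbps, lane_rate_gbps):
    """Pins needed for differential SerDes lanes at the same bandwidth."""
    lanes = -(-bandwidth_gbps // lane_rate_gbps)  # ceiling division
    return int(lanes * 2)                          # two pins per diff pair

hbm_pins = parallel_signals(1024)       # ~1536 interposer connections
serdes_pins = serialized_pins(6554, 32) # ~410 package pins for same data
```

Even with conservative assumptions, the parallel interface crosses the interposer with several times more connections than a serialized link would need at the package edge, which is exactly where legacy routing and analysis tools begin to struggle.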

Plus, this is being done to increase performance and decrease power consumption. “Interposers are mostly used to reduce the power needed to transfer huge amounts of data between different dies,” says Andy Heinig, group leader for advanced system integration and department head for efficient electronics at Fraunhofer IIS’ Engineering of Adaptive Systems Division. “The power reduction can only be realized if the signal swing is reduced to the lowest level because there is a strong correlation between the signal swing height and power consumption. This means that signal integrity problems are very significant in interposer designs.”
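
The correlation Heinig points to follows from the standard dynamic power relation, P = α·C·V²·f, where the swing voltage enters quadratically. A minimal sketch (capacitance, frequency, and activity values are illustrative assumptions):

```python
def switching_power_mw(c_pf, v_swing, freq_ghz, activity=0.5):
    """Dynamic switching power per signal wire: P = alpha * C * V^2 * f.
    c_pf is wire capacitance in pF, v_swing in volts, freq_ghz in GHz."""
    return activity * c_pf * 1e-12 * v_swing**2 * freq_ghz * 1e9 * 1e3

full_swing = switching_power_mw(0.5, 1.0, 2.0)  # 1.0 V swing
low_swing = switching_power_mw(0.5, 0.4, 2.0)   # reduced to 0.4 V
# Dropping the swing to 40% cuts per-wire power to 16% (0.4^2),
# but leaves far less margin for noise -- hence the SI sensitivity.
```

This is why interposer links push swing as low as possible, and why signal integrity margins shrink in direct proportion to the power saved.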

The incorporation of an interposer brings together elements of packaging, board design, chip design, and probably most importantly, system design. “This is not a serial process where you design your chip and then you think about the package and interposer,” says Synopsys’ Mota. “The interposer is central to the design of the system, and it needs to be aware of the dies that go onto it. The converse is also true. It requires a parallel approach or co-design of the whole system. Different teams need to be working closely together and they need to have a common language.”

What makes it different than a PCB? “An interposer design might be similar to a complex PCB design, but the complexity of advanced silicon interposers with TSVs and 3D integration far exceeds what is done on a typical PCB,” says Brad Griffin, product management group director for multi-physics system analysis, Cadence. “Design and analysis tools need to support much larger data sets and need to rely on large simulations using multiple compute engines in parallel. So while there may be similarities, this is a whole new world.”

Design flows
One of the buzzwords these days is ‘shift left’, and this is certainly true when it comes to integrating an interposer into the development flow. “A lot of problems are fundamental to the structure of how you are going to architect this multi-chip module, and you need to answer some questions early on,” says Marc Swinnen, director of product marketing at Ansys. “The thermal issue is obvious. You don’t want to take two very hot chips and stack them on top of each other. Which ones are the hot chips and where are the hotspots in those chips? Then, you need to think about the power distribution network and what sort of voltage drop to assume. All of that needs to be planned before you lock down the architecture. It is in the prototyping stage where a lot of this is happening, and a lot of the physics are pulled way to the left (shift left). This has to be analyzed very early on.”
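
One early floorplanning check Swinnen alludes to can be sketched simply: before locking down the architecture, flag hotspots on stacked dies that sit too close in the x-y plane. The coordinates and separation threshold below are hypothetical:

```python
def hotspot_conflicts(top_hotspots, bottom_hotspots, min_sep_mm=2.0):
    """Flag pairs of hotspots on vertically stacked dies that sit closer
    than min_sep_mm in the x-y plane, a rough proxy for thermal coupling
    risk. Hotspots are (x, y) coordinates in mm on each die."""
    conflicts = []
    for (x1, y1) in top_hotspots:
        for (x2, y2) in bottom_hotspots:
            if ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 < min_sep_mm:
                conflicts.append(((x1, y1), (x2, y2)))
    return conflicts

# Hotspots nearly aligned vertically -> flagged for re-floorplanning.
risky = hotspot_conflicts([(1.0, 1.0)], [(1.5, 1.5)])
```

Real prototyping tools solve a full thermal field, but even this kind of geometric screen, run early, prevents committing to a floorplan that can never be cooled.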

Thermal is an increasing challenge because it means that the individual dies are coupled even if they are not physically or electrically connected. This makes partitioning of the analysis problematic. Their close proximity also increases the likelihood of other forms of interference between dies. “Some people are looking at integrating photonics into the interposer,” says Kenneth Larsen, director of product marketing for Synopsys. “They are very noisy. There’s a lot of noise in the systems, and that impacts performance of the entire system.”

Several issues have to be dealt with. “Early prototyping/planning with predictive signal integrity (SI)/power integrity (PI) and thermal analysis using system technology co-optimization (STCO) techniques (see figure 1) is key to preventing the wrong floorplan scenario being implemented,” says Siemens’ Chase. “PCB and package-based workflows are architected to manage longer signal channels and non-Manhattan geometries that traverse multiple substrates such as motherboard, daughter board, and backplane-based systems.”

Fig. 1: Co-optimization is necessary when interposers are integrated. Source: Siemens EDA


But they may present challenges for existing PCB tools. “Some of the older tools have problems scaling up to 10,000 to 20,000 connections, and what we are seeing with interposers is that the connectivity is tremendous,” says Synopsys’ Larsen. “We are talking about hundreds of thousands, or millions of connections, and this is beyond many tools. You may have to consider a combination where you take IC tools and do what they are good at, such as dealing with these scales to take the interposer through analysis, optimization, and integrate that back into a more traditional packaging design. Organizations need to consider when to have discussions with architects, packaging leads, and silicon leads about the tooling that is required.”

This is not ideal. “Design flows will look like a hybrid, neither fish nor fowl, as we are converging domains,” says Keith Felton, product marketing manager for high density advanced packaging at Siemens EDA. “The PCB workflows can handle multi-substrate signal paths and manage different substrate materials and stackups. There are materials and structures that are silicon or silicon-like, and for these, IC design tools are best suited for their extraction and modeling.”

Until new tools are fully capable, it creates organizational challenges. “It is not a technical problem specifically, it’s an organizational problem,” says Ansys’ Swinnen. “This new technology won’t fit in the organizations they have. You have companies that approach it from two sides. First, you have companies that are more PCB-centric and approach it from the packaging and PCB side. They are now adding more detail about the chips. But you have other companies that are more chip-centric, and they view it from the chip side as adding more packaging and PCB type information. The two worlds are being pulled closer together.”

Additional aspects will be necessary in the flow, as well. “You can have thousands of signals going from one die to the other, and failures will happen,” says Mota. “How do you handle that? You do that with redundancy. We already see this with parallel interfaces like HBI or AIB and HBM. You need to have redundancy and the ability to do test and repair after assembly. That gives you some additional robustness.”
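
The test-and-repair scheme Mota describes can be sketched as a lane remapping: logical lanes are steered around physical lanes that failed post-assembly test, with extra physical lanes serving as spares. This is a simplified illustration, not any specific interface's repair protocol:

```python
def repair_map(n_logical, n_physical, bad_lanes):
    """Assign each logical lane to the next good physical lane; extra
    physical lanes act as spares. Returns None if too many lanes failed
    for the interface to be repaired."""
    good = [p for p in range(n_physical) if p not in bad_lanes]
    if len(good) < n_logical:
        return None  # not enough spares -- the die pair is a yield loss
    return {logical: good[logical] for logical in range(n_logical)}

# Four logical lanes over six physical lanes; lanes 1 and 3 failed test.
mapping = repair_map(4, 6, {1, 3})  # {0: 0, 1: 2, 2: 4, 3: 5}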

New tools are an essential element for increased adoption. “The EDA tools have part of that burden to carry because they should make this easier,” says Swinnen. “EDA tools have traditionally been quite separate between PCB, on the one hand, and chip on the other. It will require a new generation of tools specifically targeting 3D IC. Existing tools are somewhat myopic in the way they approach this, and they need a broader scope.”

Those tools also need to push into new grounds. “The physical size of a silicon interposer is limited to usually 1.5X or 2X reticle size,” says Park. “Because of this, companies want to evaluate moving some chiplets off the interposer and onto the laminate BGA. They need to electrically validate the performance, such as placing a high-speed SerDes on laminate BGA versus on the interposer. This is truly a system-level challenge.”
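
The size constraint Park cites can be made concrete. Assuming the common 26 mm × 33 mm full reticle field (~858 mm²), a quick feasibility check on an interposer floorplan looks like this:

```python
RETICLE_MM2 = 26 * 33  # assumed full reticle field, ~858 mm^2

def fits_interposer(width_mm, height_mm, reticle_multiple=2.0):
    """True if the interposer area stays within an assumed stitching
    limit of reticle_multiple x one full reticle field."""
    return width_mm * height_mm <= reticle_multiple * RETICLE_MM2

fits_interposer(40, 40)  # 1600 mm^2 -> fits within a 2x limit
fits_interposer(45, 45)  # 2025 mm^2 -> exceeds it; some chiplets must
                         # move off the interposer onto the laminate BGA
```

When the check fails, the system-level question Park raises kicks in: which blocks, such as a high-speed SerDes, can tolerate the poorer electrical environment of the laminate BGA, and which must stay on silicon.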

Vertical integration
The companies that have been successful using interposers also happen to be vertically integrated companies that can manage everything in the flow. They are tasked with the optimization of the complete system.

We have also seen one case, HBM, where the necessary standards have been put in place and this enables the design and fabrication of memories to be separated from the design of the chip and the interposer. Foundries also are attempting to provide proven interposer designs that can be re-used, reducing the risk associated with adopting this technology.

But there is a desire in the industry to completely break this apart. This would allow the chip design to be conducted by one or more companies, and the interposer design and integration to be conducted by a different company.

This requires not only more standardization in the tools, but also in the methodologies and interconnects themselves. “If you’re going to make a generic chiplet that can be used on any interposer, there are going to have to be standards,” says Swinnen. “You need some sort of guardbands that are built in. For example, you may define the voltage drop allowances. You have the same thing with PCBs, where you have the same sort of problems. Your voltage has to be within a certain level to even get the individual chips to work properly.”
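
A guardband of the kind Swinnen describes might be budgeted as follows. The nominal supply, per-segment IR drops, and 10% margin below are hypothetical values chosen for illustration:

```python
def within_guardband(vdd, drops_mv, margin_pct=10.0):
    """Check that summed IR drops along the delivery path (package ball,
    interposer plane, TSV/microbump, on-die grid) leave the die supply
    within margin_pct of nominal vdd. Budget values are assumptions."""
    delivered = vdd - sum(drops_mv) / 1000.0
    return delivered >= vdd * (1 - margin_pct / 100.0)

within_guardband(0.75, [20, 25, 15])  # 60 mV total drop -> within budget
within_guardband(0.75, [40, 30, 20])  # 90 mV total drop -> violates it
```

A standardized chiplet would publish its share of such a budget, so that any compliant interposer design can guarantee the chiplet receives a voltage at which it is specified to work.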

If the interposer and packaging market finishes up looking like the PCB market today, the notions of integration change. “One of the interesting trends with chiplets is that you can design a system with some amount of capability for one market, and then re-use most of the package — perhaps just removing one die to address the needs of a different market,” says Larsen. “This brings in a new dimension to complexity. You may have one interposer being used for multiple markets, meaning you plug in different dies depending on the end market.”

New tools and new models are required. “Signal integrity problems during the exploration phase arise due to missing predefined models of the interconnect,” says Fraunhofer’s Heinig. “During the verification phase, whole path parasitic extraction methods are missing. Currently, no real co-design methods for chip-package optimization are available on the market. Also, tools for early design explorations are not available but necessary.”

Conclusion
Tools that exist today provide enough capability to enable sophisticated, vertically integrated companies to make everything work. New tools, methodologies, models, and standards are required for this to become a mainstream technology, and that will take a concerted effort by the whole ecosystem. It has been demonstrated that it can be done, and this now needs to be broadened for the more general methodology to emerge.

Much of that work is underway, and when costs associated with the interposer are brought down, it will usher in a new generation of high-performance, low-power devices that can be customized at will. This is an essential element of More than Moore, and the rise of chiplets will dramatically reduce NRE costs associated with this type of design. This has been the goal of the DARPA Common Heterogeneous Integration and Intellectual Property (IP) Reuse Strategies (CHIPS) program that was launched in 2017, and it may be getting closer to reality.
