Design For Advanced Packaging

Stacking die is garnering more attention, but design flows aren’t fully ready to support it.

Advanced packaging techniques are viewed as either a replacement for Moore’s Law scaling or a way of augmenting it. But there is a big gap between the extensive work done to prove these devices can be manufactured with sufficient yield and the amount of attention being paid to the demands advanced packaging places on design and verification flows.

Not all advanced packaging places the same demands on tools and methodologies. Requirements for a 2.5D package differ greatly from those of a monolithic 3D IC. There also are chiplets, various types of fan-outs and fan-ins, and system-in-package approaches, as well as package-on-package and direct-bond approaches. Depending on the packaging type, a mix of PCB and IC design techniques and tools may be necessary. And no matter which type of package is used, adding formal verification methods may be required.

So where is the industry in adapting or adding the necessary tools and flows to make this technology available to the broader industry?

Market leaders always have been the first to move to the latest node because it provided them with scaling, power and performance advantages needed to maintain their competitive edge. “Monolithic scaling is coming to the end of its life for most people,” says Keith Felton, product marketing manager in the Board System Division of Mentor, A Siemens Business. “7nm is wickedly expensive, the yield per wafer is not that good, and you have to make millions of them in order to cover the NRE. When you have a chip that large, you are often better off breaking the design up into smaller blocks where you can use the appropriate node point or technology for that part of the chip, and then integrate it together on a silicon interposer. You get something that is much cheaper. You can bring to market faster. And if you want to do an update, you can just replace one or two of the chiplets and have a new product rather than having to respin a whole new SoC.”

While some of this may be a projection into the future, it’s what is driving the industry today.

“We now have some options that are currently fairly high cost, but there are a lot of advantages to them,” says John Park, product management director for IC packaging at Cadence. “Over the past few years we have been in a transition from what was a small PCB to something that is starting to look like a big integrated circuit.”

Park lays out the path that the industry has taken from leadframes, to ball grid arrays (BGAs), and now to 2.5D and 3D technologies (Fig. 1).


Fig. 1: Evolution of packaging technologies and development flows. Source: Cadence

With 2.5D, you might move memory from the board and integrate it next to the processor using a silicon interposer, which can reduce latency by shortening the distance and widening the data pipes. “What tool do you use for the implementation of the interposer?” asks Park. “What do you use for the routing? And once it’s routed, how do you tape it out? It needs to be taped out in an IC format. Historically, packages were taped out in a PCB format, such as Gerber or IPC2581.”
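The bandwidth side of that trade-off is simple arithmetic: peak throughput is pins times per-pin data rate. A back-of-the-envelope sketch (with hypothetical pin counts and rates, not any vendor’s specifications) shows why a wide, modest-speed interposer bus can outrun a narrow, fast off-package link:

```python
# Illustrative comparison (hypothetical numbers, not vendor specs) of moving
# memory traffic from a narrow, fast off-package link to a wide, slower
# interposer bus. Peak bandwidth = pins * per-pin data rate.

def peak_bandwidth_gbps(pins: int, gbits_per_pin: float) -> float:
    """Peak bandwidth in Gb/s for a parallel interface."""
    return pins * gbits_per_pin

# Narrow off-package link: few pins driven very fast (power-hungry SerDes).
board_link = peak_bandwidth_gbps(pins=64, gbits_per_pin=16.0)       # 1024 Gb/s

# Wide interposer bus: thousands of microbump pins at a modest rate.
interposer_bus = peak_bandwidth_gbps(pins=1024, gbits_per_pin=2.0)  # 2048 Gb/s

print(f"board link:     {board_link:.0f} Gb/s")
print(f"interposer bus: {interposer_bus:.0f} Gb/s")
```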

That will have a large impact on tools. “You need a PCB-like technology to do some of the routing, because those tools are a little more advanced when it comes to interactive and manual routing than traditional IC tools, which tend to be batch applications,” he notes. “But I also need some IC technology. I need to create mask layers and GDS because they will be manufactured using an IC design process. Once we go to 3D IC, that is purely an IC process. It goes from planning to signoff, and includes timing analysis. Plus, you need multiple-die LVS checking. The package designer changes from a board designer to a chip designer. It also extends into the ecosystem, and each new packaging variant requires a reference flow and an associated PDK.”
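The tapeout-format shift Park describes is concrete: interposer routing goes out as GDSII mask data rather than Gerber. Here is a minimal sketch using the open-source gdstk library; the layer numbers and geometry are hypothetical, not from any real PDK:

```python
# A minimal sketch of "taping out in an IC format": writing interposer
# routing as GDS layers with the open-source gdstk library, rather than
# exporting Gerber/IPC2581 as a board tool would. Layers are hypothetical.
import gdstk

lib = gdstk.Library("interposer_demo")
top = lib.new_cell("INTERPOSER")

# A routed net on a (hypothetical) metal layer 1: a 2 um wide path.
route = gdstk.FlexPath([(0, 0), (500, 0), (500, 300)], width=2.0, layer=1)
top.add(route)

# Microbump landing pads at the route endpoints, on a (hypothetical) layer 10.
for x, y in ((0, 0), (500, 300)):
    top.add(gdstk.rectangle((x - 20, y - 20), (x + 20, y + 20), layer=10))

# Mask data goes out as GDSII, ready for an IC manufacturing flow.
lib.write_gds("interposer.gds")
```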

This is more than just a tool change. The culture is changing along with the tools. “It blows my mind to see how little rigidity or formality there is in verifying package-level designs for assembly,” says John Ferguson, technical marketing engineer for Mentor. “There is a rough design rule manual, and if you follow that you should be able to manufacture it. Users would figure that out by eyeballing it. Nobody really cared much. Now we are talking about hundreds of thousands, or millions, of pins. And the idea of being able to check by looking at them is impossible.”

Helping the industry along is a DARPA program called CHIPS, which is driving the notion of chiplets. “In the past, all of the IP was on the same node,” explains Park. “Now you are blowing that apart and rebuilding it in a node-independent manner. SerDes can be 28nm, memory could be 32nm, video chip at 7nm, etc. I have that kind of flexibility. But it is more complicated than that because a chiplet is a third-party version of IP that has been physically realized.” These aspects of the flow will require some additional work at both the physical and the protocol levels.

Models and abstractions
Does the whole package need to be treated as a single chip? “We already have a challenge today doing analysis and verification of 100 million-gate designs,” says Frank Malloy, application engineer for 3DIC layout and verification at Synopsys. “Now you stack another 100 million gates on top, and if you try to treat it as one huge design your memory usage and runtime will be out of control. We need abstractions to be able to model and encapsulate certain parts of the design and to reduce the impact on memory and runtime.”
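What such an abstraction might contain can be sketched in a few lines. This model is purely illustrative, keeping only the interface pins and a lumped electrical view that an analysis tool could consume in place of a full 100 million-gate netlist:

```python
# An illustrative die abstraction: instead of loading a full netlist for
# each die, analysis tools consume a compact model that keeps only what
# the neighboring die can observe. All fields here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class DieAbstract:
    name: str
    # Interface pins visible to the other die (bump/microbump locations).
    pins: dict[str, tuple[float, float]] = field(default_factory=dict)
    # Aggregate electrical view instead of a full netlist:
    total_current_a: float = 0.0      # demand placed on the power grid
    grid_resistance_ohm: float = 0.0  # lumped supply-path resistance

    def ir_drop_v(self, supply_current_a: float | None = None) -> float:
        """First-order IR drop seen through this die's lumped grid."""
        i = self.total_current_a if supply_current_a is None else supply_current_a
        return i * self.grid_resistance_ohm

top_die = DieAbstract("top", total_current_a=5.0, grid_resistance_ohm=0.004)
print(f"top-die lumped IR drop: {top_die.ir_drop_v() * 1000:.1f} mV")
```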

But there also is information that is critical and must be shared between the pieces. “IR drop analysis is extremely critical in today’s complex designs,” says Malloy. “Now you have to calculate IR drop across a large die when you have another die on top of that, which has to feed power and ground from the package through the bottom die to the top die. The top die IR drop is going to be affected by the IR drop of the lower die, so we have to do a multi-die IR drop analysis.”

Pulling those pieces together under one design environment is an attractive way of reducing complexity.

“A model-based interface is an elegant solution for anyone trying to integrate multiple dies into a system and trying to deal with these kinds of interactions,” says Karthik Srinivasan, senior product manager for analog & mixed signal solutions at ANSYS. “IR drop can be done in an extracted way, but for someone who is assembling a system and has a true 3D IC—with one die interfacing to the bump and the other die getting fed through the microbump—they need to know the loading on the die in order to do a true IR drop analysis. You need a concurrent simulation.”
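The interaction Malloy and Srinivasan describe shows up even in a first-order model. In the sketch below (with illustrative resistances and currents), the top die’s supply current flows through the bottom die’s grid, so analyzing the top die in isolation understates its IR drop:

```python
# A first-order sketch of why stacked-die IR drop must be solved
# concurrently: the top die's supply current flows through the bottom
# die's grid, so its drop depends on both dies. Values are illustrative.

VDD = 0.9              # supply at the package bump (V)
R_BUMP_TO_BOT = 0.005  # package bump -> bottom-die grid (ohm)
R_BOT_TO_TOP = 0.010   # bottom-die grid -> TSV/microbump -> top die (ohm)

def stacked_ir_drop(i_bottom_a: float, i_top_a: float) -> tuple[float, float]:
    """Return (V at bottom-die grid, V at top-die grid)."""
    # Both dies' currents share the path from the package bump.
    v_bot = VDD - (i_bottom_a + i_top_a) * R_BUMP_TO_BOT
    # Only the top die's current continues through the microbumps/TSVs.
    v_top = v_bot - i_top_a * R_BOT_TO_TOP
    return v_bot, v_top

# Analyzing the top die alone (bottom die idle) is optimistic:
print(stacked_ir_drop(i_bottom_a=0.0, i_top_a=10.0))   # isolated view
print(stacked_ir_drop(i_bottom_a=20.0, i_top_a=10.0))  # concurrent view
```

In this toy example, accounting for the bottom die’s 20A of demand costs the top die another 100mV of supply margin.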

Today, those abstractions are not standard. “Some of the necessary abstractions do exist today, but each vendor has its own specialties and its own ways of doing things,” points out Ferguson. “Between the foundries and the users, over time that will coalesce, and we will all land on the same set of design practices.”

Eventually, standards bodies will get involved. “There are standards bodies, such as Si2, which are trying to come up with an IP-free definition of some of these abstracts,” explains Felton. “However, there are a lot of formats that exist today. They may not be ideal, but it is everything from LEF/DEF files, GDS files, comma-separated value spreadsheets and AIF files, to BGA.txt files. You have to be careful, early on, that you are not too restrictive. This might force a user into a particular use model. We have seen customers that are very diverse in the way they attack the same problem with different forms of data. What they want is a solution that is as open as humanly possible, so they are not forced into a restrictive data flow.”

Interfaces
Before the notion of chiplets can become a reality, standard interfaces may be required. “High bandwidth memory (HBM) is an early example,” says Park. “It was somewhat easier because it was just a memory interface that targeted a specific application. A chiplet interface has to be more generic.”

The DARPA CHIPS program is tackling that problem. It has selected the Advanced Interface Bus (AIB), developed by Intel for die-to-die connection in its Embedded Multi-Die Interconnect Bridge (EMIB), as a physical-layer interface. Intel has made AIB available, royalty-free, through the DARPA program. Other companies are developing lightweight protocols that run on top of this interface.

But there may be the need for multiple specialized interfaces. “HBM is a highly parallelized interface where you are moving massive amounts of data without resorting to high-speed I/Os,” explains Felton. “It gives you throughput with nowhere near the power consumption, and hence fewer thermal issues. There is PAM4, and there are a lot of protocol interfaces out there. Depending upon the type of chip, its function, and the required performance, a chiplet will support one or more of the standard interfaces.”

Tools and flows
Today, the package itself has to be designed, and the design may need to be partitioned. Routing may span several die. And analysis has to take into account everything in the package, and beyond.

“A few years ago, a package engineer spent 90% of their time doing the implementation,” says Park. “That included tasks such as routing the design, creating the power planes, and doing electrical characterization. If you talk to the same people today, that part of their job is less than 50%. They spend time early in the process working with the chip teams in a pathfinding stage. They are trying to figure out the best packaging technology that will work for that chip based on cost, performance, physical characteristics, and power dissipation.”

This gets complicated on multiple levels. “You could have half a dozen chiplets, you might have different types of memory, be it stacked or side-by-side, and you may be looking at using an interposer or an embedded interposer bridge,” adds Felton. “You are basically dealing with multiple levels of substrate integration, side-by-side, stacked, embedded, and you need an environment where you can quickly evaluate these different scenarios to see what they give you in terms of the overall goals.”
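At its core, that scenario evaluation is a weighted trade-off, which is easy to prototype. In the toy ranking below, every option, metric, and weight is a hypothetical placeholder for real cost, performance, and thermal data:

```python
# A toy version of the "pathfinding" step: score candidate packaging
# technologies against weighted goals. All numbers are placeholders.

OPTIONS = {
    # name: (relative cost, relative performance, relative thermal margin)
    "organic substrate, side-by-side": (1.0, 0.4, 0.9),
    "silicon interposer (2.5D)":       (2.5, 0.8, 0.7),
    "embedded bridge":                 (1.8, 0.7, 0.8),
    "3D stack (hybrid bonding)":       (3.0, 1.0, 0.4),
}

WEIGHTS = {"cost": -1.0, "perf": 2.0, "thermal": 1.0}  # cost counts against

def score(cost: float, perf: float, thermal: float) -> float:
    return (WEIGHTS["cost"] * cost + WEIGHTS["perf"] * perf
            + WEIGHTS["thermal"] * thermal)

ranked = sorted(OPTIONS.items(), key=lambda kv: score(*kv[1]), reverse=True)
for name, metrics in ranked:
    print(f"{score(*metrics):6.2f}  {name}")
```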

But the design flow is where the major impact will be seen. “We have modified every single tool in the chain, from implementation through to verification, and that includes physical design, static timing analysis, parasitic extraction, design rule checking (DRC) and LVS,” says Malloy. “Each one of those tools has been enhanced to support 3D design. Most designs are being done separately today, but at some point in the flow you put them together. Then we need to read both dies and look at optimizations between them. Where should we move the bumps so that we get the shortest wire length through both dies? Where should we move bumps or gates so that we have the fastest timing through both of them? We recently enhanced both extraction and analysis to be able to look at both dies and see the capacitive coupling that can happen on wires between two dies. Now these dies with hybrid bonding are very close together, so the topmost metal layer of both can interact and have capacitive coupling between two completely different designs.”
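The coupling mechanism Malloy describes can be estimated with a simple parallel-plate model, C = ε0·εr·A/d, where A is the overlap area between facing wires and d is the bond gap. The sketch below uses illustrative geometry and dielectric values, not process data:

```python
# A parallel-plate estimate of the coupling that hybrid bonding creates
# between the topmost metals of two face-to-face dies: C = eps0*epsr*A/d.
# Geometry and dielectric values are illustrative, not process data.

EPS0 = 8.854e-12  # vacuum permittivity (F/m)
EPS_R = 3.9       # relative permittivity of the bonding dielectric

def coupling_cap_fF(overlap_um2: float, gap_um: float) -> float:
    """Coupling capacitance (fF) of two overlapping wires across the bond."""
    area_m2 = overlap_um2 * 1e-12
    gap_m = gap_um * 1e-6
    return EPS0 * EPS_R * area_m2 / gap_m * 1e15

# 10 um x 1 um overlap across a 1 um bond gap between two different designs:
print(f"{coupling_cap_fF(overlap_um2=10.0, gap_um=1.0):.2f} fF")
```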

A lot more is to come. “You no longer have just two dimensions, you now have a third dimension,” says Park. “In theory, you have 20+ metal layers to play with because you have two dies face-to-face. If I place two blocks next to each other on the same chip, but because of other restrictions they are too far away, I could move a block to the chip above it. How does that work physically? How does it work thermally? How does it work electrically? Routing becomes a three-dimensional problem. If you run out of routing resources on the bottom chip, even though you are trying to connect two devices on the bottom chip, you have the potential to via up to the top chip and find a routing resource on that chip and then via back down. You have to do timing closure across two chips stacked three-dimensionally.”
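Treating the stack as a single routing graph makes the via-up/via-down idea concrete. The sketch below runs a shortest-path search over two toy routing grids (z=0 for the bottom die, z=1 for the top), where crossing between dies is just another, more expensive edge:

```python
# A sketch of routing as a 3D problem: shortest-path search over two
# stacked routing grids, where "via up" to the other die is just another
# edge with a higher cost. Grid, costs, and blockages are toy values.
import heapq

W, H = 6, 4    # grid size per die
VIA_COST = 3   # penalty for crossing between dies
# Congestion blocks the middle of the bottom die entirely.
BLOCKED = {(x, y, 0) for x in range(1, 5) for y in range(H)}

def neighbors(node):
    x, y, z = node
    for dx, dy, dz, cost in ((1, 0, 0, 1), (-1, 0, 0, 1), (0, 1, 0, 1),
                             (0, -1, 0, 1), (0, 0, 1, VIA_COST),
                             (0, 0, -1, VIA_COST)):
        nx, ny, nz = x + dx, y + dy, z + dz
        if (0 <= nx < W and 0 <= ny < H and 0 <= nz < 2
                and (nx, ny, nz) not in BLOCKED):
            yield (nx, ny, nz), cost

def route(src, dst):
    """Dijkstra over the stacked grids; returns (total cost, path)."""
    pq, seen = [(0, src, [src])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, step in neighbors(node):
            if nxt not in seen:
                heapq.heappush(pq, (cost + step, nxt, path + [nxt]))
    return None

# Both endpoints sit on the blocked bottom die, so the cheapest path
# vias up, routes across the top die, and vias back down.
print(route((0, 1, 0), (5, 1, 0)))
```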

Conclusion
Retooling for advanced packaging is just getting started. While EDA companies cannot stop investing in support for the latest implementation nodes, they also have to invest heavily in design flows for the new packaging technologies. Unlike updates for the latest node, which primarily affect the back-end tools, design for packaging will affect everything in the flow and add requirements for completely new tools.

How long do they have to get these in place? “In the past companies were collecting data but not really planning anything in the near term,” says Ferguson. “Today, while it is still experimentation, it is no longer just kicking the tires. They have decided to buy a car, and they are trying to decide exactly which car to take home.”
