Confusion Grows Over Packaging And Scaling

The number of options is increasing, but tooling and methodologies haven’t caught up.

The push toward both multi-chip packaging and continued scaling of digital logic is creating confusion about how to classify designs, what design tools work best, and how to best improve productivity and meet design objectives.

While the goals of design teams remain the same — better performance, lower power, lower cost — the choices often involve tradeoffs between design budgets and how much of that cost can be amortized through volume, and how far existing tools and methodologies can be stretched to handle multi-chip architectures. There also are more choices than ever before — full and partial nodes, and versions of all of those optimized for power, performance, or cost — as well as multiple packaging options that span everything from 2D to 3D fan-outs and full 3D-IC stacked die. Each has its pros and cons, but no single solution is perfect for everything, and not all of the pieces are in place or work together for all of them.

“There are several types, like MCM, interposer, InFO, or a real 3D stack with through-silicon vias, devices sitting one on top of the other, each with its own challenges,” said Anton Rozen, director of VLSI backend at Mellanox Technologies, in a presentation at the recent Ansys IDEAS Summit. “What they have in common is that we need to solve the power delivery for such huge devices — and, of course, the heat dissipation.”

Partitioning of systems across multiple chips, and in some cases multiple dimensions, opens up a whole new set of challenges and opportunities.

“Some of the exploration tools are potentially useful there,” said Rob Aitken, R&D Fellow at Arm. “But are we going to look at individual blocks and cells the way that we’ve done historically with large ICs, or are we going to split things up at the level of cores, or at that larger sub-grouping/chiplet type approach level? These are questions that are unanswered at the moment because the number of possible ways of doing these things is immense, and the winners have yet to be determined. We will see over the next few years where all this technology goes.”

The potential problems and uncertainties span the entire design-through-manufacturing chain, and design teams need to look both left and right to understand where they might run into problems.

“Currently, more or less each new design needs adaptation or modification of the 3D process itself,” said Andy Heinig, head of the efficient electronics department at Fraunhofer IIS’ Engineering of Adaptive Systems Division. “Often it is linked with the number and position of the TSVs, for example. TSVs induce different stresses, and these issues currently can be solved only by adaptation of the manufacturing process. This is directly linked to tools, because some of the problems can be solved later by tools. If enough data were available regarding the influence of the TSVs on the silicon, it could be predicted and solved by tools. But up to now, not enough data are available, and the tools are not ready to predict and solve the issues. Maybe new classes of tools are necessary, because on classical packages the prediction of stresses is also an unsolved problem. There are a lot of non-linear behaviors involved. Fraunhofer is working on AI-based tools to solve these issues.”
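To make that data-driven idea concrete, here is a minimal sketch of predicting TSV-induced stress from layout features with a learned regression model. The features, units, training data, and model choice are illustrative placeholders, not a description of Fraunhofer's actual tooling.

```python
# Hypothetical sketch: predicting TSV-induced stress from layout features with a
# learned model, in the spirit of the AI-based tools described above. The feature
# set, units, and training data are illustrative placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Placeholder training set: one row per observation point on the die.
# Columns: [TSV count within 50 um, distance to nearest TSV (um), TSV pitch (um)]
X = rng.uniform(low=[0, 1, 5], high=[20, 100, 50], size=(500, 3))

# Placeholder "measured" stress (MPa): a synthetic nonlinear function of the
# features plus noise, standing in for silicon measurements or FEM results.
y = 30 * X[:, 0] / (X[:, 1] + 1) + 0.5 * X[:, 2] + rng.normal(0, 2, 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingRegressor().fit(X_train, y_train)
print("MAE (MPa):", mean_absolute_error(y_test, model.predict(X_test)))

# Query the model for a candidate TSV placement before committing it to layout.
candidate = np.array([[8, 12.0, 20.0]])  # 8 TSVs nearby, 12 um away, 20 um pitch
print("Predicted stress (MPa):", model.predict(candidate)[0])
```

In practice the training data would come from silicon measurements or finite-element simulations, which is exactly the data Heinig notes is still in short supply.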

The big picture
The good news is there are plenty of options. The bad news is there may be far too many of them.

“As we begin to disaggregate what were once large, single-process SoCs into discrete components on often heterogeneous processes, and then recombine them into various forms of stacked elements, there are too many choices to easily make well-informed decisions,” said John Ferguson, product marketing director, Calibre DRC applications, at Mentor, a Siemens Business. “How do we inform the designer better on the tradeoffs between power, timing, signal integrity, reliability, area, and 3D footprint? Having more options is great until you have to make a final decision.”

One solution is to push up the abstraction level, at least initially. Harry Chen, IC test scientist at MediaTek, said an integrated methodology is needed to analyze the whole package and die together for thermal performance, and for aging. “And then, from a test perspective, 3D-IC introduces new defect mechanisms. But how do we analyze that for reliability, quality, and developing test strategies? It’s all brand new.”
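A toy calculation shows why the package and dies have to be analyzed together thermally. The sketch below solves a two-node lumped thermal network for a pair of stacked dies sharing a single path to ambient; all conductance and power values are assumed for illustration.

```python
# Minimal sketch of why die and package must be analyzed together thermally:
# a lumped steady-state network for two stacked dies sharing one package path
# to ambient. Conductance values are illustrative, not from any real package.
import numpy as np

g_die = 2.0    # W/K, die-to-die thermal conductance through bonding layer (assumed)
g_pkg = 1.0    # W/K, bottom die to package/ambient (assumed)

# Node 0 = top die, node 1 = bottom die. Ambient is the reference (0 K rise).
G = np.array([[ g_die, -g_die        ],
              [-g_die,  g_die + g_pkg]])
P = np.array([5.0, 15.0])  # W dissipated in each die (assumed)

dT = np.linalg.solve(G, P)          # temperature rise above ambient per node
print("Top die rise:    %.1f K" % dT[0])
print("Bottom die rise: %.1f K" % dT[1])
# Note how the top die ends up hottest despite its modest power, because all of
# its heat exits through the bottom die -- invisible if each die is analyzed alone.
```

Analyzed in isolation, the low-power top die would look thermally benign; in the stack, its temperature is set almost entirely by the die underneath it.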

That has pros and cons, as well.

“Looking at the abstraction levels, if you think about designing a chip — starting from the standard cell or macro and larger functional blocks — there are existing methods for doing extraction in the context of power integrity or thermal or electromagnetics,” said John Lee, vice president and general manager at Ansys Semiconductor. “We think reduced-order models — where you model the physics of a block in a 3D-IC with heterogeneous dies, but the team that owns that block may or may not be part of your company — become an issue. The models need to include the complexity of the physics, but some forms of encryption and abstraction/obfuscation also will be needed. It’s still early days, but that’s certainly a challenge. Also, the social aspect of having different chip teams working together, or a separate interposer team, is very analogous to the divide seen in some companies between a chip team and a package team. So concepts like digital thread and handoff, which are popular outside of EDA, will need to be added into this 3D-IC tool chain.”
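As a rough illustration of the reduced-order-model idea, the sketch below simulates a detailed thermal RC ladder (20 nodes, with invented values) and then fits a two-parameter response to its hotspot temperature. Only the fitted numbers would need to be shared with another team, not the block's internal structure.

```python
# Hedged sketch of a reduced-order model: a detailed thermal model (here a
# 20-node RC ladder, values assumed) is reduced to a two-parameter response
# that can be handed off without exposing the block's internals.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import curve_fit

n = 20
C = np.full(n, 0.05)          # J/K per node (assumed)
g = 1.0                       # W/K between neighboring nodes and to ambient (assumed)

def detailed(t, T):
    dT = np.zeros(n)
    dT[0] = (10.0 - g * (T[0] - T[1])) / C[0]           # 10 W injected at node 0
    for i in range(1, n - 1):
        dT[i] = g * (T[i-1] - 2*T[i] + T[i+1]) / C[i]
    dT[-1] = (g * (T[-2] - T[-1]) - g * T[-1]) / C[-1]  # last node tied to ambient
    return dT

t = np.linspace(0, 20, 200)
sol = solve_ivp(detailed, (0, 20), np.zeros(n), t_eval=t)
hotspot = sol.y[0]            # temperature rise at the power-injection node

# Reduced-order model: single dominant thermal resistance and time constant.
rom = lambda t, r, tau: 10.0 * r * (1 - np.exp(-t / tau))
(r_fit, tau_fit), _ = curve_fit(rom, t, hotspot, p0=[10.0, 5.0])
print("ROM: Rth=%.2f K/W, tau=%.2f s, max error=%.2f K"
      % (r_fit, tau_fit, np.max(np.abs(rom(t, r_fit, tau_fit) - hotspot))))
```

Real reduced-order models for power integrity, thermal, or electromagnetics are far higher order and machine-generated, but the handoff question of what to share, and in what form, is the same.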

Multi-chip challenges
Combining multiple chips in a package is nothing new for the chip industry. What’s different is the tight integration of leading-edge chips with those developed at other process geometries.

“The semiconductor industry has been building multi-chip modules (MCMs) for a few decades now,” said Rita Horner, senior product manager in Synopsys’ Design Group. “However, the complexities of these designs have been increasing exponentially in the last decade, with the continuous demand for higher performance, lower latencies, and lower power. Engineers are no longer able to use their legacy package design tools because of the database size limitations and the lack of automation in these tools. The integration of each high-bandwidth memory (HBM) adds at least 1,024 die-to-die connections. Advanced networking and high-performance computing chips split the SoC into multiple dies assembled in the same package, requiring thousands more die-to-die connections. Manually making these signal connections, along with the signal shielding and power and ground connections that are required, is not practical.”
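A quick tally illustrates why manual connection-making is impractical at this scale. Only the 1,024 signals per HBM stack come from the article; the stack count, die-to-die interface widths, and power/ground ratio below are assumptions.

```python
# Back-of-the-envelope tally of die-to-die connections in a package like the one
# described above. Only the 1,024 signals per HBM stack come from the article;
# the stack count, link widths, and power/ground ratio are assumptions.
hbm_stacks = 4
hbm_signals = hbm_stacks * 1024            # 1,024 die-to-die signals per HBM stack

d2d_links = 2                              # assumed: SoC split into 3 dies, 2 interfaces
d2d_signals = d2d_links * 2000             # assumed width per die-to-die interface

signals = hbm_signals + d2d_signals
pg_bumps = int(signals * 0.5)              # assumed: one power/ground bump per 2 signals

print(f"Signal connections:       {signals:,}")
print(f"Power/ground (assumed):   {pg_bumps:,}")
print(f"Total to route and check: {signals + pg_bumps:,}")
```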

This has a direct impact on cost. “Design and verification cycles are increasing, which are driving higher design costs,” said Manuel Mota, product marketing manager in Synopsys’ Solutions Group. “The industry needs tools that can handle complex designs and large databases, with lots of automation, as package designs approach chip-level complexity. The 3D-IC market needs tools that either have full integration capabilities for all stages of design, from exploration to validation and analysis, or at least more standard interfaces that would make it less painful to transition between tools for the different steps in the design flow.”

Reliability of the individual chips in the context of other chips is another issue that needs to be addressed. One reason on-chip monitoring has gotten so much attention is that these multi-chip implementations are beginning to be used in safety- and mission-critical applications because of the performance and customization options available. But they also need to be monitored from the inside, because the leads for testing often are unavailable once the chips are packaged together.

“Embedding a fabric of in-chip sensors is a key strategy to address these issues to give you visibility of conditions deep within the 3D-IC, both in the debug/bring-up phase and in mission mode with real-time monitoring and corrective actions,” said Richard McPartland, technical marketing manager at Moortec.
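A minimal sketch of what such mission-mode monitoring might look like appears below. The read_sensor and set_clock_divider functions are hypothetical stand-ins for whatever telemetry and control interface a real sensor fabric exposes, and the thresholds are assumed.

```python
# Illustrative sketch of mission-mode monitoring with an in-chip sensor fabric.
# read_sensor() and set_clock_divider() are hypothetical stand-ins for whatever
# telemetry and control interface a real 3D-IC exposes.
import random
import time

def read_sensor(die, kind):
    """Hypothetical telemetry read; returns synthetic data for illustration."""
    base = {"temp_C": 70.0, "vdd_mV": 750.0}[kind]
    return base + (random.uniform(-5, 25) if kind == "temp_C" else random.uniform(-40, 5))

def set_clock_divider(die, div):
    """Hypothetical corrective action: throttle the named die."""
    print(f"[action] {die}: clock divider set to {div}")

TEMP_LIMIT_C = 90.0   # assumed throttle threshold
VDD_MIN_MV = 720.0    # assumed supply-droop threshold

for cycle in range(5):                      # in practice this runs continuously
    for die in ("compute_die", "io_die", "hbm_base_die"):
        temp = read_sensor(die, "temp_C")
        vdd = read_sensor(die, "vdd_mV")
        if temp > TEMP_LIMIT_C:
            set_clock_divider(die, 2)       # shed heat before it propagates up the stack
        if vdd < VDD_MIN_MV:
            print(f"[warn] {die}: supply droop {vdd:.0f} mV")
    time.sleep(0.1)
```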

Different options
Not all 3D packages are alike, and not all of them are truly 3D.

John Park, product management director for IC packaging and cross-platform solutions at Cadence, pointed to a number of possible packaging options.

Fig. 1: Various versions of 3D packages. Source: Cadence

Fig. 2: Timeline of different packaging options. Source: Cadence

Growing 3D-IC activity
Activity surrounding 3D-ICs is only expected to increase over time. Some of this is due to the fact that chipmakers, particularly those focused on AI/ML, want to put as many processing elements on a chip as possible. Chips have become so large, however, that they are exceeding reticle size and need to be stitched together.
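For context on the reticle limit, the commonly cited maximum single-exposure field is roughly 26mm x 33mm, or about 858mm². The quick check below uses illustrative die areas rather than specific products.

```python
# Quick check of the reticle-limit point made above: the commonly cited maximum
# lithography field is roughly 26 mm x 33 mm (~858 mm^2). Die areas here are
# illustrative, not specific products.
RETICLE_MM2 = 26 * 33   # ~858 mm^2, approximate single-exposure field

candidate_designs = {
    "monolithic AI accelerator (assumed)": 1400,   # mm^2, would need stitching or splitting
    "large HPC die (assumed)":              820,   # mm^2, fits, but with little margin
    "chiplet after disaggregation":         350,   # mm^2
}

for name, area in candidate_designs.items():
    verdict = "exceeds reticle -> split or stitch" if area > RETICLE_MM2 else "fits in one exposure"
    print(f"{name}: {area} mm^2 -> {verdict}")
```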

Using multiple chips in a package is an alternative to increasing transistor density, and it provides a way of actually shortening the distance between components such as processors and memories.

“Once those are in place, the benefits of putting logic near memory are really going to start to drive additional structures,” said Arm’s Aitken. “Dealing with some of the thermal and power issues will allow us to stack logic on logic, and that will open up yet more opportunities, too. We’re going to see a lot of activity driven by getting memory closer to logic, and then, as we deal with the thermal and test problems we’ve talked about, we’ll see a lot of other businesses as well.”

Most of the initial implementations of 3D-IC were for high-performance compute applications, such as chips used in a data center. But the design activity around this approach is increasing. Ansys’ Lee noted that the number of 3D-IC tapeouts within the company’s customer base has doubled over the past five years compared to the previous five years.

“That does indicate that it’s becoming more mainstream,” Lee said. “One interesting use case is taking silicon photonics and moving it closer to, and within, the compute systems. Today, a lot of the photonic systems used in a data center are discrete, separate parts. At some point, standard high-performance finFET silicon probably will not be the right carrier for that. But 3D-IC seems like a good method, even if it will compound all the effects like thermal, electromagnetics, and power delivery.”

Synopsys’ Horner agreed. “The need for higher integration levels is applicable across the majority of the semiconductor industry,” she said. “However, the high-performance computing applications, such as AI, high-end networking, 5G networks, and automotive will be the early adopters of 3D-ICs because the smaller technology nodes are not meeting their integration needs. They need to disaggregate the SoC for yield and process reasons, and at the same time, to bring memory closer to the processing unit to meet their data access latency needs. Wireless and consumer applications will follow once the technology is more economical.”

Likewise, Fraunhofer’s Heinig suggested 3D will be seen first in systems with very high value, where the additional design cost can be ignored, such as mobile processors. “For small volumes the design costs are too high because of non-existent design flows,” he said. “To bring it to systems with smaller production volumes, a much higher grade of design automation is necessary — especially at a high level, where a lot of decisions must be made.”

Ferguson agrees. “Without a doubt, within three years there will be an increase in the number of companies with production offerings from 3D technology, but I believe it will still be relegated to the larger corporations — those that can afford a large team of scientists, analysts, and CAD folks. What happens beyond that three-year mark will be heavily dependent on how well the EDA tools and the design-to-manufacturing infrastructure have come together to simplify the decision-making process.”

McPartland also expects further 3D-IC growth in high-performance computing, especially AI, servers, supercomputing, and high-end desktops and laptops.

Bifurcation and integration
As it becomes more expensive and more difficult to integrate multiple devices on a single planar chip at each new node, the future of 3D-ICs looks increasingly bright. The key questions will be what doesn’t need to be included in a 3nm or 2nm die, and how it can best be packaged to improve performance and power delivery.

Like others, Ansys’ Lee said the real struggle involves the tooling. “The flows and methods are immature in 3D-IC, but they’re maturing quickly. And once you’ve stamped out all the issues, the benefits of having heterogeneous dies, and of not pushing signals through a package, will far outweigh any of the other challenges that remain. I would guess that’s going to happen two years from now,” he said.

Fraunhofer’s Heinig pointed out that in 3nm and smaller chips, the maximum voltages and currents are very limited. “This means that driving standard protocols to the outside of the chip cannot be supported directly. That also means each 3nm chip needs an additional chip for driving protocols with higher voltages and currents. This type of integration can be done classically with multi-chip packages, but also in a 3D style.”

At the same time, scaling will continue where it makes the most sense. “From what we can see, it is still thriving, and I believe it will continue to do so,” said Mentor’s Ferguson. “What I do expect is that the amount of a heterogeneous assembly that depends upon those advanced processes will be minimized. Of course, there are still hurdles to overcome to make that happen.”

The majority of the early adopters will need both the smallest technology nodes (Moore’s Law) and the higher levels of integration achievable in advanced packages (Beyond Moore), Synopsys’ Horner said. “Even the biggest possible manufacturable die in the smallest technology node is not going to allow these applications to integrate all the features and capabilities that they need in a single die.”

For applications such as AI, Moortec’s McPartland said the demand for increased compute and memory is seemingly insatiable at the moment, so both approaches are expected.

Others agree. “Today, and for the next four or five years minimum, both More Moore and Beyond Moore will co-exist,” Cadence’s Park said. “When recouping NRE is not an issue (very-high volume) and performance is at a premium, monolithic SoCs will still exist.”

Shifting everything left
One of the big challenges ahead is how to shift everything left, including advanced packaging, and still maintain quality and reliability. It’s one thing to be able to build 3D-ICs, but it’s quite another to do so predictably and at scale. So are designers just swapping one form of complexity — huge, difficult chips — for different forms of complexity in multi-die 3D systems?

MediaTek’s Chen does not see this as swapping complexity, because SoCs already are incredibly complex. “Now we’re putting them into system-in-packages, so it’s just adding more complexity on top of that. The problem is the interaction between the components. As the complexity increases, it’s not just linear. It’s super-linear, because of the interactions and just more opportunities for things to go wrong at the system level. One of the problems a lot of the system integrators see is the phenomenon called no-trouble-found, meaning every component comes in well-tested, but you put the system together and the system fails. The big problem there is how to quickly diagnose what the real problem is. It definitely involves the software, as well, and that’s still a very challenging problem. In terms of shifting left, one of the ideas the industry is starting to work on is adding more observability into all the components. So even the components have to become system-aware in terms of the way we test them. These monitors/sensors capture internal conditions, which are collected, stored, and then correlated with system behavior. If something goes wrong, there is a way to do data analytics. This usually involves machine learning techniques to bring that data back to help you figure out what can be done to improve the design or test process, so that it’s less likely for this type of problem to happen in the future.”
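A hedged sketch of that analytics step might look like the following: correlate per-component monitor data collected at system test with system-level pass/fail, and rank which signals best explain the failures. The telemetry and the failure mechanism here are entirely synthetic.

```python
# Hedged sketch of the data-analytics step described above for no-trouble-found
# debug: correlate per-component monitor data with system-level pass/fail and ask
# which signals best explain the failures. Data and feature names are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 2000

# Synthetic telemetry collected at system test: each column is one monitor.
telemetry = {
    "cpu_die_temp_C":     rng.normal(75, 6, n),
    "io_die_vdd_mV":      rng.normal(745, 12, n),
    "hbm_refresh_err":    rng.poisson(0.2, n).astype(float),
    "interposer_skew_ps": rng.normal(18, 4, n),
}
X = np.column_stack(list(telemetry.values()))

# Synthetic ground truth: the system fails when an I/O-die supply droop AND high
# interposer skew coincide -- an interaction no single component test would catch.
fail = (telemetry["io_die_vdd_mV"] < 735) & (telemetry["interposer_skew_ps"] > 20)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, fail)
for name, imp in sorted(zip(telemetry, clf.feature_importances_),
                        key=lambda p: -p[1]):
    print(f"{name:20s} importance {imp:.2f}")
```

In this synthetic case the two interacting signals rank highest, which is the kind of pointer a debug team would use to decide where to add tests, margin, or observability.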

There is general agreement that moving to multi-chip(let) packaging doesn’t necessarily make things easier. “Known good die and post-assembly yield are still issues,” said Cadence’s Park. “The question is complex and design-size-dependent.”

Fig. 3: Shrinking in 3D. Source: Cadence

Still, the beauty of the SoC and the fabless ecosystem has been the reliability of the process and system, Ferguson pointed out. “From one node to the next, the complexities increased, but the flows used to design a reliable chip only changed incrementally. This helped to make the transition process relatively easy. As we move to a 3D, chiplet-based system, there will likely need to be a much greater shift in how we do design. That introduces risk. Unfortunately, what that means is in the short term, we’re still in a ‘construct by correction’ environment. We’re going to miss problems, resulting in yield and/or reliability issues. We’ll need to delve deep to determine the root causes, then work together to build safeguards into the process to prevent them the next time. It’s painful and expensive, but it is the path we have to follow for progress.”

Conclusion
At the end of the day, the difficult complexity and reliability challenges of huge chips still can be managed by breaking the problem down into smaller chunks. “This allows reliability to be addressed in a correct-by-construction manner for the constituent smaller dies, and then bringing these LEGO blocks together into the overall multi-die 3D system a lot more confidently,” said Rahul Deokar, product marketing director in Synopsys’ Design Group. “The flexibility and control to handle the unique requirements for the different dies at different process nodes or with different functionality (memory, logic, analog, etc.) enables designers to shift left and guarantee the end system quality and reliability.”

As the approaches to 3D design continue to evolve along the lines of Moore’s Law, More Than Moore, and Beyond Moore, so too does the understanding of the design challenges and PPA tradeoffs. What the next five years hold for engineering teams seeking new levels of power, performance, and reliability gains is not yet clear, but the numerous approaches should offer more than enough opportunities for differentiation in end products, along with plenty of headaches.



