Chip-Package-Board Issues Grow

Success will depend on new tools, a better understanding of who’s responsible, and new methodologies for getting designs out the door more quickly.


As systems migrate from a single die in a single package on a board, to multiple dies with multiple packaging options and multiple PCB form factors, it is becoming critical to move system planning, assembly, and optimization much earlier in the design-through-manufacturing flow.

This is easier said than done. Multiple tools and operating systems are now used at each phase of the flow, particularly for complex GPUs and CPUs in the latest handheld mobile devices. Bridging these worlds well enough to analyze tradeoffs among the various physical components, software, and packaging options is difficult even in a single-vendor environment. But these pieces have become so entwined that if design decisions based on those tradeoffs aren't made early enough, problems in the chip may still get fixed, but at the expense of package and board choices.

“This is no easy task because it is essentially a serial process,” said Keith Felton, product marketing manager at Mentor Graphics. “As such, the packaging world comes along too late in the game to make inroads here. However, in the PCB-driven world, co-design is gaining traction in large systems companies where the PCB team has been able to work with the packaging company.”

The number of tradeoffs that need to be considered up front is growing significantly. While granularity has great benefits in terms of being able to create the right power, performance and cost for a specific market, the number of possible choices, configurations and packaging approaches is enormous.

“If you are designing a band pass filter, do you put it on the chip, or do you put it on the package, or put it on the PCB? If I’m designing an antenna, where does it go? The ability to do this sort of cross-domain tradeoff is a missing piece of technology today,” said John Park, Cadence’s product management director for IC packaging and cross-platform solutions. “In some electronics products companies, their processors have to fit in two, three or four different sizes of smartphones and tablets, set-top TV boxes or watches, among other devices, so when designing the chip, engineering teams now want to determine the right packaging solution based on the end system form factor that that device has to sit in.”

These are the kinds of decisions that need to be made at the architectural level. As such, design teams are looking for a pathfinding environment where, with a very limited set of data or no data at all, they can start to mock up what the chip-package-board interaction looks like. This has been done to some extent with mainstream desktop tools such as Microsoft Visio and PowerPoint, as well as by sketching on a whiteboard, but a more formal methodology for pathfinding exploration early in the design phase can prevent a lot of issues later in the flow.

A platform like that would need to support things like cross-platform floor-planning. “If I want to take this device and move it from this board to the other board, or from this board up onto the package, or somewhere else in the system, you need that flexibility — and a lot of that is missing today,” Park said. “Specifically, if you put together a chip-package-board system, and you want to take something like a band pass filter that’s out on the PCB, and move it onto the package, there are some technologies to do that physical remapping between those domains. But it doesn’t map automatically back to the logical domain. The schematic or connectivity tools that are managing each specific implementation of the chip, package and PCB — that annotation of, ‘Hey, I just made a physical floor plan that moved this structure from this domain to the other domain, I need my schematic to automatically reflect that’ — that doesn’t happen today because those schematics are drawn in different tools, using different methods, across different operating systems. Solving that specific piece of cross-domain partitioning is a big challenge.”

He said that currently there are no formal flows, so it’s all a manual process where someone goes and cuts out a big section of logic from their schematic, then copies and pastes it to a different schematic. After that, they hope they grabbed the right pieces.
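The kind of automatic back-annotation Park describes can be pictured as a single logical netlist shared by all three domains, so that a physical repartitioning is just a re-tag rather than a cut-and-paste between schematics. The sketch below is a deliberately minimal toy model of that idea; all class and component names are hypothetical, and it ignores everything a real co-design database must handle (geometry, nets spanning domains, versioning).

```python
from dataclasses import dataclass, field

# Toy model (hypothetical names): one logical netlist shared by every
# physical domain, so moving a block updates every schematic "view".

@dataclass
class Component:
    name: str
    domain: str                      # "chip", "package", or "pcb"
    nets: list = field(default_factory=list)

class CoDesignDB:
    """Single source of truth for logical connectivity across domains."""
    def __init__(self):
        self.components = {}

    def add(self, comp: Component):
        self.components[comp.name] = comp

    def schematic(self, domain: str):
        # Each per-domain schematic is a *view* of the shared netlist,
        # not an independent copy that must be edited by hand.
        return sorted(c.name for c in self.components.values()
                      if c.domain == domain)

    def repartition(self, name: str, new_domain: str):
        # Physical move; the logical view reflects it automatically,
        # with no manual cut-and-paste between schematics.
        self.components[name].domain = new_domain

db = CoDesignDB()
db.add(Component("band_pass_filter", "pcb", nets=["rf_in", "rf_out"]))
db.add(Component("serdes_phy", "chip", nets=["tx_p", "tx_n"]))

db.repartition("band_pass_filter", "package")
print(db.schematic("package"))   # → ['band_pass_filter']
print(db.schematic("pcb"))       # → []
```

The design point is simply that when connectivity lives in one place, the "did I grab the right pieces?" problem disappears, which is exactly what today's tool-per-domain schematics make impossible.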

All of the tools vendors are working on better automation and methodologies that can tie together chip, package and board or system more seamlessly, and to the winner will go potentially big spoils. Solving these problems represents a significant opportunity as traditional scaling becomes more difficult.

“There are a series of handoffs from the chip level to the package level to the system level,” said Aveek Sarkar, Ansys’ vice president of product engineering and support. “The goal is to harmonize workflow across physics. A lot of this involves wafer-level packaging like (TSMC’s) InFO, which is here to stay. You have use-case analysis, sign-off, power management, sometimes embedded multi-die, power analysis and ESD. That adds many layers of data, so the question becomes how do you make that into a meaningful analysis?”

Siblings or neighbors
That’s one piece of the problem. Another is determining who is responsible for doing that analysis and for overseeing the integration of all the pieces.

“Co-design sounds good on paper, but try getting it into practice,” Felton said. “Today there are three camps of people — the chip, the packaging, and the board camps — each in their own environment. The packaging and the board camps are more like brothers and sisters, rather than next door neighbors. The IC team is slightly different because they tend to be uniquely spaced, and they tend to do a lot of things script-based, but they’re not very interactive people. It’s always a challenge to find the person who is responsible. Historically, any company involved in co-design, such as a fabless semiconductor company, is designing the chip, and it’s like that famous saying about breakfast, ‘The pig was committed, and the chicken was involved.’ In a fabless semiconductor company the chip team is committed because they are designing the chip, but often they outsource the package or they have a separate group that does some of the package. Even today, a lot of companies still use an OSAT to do it, so they have to interact, and typically they use the good, old-fashioned Excel spreadsheet. But this really isn’t co-design, and making a change is very painful.”

Finding someone who is even willing to undertake the task is difficult. Gaining expertise in any one piece of a chip-package-board/system is difficult enough, but keeping track of the number of dependencies and possible interactions in a complex design is well beyond the capabilities of even the most versatile experts. These systems are too large, subject to too many variables, and not everyone approaches problems from the same vantage point.

“You’ve got to provide the ability to look at the problem from each domain perspective,” said Felton. “For example, someone in the datacoms market whose product is a system that contains one or more PCBs is looking to maximize the performance of their end product, which is a piece of networking equipment. Obviously, speed is very important, but so is reliability and cost. They want to be able to influence the package ball-out. They don’t really care about the chip in the package. That’s really the package guy/chip guy’s problem. What they want to do is to define the ball-out on the package so it allows them to get away with as few layers as possible, and get the highest performance data rate down at the board level.”

The packaging team, meanwhile, is trying to keep the number of layers to a minimum because it lowers the cost, while also ensuring that the package performs well. So they are in the middle, trying to optimize both the chip and the package.

“Usually it’s biased towards the chip side because usually the package starts off life at least influenced by the chip design team, even if it’s not completed by them, and is done by an OSAT, for example,” said Felton.

The OSATs, meanwhile, look at the tools that are in the market and see a gap from their side.

“If you look at fan-out, it’s not just about the package,” said William Chen, ASE fellow and senior technical advisor. “The fan-out is on the die. That’s one of the reasons we’re all working carefully to make sure it will be successful. This is really about wafer-level packaging, and wafer-level packaging is coming in.”

But he noted that to make this market really take off will require better tools. All of the major OSATs, as well as the large foundries, are working with top EDA companies to address this issue.

Multi-chip packaging
It is possible to get chips out the door using existing tools, of course. Complex SoCs and advanced packages are being developed today and manufactured with sufficient yield. But it’s painful, because the problems design teams are wrestling with are so broad and complex. As long as device scaling provided sufficient PPA improvements, there was little incentive to develop the tools needed to automate this process.

But not everything worked out as the industry had anticipated. Lithography advances have slipped for the past 10 years, the number of chipmakers has shrunk, and end markets have fragmented. As a result, there is no clear path to profitability for many companies to develop a 7nm SoC. This has put a renewed focus on multi-chip packaging, particularly fan-out wafer-level packaging (FO-WLP) and 2.5D, which offer potentially significant improvements in power and performance and faster time to market.

“Aspects of interest include better system-level performance, improved latency and power management,” said Patrick Soheili, vice president of product management and corporate development at eSilicon. “There also is reduced board complexity and cost, better leverage of existing die that don’t benefit from process shrinks (for example: AMS, high speed SerDes), more effective IP/block reuse, delivery of increased bandwidth with high-bandwidth memory technology, and better yield and cost management with divide-and-conquer strategies to exploit the yield sweet spot.”

eSilicon began developing test chips for these types of applications in 2011 and has picked up on the unique challenges presented by these designs, including interposer and package routing, design for manufacturability (DFM) for a multi-chip package, signal and power integrity analysis, thermal integrity and management, warpage and co-planarity analysis, and specification. Lots of simulations and test design measurements are required in all of these areas. As a result, eSilicon is developing high-bandwidth interconnect to allow multi-die integration without the need for an interposer.

“Today, we are working on several large finFET-class customer designs that include multiple dies integrated with advanced packaging techniques,” Soheili said. “These designs are taping out now, and will go to production in the months ahead.”

Rajesh Ramanujam, product marketing at NetSpeed Systems, also has seen the uptick in early stage floor-planning interest, mainly driven by the non-trivial amount of time it takes to place and route today’s complex designs, which are further complicated by multiple dies and complex packaging.

To address this, the tools have to be able to account for the various physical characteristics of dies, process nodes, packages, and PCBs, Ramanujam said. “The timing characteristics of chip I/Os are usually done at a much later phase, revealing issues that are either unsolvable or translate into lower performance. Being able to emulate the various characteristics upfront would help in planning ahead for these circumstances. From an IP perspective, configurable features would help in adapting to these situations.” In particular, IP should be able to adapt to various clocking methodologies, clock skews/delays, and unbalanced clock trees, both to ease the physical design effort and to port across process nodes.

When it comes down to it, chip-package-board design can be summed up in three questions, Felton said:
• Who is responsible? The environment has to satisfy the designers’ working practices and needs. It can’t be completely alien or it won’t be adopted.
• What is the best way to get data efficiently in and out of the various environments? That data is essential for co-design, but it also has to be flexible enough to allow for ECOs and other rapid changes. How are those changes accounted for in other environments?
• Will there be enough confidence at sign-off? At some point designers must make sure that the signals on the die actually go to the right balls on the package, go to the right breakout pattern on the board, and the whole thing is correct and works. This requires LVS across the entire stack of substrates.
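The sign-off check in the last bullet, chasing a die signal through the package ball map to its board breakout, can be reduced to a connectivity trace. The following is an illustrative sketch only, with hypothetical signal and ball names; a real system-level LVS also verifies geometry and electrical rules, not just net mapping.

```python
# Illustrative "LVS across the stack" (hypothetical names): verify that
# each die signal reaches the intended board net via the package ball map.

die_to_ball = {"ddr_dq0": "B3", "ddr_dq1": "B4", "pcie_tx": "A7"}
ball_to_board_net = {"B3": "DDR_DQ0", "B4": "DDR_DQ1", "A7": "PCIE_TX_P"}
intent = {"ddr_dq0": "DDR_DQ0", "ddr_dq1": "DDR_DQ1", "pcie_tx": "PCIE_TX_P"}

def cross_stack_lvs(die_to_ball, ball_to_board_net, intent):
    """Trace die signal -> package ball -> board net; report mismatches."""
    errors = []
    for signal, wanted_net in intent.items():
        ball = die_to_ball.get(signal)            # die pad to package ball
        traced_net = ball_to_board_net.get(ball)  # ball to board breakout
        if traced_net != wanted_net:
            errors.append((signal, ball, traced_net, wanted_net))
    return errors

print(cross_stack_lvs(die_to_ball, ball_to_board_net, intent))  # → []
```

An empty error list means every signal lands on its intended ball and breakout; the hard part in practice is that the three maps live in three different tools, which is the gap the article describes.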

There’s no arguing the benefits of early, intelligent tradeoffs in the chip-package-board world, but it must all be tempered with the realities of how design teams are run. And until these issues are worked through, progress in co-design may not reach its full potential.

“While engineering managers want a co-design solution that treats silicon-package-board as a holistic environment, in reality, that’s a mythical person they’re talking to,” Felton said. “That person doesn’t exist beyond management. You will hear a senior vice president of engineering say that’s exactly what they want to do, but you ask which of the teams is responsible, and everybody in the room will look out the window until someone hands down the edict.”

Related Stories
Fallout From Scaling
There are an increasing number of options to deal with the device scaling issues. All of them present challenges.
What’s Missing In Advanced Packaging
When it comes to multi-board and multi-chips-on-a-board designs, do engineers have all the tools they need?
