Focus Shifting From 2.5D To Fan-Outs For Lower Cost

Experts at the Table, part 2: Interposer costs continue to limit adoption of fastest and lowest-power options, but that’s about to change.


Semiconductor Engineering sat down to discuss advanced packaging with Calvin Cheung, vice president of engineering at ASE; Walter Ng, vice president of business management at UMC; Ajay Lalwani, vice president of global manufacturing operations at eSilicon; Vic Kulkarni, vice president and chief strategist in the office of the CTO at ANSYS; and Tien Shiah, senior manager for memory at Samsung. What follows are excerpts of that conversation. To view part one, click here. To read part three, click here.

SE: What’s going to drive down the cost of advanced packaging?

Shiah: The key to getting cost down is economies of scale and driving up volume. This is an area I’m very optimistic about, especially as it relates to HBM, because the industry is focused on AI and machine learning. That’s driving a lot of new applications across all sorts of verticals. We’re going into this wave faster than any previous wave we’ve experienced. Goldman Sachs did an analysis of data center growth, which showed that the percentage of AI servers versus regular servers is growing very fast. AI applications are what’s going to drive a lot of growth in data centers. The type of hardware that is needed for AI, particularly in training, uses HBM, but it is very specific.

SE: There’s AI being applied horizontally across a lot of technologies. There also are AI chips, which may employ AI in their management. But what’s consistent here is that all of these have a massive amount of data flowing through them, particularly for training. Is this a big market for packaging, or does everyone want a single-die solution?

Cheung: Putting everything on one chip is not possible anymore. On the silicon side, we’ve been working on this with the SoC. In the old days, the foundry guys would put everything onto a single chip, including mixed signal, analog, caps and inductors. But it doesn’t work anymore. It requires different packaging integration, different process node integration, and so on.

Lalwani: Today, at 7nm, every product we’re developing is also 2.5D. That’s a result of the markets we focus on—data center, 5G and AI. And AI is riding the adoption of both 7nm and 2.5D. That includes HBM, SerDes, and all of the critical IP. You can’t decouple them anymore. It’s not 7nm or 2.5D. It’s both right now, and we don’t see that stopping at 5nm. Going back to the economics, today we use a silicon interposer because that is the state of the art for connecting HBM memory with a large ASIC die. But we’ve expended a substantial amount of R&D resources with our foundry and OSAT partners to do away with interposers by leveraging some of the fan-out technologies that were first deployed in smartphones.

SE: Does that include pillars on fan-outs?

Lalwani: Yes, exactly. Today, the communication is through an interposer. We’ve already gotten very promising R&D results, both mechanical and electrical, that show it’s viable with HBM. The memory guys don’t have to do anything different. But by eliminating the need for an interposer and leveraging fan-outs, we can scale fan-out technology from where it was originally targeted for smartphones to these adjacent markets.

Ng: We’re looking at what is beyond interposers, as well. Interposers are not new. They’ve been around for a while, but they haven’t taken off in mass markets, primarily because of the economics. We’re looking at new bonding technologies and other approaches, and there are new technologies in the industry that may help us go beyond where we are with interposers today.

Cheung: There’s a lot of room for cost reduction. HBM pricing will come down, and the same thing will happen with interposers. The interposer cost today is too high because you have to recover your overhead. We need to figure out how to reduce wafer grinding and CMP. If you’re grinding the wafer down, you’re wasting a lot of material and machine time. Why do it that way? That’s one area of cost reduction, and there are others for interposers. Today, the system houses don’t know how to use the technology, and the EDA tools are not yet at the point where using them guarantees results.

Ng: When you get to thinner materials, then you have to deal with thinner wafer handling in the fab. We also have been looking at different materials, such as glass substrates. So there are multiple approaches. Everyone wants to find a cost-reduction path, whether that’s a derivative or incremental to what we have today, or whether it’s the next generation. And that window will close. If we can’t get a promising incremental improvement soon enough, the next-generation technologies will be here. We’re very actively working in this space. Customers are looking at total cost, meaning the memory, whatever the interconnect method is from a packaging standpoint, the test, the reliability—all of that is their end cost. And everyone in the supply chain is only willing to tolerate a certain cost.

Shiah: Cost will decrease with increased volume. There’s specialized equipment in the back end for testing, stacking, etc., that needs to get amortized. As volume grows, that dedicated equipment gets amortized across more units.
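Shiah’s amortization argument reduces to simple arithmetic: per-unit cost is the fixed equipment spend divided by total volume, plus a variable processing cost per unit. A minimal sketch, where the capex, equipment lifetime, and per-unit cost figures are placeholder assumptions rather than industry data:

def unit_cost(capex, annual_volume, years, variable_cost):
    """Per-unit cost: fixed equipment spend amortized over total volume,
    plus the variable (per-unit) processing cost."""
    amortized = capex / (annual_volume * years)
    return amortized + variable_cost

# Hypothetical: $50M of dedicated test/stacking equipment, 5-year life,
# $2.00 variable cost per unit. All figures are illustrative assumptions.
for volume in (1e6, 10e6, 100e6):
    print(f"{volume / 1e6:5.0f}M units/yr -> ${unit_cost(50e6, volume, 5, 2.00):.2f}/unit")

At 1M units a year the dedicated equipment dominates ($12.00 per unit here); at 100M units it nearly vanishes ($2.10), which is the economies-of-scale effect the panel is counting on.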

Cheung: With 2.5D, the issues are mechanical stability and density. As you’re getting down to 7nm and 5nm, using ultra low-k material, you need a good, solid platform to handle those systems. You can go fan-out, but you need to understand the pros and cons. All of this technology is application-specific. This is why the OSATs are putting a lot of effort into working with the EDA vendors to come up with a tool. We’re trying to paint a picture of the partition between different technologies, so you can say, ‘This is what’s applicable for 2.5D,’ and ‘This is what’s applicable for fan-out.’ There is not one solution for all.


Fig. 1: Different fan-out approaches, including copper pillar formations in die-up approach. Source: ASE

SE: It requires the whole supply chain to make that work, right?

Cheung: Absolutely.

Kulkarni: We’re hearing a lot from customers about the thermal and mechanical issues.

Cheung: Yes, and I just came back from a meeting where they wanted a 500-watt solution. The OSATs are stepping up to work with the EDA vendors and the system thermal-management committees to solve those issues, from the component level to the system level. We need a complete thermal-management solution. That’s where the industry is going.

Lalwani: If we can focus on those first-order issues, that will go a long way toward meeting the challenges. There are more esoteric issues that customers are facing, but first and foremost, we are struggling with some of the fundamental issues. How do we handle 500 watts of power in our current environment? How do we cool it? We can’t control that all by ourselves anymore. We need our system OEM customers to be part of the solution. It’s not just about the supply chain getting together and figuring out how to serve our system OEMs. Everybody has to be part of the solution.
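To see why 500 watts is such a hard first-order problem, consider the standard junction-temperature relation Tj = Ta + P × θja. A minimal sketch; the junction limit, ambient temperature, and heat-sink resistance below are assumptions chosen for illustration, not measured values:

def junction_temp(ambient_c, power_w, theta_ja_c_per_w):
    """First-order junction temperature: T_j = T_a + P * theta_ja."""
    return ambient_c + power_w * theta_ja_c_per_w

power = 500.0    # watts, the figure cited in the discussion
t_max = 105.0    # deg C, a typical junction limit (assumed)
ambient = 35.0   # deg C, assumed data-center inlet air

# Maximum allowed junction-to-ambient thermal resistance at 500 W:
theta_required = (t_max - ambient) / power
print(f"theta_ja must be <= {theta_required:.3f} C/W")          # 0.140 C/W

# With an assumed 0.2 C/W air-cooled path, the junction overshoots:
print(f"Tj at 0.2 C/W: {junction_temp(ambient, power, 0.2):.0f} C")  # 135 C

A budget of roughly 0.14 C/W for the entire die-to-air path is why component-level fixes alone don’t suffice, and why the panel keeps returning to system-level co-design with the OEMs.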

Cheung: It’s all about collaboration. No single entity can solve all of these problems. We all need to work together.

Ng: Which goes back to the need for an open ecosystem, rather than a locked solution.

SE: There was a move by the IEEE to create a framework for all of this. Has that worked out, or is everything still completely customized?

Cheung: This used to be driven by Intel and IBM. At every advanced process node, they wanted materials suppliers to deliver those materials, and they would buy all of the machines to run advanced nodes. That’s no longer the case. We’ve got the Internet, MEMS, modules and system solutions. A silicon SoC is no longer a single solution. It doesn’t work. IRDS (International Roadmap for Devices and Systems) and HIR (Heterogeneous Integration Roadmap) are new ways of looking at this. The whole supply chain has to work together to come up with a solution.

Kulkarni: There is another solution on the horizon from federal aerospace and defense programs, which involve a lot of funding and focus on the need for an ecosystem. I see more and more people approaching this as an ecosystem of systems, which for semiconductors includes packaging and processing. We keep a close eye on that. This is the first time I’m optimistic about government funding in terms of creating consortiums and partnerships. Those projects will include a number of companies at this table, as well as the ecosystem and system guys. That will be very interesting for addressing variations of packaging, simulation, place-and-route, system integration, etc.

SE: Is that DARPA-related?

Kulkarni: It’s DARPA and other defense-related projects. There are a lot of next-generation Air Force projects. There also is an effort underway in flexible electronics.

Cheung: There are a lot of flexible assemblies. How do you bond a chip on a flexible substrate in order to get to two-layer or four-layer devices? If you do a teardown on one of these, a lot of the interconnects are to the flexible material. So it’s not just a flexible interconnect. They want to put chips on flexible devices, and sometimes security features and encapsulation on the flexible technology.

Kulkarni: That’s around the corner. When things are flying very close together, sometimes at Mach speeds, how do you do the data management, the data analysis, and the real-time sensor processing? That creates many opportunities, from packaging to systems.

SE: Where are you seeing the biggest bottlenecks?

Shiah: The bottleneck is either in compute or memory, and what we’re finding with a lot of customers and people we talk to is that memory is critical to solving the speed problem, as well as the accuracy problem, in machine learning applications. Google published a paper in 2017 called “In-Datacenter Performance Analysis of a Tensor Processing Unit.” It used a roofline model, which is a performance model where you plot teraops (processing power) against operational intensity, the number of operations you perform on each byte fetched from memory. The slope of the line is memory bandwidth; the flat part of the line is the peak processing performance. They took their applications for page rank and language translation, ran them on a TPU, and did the measurements. A lot of these applications fell under the slope of the roofline. That meant they were memory bandwidth-constrained, and that was the first-generation TPU, which did not use HBM. We know what they did with their next generation.
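The roofline model Shiah describes reduces to one line of math: attainable throughput is the lesser of peak compute and memory bandwidth times operational intensity. A minimal sketch; the peak and bandwidth figures below are illustrative assumptions contrasting DDR-class and HBM-class memory, not Google’s published measurements:

def roofline(peak_teraops, mem_bw_gbs, op_intensity):
    """Attainable TeraOps/s at a given operational intensity (ops/byte)."""
    # Sloped region: performance grows with bandwidth (memory-bound).
    # Flat region: performance is capped at the peak (compute-bound).
    bw_teraops = mem_bw_gbs * op_intensity / 1000.0  # GB/s * ops/byte -> TeraOps/s
    return min(peak_teraops, bw_teraops)

# Hypothetical 90-TeraOps/s accelerator with two memory options.
for bw_gbs, label in ((34, "DDR-class"), (900, "HBM-class")):
    for oi in (1, 10, 100):
        perf = roofline(90, bw_gbs, oi)
        bound = "memory-bound" if perf < 90 else "compute-bound"
        print(f"{label:9s} {oi:4d} ops/byte -> {perf:6.2f} TeraOps/s ({bound})")

An application sitting under the slope (low ops per byte) gains nothing from faster compute; only more bandwidth helps, which is exactly what moving from DDR to HBM provides.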





Leave a Reply


(Note: This name will be displayed publicly)