
Power, Reliability And Security In Packaging

Experts at the Table, part 3: Why advanced packaging is now a critical element in compute architectures, 5G, automotive and AI systems.


Semiconductor Engineering sat down to discuss advanced packaging with Ajay Lalwani, vice president of global manufacturing operations at eSilicon; Vic Kulkarni, vice president and chief strategist in the office of the CTO at ANSYS; Calvin Cheung, vice president of engineering at ASE; Walter Ng, vice president of business management at UMC; and Tien Shiah, senior manager for memory at Samsung. What follows are excerpts of that conversation. To view part one, click here. Part two is here.


(L-R) Walter Ng, Calvin Cheung, Ajay Lalwani, Vic Kulkarni, Tien Shiah. Photo credit: Patricia MacLeod/ASE

SE: There has been a lot of talk about moving the processing closer to memory to deal with all of the data that’s being generated by sensors. What is the impact on packaging?

Shiah: We see two architectures emerging. One is in-memory processing, to address the compute-memory problem. The other is very-near-memory processing. Every AI chip company is either looking at HBM or exploring HBM. There might be some startups that are trying to do something different with in-memory using SRAM, but virtually every AI chip company that is in production or exploring designs is looking at HBM.

SE: Is this HBM2, 2E or 3?

Shiah: HBM2E is our next generation. It’s 33% faster and double the density of our current generation. HBM2E runs at up to 3.2 gigabits per second. HBM3 is targeting 4.0 gigabits per second and up.
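As a rough illustration (ours, not from the panel): HBM moves data over a very wide 1,024-bit interface, so per-stack bandwidth scales directly with the per-pin data rates quoted above. Assuming the standard 1,024-bit HBM bus:

```python
# Back-of-the-envelope peak bandwidth per HBM stack (illustrative).
# Assumes the standard 1,024-bit HBM interface; per-pin speeds are
# the figures quoted in the discussion above.
BUS_WIDTH_BITS = 1024

def hbm_stack_bandwidth_gb_s(pin_speed_gbps: float) -> float:
    """Peak bandwidth in GB/s for one HBM stack at a given per-pin rate."""
    return pin_speed_gbps * BUS_WIDTH_BITS / 8  # convert bits to bytes

print(hbm_stack_bandwidth_gb_s(3.2))  # HBM2E at 3.2 Gb/s/pin -> 409.6 GB/s
print(hbm_stack_bandwidth_gb_s(4.0))  # HBM3 target 4.0 Gb/s/pin -> 512.0 GB/s
```

At four stacks per package, as mentioned later in this discussion, that puts aggregate memory bandwidth well past 1.5 TB/s.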

Lalwani: HBM2E will be the mainstream for 7nm AI types of applications. Everything in development today is HBM2E, which means the IP already needs to exist. We have the PHYs in silicon and test chips.

Shiah: If you look at the AI problem, people want faster training times and they want more accurate models. With faster training time with AI, with memory being directly related to performance, the faster the HBM is, the faster the training. To get more accurate training models, people are using more and more layers and deeper networks, and deeper networks require more memory.
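A back-of-the-envelope sketch (our illustration, with made-up layer widths) of why deeper networks need more memory: each added layer contributes its weights, plus activations that must be held for backpropagation during training.

```python
# Illustrative only: rough training-memory footprint of a toy dense network.
# Layer widths and batch size are hypothetical; FP32 (4 bytes/value) assumed.
BYTES_PER_VALUE = 4

def training_memory_mb(layer_widths, batch_size=256):
    """Approximate MB for weights plus activations during training."""
    weights = sum(a * b for a, b in zip(layer_widths, layer_widths[1:]))
    activations = batch_size * sum(layer_widths)  # retained for backprop
    return (weights + activations) * BYTES_PER_VALUE / 1e6

shallow = [4096] * 4    # 4 layers
deep = [4096] * 16      # 16 layers at the same width
print(training_memory_mb(shallow))  # a few hundred MB
print(training_memory_mb(deep))     # roughly 4x more
```

The point of the sketch: depth multiplies both weight and activation storage, which is why faster, denser memory maps directly to faster, more accurate training.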

SE: What happens with 5G? Due to signal attenuation with millimeter-wave devices, these devices will be almost constantly searching for signals, and on the base station side they will have to bend signals around objects. Where does advanced packaging fit into this picture?

Lalwani: On the base station side, you need the infrastructure first.

Ng: Yes, that’s essential before there are any benefits from this technology.

Lalwani: That is full-on operational intensity. Everybody is supporting 5G, and it’s a very intensive ramp. We’re talking about four HBM stacks, reticle-size dies, 500 watts of power management in the modules, board-level reliability issues, and component-level reliability issues. It’s like changing a tire while you’re driving 100 miles per hour. There’s a lot of learning going on.

Kulkarni: Another problem is radiation management. There are problems with the MIMO and beamforming technologies and the chips, but increasingly the radiation is getting out of control. Signals are interfering with each other, and many countries now have compliance efforts underway for very strict 5G radiation standards. Belgium just announced one. Several other countries in Europe say radiation has to be limited. That means base station spacing, and how the antennas emit from those base stations, are being regulated. There will literally be a base station every 100 meters. For beamforming you need that many stations.

SE: These are smaller base stations, though, right?

Kulkarni: Yes. They’re on top of roofs and street lights. But every spring almost all of these base stations will get obstructed by leaves. It’s a very difficult problem. With a beam, one antenna will not do the job, so they will shower the area with radiation from multiple antennas. Electromagnetic radiation is just the beginning. This is just at the chip level. Then there is package-level, looking at the electromagnetic interference and radiation patterns. This is a huge problem for us and for our customers.

Cheung: You can’t do the infrastructure without packaging. Because it’s millimeter wave, you don’t need a huge antenna. So we’re working a lot with antenna-on-package.

SE: Is it one antenna or multiple antennas?

Cheung: It’s an array.

SE: Doesn’t that make it hard to test?

Cheung: Yes, and we’re working with ATE vendors on this. To develop this at a compelling cost point, it has to use the existing silicon infrastructure. That means wafer-level, package-level testing. We’re working with ATE vendors to figure out a test solution. We have to solve all of the over-the-air test issues and the EMI test issues, and it has to be done at a reasonable cost. That’s the issue that we’re wrestling with now.

Kulkarni: To that you can add all of the cyberattacks involving EMI/EMC (electromagnetic interference/compatibility). During the window when the chips are decrypting a received signal, analyzing it and transmitting—less than 1 millisecond—the chips are exposed to hacks. Many big companies have allocated huge sums to solve this problem.

SE: Isn’t this the same problem as cars face with over-the-air updates?

Kulkarni: Yes. All of these modules are completely exposed to cyberattacks, from IoT nodes to 5G to ADAS. These devices are continuously radiating, and bad actors can watch the EMI or thermal/IR signatures, for example. It’s very easy to mount side-channel attacks.

SE: One of the advantages initially cited for advanced packaging was the ability to eliminate branch prediction and speculative execution by using more heterogeneous processing elements to achieve sufficient performance. That doesn’t seem to be the case anymore, right?

Kulkarni: Some of the chiplets people are talking about can be attacked through their dynamically changing signatures. Packaging will be essential for adding coatings that prevent IR or EMI attacks. The chips can still be attacked while they are exposed, though.

Cheung: There are so many outside influences, it’s not clear how to manage the EMI shielding, the package, and your transceiver and baseband chip. You also have to shield everything from other chips on the motherboard.

SE: What’s happening in packaging in automotive? These devices are supposed to last for 18 years with no defects. Is that even possible?

Cheung: It’s possible. It’s just a matter of cost. In the automotive world, you can’t use the same materials—the molding material, the underfill, the dielectric materials on the substrate. All of that has to withstand the automotive demands. Unfortunately, those companies also want consumer costs.

Kulkarni: They just raised the Grade 0 temperature to 175°C from 150°C.

Cheung: We want to separate the chips that go into the engine compartment from those in the passenger compartment, so we don’t have to do such extensive development efforts, which raise the cost. But more and more, it appears automotive is heading toward one standard.

Ng: Our automotive business has grown significantly. In 2017, our revenue from automotive was about 7% of our overall revenue. It’s now 18%, which is still a small percentage, but it’s growing quickly. We have automotive customers engaged at old technologies all the way through to our leading-edge technologies for all types of applications, whether it’s body electronics, chassis electronics or ADAS. We’re seeing opportunities in LiDAR, radar, and just about everything else. The challenges in automotive, where companies traditionally have been more like IDMs, have taken awhile to work through. The customers needed to understand what a foundry does, and we needed to understand what their expectations are for automotive. We’ve gotten to that point. We now have a deeper understanding of what their requirements are. A lot of times we generalize that something is a Grade 2, 1 or 0 application, but it takes another level of discussion as to the specific care-abouts for that application. That’s a discussion we have with our end customer, and which they have with their end customer. The Tier 1s are always in our fabs, always doing audits, and we have a very good relationship with them.

SE: And they’re supportive of advanced packaging, as well?

Ng: Yes, they are. And there increasingly is a line between where they play and where their suppliers play.

Lalwani: When you talk about AI and automotive, that will likely be the inflection point.

Ng: I agree. If you talk to a lot of the AI companies, automotive is one of the big focus points. There are a lot of interesting approaches. There is the safety side of this and the edge computing side. In some cases, there is simply more filtering of the data. But all of them are driving more leading-edge technologies, and packaging is definitely a concern in those applications.

Shiah: We’re having those kinds of discussions, too, because there is more interest in HBM in automotive.

SE: And this leads to a different topic, which is where is all of this computing going to be done? Will it be done on the edge, and if so, what does an edge device look like?

Cheung: An edge device is no different than a CPU or GPU in a server, except that the data crunching is done locally.

SE: But now you have to run some of this off a battery, too.

Cheung: Yes, but you can do a lot on your local office server system, too. This is all about power efficiency, and we’re working on how to minimize the power going to AI chips, the CPUs and GPUs.

Ng: The edge has a very wide definition. Rarely do you find an edge device defined as the end-all, be-all. It’s pretty distributed.

Shiah: Especially for the IoT. It could be something as small as a pocket device, and something as big as a car.

Ng: It’s also an area where everyone is cost-aware. If you want to deploy this in volumes, it had better be affordable to the masses.

Kulkarni: There’s also fog computing. Think about an airport. You need real-time recognition to identify the bad actors, and that requires significant power, not batteries. Compare that to a windmill, or an oil and gas platform in the ocean. You want to get the gas from the well back to the shore. A lot of those operations are changed on the fly due to stormy weather or a moving platform. They make real-time changes with analytics. They’re also doing edge computing locally, and some are doing energy harvesting. So we see a spectrum of edge devices. Sensors are becoming more important, and there are 22 sensors in the world that can do all of the jobs. Those consume different levels of power for different jobs, so they have to be optimized in a system environment, and they have to make calculations that are relevant for different markets. In a windmill farm, you may have one central windmill and 10 other windmills around it. It’s like a master/slave architecture.

Cheung: You need real time in autonomous driving. You can’t wait for a signal to go to the cloud and come back to the car.

Shiah: There are two aspects to this. One is local compute, which in machine learning is the inference part, where you use trained models to come up with desired results. There also is the data-gathering element. All of these edge devices are collecting data, and if you’re not doing something with that data locally, a lot of times you’re sending it back to the data center, which contributes to this exponential growth in data. So there is a need for higher bandwidth and greater density in memory.

Check out part one and part two of this series.


