Straight Talk On 3D TSVs

Still too expensive for consumers, but the benefits in the server market are compelling.


By Mark LaPedus
Semiconductor Manufacturing & Design sat down to discuss 3D device challenges and applications with John Lau, a fellow at the Industrial Technology Research Institute (ITRI), a research organization in Taiwan.

SMD: What is ITRI doing in 3D TSVs?
Lau: At ITRI, we have developed the world's first Applied Materials-based 300mm 3D TSV integration line. The line was completed two years ago. We developed the process from beginning to end. We don't have products. We are demonstrating the feasibility of 3D TSVs.

SMD: What else is ITRI doing in the arena?
Lau: We also have a consortium called Ad-STAC (Advanced Stacked-System and Application Consortium). We have more than 22 members. We develop the necessary technologies for 3D integration. The members are UMC, SPIL, Applied Materials, Brewer, Rambus, Cisco and others. In addition to that, we have some 80 people working on EDA. For 3D, EDA is very critical.

SMD: Where is the industry at with 3D TSVs?
Lau: For me, it’s still very early. You still have to bring the OEMs into the mix. The OEMs may say: ‘Oh, I’m interested.’ Then, you still have to wait three to five years. There are two different kinds of OEMs. One is consumer. 3D TSVs are still too expensive for them. However, it could be a different story for high-end, next-generation servers, networking and test and measurement gear.

SMD: Where is a good starting place for 3D?
Lau: Take the Hybrid Memory Cube Consortium. Several months ago, the group announced they would open up the spec by the end of this year. But that’s only for high-end servers, test and measurement, and networking. It’s for very high performance and not for the consumer. They may adopt 3D. But the Hybrid Memory Cube for mobile products? Come on. Of course, we hope 3D can be for the consumer market. In consumer, there are larger volumes.

SMD: What is the biggest challenge for 3D?
Lau: Cost. The consumer market is cost-driven. For the iPhone 5, the semiconductor bill of materials is less than $30. The ASICs and memory are less than $30. Now take Xilinx’s 2.5D FPGA. The CTO from Xilinx recently gave a keynote at Semicon West. His conclusion was that they need to reduce the cost. A 2.5D FPGA is still costly.

SMD: What are the manufacturing challenges?
Lau: Just making the TSV is no more than 5% of the cost. But if you look at the other steps, you have temporary bonding, back grinding, and others. The biggest issue is thin-wafer handling and temporary bonding/debonding. And then you need to debond it.

SMD: Who should make the TSVs? The OSATs or the foundries?
Lau: Xilinx is using 65nm technology for their 2.5D FPGAs. OSATs like ASE don't have 65nm technology. If they did, they would become another foundry. The OSATs should not make the TSVs. But for a dummy piece of silicon like an interposer, where the line widths are 3 microns and above, the OSATs can do that. Last year, Amkor said that they are not going to invest a penny to make TSVs. That's the right direction.

SMD: Why is Wide I/O memory generating so much interest?
Lau: Memory bandwidth. Bandwidth is defined as the amount of data transferred per second. Typical dynamic random access memory has 4-, 8-, 16-, or 32-bit data width to communicate with CPU/logic/SoC and/or the outside world. These are called ×4-, ×8-, ×16-, or ×32-bit I/O. Wide I/O is defined as ×512-bit I/O or 512-bit data width or greater.

SMD: So memory bandwidth is the name of the game?
Lau: The memory bandwidth is proportional to the memory I/O data width. For instance, the DDR3-1600 chip has a speed rating of 1600 Mb/s per I/O. If this DDR3-1600 chip has a ×32-bit I/O data width, the chip would have a total memory bandwidth of 32 × 1600 = 51,200 Mb/s, or 51.2 Gb/s. The larger the data width, the larger the memory bandwidth.
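The arithmetic above can be sketched in a few lines of Python. This is an illustrative helper, not anything from the interview; the function name and units are my own, using the interview's figures (×32-bit width, 1600 Mb/s per I/O pin).

```python
# Memory bandwidth = I/O data width (bits) x per-pin data rate (Mb/s).
# Illustrative helper; figures taken from the interview's DDR3-1600 example.
def memory_bandwidth_gbps(data_width_bits, rate_mbps_per_pin):
    """Total memory bandwidth in Gb/s for a given I/O width and per-pin rate."""
    return data_width_bits * rate_mbps_per_pin / 1000.0

# A x32 DDR3-1600 chip at 1600 Mb/s per I/O:
print(memory_bandwidth_gbps(32, 1600))  # 51.2 (Gb/s), matching the 51,200 Mb/s figure
```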

SMD: So where’s the bottleneck?
Lau: The data width is limited by IC packaging technology. With TSV technology, which provides very small via sizes (5- to 10-μm sizes are common) and pitches (20- to 40-μm pitches are common), a much wider I/O data path, such as a 512-bit data width, becomes feasible. Wire-bonding technology, on the other hand, has pad sizes and pitches that are many times larger than those of TSV. To achieve a 512-bit data width with wire bonding, the chip size, and thus the cost, would have to increase substantially. This is why TSV is so attractive for memory bandwidth. If we run TSVs through a four-DRAM stack with a ×512-bit data path, the same DDR3-1600 chips would deliver a total memory bandwidth of 102.4 GB/s. Of course, this DRAM stack has to interconnect to the logic/SoC in order to get this bandwidth.
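The same formula, applied to the Wide I/O case and converted from bits to bytes (divide by 8), reproduces the 102.4 GB/s figure. Again, this is a sketch; the helper name is illustrative.

```python
# Same bandwidth formula as before, but reported in GB/s (bytes, not bits):
# width (bits) x per-pin rate (Mb/s) / 1000 -> Gb/s, then / 8 -> GB/s.
def memory_bandwidth_gbytes(data_width_bits, rate_mbps_per_pin):
    """Total memory bandwidth in GB/s for a given I/O width and per-pin rate."""
    return data_width_bits * rate_mbps_per_pin / 1000.0 / 8.0

# A x512-bit data path at the DDR3-1600 rate of 1600 Mb/s per I/O:
print(memory_bandwidth_gbytes(512, 1600))  # 102.4 (GB/s), as quoted in the interview
```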
