IP To Meet 2.5D Requirements

Widespread use of 2.5D is still at least a year away from being a viable option, but specific technical requirements are being identified along the way.

The semiconductor industry is still in the early stages of evolution in the realm of 2.5D, but when these devices do come out, the IP used on them will have to be brand new, according to Javier DeLaCruz, senior director of engineering at eSilicon.

“The IP causes the biggest risk that you’re going to have in this implementation,” he said. “Everything else in here for making those ASICs in a 2.5D structure is really not that different—same thickness of silicon; same processes; your bumps are smaller. But now you’re dealing with IPs that are different, made just for this.”

What will be different about IP for 2.5D designs is that the pitch is much smaller and the interfaces are much more parallel, with a memory being 1,024 bits wide as opposed to 128 bits wide, DeLaCruz said.
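
The tradeoff DeLaCruz describes can be sketched with simple arithmetic: a very wide interface can deliver more aggregate bandwidth even at a much lower per-pin rate. The bus widths below come from the article; the transfer rates are hypothetical round numbers chosen only to illustrate the point, not real device specs.

```python
# Illustrative only: compare aggregate bandwidth of a narrow, fast
# DDR-style interface with a wide, slower Wide I/O-style one.
# Widths are from the article; the data rates are hypothetical.

def bandwidth_gbps(width_bits: int, rate_mtps: float) -> float:
    """Aggregate bandwidth in Gb/s: bus width x transfers per second."""
    return width_bits * rate_mtps / 1000.0

# Narrow, fast: 128 bits at a hypothetical 3200 MT/s
narrow = bandwidth_gbps(128, 3200)
# Wide, slow: 1,024 bits at a hypothetical 1000 MT/s
wide = bandwidth_gbps(1024, 1000)

print(f"128-bit  @ 3200 MT/s: {narrow:.0f} Gb/s")
print(f"1024-bit @ 1000 MT/s: {wide:.0f} Gb/s")
```

Even running at less than a third of the per-pin rate, the 1,024-bit interface moves well over twice the data, which is why it only becomes practical once an interposer makes that many connections routable.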

Memory IP providers are all working on this, he said, because “everyone knows there will be no DDR5. DDR ends at DDR4. The guys doing high-density memory are a special breed. They need to adapt or their market is going to go in the wrong direction, so they are all chasing the Wide I/O memory opportunity.”

Kevin Yee, product marketing director in the IP Group at Cadence, said that from a controller standpoint the IP is not going to be any different; rather, for 2.5D memory it will reflect the requirements specific to that space.

“We’re not designing the IP any differently; there are just different requirements for the PHY because the HBM (high-bandwidth memory) is very wide. Typically a lane is 128 bits for a single channel, and you can have many channels. Right now you’re going through the interposer. From an IP perspective we really don’t do that much different. Just like with every technology – PCI-E, USB, etc. – they have their specs and their requirements. HBM has its own specs and requirements and we design to the HBM spec. The fact that it’s going through an interposer really doesn’t change the way that we’re going to be developing the IP. It’s just another spec requirement.”

At the same time, Yee pointed out, “because you’re dealing with an interposer, it is slightly different from a system integration standpoint. You have so much interconnect right now and it’s going through an interposer. Essentially an interposer is just another board. It’s slightly different than putting a package directly onto a board. The microbumps can be on the order of 10 micrometers – very small, whereas on most boards, the I/Os are anywhere from 25 to 40 micrometers in size. Now you’re talking about 10, so yes, the requirements are stricter. But from an IP perspective, it’s a spec that we deal with all the time anyway.”
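
The practical effect of the tighter pitch Yee mentions is a large jump in how many connections fit under a die. A quick back-of-the-envelope sketch, using the pitch figures from the quote and a hypothetical 5mm die edge for illustration:

```python
# Illustrative only: how many I/O bumps fit under a die at a given bump
# pitch, assuming a full square grid. The pitch values echo the quote
# above; the 5 mm die edge is a hypothetical example size.

def bumps_in_area(edge_um: float, pitch_um: float) -> int:
    """Bumps in a full square grid of the given pitch under an edge x edge die."""
    per_side = int(edge_um // pitch_um)
    return per_side * per_side

die_edge_um = 5000.0  # hypothetical 5 mm x 5 mm die
for pitch in (40.0, 25.0, 10.0):
    print(f"{pitch:>4.0f} um pitch -> {bumps_in_area(die_edge_um, pitch):,} bumps")
```

Going from a 40µm to a 10µm pitch multiplies the available connections by sixteen, which is what makes the very wide interfaces discussed earlier feasible through an interposer.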

To deal with the integration issues, Cadence and others are working much more closely with packaging companies such as Amkor and ASE.

Limitations of 2.5D
Taking a step back, Ravi Varadarajan, an Atrenta fellow, noted that 2.5D is an intermediate step on the path to true 3D heterogeneous integration with multiple dies stacked on top of each other. “2.5D is the ability to stack existing dies on top of a passive interposer – it could even be an active interposer die. You could stack multiple dies on top of an interposer die, which is acting primarily as an interconnect medium to connect signals between the dies. Ideally, you could have these dies that are stacked on top of the interposer in heterogeneous technologies – you could have one that is in 28nm, you could have an IP or a die in 40nm technology stacked on top of an interposer die, which is also a completely different technology node.”

The limitation of a 2.5D architecture becomes apparent when an entire die has to communicate with the interposer through bonding pads and bump pins.

“There is going to be a physical structure that you’re going to fuse to the interposer die,” Varadarajan explained. “You’re going to account for that in the building of the IP, which is not the case if the entire SoC was on one single monolithic die. That comes into the picture and becomes a major factor in evaluating the cost tradeoff, because of the technology today: the microbumps are much wider in pitch. Even though you might have a 16nm technology, the microbump itself will have to be significantly larger in terms of dimensions and pitch. They’re going to take a sizeable area of the IP. This is where the whole notion of pathfinding becomes very important to analyze whether it even makes sense. If you have an architecture in mind and you’ve decided to go to a 2.5D implementation, deciding whether you want four dies on top of an interposer or eight dies on top of an interposer is a tradeoff that you need to be able to make fairly accurately, because that’s going to have a major impact on the cost.”

Test issues arise
Again, while the IP itself will not be that different for 2.5D, the testability of that IP is another issue that needs to be accounted for in a 2.5D design.

“In cases where the IP is standalone and not necessarily part of an SoC, going into a 2.5D system the IP does have its own testing requirements – primarily known-good-die requirements, among others, that the IP now has to meet,” said Sunil Bhardwaj, director of strategic relationships at Rambus. “Specifically, in a system in which the IP is standalone on a 2.5D interposer, it would require a lot more self-test built into the IP, as opposed to relying on the SoC to provide a lot of the test stimulus. As the IP becomes discrete and goes into a 2.5D system, the company creating or owning the 2.5D device would like every die that goes into that system to be proven good. If it is not embedded – if it is a discrete chip – the best way to do that is to build in self test.”
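
The core of a memory built-in self test is typically a march algorithm that sweeps the address space writing and reading back patterns. The following is a minimal software model of a simplified march-style test against a fake memory, purely to illustrate the idea; real BIST is a hardware state machine inside the IP, and the `StuckAt1` fault model here is a hypothetical stand-in for a defective cell.

```python
# Illustrative only: a simplified march-style memory test in software.
# Real memory BIST is implemented as hardware inside the IP; this just
# models the algorithm against a word-addressable memory.

def march_test(mem: list) -> bool:
    """Run a simplified march test; True if the memory behaves correctly."""
    n = len(mem)
    # Element 1: ascending sweep, write 0 everywhere.
    for a in range(n):
        mem[a] = 0
    # Element 2: ascending sweep, read 0 then write 1.
    for a in range(n):
        if mem[a] != 0:
            return False
        mem[a] = 1
    # Element 3: descending sweep, read 1 then write 0.
    for a in reversed(range(n)):
        if mem[a] != 1:
            return False
        mem[a] = 0
    # Element 4: final ascending read of 0.
    return all(mem[a] == 0 for a in range(n))

good = [None] * 16          # healthy memory model
print(march_test(good))     # -> True

class StuckAt1(list):       # hypothetical fault: one cell stuck at 1
    def __setitem__(self, i, v):
        super().__setitem__(i, 1 if i == 5 else v)

bad = StuckAt1([None] * 16)
print(march_test(bad))      # -> False
```

The ascending and descending sweeps are what let march tests catch address-decoder and coupling faults as well as simple stuck-at cells, which is why they are the usual basis for the self-test logic Bhardwaj describes.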

Owning 2.5D yield
Due to the issues noted above, as well as other interdependencies of 2.5D, the design, manufacturing and packaging ecosystem is tightening up. “With the 2.5D and 3D, the foundries are getting involved, the packaging guys are getting involved, the IP companies are getting involved and the developers are getting involved. That whole ecosystem has to work more closely together to make sure everything works together,” said Cadence’s Yee.

An outstanding issue is which party in the ecosystem will take responsibility for the yield.

“We’re still working that out, but to some degree there is a lot of co-development and partnership going on, and it depends on the end customer and who is going to be offering what services. To a great extent, yield is always going to be part of the foundry’s responsibility,” he added.

DeLaCruz agreed that who owns the yield for 2.5D is a difficult issue. “From our standpoint, because it’s our business, we will have to own yield for devices that we design, and we’re having to vet these technologies. It’s not about getting to 100% yield. The first step is getting to a very well-known yield, and then we’ll be able to move it to a higher point.”