3D design opens up architectural possibilities for engineering teams to realize much better performance and far less power consumption.
By Ann Steffora Mutschler
Low-Power High-Performance Engineering sat down with Robert Patti, chief technology officer at Tezzaron Semiconductor, to discuss future challenges in 2.5D and 3D design, including the tradeoffs involved and technical issues specific to 3D. Tezzaron is currently working on 3D designs.
LPHP: What is the starting point technically to achieve the greatest power savings in 3D?
Patti: It varies. Generally what I would do first is look at whether there are specific process separations that can help me out. For instance, if I separated the memory from the logic, or the analog from the logic, does that allow me to use a process that is just fundamentally better? The next step beyond that is to look at the structure in 3D to improve proximity. Am I building some structure in 2D where I end up spreading things apart just because everything wants to be in the same spot? You end up with a lot of congestion. Then I look at how I can move this into 3D space rather than 2D and probably help myself with the congestion. It also allows me to bring some things closer together that I maybe couldn't before.
This might be how you deal with memory blocks or caches. A lot of times those are big blocks and you would like them in the middle of everything, but you can't put the cache in the middle of everything because it spreads too much stuff out. But in 3D you can fold it underneath the other circuitry. Those are a couple of the things I initially scout out, and they tend to open up avenues you hadn't looked at before. Sometimes you have to step back further than that and look at what you are trying to accomplish.
Sometimes you need to start with a clean sheet of paper. We did that with our DRAM, and we recognized that if we did the process separation we could gain a lot of speed in the device. That speed could be used to reduce the number of circuits performing some of the operations, because we could run them faster. We also could run them at lower voltages, so we got more power efficiency there. It also allowed us to reduce the array sizes in the memory. The array size reduction improves repairability, but it also improves power because the rows and columns are shorter. And it's a very different design at the end of the day, because we started with a fundamentally different architecture. A lot of the miraculous gain from 3D is going to come from architectural changes.
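As a rough first-order illustration of why shorter rows and columns and lower voltages both help (this is the standard dynamic switching-power relation, not a model Patti quantifies in the interview), the power spent charging the array's wordlines and bitlines scales roughly as

P_{dyn} \approx \alpha \, C \, V_{DD}^{2} \, f

so smaller arrays cut the line capacitance C and reduce power linearly, while any drop in supply voltage V_{DD} that the separated process permits pays off quadratically.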
LPHP: Do all of the challenges with 3D design apply to 2.5D designs?
Patti: They are similar. Generally speaking, the big difference between 2.5D and 3D is the amount of interconnect and the level of efficiency. To first order, 3D improves your I/O power by a factor of about 1,000; 2.5D improves it by about a factor of 50 to 100. So 2.5D gives you quite a bit of improvement, but it is limited in how much you can break things apart and segregate the functions. In 3D we literally can have millions of vertical interconnects. In 2.5D, I am personally stunned by Xilinx's success with the number of microballs they have and that they all work. That's phenomenal. I think that's about where the edge of the world is at 2.5D. You're talking about tens of thousands, not millions. For a lot of applications 2.5D may be the low-hanging fruit, provided you get enough to cost-justify it. It's a complicated formula, and that's one of the impediments of 3D. You have yield tradeoffs, processing cost tradeoffs and much more complicated performance-power tradeoffs, and it requires a substantial effort.
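To make those factors concrete, here is a minimal back-of-the-envelope sketch in Python. Only the improvement factors (roughly 1,000x for 3D and 50x to 100x for 2.5D) come from Patti's comment; the 10 pJ/bit baseline for conventional off-package I/O and the 256 Gbit/s aggregate bandwidth are assumed, illustrative numbers.

# Back-of-the-envelope sketch of the I/O power factors cited above.
# Only the improvement factors come from the interview; the baseline
# energy per bit and the bandwidth are assumed, illustrative values.

BASELINE_PJ_PER_BIT = 10.0      # assumed conventional off-package I/O energy
BANDWIDTH_GBPS = 256            # assumed aggregate I/O bandwidth, Gbit/s

improvement_factors = {
    "2D (baseline)": 1.0,
    "2.5D interposer (low)": 50.0,
    "2.5D interposer (high)": 100.0,
    "3D stack": 1000.0,
}

for name, factor in improvement_factors.items():
    pj_per_bit = BASELINE_PJ_PER_BIT / factor
    watts = pj_per_bit * 1e-12 * BANDWIDTH_GBPS * 1e9
    print(f"{name:>22}: {pj_per_bit:7.3f} pJ/bit -> {watts * 1e3:8.2f} mW")

Under those assumptions the baseline link burns about 2.6 W of I/O power, the 2.5D cases land in the tens of milliwatts, and the 3D case a few milliwatts, which is why the amount of interconnect segregation matters so much.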
We have really been spoiled because [the industry has] been doing it kind of the same way for 30 years, where we just do the shrinks. We kind of know how to do it, we have back-of-the-envelope calculations, and we know pretty closely how that next generation is going to work for us. 3D, because it gives us a whole bunch of new knobs to turn, unfortunately makes it a much more complicated picture, and I think what scares a lot of companies is that they quickly realize this isn't going to be a back-of-the-envelope calculation. They are going to spend some real man-weeks and man-months to figure out what an optimized solution is, and it's not going to end up being a sound bite to give the president of the company in six months. It's going to be a series of tradeoffs with different, subjective parameters around them: 'I think we'll get better yield, and we expect to get this kind of power savings, and we believe we can deliver it for this price.' There just isn't the built-up experience for somebody to go and say, 'We've done it like this before and we know the answer was x.'
We have more than 100 customers we work with on 3D. The majority of them are people who are doing their experiments. They think they might have some ways of saving money and they want to see how well it works. They want to see how reliable the stuff is, so many of the parts we're building for people will never see the light of day in a consumer product. They are probing the boundaries to see how well it goes. The vast majority of people who have done it have been pretty happy with what they've seen, but it's a way for them to experiment, and they can do things for $100,000, $200,000 or $300,000. Maybe some of the really complicated ones might be $1 million, but they are trying to do small-scale programs. A lot of them are large companies partnering with universities; they try stuff and we put it together for them. It's really to get their feet wet and have some data, so when they go in to have the discussion of the next $50 million or $100 million turn of silicon and they want to suggest doing some of these things, it doesn't seem quite so farcical.
LPHP: Have all of Tezzaron’s 3D designs been logic and memory stacking?
Patti: Many of ours are logic on logic. Maybe a third are memory on logic, a third are logic on logic, and the remainder are sensors and mixed-signal.
LPHP: In a 3D stack, how many layers are there?
Patti: From a practical standpoint, we do two to seven or eight layers, and even that is done as a stack of stacks: two different stacks that we bond together. For a pure 3D assembly of a single stack, the most we support today is five layers. At some level that's an arbitrary limit. We are planning to go up to as many as 17 layers over the next couple of years, and the magic of that number is 16 layers of memory on one controller. And there have been some experimental things in the seven-to-eight-layer range that I'm aware of. That was done with one of our partners.