Experts At The Table: What’s Next?

Problems with power at 22nm; getting heat out of a stacked die and SiP; limited promise of new materials.

Low-Power Design sat down with Leon Stok, EDA director for IBM’s System & Technology Group; Antun Domic, senior vice president and general manager of Synopsys’ Implementation Group; Prasad Subramaniam, vice president of design technology at eSilicon; and Bernard Murphy, chief technology officer at Atrenta. What follows are excerpts of that conversation.

LPD: What will happen with verification at 22nm?
Domic: I am more concerned about the complexity of a very large design at 22nm. That will be 500 million cells, meaning 2 billion gates. It’s much larger.
Murphy: Managing the power will be very significant. You can’t just gate clocks anymore. You’ve got to be looking at switchable power domains. You need local process monitors to do PVT compensation. You need body biasing. There are lots and lots of controls.
Subramaniam: There is a limited set of applications that can take advantage of all the real estate on a piece of silicon, and I suspect there is going to be a lot of memory. But how much random logic can you put in 500 million transistors?
Domic: There was talk when we got to 100,000 transistors that we would not know how to use them. I believe people will find applications. I think the complexity will be the main hurdle. The amount of data is enormous.
Murphy: But how much of it really needs to be at 22nm? If you have some really high-speed memory on one die and you have some analog functions on another die and logic functions on another die, do you really need to put everything in 22nm?
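Murphy’s list of power controls above is worth unpacking. A switchable power domain cannot simply be turned off; isolation, state retention and the power switch itself have to be sequenced. The following Python sketch is purely illustrative (the PowerDomain class and its method names are invented for this example, not part of any real design flow) and models that sequencing:

```python
# Illustrative sketch of the sequencing a switchable power domain needs.
# All names here are hypothetical, invented for this example.

class PowerDomain:
    def __init__(self, name):
        self.name = name
        self.powered = True

    def power_down(self):
        # Order matters: isolate outputs first so downstream logic never
        # sees floating values, then save state, then cut power.
        self.assert_isolation()
        self.save_retention_state()
        self.open_power_switch()
        self.powered = False

    def power_up(self):
        # Reverse order on wake-up: restore power, restore state,
        # then release isolation.
        self.close_power_switch()
        self.restore_retention_state()
        self.release_isolation()
        self.powered = True

    # In silicon these would drive isolation cells, retention registers
    # and header/footer power switches; here they just log the step.
    def assert_isolation(self): print(f"{self.name}: isolation on")
    def release_isolation(self): print(f"{self.name}: isolation off")
    def save_retention_state(self): print(f"{self.name}: state retained")
    def restore_retention_state(self): print(f"{self.name}: state restored")
    def open_power_switch(self): print(f"{self.name}: power off")
    def close_power_switch(self): print(f"{self.name}: power on")

if __name__ == "__main__":
    dsp = PowerDomain("dsp_domain")
    dsp.power_down()
    dsp.power_up()
```

The point is the ordering: isolation goes on before power comes off and is released last on wake-up. Multiply that control logic across dozens of domains, monitors and bias generators and it is exactly the kind of complexity the panelists expect at 22nm.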

LPD: You’re talking about a stacked die or system in package?
Murphy: Yes. But increasingly a lot of interest is moving toward the stacked die.
Domic: If I have a function that is working perfectly at 90nm and I can put that inside a package next to a large piece of logic that is at 32nm or 28nm, that’s certainly an attractive solution. If the economics of manufacturing the package and integrating the pieces are good, there’s no reason to change. Analog is always at an older node because they need stability in the process to make it work.
Stok: It’s partially a function of cost and partially a function of power. You cannot get more heat out of a 3D stack than you can out of a 2D stack, and that’s the problem right now. Anything that is limited by heat dissipation will have big trouble going to 3D.
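Stok’s heat argument reduces to simple geometry. Assuming, for illustration, N stacked dies each dissipating \(P_i\) watts over a shared footprint of area \(A\), the heat flux the package must remove is

$$ q'' = \frac{\sum_{i=1}^{N} P_i}{A}. $$

Stacking adds terms to the numerator while the denominator stays fixed: two hypothetical 50W dies on a 1 cm² footprint present 100 W/cm² to the heat sink, double what either die would present alone.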

LPD: At future nodes, does the design also require application software because of the complexity?
Domic: The majority of engineers being hired these days are software engineers. We’re not even talking about large systems companies like IBM or Cisco. These are semiconductor companies. They have to give their customers more than just a piece of silicon. The industry will look at hardware-software co-verification, but so far we’re really not on the software design side.
Murphy: We’re definitely being told that application software verification is, by far, the biggest problem. Hardware validation is being rapidly overtaken by software validation, to the extent that almost all the cost is going there now. How do you verify that your MP3 player is functioning correctly while you’re streaming data over your Internet connection? That’s something you can’t prove with Verilog, SystemVerilog, SystemC or any of those levels. You have to run the software.
Subramaniam: There’s also the issue of manufacturing test. When you have such a large design, the existing testers do not have the capacity. We will need all kinds of compression techniques, or we will have to do multiple passes on the tester.
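Subramaniam’s tester-capacity point can be quantified with a rough scan-test model. Assuming, purely for illustration, a design whose longest scan chain holds \(L\) flops, tested with \(N\) patterns at shift frequency \(f\), the shift-dominated test time is roughly

$$ T_{\text{test}} \approx \frac{N \times L}{f}. $$

With assumed values of \(N = 20{,}000\) patterns, \(L = 50{,}000\) flops and \(f = 50\) MHz, that is about 20 seconds per die, and the full pattern set also has to fit in tester memory. On-chip decompressors that fan a few tester channels out to many short internal chains cut \(L\), and the stored data volume, by the compression ratio, which is why compression or multi-pass testing becomes unavoidable at this scale.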

LPD: One idea being kicked around inside places like IBM is to limit the functionality on a chip. Will that approach work?
Stok: This ties into the overall theme of making things simpler. The people who will win will find the right mixture and mechanisms to do this. Standard cells are much more restrictive than custom designs, but they have carried the industry for many years and they will carry it for many years forward. We never got to the next level of abstraction where, no matter how you tie it together, it works. If that were true, you would never have any problems with your Windows machine. The only way we are going to be able to build more complex systems at a reasonable cost is by having things that are pre-verified and guaranteed so that when you put them together they will work.
Domic: IP blocks have been used for that, and they will be used more in the future—but for very, very well-defined applications. This means PCI or USB. But I’m not sure that changes the paradigm very significantly. The other idea is to move up from RTL to a behavioral/transaction-level type of C. We have been successful with simulation at that level, but we have not been very successful in using it for actual hardware implementation. If you look at logic synthesis, there were internal efforts at companies like IBM, which were using this kind of technology internally before it became commercially available. I have not seen an equivalent of that at a higher level that can be commercialized.
Murphy: I would argue that an FPGA is that kind of solution. It’s a different solution. You’re not synthesizing from a language. But an FPGA, by definition, is a proven piece of silicon that is re-targetable for a number of applications. LSI tried RapidChip, which wasn’t the right solution, but there might be something out there that will produce re-targetable chips rather than re-targetable IP.
Subramaniam: I think there’s a little bit of a conflict between what you’re suggesting and what is economically feasible in a 22nm technology. Rather than come up with 10 different designs, if I can come up with one design that can be reconfigured for 10 different applications, that makes much more economic sense. With that you have a single chip that targets all of these different applications.
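The economics Subramaniam describes are easy to sketch with back-of-envelope numbers. All figures below are invented assumptions, not published costs; the point is only the amortization structure:

```python
# Back-of-envelope NRE comparison (all figures are illustrative
# assumptions, not published costs).
NRE_PER_DESIGN = 50e6      # assumed 22nm design + mask-set NRE, dollars
RECONFIG_OVERHEAD = 1.5    # assumed NRE penalty for building in flexibility
APPLICATIONS = 10

ten_designs = APPLICATIONS * NRE_PER_DESIGN
one_reconfigurable = RECONFIG_OVERHEAD * NRE_PER_DESIGN

print(f"Ten dedicated designs:     ${ten_designs / 1e6:.0f}M NRE")
print(f"One reconfigurable design: ${one_reconfigurable / 1e6:.0f}M NRE")
# Even a generous overhead for reconfigurability wins on NRE when one
# design start replaces ten; the trade-off shows up instead in per-unit
# area, power and performance.
```

That per-unit trade-off in area, power and performance is where the conversation turns next.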

LPD: What you’re suggesting is a platform, right?
Subramaniam: I’m not sure whether it’s a platform or reconfigurable RTL or an FPGA. But I do believe these chips at 22nm and below are going to have repeatable structures in them that can be re-used and reconfigured.

LPD: Is one solution more embedded functionality?
Domic: Some chips that we call platforms are just that. There are ARM cores and embedded DSP processors and memory, and you leave a small piece of the chip where you put your specific logic. But there are still specific chips you need to design. The problem with FPGAs is that we will need some breakthrough, because the number of transistors and gates is still an order of magnitude higher than in an ASIC.
Subramaniam: The performance we get out of an FPGA is nowhere near the performance of an ASIC.
Domic: All of this has hampered the FPGA market. If you think of things in a package and you are not sure what your protocol or interface will be, you leave that to an FPGA. But what is clear is that the older technologies are surviving for a longer time. Designs at 90nm and 65nm are lasting longer than designs at 180nm and 130nm did.

LPD: And some companies are skipping nodes, right?
Domic: Yes, because you need to put in substantially more to justify the next generation.

LPD: Are there new materials that will help?
Domic: Materials changes are dangerous. At 130nm, copper was introduced and there were some difficult problems. We did much better, as an industry, with high-k dielectrics. Through-silicon vias will require some new materials to make them flexible for applications. But I’m still amazed that CMOS, with all its modifications, has lasted so long.
Stok: There is a continuous search for better dielectrics. There are interesting tradeoffs between mechanical, thermal and electrical, and some of these materials will get to manufacturing. But I don’t think they’re going to change the game that much. They’ll just allow us to have a little bit better electrical properties.
Domic: There is a little bit of a decrease in the reliability of implementation, because if you look at the people doing semiconductor development by themselves—you have Intel, IBM and the Common Platform alliance, and TSMC—the others are significantly smaller.
Stok: We have been able to get there through consolidation, but now we can’t consolidate any further. Three is the magic number. Two is too risky because one might falter, so the industry likes three of everything. There is no one left. It has become incredibly complex. Engineers have to get the next technology out. They need simpler designs so they don’t have to worry about every possibility.


