Second of three parts: Adoption of new technology is simple enough in a PowerPoint presentation, but what gets in the way? The challenges of cooling and testing stacked die; rethinking hardware-software co-design; the growing role of silicon photonics.
Semiconductor Engineering sat down with Sumit DasGupta, Si2; Simon Bloch, Samsung; Jim Hogan; Mike Gianfagna, vice president of marketing at eSilicon (VP of corporate marketing at Atrenta when this roundtable was held). What follows are excerpts of that discussion.
SE: The future of technology isn’t just about technology. It’s about people and regulations, as well. Where are the hurdles and what needs to be solved?
DasGupta: One of the big opportunities is 3D, but there are some technology problems that still have to be solved such as cooling. How do you cool a chip that’s been embedded between two hot chips? The power problem has to be solved. So does the testing problem. What’s a known good die? Test pattern generation will get solved, but it’s not quite there yet. And the business and legal issues have to be solved. Those are more important than the technical issues, so the TSMCs and GlobalFoundries and all the others can play together. Who pays for what and who’s liable for what?
Gianfagna: The analogy of the general contractor is reasonably good. The general contractor on a construction job takes all those risks.
SE: Isn’t that what happened in the PC era?
Gianfagna: Yes. Companies like Dell took the inventory risks. They developed bulk pricing. And they were responsible, as well.
SE: What else will change?
DasGupta: Photonics will be big where the communication is going off-chip. This isn’t intra-chip. The process geometries used for photonics are significantly older than those of the chips being developed. If you’re doing a 20nm chip, you’re not going to do 20nm photonics. Photonics is at 90nm or larger. For inter-chip communication that’s very useful, though, because a lot of the latency comes when you’re going outside the chip. There’s real potential for improving performance. One scientist told me that to get to exascale computing, they must have inter-chip photonic communication.
SE: Doesn’t that help with heat issues, as well?
DasGupta: There are all kinds of advantages—noise immunity, thermal issues, the way you can multiplex signals through the same waveguide. There are incredible opportunities with this technology. The key is to get the infrastructure for production and design ready.
Gianfagna: Do you see this interconnect technology as an alternative to 3D? Part of the attractiveness of 3D is shortening the wires and improving the throughput using good old-fashioned copper.
DasGupta: It won’t replace I/O in the stack, but what about when the logic chip is going out of that stack? In fact, the company I mentioned is looking at 3D in one chip, and photonics to the other chip.
Bloch: One of the big opportunities I see is with the compute system and a new way to design it. Today, big compute systems are all heterogeneous multicore. One CPU core is not enough because you can’t beef up frequency anymore, so to get better throughput you add more parallel cores. But that’s for very generic tasks. For very specific tasks, such as video processing, you throw in another very specific core. If you want to process imaging, you put in an image processor. You end up with an array of cores. Some of them are interchangeable. GPUs can do a lot of data processing, and data processors can do a lot of graphics, as well. There is this whole notion of workload management behind that. But in a way you’re wasting a lot of resources, because keeping them optimally utilized for workload and power management is very difficult. That’s why we’re talking about new compilers that compile code onto multiple resources. We’re talking about complex multicore operating systems. These problems are not solved, and they’re only going to get worse. Software-defined paradigms treat hardware as a block in the design. It’s the same as adopting agile design practices in software. We want to adopt agile development processes that include both hardware and software. The intent is that you should be able to redefine your hardware in an iterative process. Hardware shouldn’t be something you make and then hand to software people. It should be software-defined, so you continuously rebuild the hardware and software together. That requires completely different design languages, simulation methods and design methodologies.
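As an editorial illustration of the workload-management problem Bloch describes, here is a minimal sketch in Python. It is not any Samsung design flow; the task classes, pool sizes and functions are hypothetical. A few generic workers stand in for CPU cores and a single specialized worker stands in for a fixed-function video unit; the dispatcher routes each task to the best-suited resource, and the specialized unit simply idles whenever no matching work is queued, which is the utilization gap he points to.

```python
# Minimal sketch (hypothetical) of routing a mixed workload across
# heterogeneous resources: generic CPU workers plus one specialized unit.
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    kind: str      # "generic" or "video" -- hypothetical task classes
    payload: int

def run_on_cpu(task: Task) -> str:
    # Stand-in for work a general-purpose core can always accept.
    return f"{task.name}: done on CPU (payload={task.payload})"

def run_on_video_unit(task: Task) -> str:
    # Stand-in for a fixed-function video core: efficient, but only for one task kind.
    return f"{task.name}: done on video unit (payload={task.payload})"

def dispatch(tasks):
    # Two separate pools model two separate hardware resources.
    cpu_pool = ThreadPoolExecutor(max_workers=4)     # generic cores
    video_pool = ThreadPoolExecutor(max_workers=1)   # one specialized core
    futures = []
    for t in tasks:
        if t.kind == "video":
            futures.append(video_pool.submit(run_on_video_unit, t))
        else:
            futures.append(cpu_pool.submit(run_on_cpu, t))
    results = [f.result() for f in futures]
    cpu_pool.shutdown()
    video_pool.shutdown()
    return results

if __name__ == "__main__":
    workload = [Task("decode_frame", "video", 1),
                Task("parse_packet", "generic", 2),
                Task("checksum", "generic", 3)]
    for line in dispatch(workload):
        print(line)
```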
SE: Sorting through these comments, it sounds as if we’re working too hard in the wrong places. But the silos that create these issues are self-sustaining and self-propagating.
Gianfagna: That’s certainly part of the problem.
DasGupta: There are companies working on both the right things and the wrong things, but they are emphasizing what they believe they need to do to get to the future. You need to have some initial thinking done before you engage more people. But if you go back to the issue of reconfigurable computing and you’re talking about multiple resources, whether it’s a CPU or a GPU or a graphics engine or a video processor, ultimately there has to be underlying technology that supports it all. I’m not a big believer that anyone can write code blindly and a miracle compiler can sniff out the parallelism. It’s a happy thought, but it doesn’t produce optimal code. There has to be fundamental research that creates parallel programming models and languages. Until we get there, where people think in a parallel way using a parallel methodology, it’s just not going to happen. The entire infrastructure has to change, including the human thought process.
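To make DasGupta’s point concrete, here is a minimal sketch in Python (the function and data are hypothetical stand-ins): the parallel version is not discovered by a “miracle compiler.” It exists only because the programmer decided the calls are independent and said so explicitly by choosing a parallel construct, in this case a process pool.

```python
# Minimal sketch: parallelism expressed explicitly by the programmer,
# not inferred automatically from serial code.
from multiprocessing import Pool

def simulate_block(seed: int) -> int:
    # Independent, embarrassingly parallel unit of work (hypothetical).
    total = 0
    for i in range(100_000):
        total = (total + seed * i) % 1_000_003
    return total

if __name__ == "__main__":
    seeds = list(range(16))

    # Serial version: correct, but uses one core.
    serial = [simulate_block(s) for s in seeds]

    # Parallel version: same result, but only because the programmer
    # knows these calls are independent and expresses that in the code.
    with Pool(processes=4) as pool:
        parallel = pool.map(simulate_block, seeds)

    assert serial == parallel
    print("results match:", serial[:4], "...")
```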
Gianfagna: So we have 3D ICs, which are the right thing. We’re working on it. However, I get hate mail from my CFO because we’re doing all this work for advanced customers, and it will probably be a year or two before we see any money back. I’m investing way ahead of the need. Can I do that with five other things? No. These things need a lot of work, but the payoff comes late. That brings up another point. A lot of the technology we use today came out of the space program. We need someone to fund this work in advance of the need, because if we wait for the market to demand it, the world is going to evolve at a much slower pace than we need.
Bloch: When you think about where logic synthesis came from, it was GE. Many of these technologies have a very significant research and incubation period, often years. Unless it’s a Bell Labs or a Samsung, companies can’t afford it. Some of this incubation in the past was done in universities. Today, not a lot of people are attracted to it because there are so many other areas that can be explored. Twenty years ago, EDA research was the cream. It’s not that way anymore.
DasGupta: Maybe the vision we have is a little slanted. Europe is hurting, but look at Imec. Look at some of the research in Korea, China and Japan. The space program was a shared responsibility; the government was the banker of last resort. Look at Sematech. It was the savior of the semiconductor industry. We need to continue with that model of shared responsibility and reward, with the government serving as the banker. That way we can do this leading-edge research so everyone has skin in the game: industry, universities and government. If you can create that partnership and continue with it, there are big rewards to be gained. Imec isn’t the Imec of 30 years ago. It’s in far more areas of future interest than in the past.
To read part one of this roundtable, click here: http://semiengineering.com/experts-table-whats-next-3/