Overcoming The Limits Of Scaling

Experts at the table, part 3: Looking at IP from a system level; the good and bad of advanced packaging; the benefits of machine learning.


Semiconductor Engineering sat down to discuss the increasing reliance on architectural choices for improvements in power, performance and area, with Sundari Mitra, CEO of NetSpeed Systems; Charlie Janac, chairman and CEO of Arteris; Simon Davidmann, CEO of Imperas; John Koeter, vice president of marketing for IP and prototyping at Synopsys; and Chris Rowen, a consultant at Cadence. What follows are excerpts of that conversation.

SE: Can IP be designed for an entire system, and does that change what has to be done architecturally?

Janac: If you are using layers and stacks, you can go all the way from layout into architecture for a particular piece of a chip. It gets used by the architect, by the RTL developer, by the layout person, by the verification engineer, for what is essentially a vertical slice of the chip. That’s the right way of designing IP.

Koeter: I don’t see it that way. There is no doubt there is closer collaboration between system companies and their semiconductor partners in terms of driving specs. There are ways to communicate above the traditional spec level for chips. We’re passionate about virtualized models. A virtual model of the SoC is a great way to communicate and get feedback between the system companies and the semiconductor companies. It’s still niche-based, but it’s a great way to do it. Another way is to have hardware prototypes before you have a chip. The best by far is using a simulation model of an executable spec. But it’s something the industry has been slow to adopt.

Janac: The best approach is to have that model dovetail to a cycle-accurate model, to dovetail to RTL, to the layout where you take into account the layout constraints. Each of these levels has some forward and backward capability, so that as you add detail you actually get something that’s real.

Koeter: I agree, but that’s not the way the world is working right now. We’ve been trying to change the world for 10 years.

Rowen: It’s striking how well the system works from RTL down, and how chaotic it is above the RTL. Some of that includes very diverse views of how you get to RTL. Did you do high-level synthesis?

Davidmann: There are worse challenges.

Rowen: As the chaos increases from architecture to RTL, that’s just one level of chaos. And then you have this Wild West of several communities.

Koeter: We recently modeled an ADAS chip using virtual prototyping technology. We completed that model 15 months before silicon would be available. They’re communicating that model to their Tier 1s, who are using it for virtual hardware-in-the-loop testing, and they’re starting to use it for fault injection. That’s the way the world should work.
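To make the idea concrete, here is a minimal sketch of what a register-level virtual prototype with a fault-injection hook might look like. This is an illustrative standalone C++ toy, not Synopsys’s actual tooling; production flows typically build on SystemC/TLM-2.0, and every name in it (AdasRegModel, the register offsets) is hypothetical.

```cpp
#include <cstdint>
#include <iostream>
#include <map>

// Minimal sketch of a register model inside a virtual prototype.
// Software under test reads and writes registers exactly as it would
// on silicon, long before the chip exists. All names are hypothetical.
class AdasRegModel {
public:
    static constexpr uint32_t STATUS_REG = 0x00;
    static constexpr uint32_t RESULT_REG = 0x04;

    uint32_t read(uint32_t addr) {
        uint32_t value = regs_[addr];
        // Fault-injection hook: flip bits on a chosen register to
        // exercise the software's error handling, e.g. in a safety
        // validation campaign.
        if (fault_addr_ == addr) value ^= fault_mask_;
        return value;
    }
    void write(uint32_t addr, uint32_t value) { regs_[addr] = value; }

    void inject_fault(uint32_t addr, uint32_t bit_mask) {
        fault_addr_ = addr;
        fault_mask_ = bit_mask;
    }

private:
    std::map<uint32_t, uint32_t> regs_;
    uint32_t fault_addr_ = 0xFFFFFFFF;  // no fault by default
    uint32_t fault_mask_ = 0;
};

int main() {
    AdasRegModel model;
    model.write(AdasRegModel::RESULT_REG, 0x0000BEEF);
    model.inject_fault(AdasRegModel::RESULT_REG, 0x1);  // flip bit 0
    // Driver code now sees a corrupted result and must detect it.
    std::cout << std::hex << model.read(AdasRegModel::RESULT_REG) << "\n";
}
```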

Davidmann: It’s beginning, but it’s slow.

Mitra: It needs to move sooner rather than later. We are taking technology and having it make real-time decisions for us. It’s dangerous if we haven’t validated it and done the co-simulation that has to be done at that level. The second thing is that if you look at some of the largest chipmakers, they have dozens of SKUs optimized for different customers. That doesn’t make any sense. If there was a way to analyze this as a system-level problem where you can create some level of optimization for different applications, then you don’t need to do all of those different pieces of hardware to satisfy each customer.

Rowen: That approach sounds like success, not failure. It suggests they’ve invested in a platform and they’re able to spin off solutions.

Mitra: Yes, ideally that is what you’d want. But if you look under the covers you find how many design teams there are. You would not need that many people.

Rowen: Yes, but the idea that we’re going to get to consolidate on chip design is neither possible nor desirable.

Mitra: Probably not, but we should at least be able to get to a subset of that. As you’re designing systems in packages, with 2.5D and 3D, with the memory moving closer to the processing, there is a paradigm shift in how architectures need to be done. We’re not equipped to do that analysis today.

Rowen: There are some good examples of methodologies today that are working effectively. Some of it comes from having much more rapid turnaround times from concept to having the right building blocks. Having higher-level forms of synthesis, including processor synthesis, is required. Neural networking is another example of a higher-level form. This is a fundamentally new way of thinking, and it obsoletes the old way. On top of that, higher-level modeling, emulation and prototyping are quickly getting to the point where people have the choice of doing early software. And because much of the problem is how hardware fits together with software, that is a big piece. This move up in the fundamental level of abstraction and the early development of software are two of the pillars. Different verticals have fairly different requirements. There is not much of a common denominator.

Mitra: Having a top-down flow that is integrating everything and giving you good feedback is heading in the right direction.

SE: As we move into the world of machine learning, we’re architecting chips that could end up as something different. How do we develop those?

Rowen: There are no systems that are just machine learning. Machine learning is a technique within a large portfolio of techniques. It’s not as if we’re not going to have human interfaces or we’re not going to have operating systems. There are going to be traditional pieces integrated with these neural networks. It’s going to be heterogeneous. The other problem that we face, and which we will overcome, is that people don’t have a very good understanding of how to get the most out of these systems. We’re heading into an era where the hardware is less valuable and the data is more valuable.

Koeter: We’re not talking about AI. We’re talking about a very well-understood engineering process. You have the algorithm, and then you apply heuristics to the data to do a certain step.

Davidmann: It’s not the traditional way of designing. Because you have some hardware, there has to be some software running on it. And there may be some other things where technology doesn’t help you. The architectures may be different, but the process to get things built and running isn’t that different.

SE: How do some of the advanced packaging options—fan-outs, 2.5D, 3D—affect architectures?

Mitra: It gets memory closer to the processing.

Rowen: That’s the biggest thing it does. It’s still not the most cost-effective solution, but it works. People are doing it. It opens up a path that needs to be explored better. This is being driven today especially by people who need a small form factor. It has to get commoditized before people use silicon interposers or through-silicon vias in cheaper products. That will take a while.

Mitra: There are a lot of manufacturing challenges. How do you know which die is a problem if something goes wrong? That’s true especially for memory, because you’re moving it closer. There also are a lot of challenges to commoditize it. In the compute segment, it is a massive shift. It is challenging the whole volume and architecture. Memory is suddenly becoming available to the compute engine in quantities that were not available in the past. If you could figure out a way to create a high-speed SerDes and put it inside memory, it would be a killer product. It’s getting the data to and from memory, which was a bottleneck. This is what advanced packaging is enabling.

Janac: There are other solutions to that problem. You also can have on-chip caches, which decrease off-chip memory accesses. There are a lot of architectural solutions to the memory hierarchy issues.
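As a back-of-the-envelope sketch of Janac’s point, assume off-chip traffic is simply the compute engine’s demand multiplied by the cache miss rate. The demand and hit-rate figures below are illustrative assumptions, not numbers from the discussion.

```cpp
#include <cstdio>

// Toy model: an on-chip cache with a 90% hit rate cuts off-chip
// memory traffic by 10x. Both inputs are assumed for illustration.
int main() {
    const double demand_gb_s = 6.0;  // bandwidth the compute engine wants
    const double hit_rate = 0.90;    // assumed on-chip cache hit rate
    const double off_chip_gb_s = demand_gb_s * (1.0 - hit_rate);
    std::printf("Off-chip traffic: %.2f GB/s instead of %.1f GB/s\n",
                off_chip_gb_s, demand_gb_s);
}
```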

SE: There are more dislocations now than at any time in the past. Is it a good time to be an architect?

Janac: Yes.

Rowen: And part of the reason is that we no longer have the luxury of Moore’s Law behaving in a traditional fashion. Moving from one node to the next every two years has gotten pretty hard. Now all attention has turned to the architecture. Memory bandwidth is a case in point. Architects are figuring out how to do things with much less memory. If you take an embedded vision neural network, you may have 6 megabytes of data per frame. If you were running 1,000 frames per second, you would need 6 gigabytes per second of memory bandwidth. With other kinds of architectural changes, you can do that with 100 kilobytes per second. That’s not going to create a power issue. You didn’t have physics to help you. You had an architecture to help you. This is really a golden age for architecture, and that’s partly because it’s gotten tough to do it any other way.
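Rowen’s arithmetic is worth making explicit: 6 MB moved per frame at 1,000 frames per second is 6 GB/s of raw bandwidth demand. A trivial sketch:

```cpp
#include <cstdio>

// Check the arithmetic behind Rowen's example: a vision neural
// network touching 6 MB of data per frame, at 1,000 frames per
// second, demands 6 GB/s of memory bandwidth unless the
// architecture keeps the working set on chip.
int main() {
    const double mb_per_frame = 6.0;          // data moved per frame
    const double frames_per_second = 1000.0;  // target frame rate
    const double gb_per_second =
        mb_per_frame * frames_per_second / 1000.0;  // MB/s -> GB/s
    std::printf("Required bandwidth: %.1f GB/s\n", gb_per_second);
}
```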

Davidmann: When we started a while back, everyone was building new computer architectures. Very few of them gained adoption, because they couldn’t be programmed. There’s a risk with architectures. It’s about the system, and getting software to run on the system.

Mitra: Where the neural network is taking hold is not in the main core processor area. It is happening in the accelerators. We will be innovating at a slower pace, but it will start there and then move into other areas.

Rowen: If Moore’s Law isn’t working for you, there are other options in terms of CPU performance. Offload becomes more important.

Janac: One of the key architectural issues is that as more computing gets offloaded to accelerators, how do you make the accelerators more coherent so they look like part of the main CPU to software? Heterogeneous cache coherency has become one of the big architectural issues.
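As a hedged illustration of what Janac means by heterogeneous cache coherency, here is a toy directory that tracks which agents (CPU or accelerator) share a cache line and invalidates stale copies on a write. Real designs use full protocols such as MESI or MOESI over a coherent interconnect; everything here, from the Directory class down, is a hypothetical simplification. Compile with C++14 or later.

```cpp
#include <cstdint>
#include <iostream>
#include <unordered_map>
#include <unordered_set>

// Toy directory-based coherency between a CPU cluster and an
// accelerator. The directory tracks sharers per cache line and
// invalidates every other sharer on a write, so software sees the
// accelerator as just another coherent agent. All names hypothetical.
enum class Agent { CPU, ACCEL };

class Directory {
public:
    // An agent reads a line: record it as a sharer.
    void read(Agent who, uint64_t line) { sharers_[line].insert(who); }

    // An agent writes a line: invalidate all other sharers so no one
    // keeps a stale copy.
    void write(Agent who, uint64_t line) {
        for (Agent other : sharers_[line]) {
            if (other != who)
                std::cout << "invalidate line 0x" << std::hex << line
                          << " in agent " << static_cast<int>(other) << "\n";
        }
        sharers_[line] = {who};
    }

private:
    std::unordered_map<uint64_t, std::unordered_set<Agent>> sharers_;
};

int main() {
    Directory dir;
    dir.read(Agent::CPU, 0x1000);     // CPU caches the line
    dir.read(Agent::ACCEL, 0x1000);   // accelerator caches it too
    dir.write(Agent::ACCEL, 0x1000);  // accelerator write invalidates CPU copy
}
```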

Rowen: And there is no technical barrier to making things coherent, as well. But people have to accept that it will be coherent.

Related Stories
Overcoming The Limits Of Scaling (Part 2)
The impact of security on architectures, what’s missing in software, and why EDA business models are so rigid.
Overcoming The Limits Of Scaling (Part 1)
Complex tradeoffs involving performance, power, cost, packaging, security and reliability come into focus as new markets open for semiconductors and shrinking features becomes increasingly expensive.
Stepping Back From Scaling
New architectures, business models and packaging are driving big changes in chip design and manufacturing.
Focus Shifts To Architectures
As chipmakers search for the biggest bang for new markets, the emphasis isn’t just on process nodes anymore. But these kinds of changes also add risks.


