Optimization Challenges For 10nm And 7nm

Experts at the Table, Part 3: Modeling accuracy, skin effects and new packaging techniques take center stage.


Optimization used to be a simple timing-versus-area tradeoff, but not anymore. At each new node the tradeoffs become more complicated, involving additional aspects of the design that used to be dealt with in isolation.

Semiconductor Engineering sat down to discuss these issues with Krishna Balachandran, director of product management for low-power products at Cadence; Tobias Bjerregaard, CEO of Teklatech; Aveek Sarkar, vice president of product engineering and support at Ansys; and Sarvesh Bhardwaj, group architect for ICD Optimization at Mentor Graphics. What follows are excerpts from that conversation. Part one is here. Part two is here.

SE: Modeling is a compromise between accuracy and performance. How much information do designers need to make rational decisions, especially in the front end of the process? Is this any different for power?

Sarkar: If you abstract too much, you will give up accuracy. But it comes back to the issue of what you are going to use that number for. If you are pragmatic enough to understand the elements that went into the model, then most of the time you can take it at face value. The danger is taking it at face value and assuming it will do everything you ever hoped it would, without understanding the assumptions behind the model and the appropriate way to use it. As you go up in abstraction, simplification becomes the predominant need rather than detail. For example, if you are designing a standard cell, then the accuracy of the transistor models is very important. You are looking at the circuit delays, how to structure the cell, and lots of things like that. As you go up to higher levels, to the SoC level, you don't care about that; you have a layer of abstraction in the library models. At the system level, you create an abstraction of the chip for power analysis, in our case a chip power model, and that fits into the system-level analysis. As long as you understand the tradeoffs, most of those concerns are addressed.
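To make that abstraction ladder concrete, here is a minimal Python sketch with invented instance names and illustrative numbers rather than any vendor's actual model format. It only shows how per-cell switching detail can be rolled up into a single block-level figure that a chip- or system-level view consumes.

```python
# Illustrative only: how detail collapses as power analysis moves up the
# abstraction ladder. Names and numbers are invented for the sketch.

VDD = 0.75  # assumed supply voltage, volts

# Cell-level view: per-instance data a block-level tool might work with.
cell_power = {
    "u_alu/U123": {"cap_ff": 1.8, "toggles_per_ns": 0.4, "leakage_uw": 0.02},
    "u_alu/U124": {"cap_ff": 2.1, "toggles_per_ns": 0.1, "leakage_uw": 0.03},
    # ...thousands more instances in a real block
}

def block_dynamic_mw(cells):
    """Roll cell-level detail up into one block number (P = C * V^2 * f)."""
    watts = sum(c["cap_ff"] * 1e-15 * VDD**2 * c["toggles_per_ns"] * 1e9
                for c in cells.values())
    return watts * 1e3

# Chip/system-level view: only the abstracted numbers survive.
chip_power_model = {
    "u_alu": {
        "dynamic_mw": block_dynamic_mw(cell_power),
        "leakage_mw": sum(c["leakage_uw"] for c in cell_power.values()) / 1e3,
    },
}
print(chip_power_model)
```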

Balachandran: It is a chicken-and-egg problem. Without the model you cannot do any analysis at the upper levels. Without the analysis you will not be able to make good decisions. You need them both. The trick is in making the model as accurate as possible so that it is accepted and widely used within the industry. In the absence of that, whatever analysis you produce is not credible enough, and customers will go back to using spreadsheets and ask why they need all of these analysis tools. To get the necessary levels of accuracy you really have to dive down, understand the physical effects and bring that information up. So the challenge is deciding how much to extract from the underlying levels of abstraction and carry to the higher levels in order to give a reasonable amount of accuracy, while at the same time not making it an arduous task. There are no standards for this.

Bhardwaj: Decisions made at the earlier stages can have orders of magnitude impact on the final outcome, and it is important to provide the models for that. This could be from fast prototyping or accurate mathematical modeling techniques.

SE: With the new nodes, the physics of the wires is changing. We are getting more skin effect, such that R is not scaling, and thus more heating in the wires themselves. Will we hit the same issues as we did with timing, and what impact will it have on EDA and synthesis?

Balachandran: Yes, you hit the nail on the head. The resistivity of the metal layers is going up compared to previous technologies, and it is not even going up linearly; it is going up exponentially. That is a problem. So it is not just the wire delay, which scales with the RC product, that gets worse; switching power is going up as well, because the same wire parasitics feed into dynamic power. Synthesis has to be aware of that, and place and route has to be aware of that. At every stage you have to be making the right decision. You have to factor this in and understand it to make the right choice during the mapping and optimization process. Without that you will not be able to converge on either timing or power in these new geometries.
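As a back-of-the-envelope illustration of why rising wire resistance hurts, here is a short Python sketch. The per-micron resistance and capacitance values are assumptions chosen only to show the trend, not figures from any foundry.

```python
# Assumed, illustrative parasitics: R per um roughly doubling per node while
# C per um stays about flat, so the RC delay of the same route climbs fast.

r_per_um = {"16nm": 30.0, "10nm": 60.0, "7nm": 120.0}  # ohm/um (illustrative)
c_per_um = 0.2e-15                                     # F/um (illustrative)
length_um = 100.0

for node, r in r_per_um.items():
    # Distributed RC delay of a wire is roughly 0.5 * R_total * C_total.
    delay_ps = 0.5 * (r * length_um) * (c_per_um * length_um) * 1e12
    print(f"{node}: ~{delay_ps:.0f} ps for a {length_um:.0f} um route")
```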

Sarkar: It is an important point. The way we look at the delay aspect, and the way people have traditionally designed, could be stated as, 'How do you deliver the most robust power delivery grid structure on the chip, package or system?' Traditionally they have gone in and said, 'Whatever my chip does, at any technology node, all of my chips will have this particular power grid.' They do extensive simulation and come up with a robust, well-defined grid. But it takes a lot of routing resources, and with the newer technology nodes you start to see the delay effect. You are stuck with these over-designed power grids, which are not needed for most of the blocks, so the overall chip size increases. To meet timing, the area starts to go up. For power delivery we also have to look not just at R but at the inductance of the chip, because of the edge rates of the devices. We started seeing this issue at 16nm, and for 10nm and 7nm it is worse. The devices are operating faster, so when the edge rate is within a couple of picoseconds, the time of flight across the wire becomes much more interesting. You don't need a package or a PCB trace running over a couple of millimeters for this to matter; at hundreds of microns we already need to model the inductance on the chip.
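The arithmetic behind that "hundreds of microns" remark can be sketched quickly. The edge rate and effective dielectric constant below are assumptions chosen only to show the order of magnitude.

```python
import math

# Rough, assumed numbers: compare on-chip signal time of flight to edge rate.
c_um_per_ps = 300.0   # speed of light in vacuum, in um per ps
eps_r = 3.7           # assumed effective dielectric constant of the stack
edge_ps = 2.0         # assumed switching edge rate at 10nm/7nm

v_um_per_ps = c_um_per_ps / math.sqrt(eps_r)   # propagation velocity
critical_len_um = v_um_per_ps * edge_ps        # wire length one edge "covers"

print(f"propagation ~{v_um_per_ps:.0f} um/ps")
print(f"a {edge_ps:.0f} ps edge corresponds to ~{critical_len_um:.0f} um of wire,")
print("so routes of a few hundred microns already see time-of-flight and")
print("inductive effects without any package or PCB trace involved")
```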

Bhardwaj: Congestion is becoming a prime issue. High congestion has an impact not only on timing but also on power. You need to start spreading out the wires and the cells because of pin-access issues, and that eventually means you do not get the same level of scaling you would expect from moving to a new technology. The area and cost benefit that people expect will not be realized because of these kinds of issues.

Bjerregaard: The key issue is that on-chip metal for routing and power delivery is becoming very expensive. The traditional view of scaling, with gate length as the proxy for scaling of everything (power, area and performance), is breaking down. This is an incredible opportunity for the EDA industry to prove its worth, because smarter tools are needed that optimize designs to make better use of scarce on-chip resources.

SE: Do 2.5D and 3D integration add any additional challenges?

Balachandran: There are innovations in 2.5D and 3D structures, and there are innovations on the EDA side to deal with the flows. It is impacting tools. First, the tools have to understand the topology and be able to extract the parasitics from it. If that does not happen properly, you will not get good results from any timing or power engine, and therefore the place and route will not be good and your synthesis will not be good; they all depend on the parasitics being accurate. There are also more effects, such as skin effect, that become prominent, so it impacts the whole flow.

Sarkar: Any problem solved in regular silicon becomes very interesting when you move it to 3D. Things like thermal and mechanical stress used to not be that big a concern. You are putting a 16nm chip together with a 40nm die, or a 65nm analog block, and on their own you would probably not worry about the thermal effects. But when they are on top of each other, or even side by side, there is thermal coupling, and you start to become much more aware of the impact on the entire system. From a modeling standpoint it becomes interesting in terms of the sheer geometry count we have to deal with. From a power delivery or timing point of view, it is still a single-chip problem in most cases, because most of the routing is within one die. But for power delivery, we are supplying power from the regulator, through the package, and through the through-silicon vias (TSVs) or copper pillars, to the top die. That is a long connectivity path, and we have to simulate all of it. Maybe the dies come from different companies. What detail should be put into the models so that the combined simulation is representative? You will make design decisions from that, whether for power, thermal or ESD. The standards are not well defined, and that will come back and hurt us. Today the teams may be using two different tool flows, and the person doing the integration has to ask what they are going to get when they try to simulate the pieces together.
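A crude way to see why the whole delivery path has to be simulated together is a lumped series model. The resistances and the current below are assumptions for illustration, not measured values from any real stack.

```python
# Lumped, illustrative sketch of static IR drop along a stacked-die supply
# path. Every series element eats margin before current reaches the top die.

path_mohm = {                       # assumed series resistances, milliohm
    "regulator_to_package": 2.0,
    "package_plane": 1.5,
    "bumps": 0.8,
    "bottom_die_grid": 3.0,
    "tsv_or_pillar_array": 1.2,
    "top_die_grid": 4.0,
}
top_die_current_a = 5.0             # assumed current drawn by the top die

drop_mv = sum(path_mohm.values()) * top_die_current_a   # mohm * A = mV
for segment, r in path_mohm.items():
    print(f"{segment:>22}: {r * top_die_current_a:.1f} mV")
print(f"total static IR drop to the top die: ~{drop_mv:.1f} mV")
```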

Bhardwaj: There are clearly advantages and disadvantages. You get a system that is more compact, but you have to consider a different set of technical issues; the mechanical stress and the thermal profile change completely. There will need to be a focus on these effects, and solutions will be necessary for analyzing and optimizing against them.

SE: What else?

Sarkar: For 7nm and 10nm, going back to the things we have been worrying about since the last technology-node migration, the traditional approach of looking at data in silos is going to hurt us if we continue with it. How do we overlay all of these things in one data structure and make it fast and easily accessible, so that we don't have to wait hours for a design database to load? How do we enable designers to get an answer immediately instead of waiting hours? If you want to find all of the flops in the design that have more than a certain voltage drop, that are on a critical path, and that maybe are at 50°C, you can't do that right now. To break the silos and look at the data holistically, you need to be able to ask that kind of question. We have to head in that direction.
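The kind of cross-domain query Sarkar describes amounts to joining per-instance results that today live in separate tools. Here is a small sketch with invented instance names, thresholds and data; in practice the three datasets would come from voltage-drop, timing and thermal analyses.

```python
# Invented per-instance results standing in for exports from three silos:
# IR-drop analysis, static timing analysis, and thermal analysis.
ir_drop_mv      = {"u_core/ff_12": 48.0, "u_core/ff_77": 22.0, "u_io/ff_3": 51.0}
timing_slack_ps = {"u_core/ff_12": 4.0,  "u_core/ff_77": 80.0, "u_io/ff_3": 3.0}
temperature_c   = {"u_core/ff_12": 62.0, "u_core/ff_77": 41.0, "u_io/ff_3": 47.0}

# "Flops with more than a certain drop, on a critical path, at over 50 C."
risky_flops = [
    inst for inst in ir_drop_mv
    if ir_drop_mv[inst] > 40.0          # more than a certain voltage drop
    and timing_slack_ps[inst] < 10.0    # on (or close to) a critical path
    and temperature_c[inst] > 50.0      # hotter than 50 degrees C
]
print(risky_flops)  # -> ['u_core/ff_12']
```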

Bjerregaard: Physical-level issues are being pushed higher and higher up the abstraction ladder. This is nothing new; it has been an ongoing development for the last 20 years. The difference is that the complexity of the designs, combined with the multitude of physical aspects that need to be taken into account, is exceeding the capacity of the single machines we use to develop the chips. So we silo, both in terms of breaking designs into manageable chunks and in terms of which physical aspect of the design we are looking at. In doing this we are leaving optimality, or even design convergence, on the table. That is the big challenge.

Bhardwaj: The overall flow needs to consider all of the power-related issues so that the right decisions can be made for the downstream flow. Given the complexity, there will have to be more modeling techniques developed.

Balachandran: The monster has taken on a few extra heads. Tools can operate in silos, but a lot of interconnections have to be made between them. Whether the implementation of that is one database into which you can peek and poke, or multiple databases, is not relevant; that is an implementation detail. But there has to be an interconnection between the tools that analyze power, the tools that analyze timing, the tools in the back-end flow and the tools at the front end, covering power, performance, thermal and area analysis and optimization. That has to come together in the form of multiple tools working closely with each other. Information must be easily exchanged and understood, and there needs to be a way for that information to be modeled. That is the way to solve these problems, and it is part of the holistic approach.


