Getting power right makes the designer’s job much more interesting.
I have to admit I’m always surprised to hear that design teams are not using tools to their fullest extent, leaving valuable power-saving opportunities on the table. Then I remember how daunting it is to get it all right without tremendous experience, expertise, and the right tools.
I’m also always fascinated to learn about less-obvious effects from power.
On this point, Aveek Sarkar, vice president of product engineering and support at Ansys, pointed out a rather counterintuitive effect that occurs during design: when building a power grid, designers typically over-design it, dropping a via every 4 or 5 microns in the belief that this gives them the most robust power grid architecture and reduces risk.
That is fair enough, he said, but in many cases it may not be necessary, or, given how the chip operates, even appropriate. “If you eliminate some of this over-design during the implementation phase, you actually open up more routing resources. Routing that previously had to follow a detour because there was no space can now take a direct path, so your routing challenges go down significantly. You get better utilization, and if you get better utilization, you could potentially downsize some of the drivers that you are using.”
Interestingly, this becomes a self-feeding cycle, Sarkar explained. “Imagine you are using a 16x driver because the net had to follow a convoluted path. There were so many power grid vias that you did not have enough room, so you routed the signal along a much longer path, and that forced a bigger driver. Now you open up the power grid, the signal can take an easier route, and you can downshift that driver from a 16x to an 8x because you are driving less load. And because you are at 8x, your power actually goes down, so even though you have less power grid, you don’t need as much because you have a smaller driver. People start off with heavily over-designed power grids, and during the implementation phase they are constrained to those design parameters.”
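To make the direction of that cycle concrete, here is a back-of-the-envelope sketch using the standard dynamic-power relation P = αCV²f. Every number below (capacitances, supply voltage, frequency, activity factor) is hypothetical and chosen only to illustrate the effect Sarkar describes, not to reflect any real process or Ansys data:

```python
# Toy illustration of the "self-feeding cycle": fewer power-grid vias ->
# shorter route -> less wire capacitance -> smaller driver -> less power.
# All numbers are hypothetical; real values depend on process and library.

def dynamic_power_uw(c_load_ff, vdd=0.8, freq_ghz=2.0, activity=0.2):
    """Switching power in microwatts: P = a * C * V^2 * f."""
    return activity * (c_load_ff * 1e-15) * vdd**2 * (freq_ghz * 1e9) * 1e6

# Congested grid: convoluted route -> long wire -> 16x driver needed.
wire_cap_congested = 40.0   # fF, long detour around dense power vias
driver_cap_16x     = 8.0    # fF input cap presented to the previous stage

# Relaxed grid: direct route -> shorter wire -> 8x driver suffices.
wire_cap_relaxed = 22.0     # fF
driver_cap_8x    = 4.0      # fF

p_before = dynamic_power_uw(wire_cap_congested + driver_cap_16x)
p_after  = dynamic_power_uw(wire_cap_relaxed + driver_cap_8x)

print(f"before: {p_before:.2f} uW, after: {p_after:.2f} uW")
print(f"power saved: {100 * (1 - p_after / p_before):.0f}%")
```

With these made-up numbers the net switched power roughly halves, which is the feedback loop in miniature: the lower current demand is exactly what lets the thinner power grid remain adequate.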
Another area that is often overlooked is the chip I/O boundary, which touches more on the floorplanning side of design, he pointed out. “Let’s say you have a high-speed block like a GPU with multiple shader cores, and there is a lot of high-speed switching activity. You put the I/O at the periphery of the chip, where the bump inductances tend to be high. During floorplanning, designers are not really thinking about how the package fits on top of the chip. They are just looking at how to connect this particular block to the rest of the blocks and how to route it appropriately for timing, so they put it on the boundary.”
From the package point of view, that is the worst place, because both the signals and the power are coming in there. A better scenario would be to move the block toward the center of the chip, where there is only power to worry about and the bump placement can be very uniform. “Along the I/Os and the periphery, it’s much more disjointed, so the inductance going in, from a package point of view, tends to be much higher. That also ends up creating a very interesting scenario that we see all the time. People focus on one particular aspect from an implementation point of view, or a floorplan point of view, and then at the end, when they see all the hotspots, it’s too late in the process,” Sarkar added.
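The penalty Sarkar is describing follows from the L·di/dt relation for supply droop. Here is a rough sketch with invented inductance and current-step values; real effective bump inductances would come from package extraction, and these figures are purely illustrative:

```python
# Hypothetical comparison of supply droop from bump inductance: V = L * di/dt.
# Inductance and current numbers are made up for illustration only; real
# values come from package extraction and depend on the bump pattern.

def supply_droop_mv(l_ph, di_ma, dt_ps):
    """L*di/dt droop in millivolts for a current step di over time dt."""
    return (l_ph * 1e-12) * (di_ma * 1e-3) / (dt_ps * 1e-12) * 1e3

di, dt = 500.0, 200.0  # a 500 mA swing in 200 ps from a bursty shader block

# Disjointed bumps at the chip edge vs. a uniform array at the center.
droop_periphery = supply_droop_mv(l_ph=20.0, di_ma=di, dt_ps=dt)
droop_center    = supply_droop_mv(l_ph=8.0,  di_ma=di, dt_ps=dt)

print(f"periphery: {droop_periphery:.0f} mV droop, "
      f"center: {droop_center:.0f} mV droop")
```

Even with identical switching activity, the higher effective inductance of a disjointed peripheral bump pattern produces proportionally more droop, which is why the hotspots only show up once the package is finally considered.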
While it is highly unlikely that these issues will ever be completely automated out of existence, they are also what makes the evolution of the technology around them so interesting.
What are your horror stories with power, and how did you solve them?