On-Chip MCUs Excel At Power Management

When it comes to supplying power to an SoC, there is an increasing trend to make it more intelligent. On-chip MCUs can help here.


By Ann Steffora Mutschler
When it comes to supplying power to an SoC, there is an increasing trend to make it more intelligent—how to control it more accurately, how to monitor it, and how it communicates with different aspects of the chip.

Traditional power-delivery schemes based on analog supplies offer less of this control, so a number of engineering teams are considering the use of on-chip microcontrollers to do the power management. IBM is one company that has documented its use of an on-chip MCU in the Power8 processor.

“In general, the trend is how to have more predictable on-chip power supply and there are a couple different reasons why,” said Aveek Sarkar, vice president of product engineering and support at Apache Design. “One is that people want more granularity in the power that is coming in. Different parts of the chip may require different levels of voltage, and different parts of the chip do not need to be ‘on’ at the same time. If you can power down, not necessarily through power gates, you can even take down the supply voltage. Basically, how do you control the power supply in concert with the control and operation and performance of the chip?”

This strategy has been used for a while, but it may be becoming more prevalent, especially in larger designs, noted Barry Pangrle, senior power methodology engineer at NVIDIA. “Just as the complexity is getting higher, it is kind of the tradeoff between different approaches. You can look across the whole semiconductor market and say, ‘I’ve got a problem. How do I want to attack it?’ And you could do a full custom design for it or you could go to something like an FPGA, which is giving up a little bit for some flexibility. Microprocessors have in some sense dominated certain segments in the market just because you can design it once and you can re-use it. And it is very easy to change it as things change. The tradeoff there then is that typically it doesn’t run as fast, it takes up more chip area and it’s not quite as efficient. But on the other hand it gives you a whole lot more flexibility, and it’s often easier to take something that’s sort of a standard architecture, for which you’ve got software, and move it to the next thing as opposed to having to redesign the whole circuit again.”

In fact, pointed out Mike Thompson, senior manager of product marketing at Synopsys, the 8051 8-bit microcontroller has long been used for power control in circuits. 8051 IP cores were available back when he was at Actel (now part of Microsemi), and many customers used them for power control in FPGAs, as well as in SoCs at that time. “A lot of power control has graduated to the 32-bit level just because the size of the processor has gotten so small,” he said. “Your ability to easily integrate that with everything else on the chip is much higher with the 32-bit processor because it has to write to external bus structures, and it is going to have a programmer’s model that is very similar to the other processors you are using on the chip—even if they are from another vendor like ARM or MIPS.”

The on-chip MCU tends to be used more in bigger chips, Pangrle observed, especially as there is more complexity. “[This approach is taken] with chips that have multiple cores, whether they are traditional CPU cores or GPU cores. A lot of it has to do with the additional complexity, so the more components you have, the more likely it is that you’re going to want to be able to power them down and run them at different voltage and frequency levels. At that point you need to be able to bring in information that monitors what states they are running at, but also what’s happening with their neighbors and with neighboring temperature sensors.”
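A rough illustration of the bookkeeping that kind of monitoring implies is sketched below. The structure, field names and sizes are invented for this example and are not drawn from any particular chip.

```c
/* Hypothetical sketch of the per-core telemetry a power-management MCU
 * might track: operating state, DVFS point, and nearby temperature
 * sensor readings. Names and field sizes are illustrative only. */
#include <stdint.h>

#define NUM_CORES        8
#define SENSORS_PER_CORE 2

typedef enum { CORE_OFF, CORE_SLEEP, CORE_ACTIVE } core_state_t;

typedef struct {
    core_state_t state;                      /* on, asleep, or powered down  */
    uint32_t     freq_mhz;                   /* current clock frequency      */
    uint32_t     voltage_mv;                 /* current supply voltage       */
    int16_t      temp_c[SENSORS_PER_CORE];   /* nearby temperature sensors   */
} core_status_t;

/* The MCU firmware would refresh this table from on-chip monitors and use
 * it to decide when a core can be powered down, slowed, or boosted. */
static core_status_t core_table[NUM_CORES];
```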

From Synopsys’ point of view, Thompson said customers currently use small microprocessors for power control on their chips, but it’s something that really wasn’t done five years ago. “If they needed power control they would do it with whatever processor they had, but today our EM family…those processors are so small that you can put a processor on there with 8K or 16K of memory – it doesn’t take up any space, it doesn’t burn any power and it gives you really powerful capabilities for power control.”

There are a number of techniques for power management. “One is shutting down when there is nothing to do; i.e., putting the core to sleep,” said Rich Rejmaniak, technical marketing engineer at Mentor Graphics. “Another is to scale down the operation of peripherals when they are not being used and, of course, the big one is to dynamically shift the processor speed and clock speed in order to minimize overall system consumption. Normally, with any type of control like that, the microprocessor itself has control over those features. For instance, if the processor is going to shift gears, it will go to the clock module in its memory address space. If it’s going to take peripherals down, it will write to the peripheral controls.”
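As a concrete illustration of that register-level flow, here is a minimal C sketch, assuming invented addresses and bit fields and a wait-for-interrupt style sleep instruction:

```c
/* Minimal sketch of the register-level control described above: the
 * processor writes to a clock module and peripheral-control registers in
 * its own memory map. All addresses and bit fields here are hypothetical. */
#include <stdint.h>

#define CLK_CTRL   (*(volatile uint32_t *)0x40001000u)  /* clock module (assumed address) */
#define PERIPH_EN  (*(volatile uint32_t *)0x40002000u)  /* peripheral clock gates         */

#define CLK_DIV_MASK   0x0Fu
#define UART1_EN_BIT   (1u << 3)

/* "Shift gears": lower the core clock by writing a larger divider. */
static void set_clock_divider(uint32_t div)
{
    CLK_CTRL = (CLK_CTRL & ~CLK_DIV_MASK) | (div & CLK_DIV_MASK);
}

/* Scale down an unused peripheral by gating its clock. */
static void disable_uart1(void)
{
    PERIPH_EN &= ~UART1_EN_BIT;
}

/* Put the core to sleep until the next interrupt (the instruction varies by ISA). */
static void enter_sleep(void)
{
    __asm__ volatile ("wfi");
}
```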

In the case of the IBM Power8, there apparently is a separate microcontroller so that the main software doesn’t have to have drivers for the on-chip peripherals.

“The main software could obviously talk to an API, to code running on this microcontroller,” said Rejmaniak. “It’s a method of abstracting. It wouldn’t increase the speed or efficiency of it, as the applications that want to shut down power don’t actually have to perform the power shutdown themselves. All they have to do is make a request, and the actual power management code gets removed from the operating system of the processors. It eliminates the need to write power management into the application or the operating system on the main application level.”
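One way to picture that abstraction is a thin request API backed by a shared mailbox, as in the sketch below. The mailbox layout, address and request codes are assumptions made for illustration, not a description of IBM’s actual interface.

```c
/* Hedged sketch of the abstraction described above: application code calls
 * a small API, which posts a request to the power-management MCU through a
 * shared mailbox; the MCU firmware owns the actual power-sequencing drivers.
 * The mailbox layout and request codes are assumptions, not a real design. */
#include <stdint.h>

typedef enum {
    PM_REQ_CORE_OFF = 1,
    PM_REQ_CORE_ON  = 2,
    PM_REQ_SET_DVFS = 3
} pm_request_t;

typedef struct {
    volatile uint32_t cmd;    /* request code, written by the host         */
    volatile uint32_t arg;    /* e.g. core id or DVFS operating point      */
    volatile uint32_t ack;    /* set by the MCU when the request completes */
} pm_mailbox_t;

#define PM_MAILBOX ((pm_mailbox_t *)0x50000000u)  /* shared SRAM (assumed address) */

/* The application only makes a request; the power-sequencing code lives
 * entirely in the MCU firmware, not in the host OS or the application. */
static void pm_request(pm_request_t req, uint32_t arg)
{
    PM_MAILBOX->ack = 0;
    PM_MAILBOX->arg = arg;
    PM_MAILBOX->cmd = (uint32_t)req;
    while (PM_MAILBOX->ack == 0) { /* wait for the MCU to acknowledge */ }
}
```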

From an architecture perspective, it also would mean the power management software can be written by a separate engineering group, independent of what the application is going to do, he explained. As a result, the application writers don’t have to concern themselves with power management, and power management doesn’t have to work around what the application is trying to do. They each get their own little sandbox.

“That’s one of the advantages of multicore that has not been taken up,” he said. “Up until now, with a lot of multicore solutions everybody has asked, ‘How do I write a big application, spread it across the cores?’ The real power is to be able to write to separate areas on their own and let them run autonomously.”

Important to note is that the traditional problems of voltage drop analysis and power analysis still apply here, Sarkar asserted, “because even though you put the microcontroller there, you still have to model and simulate what is going to happen for something like that. Is it able to maintain the supply voltage? The other problem is on-chip regulators. These are analog circuits, which are small, so if you have too much current demand—let’s say you turn on all of the chip at one time and you need 5 A of current—there is no way an on-chip regulator can supply the current. The supply voltage will not be able to maintain itself anymore, but the off-chip power supply can obviously supply it because it tends to be bigger. So even though you take away a lot of the off-chip problems, the on-chip issues that you have to struggle with and resolve just increase from that point of view.”
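To see why a small on-chip regulator struggles with that kind of demand, consider a linear regulator asked to drop an assumed 1.8 V input rail to a 0.9 V core rail while sourcing the 5 A from the quote. The quick calculation below, using those purely illustrative numbers, shows it would have to dissipate roughly 4.5 W inside a small analog block.

```c
/* Back-of-the-envelope illustration of the on-chip regulator limit.
 * For a linear (LDO-style) regulator, the power it must burn off is
 * (Vin - Vout) * I. The voltages below are assumed for illustration. */
#include <stdio.h>

int main(void)
{
    double vin  = 1.8;   /* assumed input rail, volts       */
    double vout = 0.9;   /* assumed regulated output, volts */
    double load = 5.0;   /* the 5 A surge from the quote    */

    /* Watts burned inside the regulator itself: ~4.5 W here, far more
     * than a small on-chip analog block can realistically handle. */
    double dissipation = (vin - vout) * load;
    printf("LDO dissipation: %.1f W\n", dissipation);
    return 0;
}
```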

At the end of the day, Pangrle said, “whether the technology nodes were shrinking or not, what we will see is additional complexity from the power management standpoint because we are at a point where power directly corresponds to performance. Using these types of control systems, what people are doing is monitoring the temperature on the chip. And what’s happening—not just with maybe the thermals on one core but what’s happening to the surrounding cores—you have to be able to calculate and say, ‘The guys around me aren’t really that busy, and that means I’ve got a little bit of thermal headroom to work with, so I can go ahead and get better performance by maybe increasing the voltage and the clock a bit.’ In order to take advantage of that, you have to have an intelligent enough management system to be able to look at everything that’s going on around it. If you do that and you can bump it another 10% or 15%, that’s generally considered a big deal. If you are doing that and the competition is not, you’re going to out-benchmark them.”
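A simplified version of that boost decision, with hypothetical thresholds and hooks into the on-chip monitors, could be expressed as follows:

```c
/* Sketch of the kind of decision described above: before boosting, the
 * management firmware checks a core's temperature and its neighbors'
 * activity to see whether there is thermal headroom. Thresholds, step
 * sizes, and the helper functions are all hypothetical. */
#include <stdbool.h>
#include <stdint.h>

#define TEMP_LIMIT_C      95
#define THERMAL_MARGIN_C  10
#define BOOST_STEP_MHZ    100

/* Assumed hooks into the on-chip monitors and DVFS controller. */
extern int16_t read_core_temp(int core);
extern bool    neighbors_busy(int core);
extern void    raise_core_freq(int core, uint32_t step_mhz);

/* Boost one core only if it is cool enough and its neighbors are idle,
 * i.e. there is headroom left in the shared thermal budget. */
static void maybe_boost(int core)
{
    int16_t temp = read_core_temp(core);

    if (!neighbors_busy(core) && temp < (TEMP_LIMIT_C - THERMAL_MARGIN_C)) {
        raise_core_freq(core, BOOST_STEP_MHZ);
    }
}
```

In a real design the decision would also weigh voltage margin and power-delivery limits, but the shape of the check is the same: gather what the neighbors are doing, then spend any headroom on frequency and voltage.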


