The cloud is only as good as our local devices and our patience is low. The solution is in the SoC.
By Frank Ferro
Even though the cloud is permeating everything we do today, I was recently reminded that it’s omnipresent even far outside the walls of tech. With all the TV ads, and with our most prominent airports and U.S. highways peppered with cloud-based billboards, even our parents know how to use “cloud” properly in a sentence today. But hearing about the cloud from the pulpit at church on Sunday caught me a bit off guard. (And no, the punch line was not that heaven is in the clouds!)
A visiting homilist explained that he could travel light, especially to a high-tech place like Silicon Valley, now that all his information was in the cloud and easily accessible from anywhere, anytime. It was true: he had no problem accessing his homily, until he tried to print it and the printer ran out of ink. The content was stuck in the cloud, so he had to “wing it.” Needless to say, this brings us quickly back down to earth with a firm reminder that, as powerful as the cloud is, at some point we are limited by the performance (or lack thereof) of our local devices.
Clearly our local devices are not much use without cloud content, but the opposite also holds true: access to information is only as good as the local device delivering it. When browsing Web sites from a smartphone, for example, how often are we frustrated waiting for the page to render? With technology’s near-limitless trajectory of upgrades, we are no longer forgiving consumers, and our patience with most of our “smart” devices has hit an all-time low.
Web browsing speed is now a major metric when comparing smartphones. We want the information and we demand it now. Of course, there are many factors that affect Web page speed, but if we normalize the connection to the cloud, a fair comparison can be made of device performance. The first thing everyone looks at is the processor/GPU speed and number of cores. Speed is important, of course, but it’s not the entire picture.
To ensure the best overall device performance, the entire system design needs to be considered. This includes the performance of all the heterogeneous processors, the overall dataflow efficiency, and the power consumption of the underlying SoC architecture. All of these topics are worthy of further exploration (and likely future blogs), but I am going to focus on the one you might consider the least likely to impact performance: power consumption.
Everyone understands that processor performance and power must be balanced in mobile devices to achieve acceptable battery life. For years, this has been a key differentiator for processor companies (both IP and hardware) competing in the mobile market. The problem is compounded as more and more functionality is packed onto a single silicon substrate at advanced process nodes (e.g., 28nm), with multiple processor cores operating at different frequencies while trying to perform multiple tasks concurrently. The problem now is not only power consumption, but also power density.
Consider again the case of Web browsing on a mobile device. If you want to open a Web page while running multiple applications concurrently (and today this is an automatic must), then the power of the processor can easily ‘spike’ above the limits permitted by the wireless specification. This forces the processor to ‘throttle back’ because it is getting too hot to support the task, which ultimately slows the rate at which the page can be viewed. Perhaps this is not as bad as running out of ink when you need a hard copy at the last minute, but it is another factor that can limit the end-user experience, making a device much less desirable in a highly competitive market.
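To make that throttling behavior concrete, here is a minimal sketch of the kind of control loop a power/thermal governor might run. The register reads, thresholds and frequency table are hypothetical placeholders, not any particular device’s interface.

```c
/* Hypothetical governor loop illustrating power/thermal throttling.
 * read_soc_power_mw(), read_die_temp_c() and set_cpu_freq_khz() are
 * placeholder HAL calls, not a real API. */
#include <stdbool.h>
#include <stdint.h>

#define POWER_LIMIT_MW 2500   /* assumed platform power budget    */
#define TEMP_LIMIT_C     85   /* assumed junction temperature cap */

extern uint32_t read_soc_power_mw(void);
extern uint32_t read_die_temp_c(void);
extern void     set_cpu_freq_khz(uint32_t khz);

static const uint32_t freq_table_khz[] = { 1500000, 1200000, 900000, 600000 };

void governor_tick(void)
{
    static unsigned level = 0;   /* index into freq_table_khz */

    bool over_budget = read_soc_power_mw() > POWER_LIMIT_MW ||
                       read_die_temp_c()   > TEMP_LIMIT_C;

    if (over_budget && level + 1 < sizeof freq_table_khz / sizeof freq_table_khz[0])
        level++;                 /* throttle back: page rendering slows down */
    else if (!over_budget && level > 0)
        level--;                 /* headroom available: restore performance  */

    set_cpu_freq_khz(freq_table_khz[level]);
}
```

Every time that loop steps the frequency down, the user sees it as a page that takes longer to appear.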
So at the SoC level, how can system architects deal with these increasing power challenges? Using power management IP, which is an important subset of system IP, allows the SoC architect to take a different approach when considering the system power design. System IP for power management allows the architect to look at the chip from several power perspectives, including the lowest hardware levels that define power domains, control of individual cores or subsystems, and even power management from the application perspective.
The Plumbing: A complex SoC, especially for the mobile market, can easily have 10 or more power domains, and within those domains there are many more frequency and voltage domains. System IP tools allow the SoC architect to efficiently define these domain hierarchies and automatically insert the correct type of logic at domain boundaries. Many gates (read: chip area, cost and power) can be spent crossing domain boundaries, so efficient implementation is critical for the best performance/power ratio. Support for CPF (Common Power Format) and UPF (Unified Power Format) is also critical for the design flow.
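As a rough illustration of what the architect is defining at this level, the sketch below models a power-domain hierarchy as plain data and counts the boundaries that need isolation or level-shifter logic. The domain names and numbers are invented for the example; a real flow captures this information in CPF or UPF.

```c
/* Toy model of a power-domain hierarchy; names and values are
 * illustrative only. */
#include <stdio.h>

struct power_domain {
    const char *name;
    int parent;            /* index of parent domain, -1 for root      */
    int voltage_mv;        /* nominal supply                           */
    int freq_mhz;          /* nominal clock                            */
    int needs_isolation;   /* boundary cells required when powered off */
};

static const struct power_domain soc[] = {
    { "always_on",   -1,  900,   32, 0 },
    { "cpu_cluster",  0, 1000, 1500, 1 },
    { "gpu",          0,  950,  600, 1 },
    { "modem",        0,  850,  300, 1 },
    { "video_codec",  2,  950,  400, 1 },   /* child of the gpu domain */
};

int main(void)
{
    /* Count the boundaries that need extra logic -- the "gates spent
     * crossing domains" mentioned above. */
    int crossings = 0;
    for (unsigned i = 0; i < sizeof soc / sizeof soc[0]; i++)
        if (soc[i].parent >= 0 &&
            (soc[i].needs_isolation ||
             soc[i].voltage_mv != soc[soc[i].parent].voltage_mv))
            crossings++;
    printf("domain boundaries needing extra logic: %d\n", crossings);
    return 0;
}
```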
Keeping Silicon Dark: The next step is to ensure that all silicon blocks are kept dark, or off, for as long as possible. Unfortunately, today most power control is done in software only. This can be a relatively slow and unreliable method, because the CPU has to wait many cycles from the time a power-down command is issued to the actual power-down of a core; it needs to be certain that all transactions in the system are complete.
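For contrast, here is roughly what a software-only power-down looks like from the CPU’s point of view. The device, registers and timeout are hypothetical; the pattern of quiesce, poll and only then cut power, with cycles burned waiting, is the point.

```c
/* Software-only power-down sketch: the CPU must stop new work, then
 * poll until it believes all outstanding transactions have drained,
 * before it dares to cut power. Register names are hypothetical. */
#include <stdbool.h>
#include <stdint.h>

extern volatile uint32_t GPU_OUTSTANDING_TXNS;  /* placeholder status register  */
extern volatile uint32_t GPU_POWER_CTRL;        /* placeholder control register */
#define GPU_POWER_OFF 0x0u

bool gpu_power_down_sw_only(void)
{
    /* 1. Stop submitting new work (driver-level bookkeeping, not shown). */

    /* 2. Spin until the block looks idle -- this is where the "many
     *    cycles" go, and a late transaction can still slip in. */
    unsigned timeout = 1000000;
    while (GPU_OUTSTANDING_TXNS != 0) {
        if (--timeout == 0)
            return false;        /* gave up; leave the block powered */
    }

    /* 3. Only now is it (probably) safe to switch the domain off. */
    GPU_POWER_CTRL = GPU_POWER_OFF;
    return true;
}
```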
Using system IP, where the power control resides inside the on-chip network, allows for much finer control over powering system cores up and down than software alone. For example, the on-chip network has visibility into all the transactions in the system. By interacting directly with a power management unit, the network can, when a power-down command is issued, halt any additional transactions to the subsystem or core being put to sleep. Once all transactions are complete, the network can then let the power manager know that it is safe to shut the core down. This is much faster and more reliable than using software alone, delivering a significant power savings of 2x, or perhaps up to 3x. The same is true for wake-up, because the on-chip network can ‘see’ a transaction coming to a core that is asleep and can quickly notify the power manager to wake it up.
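The hardware-assisted flow can be sketched as a small state machine: fence new traffic, drain what is in flight, notify the power manager, and wake on an incoming transaction. This is a behavioral model under those assumptions, with invented names, not any vendor’s actual interface.

```c
/* Behavioral model of a network-assisted power-down/wake handshake.
 * All names are illustrative. */
#include <stdio.h>

enum pm_state { ACTIVE, FENCED, DRAINING, POWERED_DOWN };

struct core_pm {
    enum pm_state state;
    int in_flight;          /* transactions the network can still see */
};

/* Power manager requests shutdown: the network fences the target first. */
void pm_request_off(struct core_pm *c)
{
    if (c->state == ACTIVE) {
        c->state = FENCED;  /* new transactions to this core are stalled */
        c->state = (c->in_flight == 0) ? POWERED_DOWN : DRAINING;
    }
}

/* Network observes a transaction completing for this core. */
void net_txn_done(struct core_pm *c)
{
    if (c->in_flight > 0 && --c->in_flight == 0 && c->state == DRAINING)
        c->state = POWERED_DOWN;    /* safe-to-gate notification */
}

/* Network observes a new transaction targeting this core. */
void net_txn_arrived(struct core_pm *c)
{
    if (c->state == POWERED_DOWN)
        c->state = ACTIVE;          /* fast wake request to the power manager */
    c->in_flight++;
}

int main(void)
{
    struct core_pm gpu = { ACTIVE, 2 };
    pm_request_off(&gpu);           /* fences traffic, starts draining */
    net_txn_done(&gpu);
    net_txn_done(&gpu);             /* last one out: core gated        */
    printf("state after drain: %d (3 = POWERED_DOWN)\n", gpu.state);
    net_txn_arrived(&gpu);          /* incoming traffic wakes it up    */
    printf("state after wake:  %d (0 = ACTIVE)\n", gpu.state);
    return 0;
}
```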
Moving forward: By using system IP for the power hierarchy and system control, it would be possible to extend this control all the way up to the application level. With a dedicated power processor that has interfaces to the software, applications can become acutely ‘power aware,’ gaining knowledge of the hardware and of the other applications with which they are interacting. Although this vision may be a few years away, it would significantly improve the overall user experience and reduce the total power consumption of any mobile device. For now, if system architects take advantage of system IP, they can develop smarter power architectures that not only reduce power, but actually improve performance, since more of the silicon can stay ‘dark’ and there is less chance of hitting those power limits.
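If that application-level vision arrived, the interface an app might see could look something like the sketch below. This is purely speculative, with invented function and type names, and is meant only to show what ‘power aware’ could mean from the software side.

```c
/* Speculative sketch of an application-facing power interface backed
 * by a dedicated power processor. Every name here is invented. */
#include <stdbool.h>
#include <stdint.h>

typedef enum { POWER_HINT_BACKGROUND, POWER_HINT_INTERACTIVE, POWER_HINT_BURST } power_hint_t;

struct power_budget {
    uint32_t budget_mw;   /* what the power processor can grant right now */
    uint32_t other_apps;  /* how many active apps share that budget       */
};

/* These would be calls into the power processor's driver; here they
 * are placeholders so only the shape of the API is visible. */
extern bool pm_query_budget(struct power_budget *out);
extern bool pm_set_hint(power_hint_t hint);

void render_web_page(void)
{
    struct power_budget b;
    if (pm_query_budget(&b) && b.budget_mw < 500 && b.other_apps > 3)
        pm_set_hint(POWER_HINT_BACKGROUND);   /* be a good citizen          */
    else
        pm_set_hint(POWER_HINT_BURST);        /* sprint: render, then idle  */

    /* ... kick off layout/paint work here ... */
}
```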
So the next time you need to access information from the cloud and have to wait, think of how handy that system IP would be…
—Frank Ferro is director of marketing at Sonics.