Executive Insight: Grant Pierce

Sonics’ CEO discusses how to improve the energy efficiency in always-on portions of an SoC.


Grant Pierce, president and CEO of Sonics, sat down with Semiconductor Engineering to discuss new ways to increase energy efficiency in SoCs. What follows are excerpts of that conversation.

SE: Looking out at the semiconductor industry there are a lot of changes underway right now. What are the biggest impacts from your perspective?

Pierce: The amount of data being captured or sensed is growing exponentially. The world around us is becoming fully aware of our presence in it, and in the process we’re gathering information for a plethora of interests. We need to know where people are, whether they’re commuting, at work, or doing something else, and we need to capture, store, and retrieve that data. This drives the need to process information very close to where it is sensed.

SE: The big issue there is power, right?

Pierce: Yes. You’re stepping into the world where everything you interact with is battery-operated or connected to something that’s battery-operated, which means it’s power-sensitive. At the other end of the spectrum, the data center is extremely power-sensitive, too. Google spends more money to power its data centers than it does on salaries for its employees. The more you go out to the edge, the more you have to think about how long a device has to sense. The longer that process, the more power-efficient it has to be. This is somewhat like the progression of the computer industry.

SE: How so?

Pierce: Back in the 1980s, processors needed to become more efficient and better matched to the software we were going to run. Instead of designing the hardware using a complex instruction set intended to serve the needs of a software developer working close to the hardware, we realized we could take advantage of software technology on the compiler front and look at building the most efficient hardware to run that compiled code. That efficiency had a benchmark, which was MIPS—millions of instructions per second. At the time we focused on the efficiency of executing instructions: how we built computers, how we built the memory systems, how we handled the I/O of that pipeline. If we could bring the memories closer, we had better performance. If we could decouple I/O to minimize interrupts, that helped, too.

SE: So how does that apply today? We still have performance issues, but the overriding issue in many applications is power.

Pierce: In the early 1980s, we did not once consider power consumption. In fact, power consumption didn’t influence computing until we got into the era of ARM. They recognized that their processors were the lowest-power-consuming processors in the marketplace, which was different from the preceding architectures that were built primarily for speed. ARM was the first to say, ‘Our processors are fast enough. Now, what about power?’ Today we have a world dominated by the compute power of the architectures we’ve built for any kind of electronic device. We’ve been focused on the activity of the chip—the moments when a computer is doing work. What we have ignored is the accumulation of more and more idle moments in the chip that are being powered up and wasting energy. Solving that requires an approach in hardware, and in a subsystem on the chip, that is focused on energy consumed by the chip. We need to pull together all of the energy-savings techniques in a coordinated way to save power. We’re taking the mirror image of the active components of a chip and complementing that with an EPU.

SE: What’s an EPU?

Pierce: An energy processing unit.

SE: Is this a new term?

Pierce: Yes, but it’s been an area of study among larger companies working in a battery-operated market for a long time. Today it’s primarily done with ad hoc custom techniques that are difficult to scale and which do not address all of the needs. Basically, it’s the work of specialists.

SE: What exactly is it, and where does it fit into the overall scheme of things?

Pierce: It’s an IP subsystem. You can break the market down into two pieces. One portion of the market is taking well-understood ideas that may already be implemented elsewhere, repackaging them, and moving them into SoCs. That’s where you have big microprocessors doing big tasks, small ones doing small tasks, and specialized processors doing interesting vertical applications within an SoC design, only more efficiently. There also is the concept of cache coherency, which has been around for decades and is just now making its way into SoCs. But the new ideas are in the area of subsystems, and an EPU is one of them. The focus is on bringing together the techniques and instruction sets that are tied to the software executing on the device—or the software driving the EPU. It maximizes the number of idle states in an SoC. So there is an architectural element that executes a set of instructions telling the device how to manage power. It involves multiple techniques, ranging from power gating to clock gating to voltage and frequency management within a device.
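
To make the idea concrete, here is a minimal sketch in C of how such power-management instructions might be modeled. All names, opcodes, and the stubbed hardware call are hypothetical illustrations, not Sonics’ actual instruction set:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical encoding of one EPU power-management instruction. */
    typedef enum {
        EPU_OP_POWER_GATE,  /* cut power to a domain entirely         */
        EPU_OP_CLOCK_GATE,  /* stop the clock but retain state        */
        EPU_OP_SET_DVFS     /* move to a new voltage/frequency point  */
    } epu_opcode;

    typedef struct {
        epu_opcode op;
        uint16_t   domain_id;  /* which power/clock domain to act on  */
        uint8_t    dvfs_point; /* target operating point for SET_DVFS */
    } epu_insn;

    /* Stub for the register write a real controller would perform. */
    static void write_domain_reg(uint16_t domain, const char *action, int arg)
    {
        printf("domain %u: %s (%d)\n", (unsigned)domain, action, arg);
    }

    /* Execute one instruction: each opcode maps to one of the
       techniques named above (power gating, clock gating, DVFS). */
    static void epu_execute(const epu_insn *i)
    {
        switch (i->op) {
        case EPU_OP_POWER_GATE: write_domain_reg(i->domain_id, "power-gate", 0); break;
        case EPU_OP_CLOCK_GATE: write_domain_reg(i->domain_id, "clock-gate", 0); break;
        case EPU_OP_SET_DVFS:   write_domain_reg(i->domain_id, "set-dvfs",
                                                 i->dvfs_point);                 break;
        }
    }

    int main(void)
    {
        /* A tiny "program" for the EPU: idle a video pipe, slow a DSP. */
        epu_insn prog[] = {
            { EPU_OP_CLOCK_GATE, 3, 0 },  /* domain 3: video pipeline    */
            { EPU_OP_SET_DVFS,   7, 2 },  /* domain 7: DSP, lower point  */
        };
        for (unsigned n = 0; n < sizeof prog / sizeof prog[0]; n++)
            epu_execute(&prog[n]);
        return 0;
    }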

SE: Is there any way of quantifying this?

Pierce: Yes, the benchmark is millions of power states per second.

SE: So it’s basically adding a level of granularity into power management, right?

Pierce: Yes. We have domains on a chip: power domains, clock domains, and other pieces of infrastructure we want to control for the compute task that needs to take place. We call that a grain. So granularity is exactly correct. We’ve put in logic to add local control of a grain. And that grain can be arbitrarily small, which is the key to adding power savings.
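
As a rough illustration of the grain concept, here is a minimal sketch in C of a per-grain state machine. The states, thresholds, and transition rule are assumptions for illustration, not the actual controller logic:

    #include <stdio.h>

    /* Hypothetical states for one grain (a power/clock domain plus
       the local logic that controls it). */
    typedef enum { GRAIN_ACTIVE, GRAIN_CLOCK_GATED, GRAIN_POWER_GATED } grain_state;

    typedef struct {
        const char *name;
        grain_state state;
        unsigned    idle_cycles;  /* cycles since last activity */
    } grain;

    /* One controller tick: deepen the grain's sleep as idleness grows.
       The thresholds (10 and 1000 cycles) are illustrative. */
    static void grain_tick(grain *g, int busy)
    {
        if (busy) {
            g->idle_cycles = 0;
            g->state = GRAIN_ACTIVE;
        } else if (++g->idle_cycles > 1000) {
            g->state = GRAIN_POWER_GATED;   /* long idle: cut power    */
        } else if (g->idle_cycles > 10) {
            g->state = GRAIN_CLOCK_GATED;   /* short idle: stop clock  */
        }
    }

    int main(void)
    {
        grain video = { "video-pipe", GRAIN_ACTIVE, 0 };
        for (int cycle = 0; cycle < 2000; cycle++)
            grain_tick(&video, /*busy=*/cycle < 5);
        printf("%s final state: %d\n", video.name, (int)video.state);
        return 0;
    }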


SE: How does the data come back into an EPU to be able to make these kinds of decisions?

Pierce: An EPU is designed to be tied into the event matrix of the SoC. That could include sensed data, like thermals. It also could include compute data, showing where work is or is not taking place within a design. If you look inside a subsystem in an SoC, when that subsystem is active it’s usually a compute pipeline tied to local memory, executing an instruction set on data streaming into it. It’s simply another compute system with inherent idle states that can be mined out. There are little bits of gold there. We can save power for the subsystem even while it’s running at its specified voltage and frequency, so we can find efficiencies even in an always-on device. Idle states in an SoC are everywhere. That’s the opportunity.
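
A minimal sketch of that event side, in C. The event types and the policy responses are hypothetical, meant only to show how sensed and compute events might feed a gating decision:

    #include <stdio.h>

    /* Hypothetical events an EPU might receive from the SoC's event matrix. */
    typedef enum {
        EV_THERMAL,     /* sensed data, e.g. a temperature reading      */
        EV_PIPE_BUSY,   /* compute pipeline started work                */
        EV_PIPE_IDLE    /* compute pipeline drained, no work queued     */
    } epu_event_type;

    typedef struct {
        epu_event_type type;
        int            value;  /* degrees C for EV_THERMAL, else unused */
    } epu_event;

    /* Illustrative policy: gate on idle, wake on busy, log thermal. */
    static void epu_on_event(const epu_event *ev)
    {
        switch (ev->type) {
        case EV_PIPE_IDLE:  puts("idle state mined: clock-gating pipeline"); break;
        case EV_PIPE_BUSY:  puts("activity: restoring pipeline clock");      break;
        case EV_THERMAL:    printf("thermal sample: %d C\n", ev->value);     break;
        }
    }

    int main(void)
    {
        epu_event trace[] = {
            { EV_PIPE_BUSY, 0 }, { EV_PIPE_IDLE, 0 }, { EV_THERMAL, 71 },
        };
        for (unsigned i = 0; i < sizeof trace / sizeof trace[0]; i++)
            epu_on_event(&trace[i]);
        return 0;
    }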

SE: How much power can be saved?

Pierce: We are finding routines on a device that yield more than 40% power savings for that routine. In streaming video we’ve found opportunities to save as much as 90%-plus. But when you aggregate these numbers, two things happen. First, the overall power savings get better. Second, as you implement this technology from generation to generation, the intelligence of the EPU lets you keep improving the power-saving efficiency you can achieve. We can instrument and monitor what’s happening in the deployed device. This is a part of the SoC that operates autonomously below the operating system. It’s invisible to the CPU.

SE: What’s the power draw of an EPU?

Pierce: About 0.0004%. It’s very small and extremely frugal with energy. It’s a part of an SoC that is in the always-on domain.

SE: And how does it connect to the rest of the chip?

Pierce: It’s a complete subsystem. It will have its own driver, memory and communication resources to the power grids.

SE: Do all devices have to have signals coming into the EPU?

Pierce: There are simple interfaces that connect a local power grain controller to the centralized controller. This is built off an on-chip network concept. Instead of optimizing for throughput or scale, this network is optimized for power.
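
As a sketch of what such an interface might carry, here is a minimal request/grant exchange in C between a local grain controller and the centralized controller. The message layout and arbitration rule are invented for illustration:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical message from a local grain controller to the
       centralized EPU controller over the on-chip network. */
    typedef struct {
        uint16_t grain_id;   /* which grain is reporting                */
        uint8_t  requested;  /* state the local controller wants        */
        uint8_t  granted;    /* state the central controller allows     */
    } grain_msg;

    /* Central policy stub: grant the request unless the grain is on a
       (hypothetical) keep-alive list, e.g. one feeding an active master. */
    static void central_arbitrate(grain_msg *m, const uint16_t *pinned, int npinned)
    {
        m->granted = m->requested;
        for (int i = 0; i < npinned; i++)
            if (pinned[i] == m->grain_id)
                m->granted = 0;  /* 0 = stay active */
    }

    int main(void)
    {
        uint16_t pinned[] = { 4 };        /* grain 4 must stay on        */
        grain_msg req = { 4, 2, 0 };      /* grain 4 asks for state 2    */
        central_arbitrate(&req, pinned, 1);
        printf("grain %u: requested %u, granted %u\n",
               (unsigned)req.grain_id, (unsigned)req.requested,
               (unsigned)req.granted);
        return 0;
    }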

SE: Does the process or packaging approach matter?

Pierce: No, it’s process-independent. It also can save power where there is no host processor. If you tie into its capabilities for handling DVFS, you can sense what’s happening on the chip and adjust for thermal hot spots. The policy comes from the architect, who understands the application and what its needs are.
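
A minimal sketch of that thermal side, in C. The sensor readings, thresholds, and operating-point table are all assumptions for illustration:

    #include <stdio.h>

    /* Hypothetical DVFS operating points (MHz, mV), fastest first. */
    static const struct { int mhz, mv; } opp[] = {
        { 1200, 1100 }, { 800, 950 }, { 400, 800 },
    };

    /* Illustrative control step: if the sensed temperature is over the
       limit, step down one operating point; if well under, step back up. */
    static int dvfs_step(int current, int temp_c)
    {
        const int npoints = sizeof opp / sizeof opp[0];
        if (temp_c > 85 && current < npoints - 1) return current + 1;
        if (temp_c < 60 && current > 0)           return current - 1;
        return current;
    }

    int main(void)
    {
        int point = 0;
        int samples[] = { 70, 88, 90, 62, 50 };  /* fake sensor readings */
        for (unsigned i = 0; i < sizeof samples / sizeof samples[0]; i++) {
            point = dvfs_step(point, samples[i]);
            printf("temp %d C -> %d MHz @ %d mV\n",
                   samples[i], opp[point].mhz, opp[point].mv);
        }
        return 0;
    }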

SE: How do you see this concept playing out for the market?

Pierce: There are two types of customers out there in terms of their power sophistication. Some already have power management on a device. This is designed to be a system open enough to let them apply their own technology expertise, which makes that expertise more scalable to more design groups within their company. A lot of companies don’t deploy even their own expertise into their hardware groups because it’s hard to do. And then there are the other companies, which don’t have that kind of expertise. For them, this is a complete solution.


