Experts At The Table: Multi-Core And Many-Core

First of three parts: Use models determine battery life; power vs. performance; many processors vs. multicore; writing better software.


By Ed Sperling
Low-Power Engineering sat down with Naveed Sherwani, CEO of Open-Silicon; Amit Rohatgi, principal mobile architect at MIPS; Grant Martin, chief scientist at Tensilica; Bill Neifert, CTO at Carbon Design Systems; and Kevin McDermott, director of market development for ARM’s System Design Division. What follows are excerpts of that conversation.

LPE: Computers aren’t getting the power/performance boost today from multiple cores because the software can’t take advantage of them. How do we fix that?
Martin: Your computer isn’t a place where all the advanced design techniques are used. You have to look at battery-powered, cordless devices to see where people use the most advanced design techniques. There they very often will have specialized application processors for different parts of the applications they want to run on those devices. Those processors are designed to be energy-efficient and to use battery power efficiently, and they probably do work better from one generation to the next, except for the case where they throw on additional general-purpose processors and gain nothing in energy efficiency. You have to draw a big distinction between multiple processors that are application-specific and general-purpose processors that do not offer better efficiency or better performance.
Rohatgi: Once the Intel-AMD megahertz wars ended people started heading down a different dimension of multicore. Back then they believed that changing the software ecosystem so that specific software or systems could be written to take advantage of multi-core, multi-thread, multiple processor designs would actually work. We’ve seen it work in many cases. You can reduce the latency when you’re executing a certain process or multiple processes. Another twist to this paradigm is people use core islands. The operating system may run on one core while another core is used for acceleration. Some people define that as multi-core, and that has been very successful because you can partition between a media processor engine, a video processor engine and a graphics processor engine. In terms of power consumption, that whole element needs to be pieced into this picture. When it comes to embedded SoC design vs. desktop design, those are very different when it comes to power consumption. That element hasn’t been worked through very cleanly on the desktop side, where suddenly you need 800-watt power supplies.
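The "core islands" Rohatgi describes, where the operating system runs on one core while another core is dedicated to acceleration, can be approximated on a multicore Linux system by pinning work to specific cores. The sketch below is illustrative, not any vendor's scheme: the core numbering, the `run_on_core` helper, and the stand-in `decode_frames` workload are all assumptions for the example.

```python
import os

# Hypothetical partitioning: core 0 hosts "control" work (OS/UI side),
# core 1 is reserved for a compute-heavy "acceleration" task.
CONTROL_CORE = {0}
ACCEL_CORE = {1}

def run_on_core(cores, work, *args):
    """Pin the calling process to `cores`, run `work`, restore affinity.

    os.sched_setaffinity is available on Linux; pid 0 means this process.
    """
    pid = 0
    old = os.sched_getaffinity(pid)
    os.sched_setaffinity(pid, cores)
    try:
        return work(*args)
    finally:
        os.sched_setaffinity(pid, old)

def decode_frames(n):
    # Stand-in for a media-decode kernel running on the "accelerator" island.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    # Run the heavy kernel on the acceleration island, leaving the
    # control core free for the OS and interactive work.
    print(run_on_core(ACCEL_CORE, decode_frames, 10_000))
```

In a real SoC the partition is enforced in hardware and firmware rather than by a scheduler call, but the principle is the same: keep latency-sensitive control flow and throughput-heavy media work from contending for the same core.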
Neifert: The overall user experience people have when interacting with a device has moved from the underlying hardware to the software. The emphasis has shifted to enhancing the user experience. Opening a window on your desktop used to be simple. Now there’s shading and fancy graphics, so the same window that used to come up in 5 instructions may now take 500. It looks a lot nicer, and in some cases that changes the user experience. But from the processing side, the focus stopped being on single-thread performance as the megahertz started burning up too much power. They branched out into multicore to solve that, but changing the software to accommodate that has been a big struggle. Changing the hardware to isolate that properly has been a struggle, too. Some of the processing that has been done on computers is difficult to migrate over to mobile devices. A lot of the innovation on the desktop is now taking place in the embedded space. If you want to see the leading-edge design techniques, that is where you have to look.
McDermott: In the mobile area low power is associated with battery life, and the key to the user experience is maintaining functionality throughout a working day. We’ve gotten to that point. Now we’re engineering more productivity. There are more features you can run, more capabilities, more graphics, but still within that working day. Now what we’re seeing is that low power is key to other markets. Data centers are predicted over the next few years to rival the airline industry for energy consumption. Cloud computing will lower the power per node, but that energy is still being used somewhere even though it has shifted. What the cloud changes is that if you run an application on one device and shift to a different device it’s no big deal. It takes advantage of the underlying computing architecture. There also may be a hierarchy of operating systems to deal with it, depending on the device.
Sherwani: We got very interested in how power relates to multiprocessing. If you are trying to predict power within a watt or two, that’s no big deal. If you are trying to predict power within a milliwatt, that’s very difficult. We thought that by looking at the implementation of the netlist we could predict power. That turned out not to be the case. Then we tried system-level design. That doesn’t work either. We finally came to the conclusion that you have to have a user model. We needed a human model (a businessman, a lawyer, a student) and then to analyze what they did during the day. Then we had to convert that into the system level and then the RTL level. This takes us far from what Open-Silicon does as a company, but we have found it is the only way to accurately predict power. These kinds of human models don’t exist, so we created models of two types of users. Then we started recording real human beings and checking the models against them. Good models don’t exist if you want to accurately predict power.
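The use-model approach Sherwani describes can be sketched in a few lines: describe how a persona spends the day across activities, attach an average platform power to each activity, and integrate. This is a minimal illustration only; the persona, the activity list, and every power number below are made-up assumptions, not measured values or Open-Silicon's methodology.

```python
# Average platform power (milliwatts) while each activity is active.
# Illustrative numbers only.
ACTIVITY_POWER_MW = {
    "standby": 5,
    "voice_call": 300,
    "mp3_playback": 80,
    "web_browsing": 450,
    "gaming": 1200,
}

# A hypothetical "student" daily use model: (activity, hours per day).
STUDENT_DAY = [
    ("standby", 18.0),
    ("voice_call", 0.5),
    ("mp3_playback", 3.0),
    ("web_browsing", 1.5),
    ("gaming", 1.0),
]

def daily_energy_mwh(day_profile):
    """Energy consumed over one day of the use model, in milliwatt-hours."""
    return sum(ACTIVITY_POWER_MW[act] * hours for act, hours in day_profile)

def battery_days(capacity_mwh, day_profile):
    """Days of this use model supported by a battery of `capacity_mwh`."""
    return capacity_mwh / daily_energy_mwh(day_profile)

if __name__ == "__main__":
    # A roughly 1500 mAh battery at 3.7 V is about 5550 mWh.
    print(round(battery_days(5550, STUDENT_DAY), 2))
```

The point of the exercise matches the panel's: two personas with the same device can see very different battery life, because the dominant term (gaming versus MP3 playback, say) differs by an order of magnitude.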

LPE: Are we better off with many cores or multiple processors?
Martin: Multiple heterogeneous processors are the way to go, particularly in the mobile domain. With clusters of servers you may have many homogeneous tasks you want to map. The desktop is a bit of the orphan here. If you move to cloud computing and the highly mobile devices and ever-smarter phones, you wonder if people will worry about even having a tethered desktop. That means the innovation may be in the big server farms and the mobile devices, and the desktop may gather dust.
Neifert: It will be replaced by a docking station that you plug your mobile device into.
Martin: That’s right. Or as we have seen, some companies are combining mobile devices and a laptop together. The use cases are extremely interesting because there is no single use case. For a mobile device that has an advanced graphics processor, the game player may burn up battery by hammering that all the time. The music lover may be using MP3 decoding and get significantly longer time out of the battery. That drives significantly different use models and processor choices.
Rohatgi: There are a lot of different vertical markets, ranging from digital still cameras to anything with a battery. There is a use case for multiple processors. Networking and cloud computing are very large markets. In the embedded space, what has happened is there are a lot of people in the SoC space. The hardware itself is heavily commoditized. Even the operating system is commoditizing. The differentiation is how you pick and choose your IP. If it comes down to cost in a mobile phone, they don’t start from the top with a feature list or a use model. The discussion begins with, ‘What can you fit in a 7 x 7?’ Based on something like that, what kind of IP can you fit in there and still have a useful device? In the volume mobile phone market, the direction is to shrink the die as small as possible. It may be a 6 x 6 or a 5 x 5. In that case, I would choose multicore rather than multiple processors.
McDermott: In cell phones the issue used to be standby and talk time. People could control that themselves: if you talk more, your battery goes down. People are starting to experience the same tradeoff with games. We’re starting to deal with the apps developers. You used to have specialized OSes and applications. With the proliferation of open source you don’t know what could be running on there. A device can run any app. We’re reaching out to app developers to write code that is attentive to the power effects. There is an amazing learning curve as people learn to deliver a good game experience within an acceptable power budget. You need to get the apps to be power-efficient.
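One concrete form of the power-attentive coding McDermott is asking for is reducing CPU wakeups: an app that polls at a fixed rate wakes the processor constantly, while one that coalesces events into batches lets the core sleep between bursts. The sketch below models only wakeup counts, not real power; the functions, event timestamps, and batching window are illustrative assumptions.

```python
def polling_wakeups(duration_s, poll_hz):
    """A polling loop wakes the CPU poll_hz times per second,
    whether or not there is any work to do."""
    return int(duration_s * poll_hz)

def batched_wakeups(event_times, batch_window_s):
    """An event-driven app that coalesces events arriving within
    `batch_window_s` of a wakeup into that same wakeup."""
    wakeups = 0
    window_end = None
    for t in sorted(event_times):
        if window_end is None or t >= window_end:
            wakeups += 1
            window_end = t + batch_window_s
    return wakeups

if __name__ == "__main__":
    events = [0.1, 0.15, 0.2, 5.0, 5.05, 12.0]  # event timestamps (seconds)
    print(polling_wakeups(60, 10))       # polling at 10 Hz for one minute: 600
    print(batched_wakeups(events, 1.0))  # same events handled in 3 wakeups
```

The two-orders-of-magnitude gap in wakeups is the kind of difference that separates a game that drains a battery in an hour from one that fits in "an acceptable power budget."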