Experts At The Table: Multi-Core And Many-Core

Last of three parts: The impact of cloud computing; security and privacy issues; energy efficiency vs. virtualization; limits of parallel programming.


By Ed Sperling
Low-Power Engineering sat down with Naveed Sherwani, CEO of Open-Silicon; Amit Rohatgi, principal mobile architect at MIPS; Grant Martin, chief scientist at Tensilica; Bill Neifert, CTO at Carbon Design Systems; and Kevin McDermott, director of market development for ARM’s System Design Division. What follows are excerpts of that conversation.

LPE: How does cloud computing change the need for multicore and many-core processors?
Sherwani: Cloud architectures will evolve differently from mobile architectures. They will be homogeneous 8-, 16- and 32-core architectures. The cloud knows a lot about what you are storing. You can put a lot of intelligence into what you’re storing, which is not the case in a mobile device.

LPE: So what does that mean for the mobile devices taking advantage of it?
Sherwani: It can certainly make mobile devices more efficient. You can store a lot more on the mobile devices. You can do a lot of streaming.
Martin: The application cloud interaction may change in character. People will write somewhat different apps in the future that will take advantage of what the cloud has to offer. This is why you’ll see cobwebs on the desktop in the future because no one is very interested in it anymore.
Sherwani: And if you look at video, with the cloud and a good wireless connection you don’t have to store the video. Video cameras will become a lot less expensive.
McDermott: This should be put into context. It’s amazing that people are so excited about a database. That’s all it is. I believe the vision for the mobile device is that you have access to all the data, and you selectively choose how to expose it. The browsing experience is different. You don’t try to replicate the desktop experience on a smaller screen. It’s a given. You take the appropriate content and you display it in a way that’s easiest to digest. I think the hardware on the mobile device will become smart enough to selectively show you the piece that you need on your mobile device. You don’t need an entire map. You just need to know where you are.

LPE: What’s interesting about databases, though, is that they’re one of the very few applications that really can do true parallel processing and scale effectively.
Sherwani: I’ve been saying for the last two years that we should stop giving people content. In five years all the content will be available. If you’re a mechanical engineer, everything you need will be on the Web. What we need to do, though, is teach people how to do something useful. This is the same thing with mobile devices. Whatever device will be useful will be the one that can quickly filter through what you’re looking for to get something done. It’s not about storing more information. Cloud brings that opportunity to people, devices and things. Our view of expertise will change. It won’t matter if you’re an electrical engineer. It’s whether you can get a task or series of tasks done. That will be more important than a Ph.D. We are 10 years from that, but this is how people of the next generation will think.

LPE: What you’re talking about is data mining for the masses?
Sherwani: Yes.
Martin: Before we get too carried away, there are a couple of issues that really need to be solved in this cloud paradigm. We do need to think a lot about privacy, security, and the ability of the infrastructure—both wired and wireless—to deliver all of this content off the cloud and onto the sea of mobile devices. We all know about the experiences of certain smart phones overloading networks and they’re still trying to improve the quality of the network. The wired infrastructure is not fault free. Security and privacy worry me more. If you upload all your data into some big infrastructure, you want your data secured.
Rohatgi: That’s the weakest link. Everybody’s pushing down this path. What worries me is the security and reliability. There are a ton of issues that need to be resolved. Creating a smart infrastructure for data mining can be done today. On the mobile side, there are probably some advances necessary to improve battery life, which is the No. 1 complaint I hear today. But the weakest links we hit are the communications channel, security, privacy and reliability. If those can be resolved then we can progress.
Martin: The technologies we’re all involved with are going to help in a big way. It just requires a bit of mobilization to focus on those issues.
McDermott: This reminds me of where we were with cell phones years ago when the processor went through certification with the carrier. The consumer doesn’t see all the certification on the network. The carrier loves new features. It’s more traffic for their store. It brings in a new wave of users. What they don’t want to see is something that disrupts their infrastructure. For the engineer, the certification is really intense and the field trials are difficult. The cell phone industry has to show a partition that you can certify your baseband and your protocol stack and that has to be isolated from other activity. That underlying security infrastructure is built into the certification. I think we’ll see that extended upward through commercial transactions to having trusted processes and transactions.

LPE: Will cores all be homogeneous or heterogeneous, and will some of them be virtualized?
Sherwani: All of the above. There will be homogeneous cores, heterogeneous cores and there will be virtualization. They all solve different problems. You need virtualization in data centers.

LPE: But will you need virtualization on your smart phone?
Rohatgi: We’re starting to see some of that. I don’t think the operating system wars are dead. And at the end of the day, there is some value to keeping RTOS access to legacy hardware alongside a high-level operating system like Android, Windows or iOS. From a security angle, it all depends on the use case. The mobile guys are really scared of virtualization of a single processor that has access to all memory. They want separate memory and separate everything.

LPE: This is similar to devices that have a partition between what’s used at home and at the office, right?
Rohatgi: Yes. It’s the same problem. And this almost ties into virtualization. On the privacy side, there isn’t a well-defined security layer with NFC (near-field communication), and yet they’re talking about mobile payments. If you power on an Android phone and shut off all networking, your maps go haywire. Why? Because there’s a back channel that goes to some cloud that helps triangulate where you are. That information is stored to help applications of the future. I’m surprised people aren’t bothered by this. But to return to the question, we’re starting to see some effort down the path of virtualization even though it’s not widespread yet.
Martin: You won’t see virtualization down to the metal. In the dataplane layers it’s nice that processors can emulate other processors effectively, but close to the metal you want extreme efficiency and high performance.
Neifert: And that’s where I see the problem with virtualization. It’s the power. Virtualization is nice, but it’s an abstraction away, which is a power loss. At that point you need heterogeneous processing.
Rohatgi: Transmeta, about nine years ago when they started doing abstractions to hardware, saw power numbers that were way down. It’s too bad that green energy wasn’t something that was important then. Still, the genesis of the Atom processor was entirely because of Transmeta.
Sherwani: A typical Bluetooth radio takes about 32 milliwatts of active power. At 65nm we have a Bluetooth radio that only uses 3.2 milliwatts. And there is a design on the board that will take it below 1 milliwatt. There are a bunch of engineers getting excited because over the last 100 years the basic design of a radio has not changed. What Marconi designed is essentially the same as we have today. But when you scale down the power needs to go down. It’s amazing how much lower you can go.
Rohatgi: There’s the other side of this, too. Battery technology has not evolved as much as we would like. For the analog components, it’s the switching characteristics that are governing it. That’s where you’re seeing a lot more intelligence. If you were to look at the power profiles of a mobile device, LEDs and LCDs were supposed to be the promise for low power. That hasn’t worked out. There are still 250 milliwatt drivers. The radio is probably No. 2 on the list after that.
McDermott: People’s expectations were that a screen would be a certain pixel density. Today that needs to be super high-definition. It’s beyond high-def.

LPE: So will we see more cores in the future or have we maxed out?
McDermott: As a programmer, how are you going to keep track of 100 cores? How are you going to program that intelligently? Either it’s going to be some array a programmer can visualize, or it’s going to be three or four very solid cores and let other cores do things like Bluetooth. You can’t keep 100 threads in your mind.
Rohatgi: There’s a limit to this. If you look at the desktop space, in 2006, when Intel started heading down this multicore path, they found that success wasn’t nearly as fast as they thought. There’s probably a limit on mobile devices, too.
Sherwani: We did all this in the 1980s. nCube used to have a 16-core and 32-core machine. It works great up to 8 cores, but after that you lose it.
Martin: If you are trying to program a concurrent application and split it into different threads, there are inherent limits. Some very specialized applications may be very concurrent, but most are not.
Neifert: The programming model has a human in the center, and humans can only process so much. Until the fundamental programming model changes, you won’t see much advancement.
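The inherent limits the panelists describe are commonly formalized as Amdahl’s law: if a fraction p of a program can run in parallel, the speedup on n cores is bounded by 1/((1 − p) + p/n). A minimal sketch (the function name and the 90%-parallel workload are illustrative assumptions, not figures from the discussion) shows why "it works great up to 8 cores, but after that you lose it":

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Upper bound on speedup when the parallel fraction of a
    workload is spread perfectly across the given core count."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Even a 90%-parallel workload can never exceed 10x speedup,
# and most of the benefit is gone by 8 cores:
for n in (8, 16, 32, 100):
    print(n, round(amdahl_speedup(0.9, n), 2))
```

Going from 8 to 100 cores here roughly doubles the speedup (about 4.7x to about 9.2x), while the hardware cost grows more than tenfold, which is the economics behind the panel’s skepticism about ever-larger homogeneous core counts.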
