One-On-One: Mike Muller

Part 2: ARM’s CTO talks about new memory strategies; coherent vs. non-coherent designs; future phone innovations; what’s limiting wearable electronics; and the impact of stacked die.


SE: What happens with memory, where access is more localized?

Muller: Hybrid Memory Cube is one approach. HBM is another. ARM is chairing an IEEE standards group for a next-generation memory interface to make sure we build memories that fit mobile as well as networking and classic servers. Whereas in the past memories were driven by performance, now we need to make the power scale with the bandwidth. The challenge at the moment is that your memory takes an awful lot of power, no matter how much of it you use. If you use the whole line of memory, it’s very efficient. If you only use a small part of it, it costs you as much as using the whole line. It’s a multi-year standards effort, but if you look at it from a power perspective, it’s not just about getting the most bits.
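Muller’s line-utilization point can be made concrete with a toy energy model: opening a memory line costs roughly the same whether you consume one word or the whole line, so the energy per useful bit balloons at low utilization. The Python sketch below is a hypothetical illustration; every constant is an invented placeholder, not a measurement of any real memory part.

```python
# Toy model of memory access energy vs. line utilization.
# All constants are invented placeholders, not data for a real part.

LINE_ACTIVATE_PJ = 2000.0  # fixed cost to open one line, picojoules (hypothetical)
IO_PJ_PER_BIT = 5.0        # transfer cost per bit actually moved (hypothetical)
LINE_BITS = 8 * 1024       # bits in one memory line (hypothetical)

def energy_per_useful_bit(bits_used: int) -> float:
    """Energy (pJ) per bit the application actually consumes."""
    total_pj = LINE_ACTIVATE_PJ + IO_PJ_PER_BIT * bits_used
    return total_pj / bits_used

for used in (LINE_BITS, LINE_BITS // 4, 64):
    utilization = 100.0 * used / LINE_BITS
    print(f"{utilization:6.2f}% of line used -> "
          f"{energy_per_useful_bit(used):7.2f} pJ per useful bit")
```

Under these made-up constants, touching only 64 bits costs roughly seven times more energy per useful bit than streaming the whole line, which is exactly the kind of scaling a power-aware interface standard has to address.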

SE: What happens on the software side? That’s the other piece of controlling this.

Muller: There are clear abstractions between the hardware and the software and the tools that sit between them. They will evolve on completely different timelines.

SE: One of the challenges with multi-core design is efficient cache coherency. Do you see that getting better over time?

Muller: That’s definitely true in embedded. The history of coherency in mainstream computing is littered with architectures that haven’t been successful. If you make everything coherent, it’s easier for the software guys, and they’re calling the shots. You need to know who your customers are. As you step up the spectrum into supercomputers, you get to a number of cores where it’s impossible to do full coherency, so you have coherency clusters, and then clusters of clusters, to solve that problem in a different way. As you move down the pyramid, we’re doing banking, we’re doing simulations of things that go bang, and we’re doing simulations of earthquakes. They all care a lot about efficiency, so they architect the algorithm to match the underlying hardware. As you come back into the embedded world, in the past you could up the frequency and add more threads. That changes when you talk about subsystems. The processors are small and cheap, so you’ll see multiple separate subsystems, and they don’t need to be coherent. If you look inside a modern mobile phone you can see that today with Bluetooth, MP3 and audio playback; some of those are architected as standalone systems that are integrated together. That comes more and more into embedded, but it’s not a coherent world. It’s a function-specific world.
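One way to picture that function-specific world is subsystems that share no coherent memory at all and instead exchange self-contained messages. The Python sketch below is a hypothetical illustration, with queues standing in for hardware mailboxes between a host and, say, an audio subsystem; it is not ARM’s interconnect or any real phone design.

```python
# Minimal sketch: a function-specific "subsystem" with no shared, coherent
# state. It owns its data privately and communicates only through explicit
# mailboxes (queues), the software analogue of a hardware mailbox handoff.
# Names and structure are illustrative, not a real SoC design.
import queue
import threading

def audio_subsystem(mailbox_in: queue.Queue, mailbox_out: queue.Queue) -> None:
    """Owns its decode state privately; no other subsystem can observe it."""
    decoded_frames = 0  # private state, never shared coherently
    while True:
        msg = mailbox_in.get()
        if msg is None:  # shutdown sentinel
            break
        decoded_frames += 1
        # Hand back a complete, self-contained result message.
        mailbox_out.put({"frame": decoded_frames, "pcm": msg.upper()})

to_audio: queue.Queue = queue.Queue()
from_audio: queue.Queue = queue.Queue()
worker = threading.Thread(target=audio_subsystem, args=(to_audio, from_audio))
worker.start()

for packet in ("mp3-packet-1", "mp3-packet-2"):
    to_audio.put(packet)     # the host never touches the decoder's state
    print(from_audio.get())  # it only sees finished messages

to_audio.put(None)
worker.join()
```

Because the decoder’s state is private, nothing here needs hardware coherency; correctness comes entirely from the message protocol, which is why small, cheap subsystems can be integrated without paying for a coherent fabric.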

SE: What does a smart phone look like in five years?

Muller: From a hardware perspective, you can see where a lot of the development is going. There’s more memory, flexible displays, better cameras. That’s all fairly predictable. What you don’t see is the innovation in sensors. We have touch already. There are various things that fundamentally could change UI design, and that will come as a surprise. But for the next few years, the phone is your primary compute device. What is very hard to say is what the software will look like in five years. That’s where innovation can happen within a year and really change what matters. It goes from using my phone for voice, to e-mail, to Facebook, to something else. That’s where the radical innovations come, even if the underlying platform in some ways stays the same.

SE: What’s your vision for wearable electronics? Will we get to the point where a watch can predict a heart attack ahead of time?

Muller: I believe in wearable technology once it gets into weeks, months, or years of battery life. That’s where it transforms from a novelty into something useful. But I don’t think the heart attack prediction happens in your watch. It happens in the cloud, combining real-time data with your medical history. Who controls that data, and who sells you the service for analyzing and monitoring it? That’s a game for others to play.

SE: This is all about pattern recognition combined with some level of artificial intelligence, right?

Muller: Yes, you have to make the predictions.
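As a hypothetical illustration of that pattern-recognition step, the Python sketch below flags heart-rate samples that drift far from a rolling baseline using a simple z-score test. The thresholds and data are invented; a real cloud service would combine far richer models with the medical history Muller mentions.

```python
# Toy anomaly detector over a heart-rate stream: flag samples more than
# K standard deviations from a rolling baseline. Thresholds and data are
# invented for illustration; real prediction models are far richer.
from collections import deque
from statistics import mean, stdev

WINDOW = 10   # samples in the rolling baseline (hypothetical)
K = 3.0       # z-score threshold (hypothetical)

def detect_anomalies(samples):
    baseline = deque(maxlen=WINDOW)
    for t, bpm in enumerate(samples):
        if len(baseline) == WINDOW:
            mu, sigma = mean(baseline), stdev(baseline)
            if sigma > 0 and abs(bpm - mu) / sigma > K:
                yield t, bpm
        baseline.append(bpm)

stream = [72, 74, 71, 73, 75, 72, 74, 73, 71, 72, 140, 73, 72]
for t, bpm in detect_anomalies(stream):
    print(f"sample {t}: {bpm} bpm looks anomalous")
```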

SE: There’s a lot of IP in devices these days. How does ARM determine which other vendors to work with?

Muller: If you look at the main IP that we produce, it’s pretty clear what we’re going to be doing for the next four or five years and what we will need to get there. The choices there are which order you prioritize, and that is a combination of engaging and talking with all of our customers about their priorities, and a certain amount of judgment about what we think will add value. It’s not science. We’re lucky in that we’re transparent. Everything we do we sell and we expose, and the engagements happen over quite a long period of time. If you look at the CPU road map, we’re already discussing the next processor and the processor after that. The synchronization happens well before engineering is fully connected. We have road maps, and we review those with all our partners. And those road maps change. We refine them. We have an annual meeting where we get all our partners together, and it always changes. Some people will be happier than others.

SE: Do 2.5D and 3D change anything for ARM?

Muller: We’ve been watching that for some time. At the moment it changes how you build systems, but it’s primarily “memory is here, SoC is there.” We’ve built two versions of a research chip with the University of Michigan: What happens if you put the cache up here and the CPUs down here, with bigger pipes between them? But when you look at the costs involved, you need a good reason for doing it. The really good reason is that you have two really different technologies, so this is memory and this is SoC. Doing real classic logic design across 2.5D and 3D is a long way off. There are test chips, but it’s a 5- to 10-year process.
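Muller’s cost argument can be sketched with a toy comparison: stacking adds assembly cost and a yield penalty, so it only pays off when the two dies genuinely need different process technologies. Every figure below is a hypothetical placeholder, not real foundry or packaging data.

```python
# Toy cost comparison: one monolithic die vs. a 2.5D/3D stack of two dies.
# All figures are hypothetical placeholders, not real manufacturing data.

def monolithic_cost(logic: float, memory_in_logic_process: float) -> float:
    """One die carrying both functions, built in the logic process."""
    return logic + memory_in_logic_process

def stacked_cost(logic: float, memory_in_memory_process: float,
                 assembly: float, assembly_yield: float) -> float:
    """Two known-good dies plus interposer/TSV assembly, derated by yield."""
    return (logic + memory_in_memory_process + assembly) / assembly_yield

# Case 1: "two really different technologies" -- dense memory in a dedicated
# process is far cheaper than building that capacity on the logic die.
print(monolithic_cost(logic=20.0, memory_in_logic_process=35.0))   # 55.0
print(stacked_cost(logic=20.0, memory_in_memory_process=8.0,
                   assembly=6.0, assembly_yield=0.95))             # ~35.8

# Case 2: the same process would do -- assembly cost and yield loss make the
# stack more expensive, so there is no good reason to stack.
print(monolithic_cost(logic=20.0, memory_in_logic_process=10.0))   # 30.0
print(stacked_cost(logic=20.0, memory_in_memory_process=8.0,
                   assembly=6.0, assembly_yield=0.95))             # ~35.8
```

The point of the toy numbers is the structure: the stack wins only when the memory die’s dedicated process saves more than the assembly and yield overhead costs, which matches the “really good reason” of combining two genuinely different technologies.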

SE: The semiconductor industry tends to move in a linear fashion, but we do have fundamental gaps in lithography and at 10nm we hit quantum effects. What does that mean for ARM?

Muller: We have a team that sits down and says: if we do this, then what happens? So if nanotubes are the way of the future, what does that mean? What happens if you try to build that? It may not come about, but you have to predict what will happen at 7nm and 5nm, build models for that, build processes for those models, and iterate them as the future arrives. Assume it all stops. It’s not the end of the world, because innovation continues. Software stacks run top to bottom, and they become a means of making progress even in a stable world. From a consumer point of view, even if it stops, we have a long way to go. I was really reassured by our project for the Cortex-M0, our smallest processor. If you compare it with the processor we built back in 1985, they’re about the same transistor complexity and took the same size of design team. Verification of the new one took a lot longer, primarily because we shipped a few tens of thousands of the first one and we’re shipping a few billion of the new one. So the quality level has changed. One was designed for a desktop computer. The other is designed as an independent microcontroller. They look similar, but they’re actually very different. Even in a world where you don’t have twice as many transistors, there’s a load of innovation to be done.

SE: Interconnection through the IoT may actually spur innovation because of all the connections into new markets that we’ve never touched in the past, right?

Muller: Yes. What I find exciting about IoT is that you have all these great products, but it actually changes the business model. That’s where the real innovation occurs. It’s now all interconnected. What do we do with it? How do you make money out of that? Is it hardware, is it software, is it a service? That’s where the changes will come. It’s the new business that comes out of it as much as the shifts within it.


