Computing Way Outside Of A Box

Arm’s CTO talks about how AI and the end of Moore’s Law are shaking up processor design.

Mike Muller, CTO of Arm, sat down with Semiconductor Engineering to talk about changing boundaries between client and server machines, the end of Moore’s Law and the impact of machine learning on chip architectures. What follows are excerpts of that conversation.

SE: Are the lines blurring between what’s considered a client device and what’s considered a server?

Muller: It’s less about the classic model of a client device connected to a server, which lives in a server farm on the other side of the planet, and more about what’s happening to compute across the whole spectrum. There’s a whole bunch of stuff in the cloud. There’s a lot happening in the network, in the cell tower, and then in the boot of your car and in your client device. Compute is going to happen in a lot more places than it does today. Basically, the server is moving out of the cloud, into the network and down into the device.

SE: We seem to be heading toward what was once referred to as ubiquitous or pervasive computing, where it’s not just about the device, particularly as capabilities of those devices improve.

Muller: You’ll still have degrees of specialization. When you have a server farm, the devices that you put in 19-inch racks with managed cooling are different from what we will end up putting in a base station or in your phone. What changes a lot of these things is the programming model. As you write your application, how much do you care about what runs in the client app versus the server software? Now you say, ‘Here is the application I want to deploy.’ Compute will happen. Sometimes it will be on the device. Sometimes it will be running in the cloud.

SE: And sometimes it will be opportunistic, right, because with 5G you may not get the same signal strength everywhere?

Muller: Yes, and what makes it hard is it will not be uniform.

SE: So what impact does that have on architectures?

Muller: From a hardware architecture perspective, it means you can’t have the same level of segmentation, where ‘these features are only available on these servers.’ What we’re trying to push at Arm is a single architecture that can run across the whole spectrum.

SE: One of the changes here is the number of data types that need to be bridged to make that all work. Data from a vision sensor will be different from vibration data in a factory.

Muller: An IoT system can generate lots of data, which leads to the question of where that data is collected and processed. Some of it is done close to where you collect the data, and some of it is a batch job on a server somewhere else. Different data types have very different characteristics. Some of it is driven by time. Some of it is driven by how you want to search that data. There is no doubt that different data types result in very different types of processing. But you also may want to go back and look at the same data from a different perspective, which means you sometimes have to store or even cache that data, with multiple views of it, so that when someone comes to query it you can give them the answer quickly.

SE: So now we’re not just looking at one thing, right? We’re looking at different kinds of data coming together from multiple places.

Muller: And people are truly architecting systems, not just individual components.

SE: How do you define system?

Muller: It’s the communications infrastructure and the compute.

SE: So where are the potential new opportunities and risks for Arm?

Muller: We have relied on a certain linear Moore’s Law scaling that gives us faster, lower-power transistors, which makes some architectural choices relatively straightforward. With that roadmap slowing down, you have to think smarter. If you want to do machine learning, you need to come up with a machine learning accelerator. You can’t just crank up the CPU to run faster. Rather than spending more transistors to make your processor go faster, you take those transistors and get significantly more performance from machine learning by building yourself an ML accelerator. People have to be more creative in how they spend their transistors. It’s not just more of the same. That’s the biggest change from a hardware architecture point of view. Then you have to develop a software stack, and you don’t know what it looks like because today it’s going to be on an accelerator, and tomorrow it might come back into the core. And then the day after that it may be running on something completely different in the system. The tradeoffs for all of those are different.

SE: So the hardware challenge is connecting everything together and understanding which processes and data take priority over others?

Muller: People have been building complex software stacks for a long time. With the Internet, the way you build a web site has changed dramatically from how you used to build one. There are multiple components you now put together. Now you’ve got a really complex web site, where you may move from the videos to the home page. People have learned how to stitch together really complicated systems out of what might be 15 different programming languages, all held together with scripting. That dynamic way of building complex systems, and being able to rip out and replace components, is the way people are approaching design today. For many years to come, people will build platforms to help you build those systems. You’ll pick your tool set, and someone else will tackle the same problem and build something completely different. One will scale better than the other and be cheaper to put together, and the other will be entirely unmaintainable. But there is not going to be one simple way of doing it. There are a few popular languages today, but when you decompose these systems and stitch things together, there is no commonality.

SE: That leaves lots of room for innovation, right?

Muller: Yes. And there’s as much, if not more, innovation in the software platforms as in the hardware platforms.

SE: But it’s no longer software-defined hardware. Every piece has its own potential to go in a different direction.

Muller: The transition is that it’s no longer about hardware and software. It’s about delivering a service. We have to maintain it. We have to keep it alive. We have to change it dynamically while keeping the service up and running.

SE: Which is difficult, right, because we don’t necessarily know where this is all heading? The edge is a new element in this whole thing. It’s sometimes client/server, sometimes not.

Muller: What you’re adding in there is that the services are driven by the data that is collected and processed. That data flow is as important as how you join the sensor to the cloud.

SE: So it’s plumbing for data rather than just electrons?

Muller: Yes, and the compute that happens on that data.

SE: So how does this change your job?

Muller: It used to be about a system architecture that was inside a box. Now you have to step outside of that box. It’s a complete end-to-end system, and when I started there weren’t that many big systems that spanned so many different components.

SE: Where does machine learning/AI fit into all of this?

Muller: There’s a part of machine learning that doesn’t happen just in the cloud or on the edge. It happens everywhere. So you get ML accelerators in client devices and in the cloud. Traditionally, training happens in the cloud. In the future, it will happen everywhere. You’ve got tradeoffs with that, and those won’t just be about CPUs. There will be big ML accelerators and little ML accelerators. The change that machine learning brings is that the collection and processing of that data becomes integral to providing a service. Therefore, you need to manage where that data goes, how it’s being stored and how you’re architecting it. Now I consume the data to do whatever I need to do with it, whereas in the past you could throw it away. With ML, you need to keep it, because without the training data you don’t have anything.

SE: So now, rather than just going for power, performance and cost, which are still important, you’re looking at data as the first point of consideration? What are you going to do with that data?

Muller: It’s like a caching problem. It may be really cheap to do that caching, but it also may be really slow to get to the data. A lot of these systems are about architecting how much of that data you need to replicate so you can get at it quickly. In automotive, there is stuff you need close to the car, and stuff that can be further away. Over time that becomes dynamic, as you move the data around.

SE: Several years ago, the Internet of Things was the hot topic. Where is the IoT today?

Muller: Nothing has fundamentally changed. As more and more devices become connected, more devices send, consume and store data. You connect all these sensors and devices up, and you’re going to have a whole bunch more devices in the cloud. The IoT now has machine learning and obvious touch points in consumer products. Nobody wants to talk about better control. They want to talk about the next feature in your mobile phone. So machine learning is expressed in photo recognition and other features in your phone. The actual machine learning that takes place behind the scenes in the IoT keeps going. It’s just not that exciting. Your building runs 12% more efficiently than it did the year before.


