The Building Blocks Of Future Compute

How Arm sees its role in emerging segments of tomorrow’s compute challenges.


Eric Hennenhoefer, vice president of research at Arm, sat down with Semiconductor Engineering to talk about privacy, security, high-performance computing, accelerators, and Arm’s research. What follows are excerpts of that conversation.

SE: Privacy, cybersecurity, silicon photonics, quantum computing are all hot topics today. What do you find really interesting with these emerging areas?

Hennenhoefer: Photonics, quantum computing, superconducting logic, the end of Moore’s Law — they’ve always been 10 years out for my whole career. Some might actually happen.

SE: Especially quantum?

Hennenhoefer: Quantum is interesting, but we don’t know what we’re going to do with it. The good news is we already have new quantum-resistant algorithms for cryptography and the like, so it’s not going to be a situation where there’s a breakthrough and suddenly everything’s broken. We’ve known this is coming. It’s been one of those things where we need to get some of these computers to see what we could actually do with them, because it’s not quite there yet, and it’s not clear because they work differently. They will be a new tool. For sure, it’s going to get interesting, but it will take a while to figure out how to apply that technology.

SE: What will happen with all of the legacy computing resources in a quantum world?

Hennenhoefer: We have traditionally programmed compute, which works well. Now, for all the stuff that works poorly, machine learning has chipped away at it, so that’s good. The really interesting thing about machine learning is that it lets us solve things we couldn’t solve before, which is much more interesting than solving things we could already do, just faster. That broadens the market. Quantum computing theoretically could solve some hard problems traditional computers can’t. Are we going to be able to use it widely? There’s speculation we could do deep learning and the like on it, but we’re going to have to wait and see. Arm’s role in that is it will start with quantum computers as specialized compute engines, and they’ll need to attach to regular compute.

Currently, a lot of these quantum devices operate at very cold temperatures, so we’ve been looking around there. Ideally, you have to put a very small embedded processor right next to the device.

SE: When is the quantum computer IP market going to take off?

Hennenhoefer: We’ve got a while to go on that, but, similar to machine learning and all these new applications, they need to integrate with existing systems.

SE: Is there a recipe to Arm’s success?

Hennenhoefer: It’s the business model that has been the important part. We ask customers how they’re going to make money. They say, ‘I thought it was us selling stuff.’ No. If you’re launching a product and your lead partners aren’t successful, you’re pouting. The flip side is that without the right balance of ecosystem partnership, we’d have to do all this stuff ourselves. And if we had to do all this stuff, there are business challenges. Would we be able to charge enough? Would we be able to scale? Arm’s a relatively small company, and we get questions about how we’re going to compete with Intel, because their sales are more than our profits. The answer is it’s not just Arm. It’s the whole ecosystem.

We do research for the ecosystem and for Arm Inc. Some of it is just paving the way, like in high-performance computing, where it is really our architecture partners that are going first. There are lots of benefits for us in helping them be successful, and things do come into our products later, but we need to get there, make sure they have freedom to operate, and prepare the market. Only certain products are going to be viable as IP products, and there’s a whole bunch of other stuff that won’t be, so it is important that we know what we can do and how to add value. Still, we do spend a lot of time worrying about what new technology means and what Arm needs to do. Does the ecosystem need to do anything? Do we need to get involved with those memory standards, or is it all going to be fine and we can stop worrying about it?

SE: Shifting gears to hardware architecture, how do you see the ebb and flow of general-purpose versus hardware built for a specific purpose?

Hennenhoefer: The interesting thing about the embedded space is that usually you have some sort of power and real-time constraint, and you can do the math. If general-purpose compute works, you’re done. But there are a lot of things you can’t do that way. If you’re going to build some sort of high-end radar system, you just can’t do it. There, you need to go and design something customized, which is more expensive. But generally you can get an order of magnitude or two of efficiency out of that, and usually you push it only as far as you need and then stop. Then it turns into a long-term cost thing, where it costs a lot to build custom hardware. It has to be maintained, it has to be redesigned, and it’s on a mass scale—especially on the cell phone platform. As such, we’ll have new applications with requirements beyond general-purpose compute. We’ll have others that work just fine, where you hang accelerators off the edge and can do video, audio, and machine learning. It evolves based on that.

It’s really going to depend on the market, and there’s always going to be the need for some amount of specialized compute. As long as transistors kept getting faster, that enabled us to build everything on the CPU. One of the easiest things to do is just run it all on the CPU. It’s just easy. In segmenting the work there are huge gains, but if you didn’t have to do it, because CPUs just got faster every year, you were good. We’re going to see a continuation of needing to hang additional specialized things on, just to reach the power or cost profile. It may be that a CPU could do it, but we need to do it with an order of magnitude less power.
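
As a rough illustration of that “do the math” step, the sketch below checks whether a real-time workload fits a power budget on engines of different efficiency. All of the numbers (workload size, frame rate, efficiency figures, budget) are hypothetical, chosen only to show how an order of magnitude or two of efficiency decides the question:

```python
# A back-of-the-envelope version of the sizing exercise described above.
# Every number below is hypothetical, chosen only to show the shape of
# the calculation.

def meets_budget(ops_per_frame, frame_rate_hz, ops_per_joule, power_budget_w):
    """Return (fits, watts_needed) for a real-time workload on an engine
    with a given energy efficiency (operations per joule)."""
    ops_per_second = ops_per_frame * frame_rate_hz
    watts_needed = ops_per_second / ops_per_joule  # (ops/s) / (ops/J) = W
    return watts_needed <= power_budget_w, watts_needed

# Hypothetical workload: 2 GOPs per frame at 30 frames/s, under a 2 W budget.
workload = dict(ops_per_frame=2e9, frame_rate_hz=30, power_budget_w=2.0)

# Assume ~10 GOPS/W for a general-purpose CPU and ~100x that for a
# dedicated block, the "order of magnitude or two" in the answer above.
for name, efficiency in [("general-purpose CPU", 10e9), ("custom block", 1e12)]:
    fits, watts = meets_budget(ops_per_joule=efficiency, **workload)
    print(f"{name}: {watts:.2f} W needed -> {'fits' if fits else 'over budget'}")
```

Under these assumed numbers the CPU needs 6 W against a 2 W budget while the custom block needs 0.06 W, which is exactly the point about pushing customization only as far as the constraint requires.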

SE: Where does machine learning fit in?

Hennenhoefer: The same will hold true for machine learning. Arm’s view on machine learning is that it’s going to be everywhere. It’s a different class of compute, which solves problems we weren’t very good at. That’s totally awesome, but it’s not a CPU problem or a GPU problem, or FPGA, ASIC, or accelerator, because it really depends. There are some applications where I worry about how much energy it might take. It doesn’t have to be done quickly, so yes, you can spin up some big GPUs, which are amazing creatures. They have huge amounts of bandwidth and compute and take a bunch of power, but not everyone needs that in all cases. If you just need a little machine learning from time to time and it’s not time-critical, CPUs are just fine, whether they’re embedded CPUs or not. Also, depending upon your platform, some platforms have GPUs in there. And more important, for power reasons some people need an accelerator.

Arm Research’s role in this was really to focus on all of our products and get them optimized. This predates machine learning. For us, we look at every new workload and ask what it means. For microprocessor people, sometimes that means new instructions. Sometimes it means the ratio between compute and memory is different. If you’re going to target that, we need to make sure we offer these things and provide some guidance. As with HPC, a lot of it involves paper studies to show how to configure Arm IP to meet the workloads, so there is actual evidence before anyone builds a giant computer. With machine learning, we added some instructions into the CPU that were an easy win. But second order is controlling how the data moves around through the microarchitecture, and so it’s just a big optimizing engineering system.
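
He doesn’t name the instructions, but the dot-product operations Arm added with the Armv8.2-A dot product extension (SDOT/UDOT) are a representative example of this kind of easy win. The plain-Python sketch below shows the quantized int8 inner loop such instructions are designed to collapse into single operations; the weights and activations are made-up values for illustration:

```python
# The low-precision inner loop at the heart of quantized ML inference,
# written out in plain Python for illustration. Dedicated CPU dot-product
# instructions (e.g., SDOT/UDOT in the Armv8.2-A dot product extension)
# exist to turn exactly this loop into single operations.

def int8_dot(a, b):
    """Dot product of two int8 vectors, accumulated in a wider integer."""
    assert len(a) == len(b)
    acc = 0
    for x, y in zip(a, b):
        assert -128 <= x <= 127 and -128 <= y <= 127  # int8 range
        acc += x * y
    return acc

# Hypothetical quantized weights and activations for one output neuron.
weights     = [12, -35, 7, 98, -64, 3, 55, -17]
activations = [4, 19, -8, 2, 33, -90, 11, 6]
print(int8_dot(weights, activations))
```

The second-order problem he describes, moving the data through the microarchitecture, is about feeding loops like this one fast enough rather than executing them.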

SE: What works best where?

Hennenhoefer: We’ve done this with a lot of applications. ML is interesting because it’s so big and it can apply everywhere. But in the future it’s going to be CPU, GPU, accelerator, your own accelerator, different algorithms in the toolbox — all of those things, so there’s not one big bet to make. And a lot of it comes from the ecosystem, too. For instance, if you wanted to run machine learning on an FPGA, and some people do, Xilinx has one of those, and it’s got an Arm core. Even Nvidia has Arm cores inside next to theirs. The story of future compute is really optimizing it better for the applications, and if the rate of new chips were a little bit slower, it wouldn’t necessarily be a bad thing, because you’d have some more time to optimize. Spinning up a new processor every year is akin to just taking a vacation, because the next one is coming. All of this will force people to look at the actual use case.

SE: Where will silicon photonics fit in?

Hennenhoefer: ‘I don’t know’ is the answer, which is the beauty of having a 150-person research group. We have a disruptive technology roadmap that is maintained by one of our groups, and their whole job is to continuously review disruptive technologies and set flags about when to look at each one next, and if certain things change, we need to pay attention. With quantum, apparently they have a marketing person somewhere, because ‘quantum supremacy’ is really more like quantum adequacy. It’ll be cool when it happens. Actually, it’ll be quite cold when it happens, but we’ll have to see. As for photonics, my assumption is it would come up in telecom or specialized HPC markets. It’s not going to be an IP block we’re going to license anytime soon, but it very well could influence whether an AMBA bus is too slow. It’s the same with technologies like 3D, chiplets, and stacking. We ask, ‘Does this break anything? Can people build stuff within our standards? What do we need to do?’

SE: When it comes to privacy, which is a growing concern, what does the industry need to do? What can Arm do?

Hennenhoefer: It is likely the government will regulate and mandate certain levels of security, which will then give us the economies of scale to be able to do it. We look at new technologies, and a lot of times we see potential. We look at our core market, which is mobile, and the answer may be, ‘Not yet.’ It goes back on the shelf, like 64-bit did. When we look at ways to do more security — there’s a DARPA project that we’re involved in, and I applaud DARPA for picking hard problems. Their plan is to solve all of them. Still, there is a cost to adding more hardware to defeat a lot of these attacks. I suspect eventually governments will say this needs to be ‘security level three’ and we’ll just stop talking about it. It’d be like, ‘Yes, it’s in there.’

But right now we have a case where every PC is insecure. You walk in through the USB bus. Let’s not pretend otherwise. Companies like Google are doing a lot of work to make things more secure, but we’ll need to go through some rounds of security, and there’s a cost. As processor people, we’re trying to save energy, and the ironic thing is that the optimizations we’re putting in for power and performance are being used against us. There are ways to fix all that, which is good, but it’s the problem we didn’t know we had. We’re going to be in an extended period where we need to raise awareness around security. The technology companies need to come up with additional building blocks and help get them adopted. And there’s a role for government to play. It could accelerate that in niches and demonstrations, which is exactly what DARPA is doing.
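
The optimizations “being used against us” is an apparent reference to microarchitectural side channels of the Spectre/Meltdown variety, where attackers recover secrets from timing, caches, or speculation rather than from the data itself. As a much-simplified sketch of the underlying idea, the snippet below shows an early-exit comparison that leaks through timing, alongside the standard constant-time rewrite; real attacks exploit far subtler channels, but the principle is the same:

```python
# A toy timing side channel and its fix. The early-exit compare returns
# sooner the earlier a guess diverges from the secret, so its running
# time leaks the length of the matching prefix -- a (much simplified)
# stand-in for the cache and speculation channels mentioned above.
import hmac

def leaky_equals(secret: bytes, guess: bytes) -> bool:
    if len(secret) != len(guess):
        return False
    for s, g in zip(secret, guess):
        if s != g:
            return False       # early exit: timing depends on the data
    return True

def constant_time_equals(secret: bytes, guess: bytes) -> bool:
    """Touch every byte regardless of mismatches, so timing stays flat.
    (In real code, use the standard library's hmac.compare_digest.)"""
    if len(secret) != len(guess):
        return False
    diff = 0
    for s, g in zip(secret, guess):
        diff |= s ^ g
    return diff == 0

token = b"s3cr3t-token"
assert leaky_equals(token, token)
assert constant_time_equals(token, token)
assert constant_time_equals(token, token) == hmac.compare_digest(token, token)
```

The irony he points at is that the early-exit version is exactly what a performance-minded engineer would write; the secure version deliberately does more work.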

SE: Where does privacy fit in?

Hennenhoefer: Privacy is harder, because different cultures have different expectations of privacy. That one’s not going to be one-size-fits-all. In order to control your data, we’re going to need to invent new things. And there are technologies like homomorphic encryption and zero-knowledge proofs that let you reason about data without disclosing it. Blockchain is in there, too. You’ll see technology being applied to government. With zero-knowledge proofs, for example, you can run compliance checks without seeing the actual data underneath. There’s general good hygiene, but we’re going to need to figure out which of these new technologies we can use that don’t just reduce everyone’s privacy, but give us more transparency without compromise. The difficult thing will be that the EU, the U.S., China, and others will all have different expectations of how this is done. Exactly how it’s going to roll out, I’m not sure. But technology companies need to be in on this because if you don’t put signatures and the right primitives in at the beginning, there’s no way to add them back in later.
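
To make “reasoning about data without disclosing it” concrete, the sketch below implements textbook Paillier encryption, one additively homomorphic scheme: multiplying two ciphertexts produces a ciphertext of the sum, so a third party can total values it cannot read. The primes are deliberately tiny and nothing is hardened; this is an illustration of the idea, not usable cryptography:

```python
# Toy textbook Paillier: multiplying ciphertexts adds the plaintexts.
# Deliberately tiny primes, no padding, no hardening -- illustration only.
# Requires Python 3.8+ for pow(x, -1, n).
import random
from math import gcd

def keygen(p: int = 293, q: int = 433):
    """Return ((n, g), (lam, mu)) for (deliberately tiny) primes p, q."""
    n = p * q
    g = n + 1                                      # standard simple choice of g
    lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
    L = (pow(g, lam, n * n) - 1) // n              # L(x) = (x - 1) / n
    mu = pow(L, -1, n)                             # modular inverse
    return (n, g), (lam, mu)

def encrypt(pk, m: int) -> int:
    n, g = pk
    while True:
        r = random.randrange(1, n)                 # fresh randomness each time
        if gcd(r, n) == 1:                         # r must be invertible mod n
            break
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pk, sk, c: int) -> int:
    n, _ = pk
    lam, mu = sk
    return ((pow(c, lam, n * n) - 1) // n * mu) % n

pk, sk = keygen()
c1, c2 = encrypt(pk, 17), encrypt(pk, 25)
# Whoever holds only the ciphertexts can still add the hidden values:
assert decrypt(pk, sk, (c1 * c2) % (pk[0] ** 2)) == 42
```

A zero-knowledge compliance check of the kind he describes builds on the same principle: the verifier learns that a statement about the data holds, not the data itself.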


