Arm’s CTO sounds off on machine learning, the new starting point for designs, new markets that are opening up, and what became of dark silicon.
Arm CTO Mike Muller sat down with Semiconductor Engineering to discuss a wide range of technology and market shifts, including the impact of machine learning, where new market opportunities will show up and how the semiconductor industry will need to change to embrace them. What follows are excerpts of that conversation.
SE: It’s getting to the point where instead of just developing chips, we’re looking at what we can do with technology. We have enough processing power to make machine learning possible, and enough bandwidth and memory to make it ubiquitous. And that’s just one narrow area. Where do you see all of this heading?
Muller: We tend to get hung up on all the ‘high techy/transistor-y/software/cloud/appsy world,’ but there’s an awful lot going on elsewhere. For example, in gene editing, there are people down the road from us at Ambrosia who will inject you with blood plasma from young people because that’s how you will regain a bit of your vitality. There’s a whole lot of biomedical work going on that is just as transformational. CRISPR gene editing raises a whole lot of ethical and moral questions, but the technology is there, it’s going to work, and we’re going to learn more about which bits need to be moved around to achieve what. It’s classic sci-fi meets technology meets biology meets humans, which will blur a whole lot of boundaries.
SE: Throughout the mobile phone era the emphasis was on low power to extend battery life because we had enough processing power. But all of a sudden, processing power seems to be back in vogue. Why?
Muller: In terms of processing, what’s clear is that people will find useful things to do with whatever you give them. If you look at the mobile phone evolution, I don’t think anyone really put it all together to be able to say, ‘This is what an app in 2017 will be doing, this is how much bandwidth it will be consuming, and therefore this is how much compute it needs.’ It’s more like, ‘If we have a little more compute, or a little more comms, we can do a little bit more.’ People have consumed it and will continue to consume it. We haven’t yet reached the plateau.
SE: Some of the hot markets for all of this processing power are AI and machine learning. What happens next?
Muller: We’re building the engines to enable that. If you go back to the mobile phone’s evolution, I could have given you the hardware roadmap of mobile phones, because you could look at what silicon was doing, you could look at what the processor architectures were going to do, you could look at the stuff done in the 1960s, and you could map it out. We had forward-looking projections on what phones would do. But at no point did we say ‘Uber’ or ‘virtual reality games.’ All of the things that consumers now associate with a mobile phone weren’t there, even though you knew what the hardware roadmap was going to be.
SE: We also had a single hardware roadmap in those days. Now, IEEE has broken it up into vertical markets and horizontal technologies with the IRDS.
Muller: But within any particular market segment you can form a view on what manufacturing technologies are going to do, what we are going to be able to build, and how memory technology is going to play out.
SE: Still, this isn’t all evolutionary, right? We’re now leaping into different areas.
Muller: Yes. You can say where the hardware is going. But with the track on biomedical, at some point there will be someone who will put that together with whatever is the appropriate bit of mobile phone technology. It isn’t waiting for the next generation phone—it’s waiting for the next generation biomedical stuff to come to a point where it works well enough. And yes, we’ll have the compute component that goes with it.
SE: This goes across markets, too. You don’t know where things are going to come into this market, whereas before it was a fairly linear progression. If you look at your own acquisition by SoftBank, no one expected that.
Muller: There are two pieces here. One is, ‘What are the underlying platforms?’ There will be an emerging biomedical platform that will enable things to happen. On top of those platforms are the software and services that people dream up, and that’s the bit that’s the most unpredictable—and the hardest to have a roadmap for. That’s where the real invention and creativity happens.
SE: Flipping this around, what concerns do you have about all of this technology?
Muller: As technology becomes more embedded in people’s lives, the impacts of hacks and security breaches are going to be more significant. There’s a disconnect between what people say they want and what they’re prepared to do to protect themselves. That worries me. The technology is becoming more and more embedded, and more and more is known about you. Society is not responding well to that at the moment.
SE: So it’s about who has rights to the data?
Muller: Yes. It’s not just about making things more secure. That’s an important part, and you can roll that out over time. It’s the fundamental issues around who owns the data, who can do what with the data, what permission did you give, and do you understand the downstream ramifications—and whether people realize what they’re implicitly signing off on. It’s a social concern. I’m not sure it’s actually the technology. It’s what people do with it, how it gets used and what the social impact is. On the technology side, for things like gene editing and medical interventions, that opens the whole eugenics debate. There are some really significant issues that will need to be addressed around what it is to be human.
SE: So does the starting point of a design become the data rather than the technology?
Muller: In terms of businesses and business models, yes. For devices, a lot of that is and always will be rooted in the physical world, where you actually have to deal with the bits and the bytes and the megahertz. But that depends on what products and services are developed around the data I have, what I can learn from that, and how I extract and monetize it. The device merely becomes part of that equation. There’s this whole move from ‘I used to buy this physical thing,’ which is what the product was, to ‘I’m paying for a service and I might get the thing for free.’
SE: You don’t really care about the thing. You care about the service.
Muller: Yes, and the value shifts to what I’m paying for, how it’s subsidized, and how it works.
SE: As that happens, do we start to get a much tighter coupling of hardware and software than we’ve had in the past? We’ve been moving toward co-design for 20 years and it still isn’t fully here.
Muller: The tighter coupling only comes when there are other serious constraints. While all the individual components are iterating and moving so quickly, there is no time to stop and pull them back together again. When you start to run into limits, that’s when you have to go back and say, ‘I can’t assume the platform is going to double, double, double, and double again. I need to step back and look at how I get the most out of this platform.’ That doesn’t happen until some parts of the world slow down.
SE: Another trend is the severe pricing pressure on logic. Microcontroller prices were down about 15% last year, and processor prices are going down too. There’s so much processing power and generic hardware available these days that it’s harder for a lot of companies to differentiate. How does that play back into your world?
Muller: That’s a trend that’s been going on for a long time. The PC market has a completely different margin structure in some parts of it, but if you were on the front line of shipping the boxes you would have said your margins were eaten away a long time ago. In most of the markets that Arm plays in, we’ve always operated in a world where there’s much more competition, and therefore the margins have always been that much thinner. It’s not a radical shift. It’s a tightening of what has been business as usual. For us, it stays a licensing and royalty business. That doesn’t change.
SE: As we move forward and this model plays out, eroding prices to a certain point, do we start seeing the value come in different ways? And if so, where is the value? Is it in the custom design for a particular market, or in the package that’s assembled from different building blocks?
Muller: If you go back to the top of the chain with the OEMs, different OEMs have decided whether they want to be completely vertically integrated or to differentiate through UI design or product marketing on top of commodity products. Different players have taken different routes on how they want to do that.
SE: In some markets, we can count all the players on one hand.
Muller: Yes, some of those markets have consolidated. There are far fewer mobile players today, for example, than there were 10 years ago. But if you look at IoT, it’s the exact reverse. There are 1,001 players, no standardization, lots of diversity, and getting to scale is actually a challenge. It’s a completely different dynamic. Where people are extracting value today in IoT will be completely different in 10 years’ time, when there will be a smaller number of players doing it in different layers. If you look at the markets, they are playing out in very different ways. Mobile has gotten to, if not actual maturity, then something close to it.
SE: We’re seeing a lot more heterogeneity in design, with accelerators and different applications. The systems are getting more complicated even as the components are getting, in some cases, simpler.
Muller: People have learned that along with that heterogeneous evolution, we also need sophistication in the software stack. So people are now much better at putting in their layers of abstraction, working out all their programming interfaces, and then being able to say, ‘I have a custom hardware accelerator here. But if it’s not here, or if I have a different one, it doesn’t ripple all the way up the software stack.’ Go back 15 years and that accelerator was almost programmed from the user level, and it was all glued together. Now it’s nicely abstracted, and you can have it programmed with or without that accelerator and the software stack above it still stays the same. The sophistication of the software development is matching the complexity of the hardware components.
SE: As you add more granularity into designs, is there still the same focus on dark silicon and turning off cores or various functions? Or is it starting to move toward a world where components are sized for a specific application?
Muller: In mobile, what’s happened is we’ve ended up with ‘warm silicon.’ There were more transistors there than we could use, but you turn it on and you can’t run it flat out because it gets too hot and you have to throttle it back. So people have found a different way of using the transistors, which is ‘as much as I can get away with.’ When we did the dark silicon work, we didn’t anticipate that degree of sophistication in being able to extract the most you could from a chip, because there was this temporal element added in, which actually meant you could exploit that dark silicon as long as you didn’t do it too often and too much.
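To make the ‘warm silicon’ idea concrete, here is a minimal, hypothetical sketch of a throttling loop: the core bursts to a higher clock only while the simulated die temperature leaves headroom, then falls back to a sustainable level. All constants and the first-order thermal model below are invented for illustration; this is not Arm’s (or anyone’s) actual governor.

```python
# Toy duty-cycle throttling: burst while there is thermal headroom, then back off.
# Constants and the thermal model are assumptions made up for this sketch.

SUSTAINABLE_GHZ = 1.5   # clock the package can hold indefinitely (assumed)
BOOST_GHZ = 2.5         # short-burst clock (assumed)
T_AMBIENT = 40.0        # idle die temperature, degrees C (assumed)
T_THROTTLE = 85.0       # temperature at which we throttle back (assumed)

def simulate(duration_ms: int) -> list[float]:
    """Return the clock chosen each millisecond under a sustained workload."""
    temp = T_AMBIENT
    trace = []
    for _ in range(duration_ms):
        # Use the extra transistors only while there is thermal headroom.
        freq = BOOST_GHZ if temp < T_THROTTLE else SUSTAINABLE_GHZ
        trace.append(freq)
        # Crude first-order thermal model: heating scales with the clock,
        # cooling scales with the gap back down to ambient.
        temp += 0.5 * freq - 0.02 * (temp - T_AMBIENT)
    return trace

if __name__ == "__main__":
    trace = simulate(2_000)
    boosted = sum(f == BOOST_GHZ for f in trace)
    print(f"boosted for {boosted} ms out of {len(trace)} ms")
```

Run for long enough and the trace settles into exactly the pattern Muller describes: short boosts, interleaved with back-offs, so the burst clock is used as much as the thermal budget allows but never continuously.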
SE: That was a linear progression off Moore’s Law, where you had these extra cores but you couldn’t turn them on all the time because they got too hot. You had to use them sparingly and appropriately. We’re seeing a lot more of that now, with near-threshold computing coming in. People are sizing these designs differently, with different-size cores.
Muller: The whole big.LITTLE or big.medium.LITTLE—they’re all ways of dancing around that same problem to get the most you can out of your thermal and power budgets.
SE: So the world has gone to the architects. What can they do that hasn’t been done before?
Muller: We’re still on the road of enabling that heterogeneous system. You’ll see us push out a whole load of stuff on machine learning with a library of abstractions so you don’t care whether it is running on an Arm processor, on a Mali graphics engine, or on your custom hardware accelerator because you abstract it away with neural net libraries. It’s all about trying to simplify what the software development environment looks like. If you look at any real system deployment, the cost isn’t in the chips or the device, it’s in the software and the applications you put on top of it, and then the systems engineering you need to deploy those complete systems in the real world. While it’s critical and you need the most you can get out of the hardware, your job is to make it as easy to use as possible because it’s the software development frameworks on top of that where all the real money is being spent.
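As a rough illustration of the kind of abstraction layer Muller describes—and not Arm’s actual neural-net libraries or their APIs—the sketch below registers whatever compute backends a device happens to have behind one stable call. The software above that call is identical whether a custom accelerator is present or not; every name in it is hypothetical.

```python
# Minimal pluggable-backend pattern: the application calls one stable function,
# and whichever backend is registered (CPU reference, GPU, custom accelerator)
# does the work. Names and structure are illustrative only.

from typing import Callable, Dict, List

# Registry of available backends; a real runtime would probe the hardware.
_BACKENDS: Dict[str, Callable[[List[float], List[float]], float]] = {}

def register_backend(name: str, dot: Callable[[List[float], List[float]], float]) -> None:
    _BACKENDS[name] = dot

def dot_product(a: List[float], b: List[float], prefer: str = "accelerator") -> float:
    """Run a dot product on the preferred backend, falling back to the CPU."""
    impl = _BACKENDS.get(prefer, _BACKENDS["cpu"])
    return impl(a, b)

# CPU reference implementation: always present.
register_backend("cpu", lambda a, b: sum(x * y for x, y in zip(a, b)))

# An accelerator would register itself here if one exists; the layers above
# never need to know whether it did.
# register_backend("accelerator", my_npu_dot)

if __name__ == "__main__":
    print(dot_product([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))  # 32.0 on whichever backend exists
```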
SE: Are you surprised at how fast machine learning is kicking in?
Muller: We just did a machine learning project on CPU verification. Can you train a set of classifiers to work out what are good and bad tests for a load store unit? The answer is yes you can. Generating tests is cheap. Running them is really expensive. So if you can train a classifier to recognize good tests, you can generate a million more, run them through the classifier and select just the best ones. You actually can halve the time it takes to do verification. There is machine learning in products. You might use machine learning to make your business more efficient. Your customer may never know about any of this stuff. It’s not just about shiny new toys. It’s actually about looking at everything you do. And for us, a big chunk of our effort goes into verification. Machine learning can do some of it better than people. It’s not a sexy application, but it’s a significant cost in our business. What’s happened is the tool flows for doing machine learning have gone from geeky research to the point where you can download it and have two people sit on the side of a verification team and see what they can hack together. With remarkably inefficient, badly stitched together machine learning algorithms and a few CPU cycles, you can transform how we do this. I am surprised you can do an awful lot with very little. It’s because there are now a lot of high-quality tools out there that let you build flows and stitch it all together.
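Below is a minimal, hypothetical sketch of the flow Muller outlines, written with scikit-learn: label a modest set of tests by actually running them, train a classifier on those labels, then use it to filter a much larger batch of cheaply generated tests before committing expensive simulation time. The features, labels, and selection threshold are invented for illustration; Arm’s internal flow is not public.

```python
# Sketch: use a classifier to pre-screen cheaply generated verification tests.
# Features, labels, and the 0.8 threshold are assumptions made for this example.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def generate_tests(n: int) -> np.ndarray:
    """Stand-in for a cheap constrained-random test generator. Each row is a
    feature vector describing one load/store test (e.g. stride, alignment,
    outstanding requests)."""
    return rng.random((n, 8))

# 1. Label a modest set of tests by actually running them (the expensive step).
#    Labels here are synthetic: pretend unusual feature mixes hit corner cases.
past_tests = generate_tests(5_000)
was_useful = (past_tests[:, 0] * past_tests[:, 3] > 0.5).astype(int)

# 2. Train a classifier to predict which tests are worth running.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(past_tests, was_useful)

# 3. Generate far more tests than we can afford to simulate, score them,
#    and keep only the most promising fraction for the simulator.
candidates = generate_tests(200_000)
scores = clf.predict_proba(candidates)[:, 1]
selected = candidates[scores > 0.8]
print(f"selected {len(selected)} of {len(candidates)} candidate tests")
```

The economics are the point: generation and scoring are cheap, simulation is not, so even a rough classifier that concentrates simulator cycles on the most promising tests can cut overall verification time substantially.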
SE: The algorithms, training, and inferencing have improved, and we can run all of this on edge devices, which was never possible before. So where will this roll out?
Muller: ML will touch everything.
SE: How do you see ML versus AI?
Muller: Artificial intelligence, machine learning, deep learning and neural nets are all variations of the same thing. To make a device that you think is intelligent you may need to stitch together lots of different bits. One big bit of that would be machine learning.
SE: Do you still consider Arm to be an IP company?
Muller: We added IoT cloud services over the last year, so you can come and do your device management and attach your devices to our cloud, which runs 24/7. That’s a cloud service, not an IP play. We still have an IP play. That’s what most of the business is. But we now have a genuine cloud services business, and that’s new.
SE: That moves you substantially closer to the end customers, right?
Muller: Yes, because the customers for those cloud services are closer to being the OEMs—it’s not the chip folks. In some ways it’s not the same customer as our chip partners’ customers, because it’s quite often a completely different part of the same organization.
SE: As you move in that direction, do you get insights you didn’t have in the past when you were solely an embedded IP core vendor?
Muller: The closer you get to customers, the more you learn. We saw this in 1997 with the launch of the first Arm-based Nokia phone. Arm started in 1990, and I started flying out to visit Nokia regularly in 1993 and continued to meet Nokia at least twice a year. We didn’t sell anything to Nokia, but we spent an awful lot of time with them. In the early days of Arm, it was a push/pull model. We pushed our technology to our semiconductor partners, and we went to the OEMs and persuaded them that it was really great technology and they needed to ask their chip guys to use it. We’ve always invested in engaging with the end OEMs even though we weren’t selling them anything. They’re the people with the real problems, and they understood what those problems were. Nokia told us our challenge was code density. That came through engagement with the end customer, rather than through our partners.
SE: Has it become more difficult? You’re now in a lot more markets than in the past.
Muller: The world gets more and more complicated and moves faster and faster. Everyone talks about the industry maturing, consolidating and that all makes it sound like it’s getting easier. It’s not. It’s getting more complicated.