More Processing Everywhere

Arm’s CEO contends that a rise in data will fuel massive growth opportunities around AI and IoT, but there are significant challenges in making it all work properly.


Simon Segars, CEO of Arm Holdings, sat down with Semiconductor Engineering to discuss security, power, the IoT, a big push at the edge, and the rise of 5G and China. What follows are excerpts of that conversation.

SE: Are we making any progress in security? And even if Arm makes progress, does it matter, given there are so many things connected together?

Segars: It feels like we’re making progress. We rolled out our PSA architecture last year and feedback from our partners is, ‘This is good.’ There’s a lot of fragmentation in security, and the more fragmented it is, the harder it then becomes to write software on top of a chip that is secure. What we’re doing in PSA (Platform Security Architecture), putting those building blocks together, has had a good reception from our partners because it does help solve some problems that are very common in lots of chips. There’s no point in needless differentiation at that end of things. Now, we’re a ways off from seeing those chips come out into the market because of the design cycles. These things always take longer than you’d like. But by way of adoption, it feels as if we’re making progress.

SE: Another side of the security problem is that companies can build great security into a chip, but that security isn’t integrated with other components, even within the same device.

Segars: One of the strengths and weaknesses of our business model is that we enable differentiation. We create these building blocks, but there’s only so far we can go. We don’t want to dictate how to put the whole chip together. The next step up is defining PSA as a system-wide architecture and providing a bit more commonality to avoid the, ‘You’ve got these bits, but unless you use them in the right way, unless they’re wired up correctly, you don’t get the benefits of them.’ We’re trying to take a more system-architecture approach to address some of those issues.

SE: Are the chipmakers willing to pay for security? Consumers seem to expect it.

Segars: Consumers aren’t used to thinking that their television or refrigerator is a security issue. ‘Why would someone want to hack my world?’ But as more connected things come into our homes, more people need to be aware of this, and the more the technology industry needs to be addressing it. There’s going to be a cost to putting it in devices, and ultimately the cost needs to ripple up the supply chain. Someone needs to pay somewhere.

SE: Let’s swap topics. Power is a very big issue in design. Do customers recognize that?

Segars: In the IoT world with really low-cost devices, there’s a high correlation between low power and lower cost. In really small devices, that matters. In things you want to deploy with energy harvesting, the lower power you can make them, the better. A lot of what we’re doing with lower power really matters.

SE: Where is energy harvesting in all of this?

Segars: I’ve seen a couple of solar technologies that are pretty interesting, and one thing that’s driving an uptick in interest in energy harvesting is the IoT, with the idea that some of these devices are going to be very remote. You don’t want to service them very often or change batteries. Energy harvesting is a natural way forward for power supply.

SE: Are we speaking about IoT as a thing these days, or is it evolving into a lot of narrow markets?

Segars: We’ve been thinking about it as lots and lots of markets for a long time. From the early days of IoT, most people were thinking about lots of verticals. Some of those verticals get grouped together, like smart buildings, which is still too large a grouping. And smart cities is still too large a grouping. But those groupings do seem to be breaking down.

SE: As part of the IoT, edge devices have suddenly become a hot topic. What’s changed?

Segars: The idea that you can take all data, move it to the cloud, process it, and get an answer that is ‘yes’ or ‘no’ is crazy. As people get their heads around the magnitude, the volume and the velocity of the data that the IoT world creates, it’s just not feasible to think it’s all getting processed in the cloud. That’s driving edge processing.

SE: Streaming video is gigabytes per second, right?

Segars: Yes, and you can imagine what happens if every security camera in the world, instead of doing local processing, sent all of those raw images to the cloud to do lens correction. It’s not possible. IoT is an extension of all of that. You want to be throwing data away so you get this cone of data heading for the cloud, where only the right amount is being stored. You throw out everything you can along the way.
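As a rough back-of-envelope sketch of why that cone matters, consider a single security camera. All of the figures below (resolution, event rate, forwarded payload size) are illustrative assumptions, not numbers from the interview:

```python
# Back-of-envelope: raw video bandwidth vs. what survives edge filtering.
# All figures here are illustrative assumptions, not numbers from the interview.

FRAME_W, FRAME_H = 1920, 1080      # assumed 1080p camera
BYTES_PER_PIXEL = 3                # uncompressed 24-bit color
FPS = 30

raw_bytes_per_sec = FRAME_W * FRAME_H * BYTES_PER_PIXEL * FPS
print(f"Raw stream per camera: {raw_bytes_per_sec / 1e6:.0f} MB/s "
      f"({raw_bytes_per_sec * 8 / 1e9:.1f} Gbit/s)")

# Edge filtering: forward only frames that contain an event of interest,
# and only a compressed crop rather than the full raw frame.
EVENT_FRACTION = 0.01              # assume 1% of frames are worth keeping
COMPRESSED_EVENT_BYTES = 200_000   # assume ~200 kB per forwarded event

uplink_bytes_per_sec = EVENT_FRACTION * FPS * COMPRESSED_EVENT_BYTES
print(f"After edge filtering: {uplink_bytes_per_sec / 1e3:.0f} kB/s per camera")
```

Under those assumptions, one camera drops from roughly 1.5 Gbit/s of raw pixels to tens of kilobytes per second of useful uplink, which is the kind of reduction edge processing is meant to deliver.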

SE: And prioritization of how you act on things has to be done locally if it’s time-sensitive, right?

Segars: Yes, absolutely. One of the things 5G is going to do is reduce network latency. Part of that can only come through local processing, particularly for time-sensitive applications.
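A minimal sketch of the decision this implies for an edge system, assuming hypothetical latency numbers (none of the figures below come from the interview):

```python
# Route time-sensitive work locally when a cloud round trip cannot meet the
# deadline. All latency figures are hypothetical assumptions.

CLOUD_RTT_MS = 40.0      # assumed network round trip to a regional data center
CLOUD_COMPUTE_MS = 5.0   # assumed compute time in the cloud
LOCAL_COMPUTE_MS = 15.0  # assumed compute time on the edge device

def choose_execution_site(deadline_ms: float) -> str:
    """Pick where to run a task so its deadline is met."""
    if CLOUD_RTT_MS + CLOUD_COMPUTE_MS <= deadline_ms:
        return "cloud"       # the round trip fits, offloading is fine
    if LOCAL_COMPUTE_MS <= deadline_ms:
        return "edge"        # only local processing can meet the deadline
    return "infeasible"

print(choose_execution_site(100.0))  # -> cloud
print(choose_execution_site(20.0))   # -> edge
```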

SE: Does this leave you with a big opportunity versus what you thought a year ago?

Segars: Actually, five years ago this is what we thought was going to happen. It’s taken a while to become a reality and we’re still a long way from mass edge computing, but we used to talk about the ‘intelligent, flexible cloud’ before it was called the cloud—data centers and endpoints and networks with sophisticated switches in between.

SE: Does Arm still see a big play on the cloud side, or is more now about the edge and the mid-range servers?

Segars: It’s still very much about pushing ahead on the data center. Clearly, no matter what happens on the edge, the cloud is going to grow and will be a big source of growth for the semiconductor industry. We want the opportunity to participate in that.

SE: But it also has to grow with very low power, because at this point the power consumption numbers are rising with the increasing amount of data. In places like the northeast of the United States, they really can’t even get more power. They’ve decommissioned nuclear plants and they’re limited by the amount of hydropower.

Segars: It’s interesting to see how energy generation is evolving. Over the last couple of weeks in the U.K., wind has contributed only about 4% of the country’s total energy, whereas in January and February there were big storms and wind contributed 20% to 30%. It’s a huge difference. The U.K. is not using coal much these days, but it’s there to be switched on. Energy generation seems to be a hybrid of technologies, depending on the prevailing conditions and loads. The fact that the wind isn’t blowing right now is survivable because it’s warmer, whereas in the middle of winter that would be a real problem. That will be the way forward—a hybrid of solar, wind, coal, nuclear, gas and oil.

SE: That becomes an interesting part of the whole grid for automotive because a number of companies are going electric. But how are we going to get the power to all these stations?

Segars: The electric car thing is really interesting for what you do in a car, but it’s also interesting because energy will need to be distributed to remote places. That will require a major infrastructure upgrade.

SE: We need major infrastructure upgrades for a lot of the new technology. With 5G, you need antennas everywhere.

Segars: Someone told me that putting up a small cell takes an afternoon, but only once you’ve gone through about a year of getting a building permit. That’s the limiting factor in the rollout of small cells that will be required for 5G, at least in this country. There are base station companies at the moment that have the infrastructure they rent out. But when you go to a small cell world, everyone who has a bus stop in a city might become a landlord.

SE: Where does Arm play in automotive?

Segars: We’re looking at the intelligence that’s going into a car, the range of sensors, how you pull that data together, how much is processed in the car, how much you try to offload. We’re trying to think through the roadmap of the technology. What proportion is driven by expensive cars that consumers might buy that have lots of fancy features in them? How much is driven by the robo-taxi industry? Those will have very different economics. You and I go buy a car and it’s usually a one-time payment to the car company. A robo-taxi operating a fleet is focused on amortized cost over the lifetime of the car, and how much revenue is generated in the lifetime. Those two markets have very different economics, and can drive cost increases for all the electronics that are in the car. To have that robo-taxi world requires a lot of infrastructure in cars talking to each other and all the rest of it. It’s not a flip-the-switch convenience of putting my feet up in the back of the car while someone else drives me to work.

SE: Do we evolve into a world where cities put in more charging infrastructure?

Segars: Thinking back to where I live, there are cars parked on the street everywhere or in their garage. There are lamp posts with electricity along the street. It’s feasible for a city to put in charging infrastructure and solve the problems of payment for that. You can see that being solved. But it does involve a large investment.

SE: Do you think we’re actually going to get to the point where we can get the cost of all these electronics down to something affordable? A core competency of automakers has been keeping the cost down, but now we’re talking about basically running the equivalent of a supercomputer in a car.

Segars: That is one of the challenges. With an electric car, you get way fewer moving parts than you do in a mechanical car.

SE: Somewhere on the order of 200 versus 2,000.

Segars: Yes, and that in itself reduces cost and service cost. But it’s replaced by some very expensive electronics right now. So is it a feature of an expensive car where you pay a premium, or is it a car delivered as a service, where you can amortize the cost over hundreds of thousands of miles driven by a car that is going 24 x 7? A Boeing 787 is a very expensive plane for an airline to own, but airlines amortize that cost over a very long period of time. It’s a totally different model. The upfront cost can be high if there’s a business model that pays for it over time. It’s interesting how the whole ride-sharing industry is growing very rapidly and changing the way that transportation is consumed. We’re seeing it in China with rental bikes and scooters, and the scooters are coming more to the U.S. now. People’s attitudes towards transportation are changing.
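To make the amortization point concrete, here is a quick comparison of the same electronics cost spread over fleet mileage versus private-car mileage. The dollar and mileage figures are hypothetical, not from the interview:

```python
# Amortizing the same electronics cost over very different lifetime mileages.
# All figures are hypothetical assumptions.

ELECTRONICS_COST = 40_000             # assumed cost of the autonomy electronics, $
ROBOTAXI_LIFETIME_MILES = 500_000     # fleet vehicle running close to 24x7
PRIVATE_CAR_LIFETIME_MILES = 120_000  # typical privately owned car

print(f"Robo-taxi: ${ELECTRONICS_COST / ROBOTAXI_LIFETIME_MILES:.2f} per mile")
print(f"Private car: ${ELECTRONICS_COST / PRIVATE_CAR_LIFETIME_MILES:.2f} per mile")
```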

SE: Let’s look at Moore’s Law. It’s getting very expensive to continue scaling, which is why we’re seeing so much interest in advanced packaging. Does that affect what you are developing?

Segars: It does and it doesn’t. Our processor designs exploit advances in transistor design that come along every generation. We also look for micro-architectural efficiency improvements. We just launched Cortex-A76 which gave a 35% year-on-year improvement over its predecessor. The majority of that came from microarchitecture design. It’s not what comes for free out of the transistors.
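As a worked example of how such a gain might decompose: only the 35% total below comes from the interview; the process-node share is a hypothetical assumption used to show the arithmetic.

```python
# Splitting a generational performance gain into process and microarchitecture
# contributions. The 35% total is from the interview; the process share is a
# hypothetical assumption.

total_gain = 1.35      # Cortex-A76 vs. its predecessor
process_gain = 1.10    # assumed gain "for free" from the newer process node

microarch_gain = total_gain / process_gain
print(f"Implied microarchitecture gain: {(microarch_gain - 1) * 100:.0f}%")
# -> roughly 23%, i.e. larger than the assumed process contribution
```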

SE: So you’re no longer banking just on scaling?

Segars: Scaling has been slowing down for some time. There obviously are some generations of transistor scaling ahead of us, but we’re going to have to get smarter and smarter at design, at using the right tool for the right job. It’s not all about one big single-threaded processor. Mobile devices have become distributed computing devices. They are not just a big processor. They’re a collection of multi-cores and dedicated processors for different tasks. That trend will continue.

SE: So Arm will continue with a broad lineup of processor cores?

Segars: Yes, and we’re going to continue investing in that—big processors, small processors, everything in between, plus our GPU family, our video processing family. We are looking at more and more machine learning algorithms implemented not just in the cloud, but also on embedded devices, edge computing and endpoints. We need solutions for that class of computing, as well.

SE: For machine learning, we’ve been using fairly standard algorithms that are being updated almost daily. What does that do in terms of design? On the training side, it’s mostly GPUs. On the inferencing side, you want to optimize it. But at the same time, you want programmability in there.

Segars: You hit the nail on the head. We’re a long way from AI. We’re using machine learning algorithms to do tasks better than they’ve been done before—speech recognition, image recognition—that’s what a lot of these algorithms are doing. But it’s all improving. In terms of implementing algorithms or executing algorithms in silicon, you have a tradeoff between complete programmability or a dedicated function, which is much more energy-efficient. We try to find the right balance at a moment in time based on the maturity of the algorithms. When people started buying MP3 music—and predecessors to MP3—and algorithms were changing a lot, it was all done in software on a processor. Video encode/decode similarly was all done in the first implementations in software on a processor. That’s energy inefficient. As the algorithms mature and standards mature, you move to dedicated hardware. Where we are at in the world of machine learning right now is there’s a class of algorithm that is heavily intensive on matrix multiplication, so you can build accelerators to make that better. You need flexibility in getting the data in and out, and so clever approaches have more intelligence in the way you funnel the data around and the way you share the workload between the main CPU, where you want that flexibility, and the accelerator. But we’re just scratching the surface of where computing goes in the field of data-driven processing. We’re a long way from general intelligence. There’s a lot of hype around it, but humans are going to be in business a long time.
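A minimal sketch of that split between a flexible CPU and a fixed-function matrix-multiply accelerator, with NumPy standing in for the dedicated hardware. The layer shapes and where the boundary sits are illustrative assumptions:

```python
# Flexible data handling stays on the general-purpose CPU; the matmul-heavy
# inner loop is handed to an "accelerator" (NumPy stands in for dedicated
# hardware here). Shapes and scaling are illustrative assumptions.

import numpy as np

def accelerator_matmul(activations: np.ndarray, weights: np.ndarray) -> np.ndarray:
    # Fixed-function part: dense matrix multiplication.
    return activations @ weights

def run_layer(raw_input, weights, scale):
    # Flexible, CPU-side part: data marshalling, scaling, activation.
    x = np.asarray(raw_input, dtype=np.float32) * scale  # pre-processing
    y = accelerator_matmul(x, weights)                    # offloaded hot loop
    return np.maximum(y, 0.0)                             # ReLU back on the CPU

weights = np.random.randn(64, 32).astype(np.float32)
out = run_layer(np.random.randn(8, 64), weights, scale=0.5)
print(out.shape)  # (8, 32)
```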

SE: One of the areas that a lot of people associate with AI is the medical field. How will this apply there?

Segars: Think about radiology. You need someone to help diagnose what to take a picture of. Having machine learning technology analyze it and present the results to an expert who understands these things is going to improve the quality of diagnosis, speed it up and reduce costs. Doing all of those things is good for accessibility for lots of people. There’s nothing bad about that. It’s a case where technology can help streamline part of a bigger function. Using radiology to diagnose illness can be enhanced through technology. The doctor’s capability is enhanced through technology—not replaced. You just can’t read the volumes of medical research published every day. You’re going to need help sifting through and leveraging all that research for the patient you’re diagnosing right now.

SE: Let’s back out a few thousand feet. What are the big drivers for your business and what’s changed in the past year or so?

Segars: There’s a network upgrade going on that’s going to lead to more data, lower latency, and more connections. That is a processing opportunity in itself. The other thing that is really interesting is that those features will enable new use cases, new business models and new businesses. I’m not sure what those things are, but it’s going to be really interesting. It will all drive the need for more processing. You think about what 4G has done relative to 3G, and what 3G did relative to 2G. The days of, ‘you can send a text message’ and maybe vote on ‘Dancing With The Stars’, that was one thing. With 3G you could suddenly start to stream video. You’re now in a highly connected world where the latency means you can call up an Uber and the responsiveness is good enough that a service like that works really well. With 5G, the responsiveness goes up enormously. You’ll see people take those technologies and say, ‘Now I’ve got this low latency connection, I can do this.’ With a step-up in performance and low latency, there’s going to be a whole load of things people come up with. It’s going to create industries and businesses with billions of dollars of valuation that we can’t really think about today.

SE: Arm’s growth is closely tied to mobile phones. That market has flattened a bit. Does it now get a big boost from a lot of things around that?

Segars: The growth rates for mobile devices are clearly not what they were. They’re not bad in absolute volume—1.6 billion smartphones this year—and they are continuing to get smarter. But even though unit volumes are growing slowly, the amount of technology in the devices is going up. That drives a lot of what we’re doing. We’ve found that if we get it right in mobile, most of those technologies end up in lots of other things. The volume of Arm-based chips shipped by our partners was 21 billion last year. A lot of those chips went into smartphones, but a lot didn’t. There are a lot of microcontroller devices getting deployed.

SE: Those are not the same microcontrollers that used to be out there.

Segars: No, they’re incredibly sophisticated—superscalar with DSP capability. Compared with the performance of microcontrollers five years ago, what we’ve got today is incredible.

SE: What’s happening in China for you? The market over there really seems like it is starting to pick up.

Segars: China accounts for about 20% of our business now, and it’s growing rapidly. Whenever I go there, there’s always something new versus when I was there a few months ago. The whole bike-sharing idea went from nothing to where there are different colored bikes everywhere when you drive through the big cities. Five years ago, there were no mobile payments in China. Today, using cash is difficult. China is on a huge economic growth tear, with urbanization being a big factor. All of this is driving a lot of experimentation around smart cities and the use of IoT, and there is a lot of innovation going on all over the place. We want to make sure we are tapping into all of that. That’s what drove our joint venture. We didn’t want to get excluded from any of that. China’s rate of innovation and willingness to experiment and fail and move quickly on to the next thing is unprecedented. You have to be there and take part in it or risk being left behind. We want to stay very closely attached to that pace of innovation.

SE: Any other surprises around the world in terms of how fast they’re moving with technology? Are other countries surging as well?

Segars: In terms of pace of innovation, China really does stand out.

SE: It has enough pieces now that they can start building one off the other.

Segars: Yes, and a lot of local government investments in China are interesting. They have large sums of money to play with. The cities are quite a ways from each other, so they can do things on a large scale in a relatively isolated way to see what works. And when it doesn’t, they can move on to the next thing or take a solution from somewhere else. It’s a fascinating place.

SE: Will we see our first fully autonomous roadways in China, particularly out in the western provinces because there’s no infrastructure there?

Segars: There are experiments going on. ‘There’s a large stretch of road, let’s have a couple of lanes just for autonomous vehicles and see what happens.’ You have those large populations connected with this infrastructure, making it an interesting place to run the experiments. You can’t imagine trying to do that here.

SE: Any big concerns on the horizon? Are they about uncertainty in the political arena or the economy, or is there more to it?

Segars: The semiconductor industry is enjoying a boom period at the moment. Revenues are the highest they’ve ever been and growth rates are at their highest. But ultimately the world has to be able to afford to buy all of these things. Global economics is in a bit of a precarious situation at the moment. There’s all this great stuff going on, but I hope the environment to absorb it all becomes equally positive. If you go back three or four years, semiconductor industry CEOs were asking, ‘What are we going to do next after mobile?’ Now they’ve got all these choices and they have to figure out which ones to back. It’s a much better problem to have. Oddly, at the same time, in the machine learning space a lot of companies are getting funding to build ML chips, versus a few years ago when no one was putting money into semiconductor startups.

