
Conflicting Demands At The Edge

Experts at the Table: Cost, power and security clash with the need for intelligence, localized processing and customization.


Semiconductor Engineering sat down to define what the edge will look like with Jeff DeAngelis, managing director of the Industrial and Healthcare Business Unit at Maxim Integrated; Norman Chang, chief technologist at Ansys; Andrew Grant, senior director of artificial intelligence at Imagination Technologies; Thomas Ensergueix, senior director of the automotive and IoT line of business at Arm; Vinay Mehta, inference technical marketing manager at Flex Logix; and John Sanguinetti, CTO of Adapt. What follows are excerpts of that conversation. (Part 2 can be found here).

SE: The edge is an ill-defined, transitional area, but there is widespread agreement this is going to be a big opportunity for the entire semiconductor industry. Where do you see the edge and what sorts of problems are ahead?

Ensergueix: At Arm we tend to speak about the endpoint. The edge spans from the first gateway that connects to the endpoint all the way up to the cloud. Each company has its own definition, but what’s important is how we can add scalable intelligence, how we can move workloads, and how we can add end-to-end security from the node up to the cloud. There’s a question of what is the best way to liberate all of this compute power that is now distributed across the node and the edge. Network bridges and switches have been replaced by new edge servers. You see those in 5G base stations and in the gateway. They can do inference, training, and data analytics on the data plane and on metadata. We don’t need to send everything back to the cloud. There is so much we can do closer to where the data is generated. Scalable networking across the edge is going to be key for the next few years.

Grant: We tend to think of this as a very fluid landscape, and hard definitions of the edge are often not helpful when we discuss this. Think of it in terms of where that end device or endpoint is: in an autonomous vehicle or ADAS system, a mobile device, a surveillance or smart camera. Where it sits is really important, and so is the purpose it’s being asked to serve. What’s going to be run on that end device, and can you accelerate it appropriately to the performance that you really need? It’s a really exciting and challenging time, because things are changing every day. In addition, there is a lot of work being developed in the universities that is about to hit.

DeAngelis: The edge is where the machine or the device meets the real world. It’s your ability to interpret the analog and digital world that surrounds you and to bring that into the network to process. The key challenge and opportunity is to influence the types of semiconductor devices we bring to market so that these devices can provide a high level of diagnostic capability or real-time information. This is important because as we improve the real-time information we collect at the edge, we enable AI algorithms to make better decisions. That’s really the key here. It’s providing higher-level algorithms that are able to make decisions on the fly. We need the ability to adjust the network on the manufacturing floor, for example, to make different kinds of products. So if you think about COVID-19, a lot of companies want to change their product lines to manufacture different products, but it’s difficult to do.

Mehta: There is always a tradeoff between whether you process data locally or in the cloud. Is it cheaper to have a highly utilized resource in the cloud, like a GPU in a data center that you query once every 60 seconds, or to do the processing in your home, for example, where that resource sits idle for 90% of its lifetime? At the same time, there’s a shifting back and forth between memory and compute, where one or the other is the bottleneck. At the edge you have garage door openers or robotic vacuum cleaners, which are basically IoT devices. At this point they’re not smart, but you can start to see where adding intelligence can have value. And this ranges from an Arm microprocessor on a Raspberry Pi that waters my plants all the way up to an 800-watt, 2-terabyte server sitting in the back of a vehicle generating a decision tree. And you have to weigh this against what the cost would be if you have 5G communication and very low latency. Stepping back, there is a bubble of the edge expanding, a bubble of the network expanding, and all of that affects what we do locally.
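The cloud-versus-local cost question above is, at its core, amortization arithmetic. The sketch below frames it that way; every price, utilization figure, and query rate is an illustrative assumption (none come from the discussion), chosen only to show the shape of the comparison:

```python
# Back-of-the-envelope: amortized cost per inference query, cloud vs. local.
# All constants below are made-up illustrative assumptions, not vendor figures.

CLOUD_GPU_COST_PER_HOUR = 1.00        # shared data-center GPU, rented by the hour
CLOUD_UTILIZATION = 0.80              # kept busy by many tenants
CLOUD_QUERIES_PER_HOUR = 60 * 60      # one query per second across all tenants

LOCAL_DEVICE_COST = 50.00             # one-time hardware cost of an edge device
LOCAL_LIFETIME_HOURS = 5 * 365 * 24   # assume a five-year service life
LOCAL_QUERIES_PER_HOUR = 60           # one query per minute; idle the rest of the time

def cloud_cost_per_query() -> float:
    """Hourly GPU cost spread over the queries it actually serves."""
    effective_queries = CLOUD_QUERIES_PER_HOUR * CLOUD_UTILIZATION
    return CLOUD_GPU_COST_PER_HOUR / effective_queries

def local_cost_per_query() -> float:
    """Hardware cost amortized over lifetime, spread over (sparse) local queries."""
    cost_per_hour = LOCAL_DEVICE_COST / LOCAL_LIFETIME_HOURS
    return cost_per_hour / LOCAL_QUERIES_PER_HOUR

print(f"cloud: ${cloud_cost_per_query():.6f} per query")
print(f"local: ${local_cost_per_query():.6f} per query")
```

With these particular numbers the idle local device still comes out cheaper per query, but flipping the assumptions (a pricier device, a shorter lifetime, a busier cloud GPU) flips the answer, which is exactly the tradeoff being described.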

Sanguinetti: The edge is the interface between the digital and the analog world. That’s really the definition of it. That interface may collect information, like a sensor, or change something, like an actuator. Edge devices are the entities that allow objects to become Internet citizens. In the past, Internet citizens were always people. Now there are lots of types of Internet citizens. One of the interesting aspects of this is the industrial IoT. So around your house there are camera doorbells and a variety of security devices, and those are one class of edge device. But the interesting ones are in the industrial IoT, where you may have thousands of devices sitting in a field. In one case, a geological survey was being done with explosive charges set every few meters over a large area, with thousands of devices collecting data and trying to get that data back to some central place for processing. At that scale you have thousands of devices working in concert.

Chang: The big challenge here is trying to inject intelligence into the edge devices, and over the past two or three years there have been a lot of efforts, such as the TinyML community. That’s a departure from traditional IoT or IIoT devices. You want to enable local intelligence, because when you get a lot of data from the environment, the factory floor, or even sensors in your body, you cannot send all of it to a server in the cloud. The question now is how do we filter data correctly. If I am using Microsoft mail, it will filter out a lot of irrelevant e-mails, but I can still find one or two e-mails that are relevant. It’s the same for devices on the edge. You don’t want to see important data get lost in the sensor. Another characteristic of an intelligent device on the edge is low power. If you look at previous IoT and IIoT devices, the power requirements were pretty high. With new devices for TinyML or the edge, people are looking at microjoules or picojoules per operation. Also, these devices are asleep most of the time. You need to be able to wake up a processor, which is different from a CPU or GPU. You want a very low-power device to wake up when it needs to. You also want it to be autonomous, which may require energy harvesting, because if a device can harvest energy from the environment it can survive for a very long time, even on Mars. Some of these devices will need to survive longer than 10 or 15 years. Digital circuits have to be optimized. Analog circuits need to run in the sub-threshold region. And it needs to be secure. You need to protect the information, so you need an always-on security co-processor. Also, if you’re looking at vehicle-to-vehicle devices, latency has to be extremely short for handling local intelligence and avoiding traffic. We also need to figure out what the killer applications are.
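The sleep-most-of-the-time point is easy to make concrete with a duty-cycle energy budget. The sketch below is a minimal calculation under stated assumptions; the active and sleep power figures, wake rate, and coin-cell capacity are all invented for illustration, not measured from any real TinyML device:

```python
# Duty-cycle energy budget for a mostly-asleep edge node.
# All figures are illustrative assumptions for a TinyML-class device.

ACTIVE_POWER_W = 0.010      # assume 10 mW while running inference
SLEEP_POWER_W = 0.000002    # assume 2 uW in deep sleep
ACTIVE_S_PER_WAKE = 0.05    # assume 50 ms of compute per wake-up
WAKES_PER_HOUR = 60         # assume it wakes once per minute

def average_power_w() -> float:
    """Time-weighted average draw over one hour of the duty cycle."""
    active_s = ACTIVE_S_PER_WAKE * WAKES_PER_HOUR   # seconds awake per hour
    sleep_s = 3600 - active_s
    energy_j = ACTIVE_POWER_W * active_s + SLEEP_POWER_W * sleep_s
    return energy_j / 3600

def battery_life_years(capacity_mah: float = 220, voltage: float = 3.0) -> float:
    """Years of operation from a coin cell (220 mAh at 3 V is CR2032-class)."""
    energy_j = capacity_mah / 1000 * 3600 * voltage  # mAh -> coulombs -> joules
    return energy_j / average_power_w() / (365 * 24 * 3600)

print(f"average draw: {average_power_w() * 1e6:.1f} uW")
print(f"coin-cell life: {battery_life_years():.1f} years")
```

Under these assumptions the average draw lands around 10 uW even though the active draw is 10 mW, which is why aggressive sleep states, fast wake-up, and energy harvesting dominate the design of such devices.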

SE: Because there are so many variations and iterations, almost everything is a unique design. There are different applications, power requirements, and two edge devices may sit at different places between an end point and the cloud. How do we design what are essentially custom or semi-custom chips while also working within all of these different constraints?

Grant: That is the challenge of our day. How do you build something that is as modular and configurable as possible, but also as simple as possible? We do embedded neural networks, and people are always thinking about how that is going to act in their SoC. You’re trying to give them as much flexibility as possible, but a lot of customers, depending on where they are in the world, want something they can just drop in. The use of a LEGO device, or multiple LEGO devices, as building blocks with multi-instance, multi-core support is really important at the ADAS level. But if you’re at the very small end, and you want sub-1 TOPS performance with really, really low power draw, that opens up another set of challenges. The only way we’ve seen to get around that so far is to design so you can scale up and scale down, and that adds its own set of challenges.

DeAngelis: This is the challenge: how do you create something that is flexible enough to adjust and react? We’re fortunate enough to have some technologies out there that enable us to do that. If you look at intelligent sensors, there are new types of technology out there, such as IO-Link, that have a very common interface. You can think of it as an industrial interface. You’re basically loading a stack onto this device that allows you to customize it on the fly. So it’s not just a dumb sensor where you’re reading information from it. You can actually talk back to the sensor and reconfigure it. But this whole concept requires a new way of thinking, and that’s what we’re focused on. This includes things like software-configurable I/Os. That gives you the flexibility of adjusting the size and the interfaces that are required to accommodate the expansion of a system for a manufacturing line. We’re integrating additional types of flexibility into the ICs that allow you to have that kind of interface. On the actuator side, these devices are becoming sophisticated enough to auto-tune. So you drop one into a system and it will adjust itself, based on the time of day and the application it’s sitting in, to optimize for whatever that person wants in throughput, efficiency, or performance. This is here and it’s being used today.

Sanguinetti: There aren’t too many killer apps in this space where you know you’ll be able to sell 100 million devices into this application area. What we’re really in right now is a period of experimentation. People have myriad ideas about how to solve edge problems that were never solved before, or which were never even recognized. But for these things to be practical, they have to be cheap. If you’re trying to deploy a few thousand sensors in every parking garage, for example, you can’t afford to have a device that costs $100 each. You need a device built around one or two chips, not a $300 FPGA or some other complicated device. To do your experiment, if you have to make a custom SoC, that’s a problem. As device suppliers, we need to be able to solve that. Ultimately we have to figure out how to make custom SoCs faster and cheaper.

Ensergueix: Cost is really important if we’re looking at billions or tens of billions of devices. We’ve been focusing on how to enable high-density compute in small, AI-capable microcontrollers, along with very small NPUs. That will be anywhere from $1 to $5 at the endpoint, with security and with safety. Safety is very important. These devices need to be autonomous. They need to make their own decisions. So there will be more nodes that can take action, and they will need both safety and security built in. If you do all of the computing in the node, and you only send a digest to the gateway or some other compute infrastructure, then you can keep the original data at the node. We did an AI survey, and we found that three-quarters of end users preferred to have data remain on their device rather than leave it. This is definitely a key advantage.

Sanguinetti: You can identify a few capabilities that are going to be universal, or close to universal. Security is certainly one of them. So are power efficiency and wireless connectivity. Almost all edge devices will require those things, so we can make a platform chip that integrates those requirements. That’s a good starting point. With that, you have a chance to make low-volume experimental devices. That way you can decide whether an application will scale and work, and whether it’s worth the investment.

Chang: If you have several thousand sensors in the wing of an airplane, for example, those need to be very reliable. One problem, though, is that if you want to filter some of the data locally, you need a hardware-software configuration on that device. But how do you filter out some of the abnormal data? If you do anomaly detection on an intelligent device, and you have 2,000 thermal sensors in the circuit, you need to consider the thermal bias of those 2,000 sensors. You need to tune each sensor’s predicted value to account for the global bias. Minimizing that global bias, and minimizing the difference between the predicted and measured values, has to be taken into account in the hardware-software design of the intelligent device. One of the methods we use is a digital twin employing a multi-physics model. You can use that to drive a machine-learning model, so it can run instantaneously with a response time of no more than one second. In the wing example, that’s a one-second response time with a reading coming back every second. That’s how quickly you need to take instantaneous action. Some action gets triggered, and some intelligence needs to be included for when the predicted value differs from the measured value. That’s one of the new trends.
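One way to picture the per-sensor bias correction described above is a small sketch: calibrate each sensor's offset against the fleet-wide mean at a known-good state, then flag readings whose corrected value deviates too far from the corrected mean. The bias model, data, sensor names, and threshold here are all invented for illustration and are not Ansys's actual method:

```python
# Sketch: per-sensor bias calibration before simple anomaly detection.
# Synthetic data; the bias model and threshold are illustrative assumptions.
import statistics

def calibrate_biases(baseline: dict[str, list[float]]) -> dict[str, float]:
    """baseline maps sensor id -> readings taken at a known-good state.
    A sensor's bias is its own mean offset from the fleet-wide mean."""
    all_vals = [v for vals in baseline.values() for v in vals]
    global_mean = statistics.mean(all_vals)
    return {sid: statistics.mean(vals) - global_mean
            for sid, vals in baseline.items()}

def detect_anomalies(readings: dict[str, float],
                     biases: dict[str, float],
                     threshold: float = 5.0) -> list[str]:
    """Flag sensors whose bias-corrected reading deviates from the
    corrected fleet mean by more than `threshold` degrees."""
    corrected = {sid: r - biases.get(sid, 0.0) for sid, r in readings.items()}
    mean = statistics.mean(corrected.values())
    return [sid for sid, v in corrected.items() if abs(v - mean) > threshold]

# Calibration pass: all three sensors observe the same true temperature.
baseline = {"s1": [20.5, 20.6], "s2": [19.4, 19.5], "s3": [20.0, 20.1]}
biases = calibrate_biases(baseline)

# Live pass: s3 is genuinely running hot, not just biased.
now = {"s1": 25.6, "s2": 24.4, "s3": 33.0}
print(detect_anomalies(now, biases))  # -> ['s3']
```

Without the calibration step, a sensor's fixed offset and a real thermal anomaly are indistinguishable, which is the point being made about designing the correction into the device's hardware-software stack.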

Mehta: The idea is, ‘This is a really good place for a decision to be made, and it needs to be made locally, for a certain cost and power.’ The hard part is experimentation, and how much we can spend experimenting to build a custom SoC. That’s a problem right now, because you can’t build a custom SoC that’s profitable for every one of these applications. Taking this back to the consumer space, the top two edge devices are the Nest thermostat and the Raspberry Pi. One is just in charge of your temperature, and it does that one thing very well. The Raspberry Pi can do everything, but inherently it’s just a development platform. Both of these have sold in the millions. There’s value in building a killer product, and those things exist. But there also are cool products out there, like delivery robots, that are not going to fall into that price point.

Related Material
Challenges In Building Smarter Systems
Experts at the Table Part 2: A look at the intelligence in devices today, and how that needs to evolve.
Revving Up For Edge Computing
New approaches to managing and processing data emerge, along with standard ways to compare them.
Memory Issues For AI Edge Chips
In-memory computing becomes critical, but which memory and at what process node?
Where Is The Edge?
How to minimize latency and power.


