One-on-One: Smarter Architectures

UC Berkeley Professor Edward Lee talks about energy efficiency, intelligence, what’s needed to make the IoT work, and why machines are better than people at some jobs.


Edward Lee, distinguished professor of electrical engineering and computer sciences at UC Berkeley, sat down with Semiconductor Engineering to talk about what is needed to maximize the usefulness of the IoT, and how our perceptions need to shift to take advantage of this technology. What follows are excerpts of that conversation.

SE: What changes do you see on the horizon for the cloud, the IoT and computing in general?

Lee: It’s a profound transformation. We’re moving away from information technology focused on information for human consumption, toward intelligent networked systems that engage with the physical world directly. We’ll be using sensing and actuation to deal with the physical environment rather than just information. It’s not just about bank balances anymore. It’s now about driving cars, controlling traffic in a city, and managing energy consumption in buildings. When you expend energy, you should do it for a useful purpose. Right now we have wide deployments of stupid lights. They stay on all night even when no one is there. It’s completely unnecessary. We can fairly dramatically reduce our energy profile if we endow our information technology with good interfaces to the physical world. That’s what we’re trying to change.

SE: Are you adding intelligence into devices, as well?

Lee: I’m wary of using the term intelligence. Artificial intelligence has seen a resurgence, but for me intelligence is a fundamentally human thing. It makes me nervous to think of physical devices around me behaving like humans. I don’t really want that. Humans are unpredictable. I want devices that are going to be responsive, and I want them to be proactive, but not in spontaneous, creative ways when they are in our service. Ultimately, the most effective uses of technology are ones that enhance the capabilities of humans. We certainly see this in information technology. We have ready access to information. It used to be that we had to get access to one of the exclusive physical libraries to get that information. That’s gone away. Berkeley has been famous for decades for having one of the world’s most extensive libraries. That used to be an extremely valuable asset. We have tremendously enhanced what humans can do by giving them ready access to this information.

SE: So what do you do with all this information?

Lee: The ultimate goal is enhancing the ability for humans to do things that are constructive and better for the planet and other humans. When you think about self-driving cars, they’re not taking over. That’s a human enhancement. I teach an introductory embedded systems class, and one of the things I tell my students is that, in the not-too-distant future, they’re going to look back at the technology we use today and view it as absurd. Today we have huge machines—cars with 2,000 pounds of steel—that can do enormous damage, and we’ve designed them so that the sensors and actuators are humans. We can make much better sensors and actuators for driving a car than humans. We can make self-driving cars that will do what we want, and they can do it in a safer and more efficient way than what we have now. Instead of giving the cars eyes and ears, we’re relying on the human’s eyes and ears. And instead of giving the car a motor to control the steering wheel, we’re using the human’s arm to control the steering wheel. There are many things where machines can do things better, safer, and more efficiently.

SE: So what happens when a car has to make a choice about what to hit because an accident is inevitable?

Lee: You have to compare it against what a human would do in that circumstance. A human is not going to make a rational decision. An accident like that occurs at a time scale that cannot involve human cognition. Humans react to danger stimuli before the signals hit the brain. It’s the brain stem that’s reacting. There are no rational decisions made there, either. In fact, the brain stem is probably going to focus on self-preservation, because that’s built into our DNA.

SE: What’s your expectation when we’re going to start seeing these kinds of devices?

Lee: There is certainly a lot of driver-assist technology appearing in new cars, such as automatic parking, lane keeping and more intelligent cruise control. Those are compensating for things humans don’t do well. Parallel parking is an art. If you think about boats, boats go in bow first. But in a car, you go in stern first. The reason is that in a boat, the steering mechanism is in the stern and you swing the stern, while in a car it’s in the front and swings the front. You have to reverse the process. For machines, those are trivial computation issues. For humans, they’re rather complex, and you learn them by practice. Those are motor skills you’re learning in order to park a stupid car, but if you didn’t have a stupid car, it could do the job better than you could ever do it yourself. It’s a misuse of the human brain to master that skill.

SE: This has big implications on car purchases, because what you buy now may be obsolete much more quickly than in the past.

Lee: The auto industry is in for quite a shock. Things are going to change drastically in the next few years. That’s very exciting. We all love when new technology comes out that makes our lives better. And going back to the moral kinds of judgments, cars will be much better at avoiding these kinds of situations and be able to see the person crossing the road in front of you at night, dressed in black, when it’s raining. You can’t see them. The human visual system has limitations that we can overcome with sensors.

SE: You’re overseeing the Swarm Lab, which uses thousands of sensors. Isn’t that one of the benefits of all these sensors—being able to combine information to come up with a clearer picture?

Lee: Yes. There is stuff that is invisible to humans, such as all of the stuff going on with the radio waves around us. Yet it’s becoming a very important part of context awareness. A machine can see a cell phone walking into a room. A human can’t. A human can see a human walking into a room, but the cell phone may be in their pocket. They make it possible to do things we can’t even imagine today.

SE: You’ve talked in the past about the need for heterogeneity with the IoT. What will that entail?

Lee: Once you start talking about engaging the physical world, there are so many different ways of doing it that it doesn’t make sense to over-homogenize things. One of my colleagues at the University of Michigan has been working on energy scavenging devices he calls peel-and-stick sensors. The idea is that you manufacture these peel-and-stick sensors in large volume, you sell them in sheets, and you peel them off and put them on a pipe. Whenever the pipe vibrates, it generates energy, and when it has saved enough energy it sends a little packet of data.

SE: This is like a duty-cycle approach?

Lee: Yes, exactly. So now you can have a device that will know whether fluid is flowing through that pipe and how much. It will be able to look for anomalies. Maybe it shouldn’t have fluid flowing through it, but it does. Think about using those in a chemical plant, where we can prevent accidents proactively by giving the plant operators more visibility into what’s actually going on.
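As a toy illustration of that duty-cycle behavior, here is a small Python sketch. The class name, energy units, and transmission cost are all invented for illustration: the sensor banks harvested vibration energy and transmits one measurement packet each time it has saved enough.

```python
class PeelAndStickSensor:
    """Toy model of an energy-harvesting sensor: it accumulates energy
    scavenged from pipe vibration and sends one small packet of data
    each time it has stored enough for a transmission."""

    TX_COST = 10.0  # energy units needed to send one packet (illustrative)

    def __init__(self):
        self.stored = 0.0  # harvested energy banked so far
        self.sent = []     # measurements actually transmitted

    def harvest(self, vibration_energy):
        """Bank energy scavenged from the vibrating pipe."""
        self.stored += vibration_energy

    def step(self, flow_reading):
        """Duty cycle: transmit only when enough energy has been saved."""
        if self.stored >= self.TX_COST:
            self.stored -= self.TX_COST
            self.sent.append(flow_reading)

# Steady vibration of 0.5 energy units per step: one packet every 20 steps.
sensor = PeelAndStickSensor()
for _ in range(100):
    sensor.harvest(0.5)
    sensor.step(flow_reading=42.0)
print(len(sensor.sent), "packets sent")  # 5 packets sent
```

The point of the sketch is that the transmission rate is set by the available energy, not by a fixed schedule: more vibration (more flow) means more frequent reports.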

SE: The duty cycle is a transmission of information, right? That’s all you need for that.

Lee: That’s right. The key principle [Professor Prabal Dutta, Sloan Research Fellow] is working with is that when you have enough energy to send a message, you send a measurement of something. Getting back to the heterogeneity question, if we tried to homogenize how all the devices communicate by radio, it would probably require a peel-and-stick sensor to run an HTTP server and be able to serve a Web page so you could go to it and configure it. That doesn’t make any sense. You want something much more spare. There are things with widely varying bandwidth and energy requirements. Standardization efforts that are focused on over-the-wire or wireless communication protocols are not the answer for the IoT.

SE: We do need some standards so people can understand what they’re building, don’t we?

Lee: Yes, we do. One of the projects we’re working on in TerraSwarm is inspired by Web technology. In the Web, there are uniform standards. The Internet Protocol is the base standard for all of this stuff. Then you have TCP/IP on top of that, and HTTP on top of that. These things are very important, but they’re not enough. One of the key standards is not actually backed by any standards organization. It’s the consistency with which Web browsers can run JavaScript. It took 20 years to get this to be almost standardized. It was a battle between the browser makers—people who had browsers were very eager to put in new technology that none of the other browser makers had. Initially they looked at this as a proprietary thing, and it created incompatibilities between browsers that made it much more difficult to create sophisticated Web-based services. Things have more or less stabilized.

SE: How so?

Lee: If you want to write a banking application, you can create a sophisticated piece of software that runs on the client’s computer to provide the human interface to that banking application. It relies on the fact that there has been a standardization, of a sort, of this JavaScript environment. It’s not perfectly standardized. You still have to write JavaScript to make it work with different browsers. The HTML and JavaScript get downloaded into your computer, which then runs a program that functions as a proxy for a remote service. The mechanism that the proxy uses to take advantage of the Web service no longer needs to be standardized. The banks are all going HTTP, or hopefully HTTPS, so it’s encrypted. But beyond that, they don’t have to standardize anything further than agreeing on the data structures that get exchanged. They are relying on an agreement among the browser makers to run this piece of JavaScript. We’ve been developing a similar architecture for the IoT. We think of it as the browser for IoT devices. The difference is that a browser provides an interface to a human. We want to provide an interface to a device.

SE: Regardless of whatever that thing may be?

Lee: Yes. It’s an interface to the physical world. We talk about a device that can instantiate a proxy for a sensor. That proxy is what the service interacts with, not the sensor. How the communication occurs with the sensor doesn’t have to be standardized anymore. Instead, you standardize how the application interacts with the proxy.
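A minimal sketch of that proxy pattern, with hypothetical class names and a made-up serial format: the service is written against a standardized proxy interface, while how each proxy actually talks to its physical sensor (a vendor protocol, BLE, a serial line) remains an unstandardized implementation detail.

```python
from abc import ABC, abstractmethod

class SensorProxy(ABC):
    """The standardized interface that services interact with."""
    @abstractmethod
    def read(self) -> float: ...

class SerialThermometerProxy(SensorProxy):
    """Proxy hiding a (made-up) serial protocol behind read()."""
    def __init__(self, bus):
        self._bus = bus  # callable standing in for a real serial link

    def read(self) -> float:
        # Transport-specific parsing, invisible to the application.
        raw = self._bus()                 # e.g. "T=21.5"
        return float(raw.split("=")[1])

def service(sensor: SensorProxy) -> str:
    # The service only knows the proxy interface, never the transport.
    return f"temperature={sensor.read():.1f}"

proxy = SerialThermometerProxy(lambda: "T=21.5")
print(service(proxy))  # temperature=21.5
```

Swapping in a proxy for a different radio or wire protocol requires no change to the service, which is the point: only the application-to-proxy boundary is standardized.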

SE: It sounds as if you’ve added an element of local processing.

Lee: Absolutely. There is very much local processing involved. Jan (Rabaey) has a vision that he started articulating in 2008, where he used the term ‘swarm’ at the edge of the cloud. At that time, the semiconductor industry was overwhelmingly focused on mobiles because that’s where all the money was. Jan’s argument was that the next big thing would be the swarm at the edge of the cloud, which includes devices that interact with the physical world—the sensors and actuators that connect digital technology with the physical world. In the vision he articulated at that time, he thought the cloud would be the central element in all of this and the mobiles would function as the human gateway to the cloud. The swarm would provide the physical world with a gateway to the cloud. We realized that vision has some problems. Most IoT services deployed today are based on an architecture where the cloud-based service is central to providing that functionality. But you can’t control the communication to the cloud, and as soon as you communicate with the cloud you have data flowing over the open Internet. Now you have privacy risks, even if you have encrypted data, and you have security risks, because encryption is hardly perfect. And particularly in the IoT world, state-of-the-art asymmetric encryption techniques such as the ones we use when we access our banking services on the Web don’t work. They’re architected wrong, so people aren’t using them. Instead they’re using shared secret techniques.

SE: So you’re talking about restricted access?

Lee: Yes, in order for the cloud to interact safely with the swarm, respecting privacy and being able to do things that are more safety-critical. We call it the Internet of Important Things. In order for that to be possible, the cloud has to have some locality. It has to have some components that are physically close to the sensors and actuators. It can’t be just hidden away in some remote data center.

SE: Isn’t that where the term ‘fog server’ came in?

Lee: That term comes from Cisco. They describe the fog as kind of like the cloud, but closer to the ground. The vision is a build-out of a new infrastructure. We’ve witnessed a new infrastructure being built out over the past five years or so. Most of the wireless access points weren’t there 10 years ago. The WiFi access points that you buy today are single-function devices. Their job is just to route packets. We believe that is going to change. There will be a new infrastructure that also combines compute services. Using cloud-like services, you can have tight control over the latencies, so you can develop solutions for factory floors. You also can keep your data private and keep it local. It improves your privacy and security.

SE: What changes in the server architecture?

Lee: Server farms have relied heavily on virtualization, which is used for thermal management and to make systems more resilient to failures. Virtualization in the kind of architecture we’re looking at becomes more challenging, and it creates some interesting research opportunities. But if your cloud relies more on physical devices rather than virtual devices, you’ve introduced new points of failure. If you have a service that is relying on WiFi, it still has a single point of failure. If the WiFi router goes out, it doesn’t matter how much virtualization you have. We believe there will be quite a bit of redundancy. And there is a possibility of virtualization, but it will be different.

SE: What will this new virtualization look like?

Lee: On a factory floor, if you want to use the Internet of Things for industrial deployment, one of my favorite examples is a printing press made by Bosch Rexroth. It’s a printing press designed for print-on-demand services. It’s a huge machine for the factory floor. You have two-ton rolls of paper flying through at 100 kilometers per hour, 24/7. There are hundreds of microcontrollers and sensors and thousands of actuators, and you’re depositing ink on paper with great accuracy. Bosch Rexroth, when they built this machine, did something very aggressive. They used a TCP/IP network with off-the-shelf network devices. TCP/IP is a best-effort technology. You send a packet, and if you don’t get an acknowledgement you send it again. But you can leverage some new innovations in networking. They synchronized the clocks on the network in such a way that they never lose a packet. Physical failures become detectable. They have reliable delivery of data with very low latencies and extremely low probabilities of failure in the network. They’re adapting Internet technology in a way that makes a safety-critical system more reliable. We see this as the transformative part of the IoT. You adapt the Internet technology, rather than just use it, to make it work with physical devices.

SE: And it doesn’t make sense to do this in the cloud, right?

Lee: No, this benefits enormously from locality. If any of those microprocessors on the printing press were relying on the cloud, latencies wouldn’t be anywhere close to what they are. And you wouldn’t want anything safety-critical relying on the cloud, because if you lose network connectivity you need locally fail-safe mechanisms. That also means you have to design your software differently.

SE: And you also want to extract data over time, right?

Lee: Yes. In the case of this Bosch Rexroth printer, it is isolated from the outside world by an air gap. There is no Internet traffic flowing over this thing. This creates a huge disadvantage. Now we can’t take the enormous amount of data flowing over this thing and aggregate it with other data from around the world, which would allow continuous engineering. So you’ve got to feed that data into something, and the real benefit of the cloud is that it allows you to aggregate data from multiple sources. You need to design gateways so you still retain some control, but you admit external traffic. If you have synchronized clocks, you can time-slice your use of the network. That’s a brute-force approach. There is AVB (audio-video bridging), which has been renamed TSN, for time-sensitive networking. It came out of organizations that were interested in delivering media over the Internet with high reliability. This technology has been standardized and it’s starting to get deployed. It does give you ways of managing latency for mission-critical streams. That isn’t possible with standard switching and routing technology today. If we’re going to have services that engage with the physical world, we need to be able to mix criticality. There is some stuff that is more critical than other stuff.
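The brute-force time-slicing idea can be sketched in a few lines, assuming clocks are perfectly synchronized. The slot length and node count here are invented values purely for illustration: because every node agrees on the time, each can compute locally whose turn it is, so critical streams never collide on the wire.

```python
SLOT_US = 500    # slot length in microseconds (illustrative)
NUM_NODES = 4    # nodes sharing the network (illustrative)

def may_transmit(node_id: int, now_us: int) -> bool:
    """With synchronized clocks, every node can compute locally which
    node owns the current time slot, with no negotiation traffic."""
    current_slot = (now_us // SLOT_US) % NUM_NODES
    return current_slot == node_id

# t = 0..499us belongs to node 0; t = 500..999us belongs to node 1, etc.
print(may_transmit(0, 100), may_transmit(1, 100))  # True False
```

Real TSN schedules are far richer than this round-robin sketch (per-stream gates, guard bands, priorities), but the underlying enabler is the same: a shared notion of time.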

SE: So you’re no longer looking at a data center. You’re looking at computing where it makes sense only with what’s needed for that particular job.

Lee: A lot of the computing has to migrate out to be physically closer to the devices. We refer to these devices as the immobiles—roughly comparable to the mobiles. They will have pretty good networking, wireless or wired, and they’ll have significant amounts of memory and compute power, but they won’t walk out of the room.

SE: What does this do in terms of energy efficiency?

Lee: If this becomes a widespread infrastructure deployment, it could have negative consequences from an energy standpoint. You can’t do the same kind of load balancing that you do in a data center. That is a cost. On the flip side, one of the challenges with the IoT is that you have vast amounts of data, and it takes a fair amount of energy to ship that data to the cloud. If you keep all of that data local and only ship data to the cloud in summary form—you apply machine learning techniques to find anomalies, and you report only those anomalies to the cloud—that could reduce energy consumption. It would reduce the amount of data that needs to be sent over long distances.
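The keep-data-local idea can be sketched with a simple statistical detector standing in for the machine-learning techniques mentioned above; the window size, threshold, and data are all illustrative. Only readings that deviate strongly from recent local history would be shipped upstream.

```python
import statistics

def anomalies_to_report(readings, window=20, z_threshold=3.0):
    """Return only the readings that deviate strongly from the recent
    local window; everything else stays on the local device."""
    reports = []
    for i, x in enumerate(readings):
        history = readings[max(0, i - window):i]
        if len(history) < 5:
            continue  # not enough local context yet to judge
        mu = statistics.mean(history)
        sigma = statistics.pstdev(history) or 1e-9  # avoid divide-by-zero
        if abs(x - mu) / sigma > z_threshold:
            reports.append((i, x))
    return reports

# 61 local readings of steady flow with one spike; only the spike
# (index 30, value 55.0) would be transmitted to the cloud.
data = [10.0] * 30 + [55.0] + [10.0] * 30
print(anomalies_to_report(data))  # [(30, 55.0)]
```

The energy argument is just the ratio: one report shipped instead of 61 raw readings, with the full-resolution data retained locally for later inspection.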


