Making Everything Linux-Capable

In the still-unfolding and relatively undefined edge market, one operating system stands out as a landmark: Linux.

It’s not clear how the edge will play out or what the winning formula will be from a hardware standpoint. But for everything beyond the end device, and possibly even including the end device, a key prerequisite will be the ability to run Linux.

That means at least one processor or core within the hardware will need to run 64-bit software. In addition, systems will need enough storage and processing horsepower to run multi-threaded, parallelizable applications based on Linux.
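
To make that concrete, here is a minimal sketch of the kind of multi-threaded, parallelizable workload such a system would be expected to handle. It splits an array-summing computation across POSIX threads; the thread count and workload are purely illustrative, and it assumes a 64-bit Linux target with pthreads available (compile with gcc -pthread).

```c
#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 4
#define N 1000000

static double data[N];
static double partial[NUM_THREADS];

/* Each worker sums one contiguous slice of the array. */
static void *worker(void *arg) {
    long id = (long)arg;
    long chunk = N / NUM_THREADS;
    long start = id * chunk;
    long end = (id == NUM_THREADS - 1) ? N : start + chunk;
    double sum = 0.0;
    for (long i = start; i < end; i++)
        sum += data[i];
    partial[id] = sum;
    return NULL;
}

int main(void) {
    pthread_t threads[NUM_THREADS];

    for (long i = 0; i < N; i++)
        data[i] = 1.0;

    /* Fan the work out across the cores, then join and combine. */
    for (long t = 0; t < NUM_THREADS; t++)
        pthread_create(&threads[t], NULL, worker, (void *)t);

    double total = 0.0;
    for (long t = 0; t < NUM_THREADS; t++) {
        pthread_join(threads[t], NULL);
        total += partial[t];
    }
    printf("total = %f\n", total);
    return 0;
}
```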

There are several reasons why this is happening. First, Linux has a long history in the corporate enterprise and the cloud. Despite its rather modest beginnings as a free Unix-like operating system, it has matured over time into the OS of choice in many data centers. Being able to run this OS on a system means that end users can assess a product against a slew of existing benchmarks.

That doesn’t necessarily mean that one chip will perform any better than another for specific compute jobs at the edge. In fact, the opposite might be the case. But it’s at least a starting point for further research and experimentation. And for lots of companies, many of the applications they run are Linux-based, because Linux has been the standard corporate OS for at least a couple of decades.

Second, operating systems are becoming the glue between the edge and the cloud, and not just in the obvious ways. Like general-purpose CPUs, server OSes are big and clunky for a lot of operations, but they are very good at managing available resources both on-chip and off-chip. So despite the need to customize algorithms for specific markets, an OS can handle things like peripherals, memory management, and traffic prioritization. This is true for proprietary OSes like iOS and Windows, as well as for open-source software such as Linux.

There are more efficient ways to implement OSes, too. Not every device or system needs all of the features of a full OS, and some of them can be slimmed down or removed. There also are more efficient ways to run certain functions or applications using real-time OSes or even highly specific algorithms. But a general-purpose OS can support all of those, and depending on how the interfaces are written for a specific application, it can do so reasonably well and within an acceptable power budget, provided it is partitioned properly so it does not interfere with those other operations.
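
One concrete example of how a general-purpose OS can host time-critical work alongside everything else: mainline Linux allows a single latency-sensitive process to be promoted to a real-time scheduling class while the rest of the system runs under the default scheduler. The sketch below uses the standard sched_setscheduler() call; the priority value of 50 is an arbitrary choice for illustration, and the call typically requires root privileges or CAP_SYS_NICE.

```c
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <errno.h>

int main(void) {
    struct sched_param param;
    memset(&param, 0, sizeof(param));
    param.sched_priority = 50;  /* arbitrary mid-range RT priority, for illustration */

    /* Promote this process to SCHED_FIFO, a real-time class that
     * preempts normal (SCHED_OTHER) tasks until it yields or blocks. */
    if (sched_setscheduler(0, SCHED_FIFO, &param) == -1) {
        fprintf(stderr, "sched_setscheduler failed: %s\n", strerror(errno));
        return 1;
    }

    printf("running under SCHED_FIFO; time-critical work would go here\n");
    return 0;
}
```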

The third reason is that Linux can help move data more seamlessly from edge to cloud and everything in between, often utilizing something like a virtualization or middleware layer. Looked at differently, Linux itself is a good platform for partitioning, authorizing, and prioritizing, and it scales from edge devices to edge servers all the way up to hyperscale clouds. In addition, it already has years of built-in security options. To be sure, Linux-based systems can be hacked. But the vulnerabilities are generally well known, and so are the methods for keeping different operations separate.
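
Control groups are one of the kernel mechanisms behind that kind of partitioning and prioritizing. The following is a minimal sketch, assuming cgroup v2 is mounted at /sys/fs/cgroup with the cpu controller enabled and the program run with sufficient privileges; the group name edge-app is hypothetical.

```c
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

/* Helper: write a value to a cgroup control file. */
static int cg_write(const char *path, const char *value) {
    FILE *f = fopen(path, "w");
    if (!f)
        return -1;
    fprintf(f, "%s", value);
    return fclose(f);
}

int main(void) {
    /* Create a cgroup named "edge-app" (hypothetical name). */
    mkdir("/sys/fs/cgroup/edge-app", 0755);

    /* Cap the group at 20% of one CPU: 20000us out of every 100000us. */
    cg_write("/sys/fs/cgroup/edge-app/cpu.max", "20000 100000");

    /* Move the current process into the group. */
    char pid[32];
    snprintf(pid, sizeof(pid), "%d", getpid());
    cg_write("/sys/fs/cgroup/edge-app/cgroup.procs", pid);

    printf("process %s now CPU-limited by cgroup edge-app\n", pid);
    return 0;
}
```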

Chipmakers are well aware of all of this. Many of the new chips being designed for edge servers and various edge devices, based on technology from Arm, RISC-V vendors, Apple, Google, and Huawei, either can run Linux already or will be able to in the future. End users do not have to run Linux on them, but chipmakers need to make sure they can.

Bottom line: The competitive battle for the edge is heating up, and being Linux-capable is a good indication of just how competitive this market is becoming, and how serious the players are about gaining market share.


