Evolving Edge Computing and Harnessing Heterogeneity

Reference cores and hybrid runtime deployments could play a useful role in mitigating complexity.

In the Evolving Edge Computing white paper, we highlighted three challenges to enabling the Intelligent Edge:

  • Enabling hardware heterogeneity
  • Removing development friction
  • Ensuring security at scale

This blog post examines the first challenge on that list: heterogeneity. It covers the ways in which heterogeneity appears, its effects on systems, and some ideas for resolving the challenges it brings.

How heterogeneity enhances the design space

Edge computing infrastructure is characterized by heterogeneity in the same way cloud computing infrastructure is characterized by homogeneity. Edge infrastructure heterogeneity originates from harsh environments, a high cost of installation per node, and organic growth (long-lived infrastructure). These drivers of diversification at the edge are absent in the cloud. Edge computing uses heterogeneous solutions because they enhance the design space by providing various levels of compute capability, with each design configuration addressing a different point in cost, size, power, and energy.

Heterogeneity can appear within a node (intra-node): a node may contain multiple processors that differ in some way, for example different Arm architecture profiles such as Cortex-A and Cortex-M, different capabilities and sizes, big.LITTLE, or even workload-specific accelerators. Heterogeneity can also appear between nodes (inter-node), for example when multiple generations of the same platform are deployed over time.

Implications for using cloud tools and methodologies

The use of cloud-native development tools is now proven to deliver fast time-to-market, ease of programming, portability, and manageable deployment for developers in the cloud. These tools and methodologies can provide similar benefits for developers at the edge, but heterogeneity presents challenges: the tools either ignore heterogeneity and its effects or address them with naive solutions.

One of the areas affected by heterogeneity is application Quality-of-Service (QoS), and QoS management is a key requirement for edge computing infrastructure. QoS can be preserved if sufficient resources are available for each component of each application, and QoS at the edge is expected to be preserved in all conditions in which the system is designed to operate. Applications, or their metadata, provide information about resource requirements, and the edge infrastructure should guarantee the availability of those resources. Any shared resource must be managed, since it could become a point of contention.
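To make this concrete, here is a minimal sketch of the kind of admission check implied above: a component is only placed on a node if every resource it requests is still available there. The resource names, data shapes, and numbers are illustrative assumptions, not from any real orchestrator.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    capacity: dict[str, float]                 # e.g. {"cpu": 4.0, "memory_mb": 2048}
    allocated: dict[str, float] = field(default_factory=dict)

    def free(self, resource: str) -> float:
        return self.capacity.get(resource, 0.0) - self.allocated.get(resource, 0.0)

    def can_admit(self, requests: dict[str, float]) -> bool:
        # QoS is preserved only if every requested resource is available.
        return all(self.free(r) >= amount for r, amount in requests.items())

    def admit(self, requests: dict[str, float]) -> None:
        if not self.can_admit(requests):
            raise RuntimeError("insufficient resources; admitting would break QoS")
        for r, amount in requests.items():
            self.allocated[r] = self.allocated.get(r, 0.0) + amount

node = Node(capacity={"cpu": 4.0, "memory_mb": 2048})
node.admit({"cpu": 1.5, "memory_mb": 512})     # admitted
print(node.can_admit({"cpu": 3.0}))            # False: only 2.5 CPUs remain free
```

A real scheduler would track this per shared resource (CPU, memory, I/O bandwidth, accelerators) across every node, but the invariant is the same: never over-commit what a component's QoS depends on.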

Heterogeneity therefore improves the design space but also increases complexity, and taming that complexity is key to preserving the benefits. Our efforts to mitigate complexity in two cases of heterogeneity, inter-node and intra-node, are described below.

Inter-node: Heterogeneous cluster and heterogeneous node with homogeneous architecture

Current orchestrators assume that nodes differ only in the number of cores and in memory capacity, which implies that any of those cores provides similar compute capacity. In reality, an application component sized for a node with Cortex-A76 cores (for example, Raspberry Pi 5) would behave very differently when run on a node with Cortex-A72 cores (Raspberry Pi 4). From the perspective of the orchestrator, however, the nodes are equally capable. In a previous blog post (Adapting Kubernetes for High-Performance IoT Edge Deployments), we put forward a solution to this problem by introducing the notion of a reference core. In this scheme, each node provides an expected compute capacity relative to the reference core. Applications specify their compute resource requirements in terms of the reference core and can expect that, when running on different nodes, sufficient compute capacity will be provided so the QoS is achieved, as sketched below. The special case of big.LITTLE, where different types of cores are present on the same node, is addressed by converting the resource requirements for the core type that is allocated to run the application.
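The sketch below illustrates the reference-core conversion described above. Each node type carries a speed ratio relative to the chosen reference core, an application states its requirement in reference-core units, and the requirement is scaled into local cores per node. The ratios are made-up placeholders, not measured figures, and the node names are assumptions for illustration.

```python
# Hypothetical capacity ratios: local core speed / reference core speed.
NODE_RATIOS = {
    "rpi5-cortex-a76": 1.0,    # treat the Cortex-A76 as the reference core
    "rpi4-cortex-a72": 0.6,    # placeholder: assume an A72 delivers ~60% of an A76
}

def local_cpu_request(ref_cores: float, node_type: str) -> float:
    """Convert a requirement in reference-core units into local cores."""
    return ref_cores / NODE_RATIOS[node_type]

# An app sized for 1.2 reference cores needs more cores on the slower node.
print(local_cpu_request(1.2, "rpi5-cortex-a76"))  # 1.2
print(local_cpu_request(1.2, "rpi4-cortex-a72"))  # 2.0

# big.LITTLE case: convert per core type, for whichever cluster the
# application is allocated to. Ratios are again placeholders.
BIG_LITTLE_RATIOS = {"big": 1.0, "LITTLE": 0.4}

def per_cluster_request(ref_cores: float, core_type: str) -> float:
    return ref_cores / BIG_LITTLE_RATIOS[core_type]
```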

Intra-node: Hybrid nodes

Hybrid nodes provide multiple computational units with different architectures, which may also run different operating systems and base software. Microcontroller cores such as the Cortex-M4, real-time cores, and accelerators are examples of additional cores that may be found in hybrid nodes. In this case, the main requirements are portability, ease of programming, and ease of deployment. Even for cores with similar instruction set architectures, such as the Arm A, M, and R profiles, it is not trivial to create solutions that are deployable and maintainable with cloud-native style tools.

Cloud-native development in hybrid systems

Another recent blog post addresses some of the problems described above. It outlines a proof of concept showing how to deploy applications onto a Cortex-M core within a hybrid system, driven from the Cortex-A side using cloud-native orchestration tools. The solution enables an application to be partitioned into multiple parts, with each part running on a different core depending on its requirements. This makes firmware updates easy, secure, and controllable at scale. Using a hybrid runtime makes it possible to update what's running on the Cortex-M on demand, for example to upgrade its functionality. In this scenario, we reduce complexity and enable better usage of those additional compute elements. A simplified sketch of this kind of partitioned deployment follows.
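This is an illustrative sketch only, not the proof-of-concept's actual code: one application is split into parts, and each part is dispatched to the core profile it requires. The profile names, image names, and dispatch functions are all assumptions made for illustration.

```python
from typing import Callable

def run_on_cortex_a(image: str) -> None:
    # Placeholder: on the Linux (Cortex-A) side this would be a normal container start.
    print(f"starting container {image} on the Cortex-A side")

def run_on_cortex_m(image: str) -> None:
    # Placeholder: the hybrid runtime would load this firmware image onto the Cortex-M.
    print(f"loading firmware image {image} onto the Cortex-M core")

DISPATCH: dict[str, Callable[[str], None]] = {
    "cortex-a": run_on_cortex_a,
    "cortex-m": run_on_cortex_m,
}

# One application, partitioned into parts with different core requirements.
app_parts = [
    {"image": "registry.example/app-ui:1.0",       "profile": "cortex-a"},
    {"image": "registry.example/app-firmware:1.0", "profile": "cortex-m"},
]

for part in app_parts:
    DISPATCH[part["profile"]](part["image"])
```

The appeal of this model is that updating the firmware part becomes the same workflow as updating any other container image: push a new image, redeploy the part, and only the Cortex-M payload changes.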

We have also published a Learning Path for developers interested in learning how to deploy a containerized embedded application using a firmware container image and hybrid runtime components, which you can access here: Learning Path: Deploy firmware container using containerd.

In the future

Edge computing is an evolving area, with new workloads and requirements being changed or added continuously. AI capabilities, both workloads and accelerators, are becoming entrenched across all aspects of computing. Some of the techniques presented here, such as reference cores and hybrid runtime deployments, may play a useful role in mitigating complexity in edge computing in the future. However, we anticipate that some gaps will still need to be addressed, and we are working across our ecosystem and with academia to circulate these ideas more widely, obtain feedback, and develop both supporting and alternative models. We invite you to share your thoughts on the solutions put forward by taking a look at the links and getting in touch in our Architectures and Processors technical forum.


