
Hyperconnectivity, Hyperscale Computing, And Moving Edges

What balance of performance and power consumption can be afforded at each level of edge?


As described in “The Four Pillars of Hyperscale Computing” last year, the four core components that development teams consider for data centers are computing, storage, memory, and networking. Over the past decade, requirements for programmability have fundamentally changed data centers. Just over a decade ago, in 2010, virtual machines computed user workloads on CPU-centric architectures connected as networks within the data center at speeds of up to 10Gbps. Five years later, software-defined networking found its way into the data center, and network speeds improved to up to 40Gbps. Containerization replaced the classic virtual machine model. Over the last five years, storage has become software-defined, too, with intelligence built into the storage hardware and network speeds increasing to up to 100Gbps.

Today, requirements for hyperscale data centers have become so specific that system companies are often considering developing their own chips. They also lead a complex ecosystem of more than 230 suppliers, ranging from processor and design IP vendors through semiconductor providers and system houses to cloud providers, as part of the Open Compute Project.

As we enter the era of hyperconnectivity, computing is shifting. With more compute pushed outside the data center again, the critical questions become where the edges are, what level of computing is done at which type of edge, where data is stored, and what balance of performance and power consumption can be afforded at each edge.

Since my post “Hyperscale and Edge Computing: The What, Where and How” in September last year, edges certainly have become more refined. For instance, Databank uses the terms “near-edge” and “middle-edge” as layers distinct from the far edge (defined as micro data centers beneath 5G towers). Installations at big enterprises and in stadiums are referred to as far edges as well. New “hyperscale edges” provide availability zones in high-density population centers as “local zones” or “edge zones” in New York, Boston, Houston, Los Angeles, and Miami. These sit “in the middle” between the far edges and the near edge, which hosts generic services like content delivery network (CDN) caching before the data enters the actual cloud data centers. (Please note that all distances so far are measured from the perspective of the data center.)

Another perspective comes from the origin of the data. Once data is measured at a sensor, when and where will computing happen, i.e., how far away from the sensor? The answer is a resounding “it depends,” based on the application domain and its requirements.
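To make that “it depends” a little more concrete, here is a minimal sketch (in Python) of how a workload could be placed across the edge layers described above. The tier names follow the taxonomy from the previous paragraphs, but the latency thresholds and the selection rule are purely illustrative assumptions, not figures from any of the providers mentioned.

```python
# Illustrative sketch only: tier names follow the article's taxonomy, but the
# latency thresholds below are assumed placeholders, not vendor figures.
from dataclasses import dataclass

@dataclass
class EdgeTier:
    name: str
    max_round_trip_ms: float  # assumed round-trip latency achievable at this tier

# Ordered from closest to the sensor out to the cloud data center.
TIERS = [
    EdgeTier("device/sensor edge", 1.0),                 # on-device MCU or wearable
    EdgeTier("far edge (micro DC at 5G tower)", 10.0),
    EdgeTier("hyperscale edge / local zone", 30.0),
    EdgeTier("near edge (CDN, generic services)", 50.0),
    EdgeTier("cloud data center", 100.0),
]

def place_workload(latency_budget_ms: float) -> str:
    """Pick the farthest (most centralized) tier that still meets the latency budget."""
    for tier in reversed(TIERS):
        if tier.max_round_trip_ms <= latency_budget_ms:
            return tier.name
    return TIERS[0].name  # nothing farther qualifies: compute right at the sensor

print(place_workload(0.5))   # e.g., motion control -> device/sensor edge
print(place_workload(50.0))  # e.g., process monitoring -> near edge or closer
```

The point of the sketch is not the specific numbers but the shape of the decision: the tighter the latency budget, the closer to the sensor the computing has to move.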

Here is a personal example from the health/consumer domain. In the spirit of creating a digital twin of myself (yep, one of me is not enough, clearly, and the digital version doesn’t talk back as much), I am measuring my daily routines with two different fitness trackers. Tracker A gives me some valuable insights during my workouts. Most of the information is computed and shown on my wrist, close to the actual sensors. Tracker B resides on the other wrist, has no display, but wants to be connected to my phone all the time. On my phone, I do get some valuable insights from tracker B during the workout. After I am done, it uploads all data to a server in the cloud and comes back about 30 seconds later with additional insights, again displayed on my phone or available online. This is no issue as long as the server is available (it has failed only once in a year), but it is a reminder that computing has crossed the edge into the data center. Tracker B wants to be connected to the phone all the time, 24×7. If the connection is lost, it “catches up” and transmits the buffered data once reconnected. While disconnected, it holds a couple of days of data before it runs out of memory. The developers had to make decisions based on a tradeoff between memory, bandwidth, and data-set size. A similar tradeoff prevents tracker A, which has GPS built in, from tracking my route during a four-hour round of golf—presumably due to memory—while tracker B easily charts the path, but only because tracking happens on the phone while connected, where memory constraints are less severe. Interestingly enough, tracker A also mixes its computing. Sleep analysis, for instance, requires uploading data from my wrist to my phone (the device edge) and seems to be calculated there.
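The tradeoff the tracker developers faced can be illustrated with a back-of-the-envelope calculation. All numbers below (buffer size, sample rate, record sizes) are assumptions I picked for illustration; they are not the specifications of either actual device.

```python
# Back-of-the-envelope sketch of a wearable's buffering tradeoff.
# All numbers are illustrative assumptions, not specs of any real tracker.

def days_of_buffer(memory_bytes: int,
                   samples_per_second: float,
                   bytes_per_sample: int) -> float:
    """How many days of sensor data fit before the device runs out of memory."""
    bytes_per_day = samples_per_second * bytes_per_sample * 86_400
    return memory_bytes / bytes_per_day

# Hypothetical display-less tracker: 8 MB of buffer, one 32-byte summary
# record per second -> roughly "a couple of days" of offline recording.
print(f"{days_of_buffer(8 * 1024**2, 1.0, 32):.1f} days")        # ~3.0 days

# Add raw GPS fixes (say 20 extra bytes per second) and the same buffer
# shrinks noticeably, which is one plausible reason on-wrist route tracking gets cut.
print(f"{days_of_buffer(8 * 1024**2, 1.0, 32 + 20):.1f} days")   # ~1.9 days
```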

Another excellent example of constraints from a different domain—the industrial internet of things (IIoT)—comes from Nokia in “Industrial control will not come just by waiting for 5G, says Nokia.” Comparing the use cases of motion control, mobile robots, mobile control panels with safety functions, and process monitoring, the cycle times, payloads, numbers of devices, and service areas vary widely. For motion control, cycle times well below a millisecond are often a must, so compute has to happen locally. Latencies below 50ms are deemed appropriate for process monitoring, but the number of devices per km² is driven to 10,000 or more. The four target segments of “360 Video,” “Virtual Reality + Vehicles,” “People + Things,” and “System Control” can be plotted as four quadrants in a chart showing bandwidth over latency. “Virtual Reality + Vehicles” requires the highest bandwidth combined with the lowest latency.
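Only a few data points of that chart are given above: the segment names, the sub-millisecond cycle time for motion control, and the 50ms latency for process monitoring. With that caveat, the quadrant view can be sketched as follows; the bandwidth/latency split points, and my guess at which segment occupies which of the remaining three quadrants, are assumptions for illustration.

```python
# Rough sketch of the bandwidth-over-latency quadrant view.
# Split points and the mapping of the three non-"VR + Vehicles" quadrants are
# assumptions; only "Virtual Reality + Vehicles" = highest bandwidth + lowest
# latency is stated explicitly in the source.

def quadrant(bandwidth_mbps: float, latency_ms: float,
             bw_split_mbps: float = 100.0, lat_split_ms: float = 10.0) -> str:
    """Place a use case into one of the four target segments."""
    high_bw = bandwidth_mbps >= bw_split_mbps
    low_lat = latency_ms <= lat_split_ms
    if high_bw and low_lat:
        return "Virtual Reality + Vehicles"   # highest bandwidth, lowest latency
    if high_bw:
        return "360 Video"                    # bandwidth-hungry, latency-tolerant
    if low_lat:
        return "System Control"               # small payloads, tight control loops
    return "People + Things"                  # modest bandwidth and latency needs

print(quadrant(bandwidth_mbps=5, latency_ms=0.5))   # motion control -> System Control
print(quadrant(bandwidth_mbps=1, latency_ms=50.0))  # process monitoring -> People + Things
```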

To add complexity, AI/ML inferencing drives further tradeoffs between the power budget and the breadth of application capabilities that can be achieved. At the recent Linley Spring Processor Conference, Innatera called out a “sensor edge” as having the narrowest power budget. Designs at the sensor edge are also latency critical, area constrained, and noise sensitive. Still, using embedded MCU processing, tasks like conditioning data, understanding relevance, and identifying patterns can be performed within power budgets in the 10mW range. In contrast, applying large inference models to extract complex insights from data requires much more complex multi-processor systems on chips, with power budgets reaching 100W and above to support the necessary computing.
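A hedged back-of-the-envelope calculation shows why these power budgets separate what is feasible at each edge. The 10mW and 100W budgets come from the discussion above; the operation counts and the energy-per-operation figure are assumptions chosen purely for illustration.

```python
# Sketch: does an inference workload fit within a power budget?
# The 10 mW and 100 W budgets come from the post; the ops counts and the assumed
# energy efficiency (picojoules per operation) are illustrative guesses.

def fits_power_budget(ops_per_inference: float,
                      inferences_per_second: float,
                      picojoules_per_op: float,
                      power_budget_watts: float) -> bool:
    """Compare the average compute power draw against the available budget."""
    joules_per_inference = ops_per_inference * picojoules_per_op * 1e-12
    average_watts = joules_per_inference * inferences_per_second
    return average_watts <= power_budget_watts

# Keyword-spotting-sized model at the sensor edge: ~1M ops, 10 inferences/s,
# assuming ~5 pJ/op -> about 0.05 mW, comfortably inside a 10 mW budget.
print(fits_power_budget(1e6, 10, 5.0, 0.010))    # True

# Large vision/transformer-class model: ~100G ops at 30 inferences/s needs on
# the order of 15 W at the same efficiency -> hopeless at 10 mW, plausible only
# in the 100 W-class multi-processor systems described above.
print(fits_power_budget(100e9, 30, 5.0, 0.010))  # False
print(fits_power_budget(100e9, 30, 5.0, 100.0))  # True
```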

In a hyperconnected world, the industry is facing many types of edges. Computing, and the storage of the associated data, keeps moving around. And it will keep moving for a while, as compute and latency requirements depend heavily on evolving use cases. Welcome, edgy future!


