It’s not just cloud and edge anymore: a new layer of distributed computing that sits closer to end devices is picking up steam.
The National Institute of Standards and Technology (NIST) defines fog computing as a horizontal, physical or virtual resource paradigm that resides between smart end devices and traditional cloud or data centers. This model supports vertically isolated, latency-sensitive applications by providing ubiquitous, scalable, layered, federated and distributed computing, storage and network connectivity. Put simply, fog computing extends the cloud closer to the things that produce and act on Internet of Things (IoT) data.
According to Business Matters, moving computing and storage resources closer to the user is critical to the success of the Internet of Everything (IoE), because processes respond faster and run more efficiently in a fog environment. Indeed, as Chuck Byers of the OpenFog Consortium confirms, fog computing is “rapidly gaining momentum” as the architecture that bridges the current gap in IoT, 5G and embedded AI systems.
As mentioned above, 5G networks are one area in which fog computing is expected to play a major role. As RCR Wireless reports, the convergence of 5G and fog computing is anticipated to be an “inevitable consequence” of bringing processing tasks closer to the edge of an enterprise’s network. In certain scenarios, for example, 5G will require very dense antenna deployments, with antennas perhaps less than 20 kilometers from one another. According to Network World, a fog computing architecture could be created among these stations, including a centralized controller that manages applications running on the 5G network while handling connections to back-end data centers or clouds.
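As a rough illustration of that controller pattern, the sketch below models a centralized fog controller that places latency-sensitive tasks on nearby base-station nodes and falls back to a back-end cloud when no local node has capacity. The class and method names are entirely hypothetical; neither RCR Wireless nor Network World describes an actual API.

```python
from dataclasses import dataclass, field

@dataclass
class FogNode:
    """A 5G base station acting as a fog node (hypothetical model)."""
    name: str
    capacity: int                      # how many tasks this node can host
    tasks: list = field(default_factory=list)

    def has_room(self) -> bool:
        return len(self.tasks) < self.capacity

@dataclass
class FogController:
    """Centralized controller that manages applications across fog nodes,
    deferring to a back-end cloud when local capacity is exhausted."""
    nodes: list

    def place(self, task: str) -> str:
        for node in self.nodes:
            if node.has_room():
                node.tasks.append(task)
                return f"{task} -> {node.name} (fog)"
        return f"{task} -> back-end cloud"

if __name__ == "__main__":
    controller = FogController([FogNode("cell-site-A", 2), FogNode("cell-site-B", 1)])
    for task in ["video-analytics", "ar-rendering", "traffic-steering", "batch-report"]:
        print(controller.place(task))
```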
Edge computing
There are several important distinctions between fog and edge computing. Most notably, fog computing works with the cloud, whereas edge computing is typically defined by the exclusion of cloud and fog.
Moreover, as NIST points out, fog is hierarchical, whereas edge is often limited to a small number of peripheral layers. In practical terms, the edge can be defined as the network layer encompassing the smart end devices and their users. This allows the edge to provide local computing capabilities for IoT devices.
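To make “local computing capabilities for IoT devices” concrete, here is a minimal sketch of the usual pattern, with purely illustrative function names and thresholds: an edge gateway processes raw sensor readings on site and forwards only an aggregated summary upstream, rather than streaming every sample to a distant data center.

```python
from statistics import mean

def summarize_on_edge(readings, threshold=75.0):
    """Process raw sensor samples locally; return only what is worth
    sending upstream (an aggregate plus any out-of-range alerts)."""
    alerts = [r for r in readings if r > threshold]
    return {
        "count": len(readings),
        "mean": round(mean(readings), 2),
        "max": max(readings),
        "alerts": alerts,   # only the anomalous samples leave the edge
    }

# A batch of local temperature samples handled at the edge: one small
# summary goes to the cloud instead of every raw data point.
samples = [70.1, 70.4, 71.0, 76.3, 70.8, 70.2]
print(summarize_on_edge(samples))
```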
According to Bob O’Donnell, the founder and chief analyst of Technalysis Research LLC, the connected autonomous (or semi-autonomous) vehicle is perhaps one of the best examples of an advanced-edge computing element.
“Thanks to a combination of enormous amounts of sensor data, critical local processing power and an equally essential need to connect back to more advanced data analysis tools in the cloud, autonomous cars are seen as the poster child of advanced-edge computing,” he states in a recent Recode article.
Indeed, according to AT&T, self-driving vehicles could generate as much as 3.6 terabytes of data per hour from their clusters of cameras and other sensors, although certain functions (such as braking, turning and acceleration) will likely always be managed by the computer systems in the cars themselves. Nevertheless, AT&T sees some of these secondary systems being offloaded through edge computing.
“We’re shrinking the distance,” AT&T states in a 2017 press release. “Instead of sending commands hundreds of miles to a handful of data centers scattered around the country, we’ll send them to the tens of thousands of central offices, macro towers and small cells usually never farther than a few miles from our customers.”
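A minimal sketch of that split, assuming a simple rule of thumb rather than any actual AT&T design: safety-critical control loops stay on the in-vehicle computer, while latency-tolerant secondary workloads are dispatched to a nearby edge site a few milliseconds away. All workload names and latency figures below are illustrative.

```python
# Hypothetical workload catalogue: (name, safety_critical, latency_budget_ms)
WORKLOADS = [
    ("braking",           True,    5),
    ("steering",          True,    5),
    ("map-tile-refresh",  False, 200),
    ("infotainment-sync", False, 500),
    ("sensor-log-upload", False, 2000),
]

EDGE_ROUND_TRIP_MS = 10  # assumed latency to a nearby central office or small cell

def place_workload(name, safety_critical, latency_budget_ms):
    """Keep safety-critical or ultra-low-latency tasks in the vehicle;
    offload the rest to the nearest edge site."""
    if safety_critical or latency_budget_ms < EDGE_ROUND_TRIP_MS:
        return f"{name}: in-vehicle computer"
    return f"{name}: nearby edge site ({EDGE_ROUND_TRIP_MS} ms away)"

for workload in WORKLOADS:
    print(place_workload(*workload))
```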
Silicon and services: At the edge of the foggy cloud
Fog and edge computing are impacting chip designs, strategies and roadmaps across the semiconductor industry. As Ann Steffora Mutschler of Semiconductor Engineering notes, an explosion in cloud services is making chip design for the server market more challenging, diverse and competitive.
“Unlike data center number crunching of the past, the cloud addresses a broad range of applications and data types,” she explains.
“So, while a server chip architecture may work well for one application, it may not be the optimal choice for another. And the more those tasks become segmented within a cloud operation, the greater that distinction becomes.”
With regard to services, NIST sees fog computing as an extension of the traditional cloud-based computing model in which implementations of the architecture can reside in multiple layers of a network’s topology. As with the cloud, multiple service models can be implemented, including Software as a Service (SaaS), Platform as a Service (PaaS) and Infrastructure as a Service (IaaS).
From our perspective, edge computing offers similar opportunities, particularly with regard to SaaS and PaaS. Both could be applied to the automotive sector, for example, with companies deploying sensor-based vehicle systems that proactively detect potential issues and malfunctions. Such a solution, which in its optimal configurations would combine silicon and services, could be sold as a hardware-and-software product or deployed as a service with monthly or annual subscription fees.
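As a sketch of what such a sensor-based detection service might look like at the edge, the snippet below flags a potential malfunction when recent readings drift well away from their baseline, so only the flagged events need to travel back to the vendor’s cloud platform. The signal names and thresholds are purely illustrative, not a product specification.

```python
from statistics import mean, pstdev

def detect_anomaly(baseline, recent, z_threshold=3.0):
    """Flag a potential malfunction when the recent average drifts more
    than z_threshold standard deviations from the baseline readings."""
    mu, sigma = mean(baseline), pstdev(baseline)
    if sigma == 0:
        return False
    z_score = abs(mean(recent) - mu) / sigma
    return z_score > z_threshold

# Hypothetical coolant-temperature readings (degrees C) from an in-vehicle sensor.
baseline = [88, 90, 89, 91, 90, 89, 90, 88]
recent = [97, 99, 101]
if detect_anomaly(baseline, recent):
    print("Potential cooling-system issue detected; notify service platform.")
```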
In conclusion, fog and edge computing will continue to evolve to meet the demands of a diverse set of verticals, including the IoT, autonomous/connected vehicles, next-generation mobile networks and data centers.