The network edge will become one of the most critical components of the IoE and 5G, but it will require new technologies and solutions.
More attention is being focused on the edges of networks as the IoE begins taking hold.
The reason is that the current wireless infrastructure is inadequate for handling the billions of endpoints that will make up the IoE. So for the IoE to realize its full potential, it will either require some sort of add-on infrastructure or a new discovery in physics.
Edge networks were not always such a big issue. Communications over wireless networks worked relatively well over the past decade based on large, centralized cell networks. They were designed to handle millions of smartphones that were joining the ecosystem. But that vision was focused largely on voice and static data such as texts and pictures. These networks never were designed to handle the myriad of new and emerging platforms, or the billions of endpoints that will use these platforms as the IoE, the cloud, 5G, Big Data, streaming multimedia and countless other platforms take root.
And there is more. Autonomous driving vehicles and smart everything, from homes to cities to infrastructures, will join this ecosystem. On top of that will be advanced services such as video analytics, location services, and augmented reality, all of which will place even more demand on these systems.
And if billions aren’t enough, Markets And Markets expects that by 2025 there will be 1 trillion connected devices across the IoE. If billions of devices present a challenge to the wireless infrastructure, imagine what 1 trillion will do, and what next-generation networks will be required to ensure that the IoE does not become a bottleneck in real-time acquisition, processing, and response.
But that is changing, and rapidly. The issue with current networks is that they cannot sustain constant routing to and from the core, with all of the computation and intelligence for billions of devices handled there. It's not that this approach cannot work, because it can. But the key performance metrics, specifically latency and bandwidth, take a significant hit, which directly affects reliability and performance and ultimately the quality of service.
“There is still a tremendous gap between the level of data that people want and what they get now,” said Steve Mollenkopf, CEO of Qualcomm. “Of the world’s connections, less than 15% are LTE today.”
And that’s just the beginning. Mollenkopf said that between 2015 and 2019 an estimated 8.5 billion mobile devices will be shipped, even though there are only 7 billion people on the planet. Add to that tens or hundreds of billions of connected things in that time frame, with even faster growth predicted after that, and the need for a more robust infrastructure becomes even more obvious.
The ubiquity of this simply cannot be managed from the core. Figure 1 represents these platforms and their relationship to mobile edge computing (MEC).
MEC is a new paradigm that combines cloud-computing capabilities with an IT-like environment at the edges of mobile networks. The European Telecommunications Standards Institute (ETSI) published a white paper last September characterizing it as a “key technology towards 5G.”
Just to put this in perspective, 5G will offer 1Gbps speeds to tens of devices, but it also will support hundreds of thousands of simultaneous connections for huge numbers of sensors, with improved coverage and lower latency. “5G will be as disruptive to the data industry as data was to the wireless industry,” said Sanjay Jha, CEO of GlobalFoundries.
MEC plays a role in this grand scheme at the edge of the network environment by integrating base station infrastructure with data centers positioned close to the radio network. The idea is to extract network context from radio access networks and process it in a distributed manner, rather than piping it all back to the central core.
It sounds good in theory, but it isn’t that simple. First of all, adding MEC platforms creates congestion. Second, every computing node that is added at the edge requires some sort of backhaul pipe to the infrastructure, whether that is fiber, microwave, or copper. Backhaul will become a significant issue as billions of devices are requesting authentication and verification. On top of that will be signaling and traffic overhead to handle the data moving to and from these billions of IoE devices. And third, the technology to do this is just now emerging. So far, there are only limited field trials or laboratory setups.
So with such challenges to overcome, how will MEC become the solution? Two new technologies will level that playing field: software-defined networks and network function virtualization. Both promise to make MEC practical.
Figure 2 shows one type of reference architecture. Software-defined networks and network function virtualization will allow MEC networks to bring virtualized applications closer to mobile users and give the network flexibility and scalability. Virtualization creates an environment with seamless access and responsive delivery of content, services, and applications, all of which improves the user experience.
These two technologies are the basis of what the next generation of wireless and wireline networks will be.
Without these technologies, MEC essentially would require the same collection of hardware the core requires, just not as expansive, because the processing serves a local network. These local networks can vary in size and complexity, but they are still much smaller than core networks. However, the ubiquitous deployment of such ancillary cores isn't practical for several reasons, including cost, maintenance, power demands and interconnect. Hence the drive to virtualize.
Network virtualization involves relocating the logic that resides in dedicated hardware to a pure software layer. It is the same concept that has been employed for the last couple of decades in the server landscape, where server virtualization software is used to abstract the hardware beneath a software layer.
Network function virtualization relocates part of the OSI stack, taking the functions that run at layers 3 through 7 and moving them into a pure software environment running mostly on, or as part of, the server virtualization platform. Virtualization technology consolidates disparate types of network equipment and aggregates them onto high-volume, industry-standard servers, switches and storage. Figure 3 shows the before and after scenario.
On the left side of the graphic is the classical approach. Network function virtualization transforms these hardware-based network functions into software that runs on standard server hardware. It also transforms network operations, because the software can be dynamically instantiated in any chosen location within the network, without installing equipment. That is the crème de la crème of virtualization.
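To make the idea concrete, here is a minimal sketch of a network function realized purely in software. The firewall behavior, rule format, and site names are illustrative assumptions, not part of any NFV standard; the point is only that "deploying" a software function at a new location is just instantiation on a standard server.

```python
class VirtualFirewall:
    """A network function realized purely in software instead of dedicated hardware."""

    def __init__(self, blocked_ports):
        self.blocked_ports = set(blocked_ports)

    def process(self, packet):
        # Drop packets destined for a blocked port; forward the rest.
        if packet["dst_port"] in self.blocked_ports:
            return "DROP"
        return "FORWARD"


def instantiate(site, blocked_ports):
    # Because the function is software, "deployment" is just object
    # creation on whatever standard server happens to be at `site`.
    print(f"Instantiating firewall at {site}")
    return VirtualFirewall(blocked_ports)


# The same function can be spun up at any location, without new hardware.
edge_fw = instantiate("edge-site-1", blocked_ports=[23, 445])
core_fw = instantiate("core-dc", blocked_ports=[23])

print(edge_fw.process({"dst_port": 445}))  # DROP
print(core_fw.process({"dst_port": 445}))  # FORWARD
```

A real virtual network function would run inside a VM or container managed by an orchestrator, but the operational win is the same: moving or duplicating the function is a software action, not a truck roll.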
Software-defined networking changes the way the network is managed. It is a form of networking in which the control plane of the whole network is generated by a single element, the SDN controller, rather than each element having its own control plane. In the traditional network model, each device runs a set of protocols that generate its own "control plane." Consider a switch that runs multiple protocols simultaneously; together, these protocols generate the control plane of that switch. Every device in a traditional network does this the same way.
Software-defined networks are managed by a single control plane rather than multiple control planes. The single control plane covers the whole network (see Figure 4).
With a software-defined network, the control plane of all of the network's devices is delegated to the SDN controller. Once the controller is provisioned with this control plane, the data plane decisions for every device across the network are "pushed" to this central control plane for execution. In this configuration, all of the devices are governed by one control plane, and the network is perceived as a single, logical device. Simply put, the fundamental control software is centralized in a single element of the network rather than being distributed over many devices.
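The split described above can be sketched in a few lines. This is a toy model with hypothetical class and method names, not a real OpenFlow implementation: the switches keep only a dumb match/action table, and the controller holds the control plane for the whole network and pushes rules down.

```python
class Switch:
    """Data plane only: match on destination, apply the stored action."""

    def __init__(self, name):
        self.name = name
        self.flow_table = {}  # dst -> action, installed by the controller

    def install_rule(self, dst, action):
        self.flow_table[dst] = action

    def handle(self, packet):
        # Unknown traffic is punted to the controller; known traffic
        # follows the installed rules.
        return self.flow_table.get(packet["dst"], "SEND_TO_CONTROLLER")


class Controller:
    """Control plane for the whole network, held in one element."""

    def __init__(self):
        self.switches = []

    def register(self, switch):
        self.switches.append(switch)

    def push_policy(self, dst, action):
        # One decision made here is pushed network-wide, so the operator
        # never configures devices one by one.
        for sw in self.switches:
            sw.install_rule(dst, action)


ctrl = Controller()
s1, s2 = Switch("s1"), Switch("s2")
ctrl.register(s1)
ctrl.register(s2)

ctrl.push_policy(dst="10.0.0.5", action="FORWARD:port2")
print(s1.handle({"dst": "10.0.0.5"}))  # FORWARD:port2
print(s2.handle({"dst": "10.0.0.9"}))  # SEND_TO_CONTROLLER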
This "centralization" is the heart of software-defined networking. It is significant because, as networks expand and contract, both dynamically and by plan, it removes the need to individually control hundreds or even thousands of different devices.
As good as it sounds, MEC is not without challenges. First and foremost, MEC is still mostly a concept. Both network function virtualization and software-defined networks are in the developmental stages, and there are no production platforms in the pipeline beyond beta versions. The challenges, therefore, reflect the current state of the technology. Some will carry forward, while others may become moot once the technology matures. Either way, they belong in this discussion.
Always at the top of the challenge pile is security. In fact, security should be the top priority as these two platforms evolve. Because both are still in development, the opportunity exists to do it right and build security in early, especially at the chip level. That would make MEC strong from the beginning.
One of the biggest MEC security challenges has to do with autonomy. Since these networks are designed to be self-optimizing and regulating, there is a limited amount of control that the network operator will have over applications. This means that all security will have to be low-level and ubiquitous. So far, that is not being addressed well.
For software-defined networks there are two primary issues: centralization and the northbound/southbound interfaces. Centralization poses an obvious problem, because a compromise of the centralized control plane can propagate through every device in the network. If attackers can modify code running on the central control plane, they can redirect traffic, relocate data, and view the entire network. An attacker who gets into this controller can change the entire underpinning of the network.
For the interfaces, it is mainly a policy issue. Because these interfaces are programmable, especially the northbound interface, a slew of potential compromises can occur, including compromised applications and contradictory rules inserted via OpenFlow. The problem is compounded by the fact that SDN controllers aren't yet sophisticated enough to put security applications at the top of the priority stack during communications.
For example, if one of the network's machines is breached, the security app may quarantine it. However, another app used for load balancing could see that machine as operating under a low load and decide to redirect traffic to it. This is typically how a software-defined network behaves today, because it lacks the rule-prioritization sophistication mentioned above.
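That conflict can be shown in a few lines. This sketch uses hypothetical app names and state: without prioritization, a load balancer silently undoes the security app's quarantine; a priority-aware balancer, of the kind the article notes controllers do not yet enforce, would respect it.

```python
quarantined = set()
traffic = {}  # host -> assigned load


def security_app(host):
    # Security app quarantines a breached host: it should get no traffic.
    quarantined.add(host)


def load_balancer(host, load):
    # Naive balancer: sees the quarantined host as idle and sends it traffic.
    traffic[host] = load


def load_balancer_with_priority(host, load):
    # Priority-aware balancer: security rules win over balancing decisions.
    if host in quarantined:
        return
    traffic[host] = load


security_app("10.0.0.7")
load_balancer("10.0.0.7", load=100)
print(traffic)  # {'10.0.0.7': 100} -- the quarantine was undone

traffic.clear()
load_balancer_with_priority("10.0.0.7", load=100)
print(traffic)  # {} -- the quarantine is respected
```

In a real controller the same effect would come from assigning security applications' flow rules a higher priority than any other application's, rather than from each app checking the other's state.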
Network function virtualization security issues
Network function virtualization introduces new security challenges as well. A security threat to any one virtual network function can become a generic virtualization threat across all virtual machines in the network. There is also the hypervisor software, which is another vector that hackers can use to gain access to the network. Two other security concerns are resource pooling and multi-tenancy.
Overall, the biggest problem is that traditional network security models are static and cannot respond to real-time changes in network topology. The current approach to inserting security services into network function virtualization networks is to use an overlay layer. While this works in theory, it still has issues with device coexistence across vendor boundaries. And when security is implemented in the transport layer, it tends to be resource-intensive, which is undesirable in MEC. Finally, because of the "boundaryless" nature of network function virtualization, finding good insertion points for security elements is also challenging.
There are other security considerations, as well. The fact that both network function virtualization and software-defined networks are emerging technologies means other security issues almost certainly will arise as they evolve.
Security isn't the only challenge. It's unclear what kind of computing and storage resources will be needed at these edge networks. Mobile resources differ from those in the enterprise, and traditional hardware may not be agile enough to span the variety of mobile clients and their platforms. As these entities take shape, MEC networks likely will require new technologies and platforms to accommodate this diversity.
As with any new and emerging platforms and technologies, there are bumps in the road. Because there are no models or history to examine with MEC, it is a go-as-you-roll enterprise for the moment.
Technologies such as network function virtualization and software-defined networks hold a lot of promise not only for MEC, but for all types of networks. Other platforms such as HetNets also are going to be an element in the global wireless coverage umbrella. Throw in 5G and the IoE, and this sector will get very interesting in the next few years.
How this is all going to shake out is still a big unknown, and exactly how MEC networks will integrate with all of this is also unclear. But with predictions that all of this will go from a malleable blob to some well-defined infrastructure by 2020, a lot of movement is going to have to happen over the next four years.