The Rise Of Dynamic Networks

Flexible networks loom large on the horizon, and so does the risk of new security breaches.


The Internet of the future, and particularly the Internet of Everything (IoE), will be interlaced with millions, if not billions, of intelligent, dynamic, self-organizing networks. These networks will be full of elements capable of autonomic self-registration across these multitudes of networks.

It is one thing to put up a security perimeter when you know who the players are. You can even develop resources to intelligently assess what is going on and make on-the-fly changes based on AI, to some degree. But tomorrow’s networks are going to be a lot more dynamic, fluid, diverse and on-the-fly configurable than anything currently on the playing field.

What makes dynamic networks the de facto network of the future, and especially of the IoE, is that they enable the bring-your-own-everything/anything (BYOx) movement. And with the IoE, BYOx will not be limited to the present description of user devices such as smartphones and tablets. BYOx will include objects such as vehicles, wearables, smart appliances and electronics, and a slew of other devices that will fade in and out of networks without any intervention from the user.

Dynamic network dynamics
The definition of dynamic networks varies, depending on where the expertise comes from. But in general, a dynamic network is one that is based upon real-time networking capabilities. A major requirement is that the network is capable of automatic reconfiguration as nodes are added or deleted. This means that network topologies change in real time as nodes and edges come and go. Moreover, loads vary with time, packets come and go, and objects are added and deleted.

Another requirement of these networks is the ability to locate any user on the network. And third, they must be capable of altering the network interconnect dynamics based upon congestion, number of elements on the network, and path properties, such as signal strength, collisions, and busy/available requests.
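
To make those requirements a bit more concrete, here is a minimal sketch in Python of a topology that reconfigures itself as nodes join and leave. The class and method names are purely illustrative, and a real dynamic network also would weigh congestion, signal strength, and path properties:

```python
# Minimal sketch of a self-reconfiguring network topology.
# All names here are illustrative, not from any particular product.

class DynamicNetwork:
    def __init__(self):
        self.links = {}  # node -> set of neighboring nodes

    def join(self, node, reachable_neighbors):
        """Add a node and connect it to the neighbors it can reach."""
        self.links.setdefault(node, set())
        for n in reachable_neighbors:
            if n in self.links:          # only link to nodes already present
                self.links[node].add(n)
                self.links[n].add(node)

    def leave(self, node):
        """Remove a node; the topology reconfigures automatically."""
        for n in self.links.pop(node, set()):
            self.links[n].discard(node)

    def locate(self, node):
        """Second requirement: any user must be locatable on the network."""
        return node in self.links

net = DynamicNetwork()
net.join("gateway", [])
net.join("phone-1", ["gateway"])
net.join("wearable-1", ["phone-1"])
net.leave("phone-1")                 # wearable-1 is now orphaned
print(net.locate("phone-1"))         # False: the topology changed in real time
```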

There are many types of dynamic network topologies. In fact, almost any traditional network can become dynamic with the integration of advanced software and artificial intelligence (AI). Many networks today already are managed by software, under the umbrella of software-defined networking (SDN). The term SDN has been used somewhat interchangeably with intelligent networks. However, an SDN in itself is not necessarily an intelligent dynamic network; that requires some AI. Today, most SDNs are simply static networks with software to manage the resources. Dynamic networking implies software for managing the resources, along with an awareness of elements entering and leaving the network and the intelligence to integrate them while they are on the network.

Typical dynamic networks include, of course, the Internet, but also social networks, infrastructure networks, Wi-Fi networks, and cellular networks, among others. But the big challenge is not with networks such as cellular, where a very orderly process is in place, or Wi-Fi, where all elements follow a strict protocol. The real issue is with elements that span varying technologies.

For example, a more challenging dynamic network might be one used on a corporate or academic campus. Other similar environments might be metropolitan networks such as city centers and entertainment or sports venues. These are the types of environments that must accommodate a wide spectrum of objects, from smartphones and Wi-Fi devices to transportation, security, and more. Let’s take a look at some of the challenges facing today’s dynamic networks.

Dynamic networks’ top challenges
Overall, the two top high-level challenges found in dynamic networks involve load balancing and network-edge dynamics. “Load balancing is probably one of the most difficult challenges to overcome,” says Chowdary Yanamadala, vice president of business development at Chaologix. The reason load balancing is such a challenge is that the network must be able to accommodate whatever objects drift into and out of the network. Looking at the various scenarios, you can see where network load can vary widely in some circumstances.

For example, in corporate and academic campus networks, the core load remains relatively steady and is generally homogeneous. Objects that float in and out of the network, such as visitors, vendors, and some transient students and workers, are a small percentage of that known, predictable load, which consists of students and daily workers. In such environments, it is relatively easy to keep the load balanced because the variables are small, many of the devices are the same, and some, such as vendors and visiting parents, can even be defined in advance. Periods of activity are known, as well, based upon day of the week and time of day.

In city centers or art/entertainment districts, where loads can vary dramatically from period to period, it is much more difficult to know how much and what type of traffic may be in the node at any given time. There may be some periodic predictability, but most of the time the load is highly dynamic, and that is the primary metric that makes load balancing very challenging in such scenarios.

The reason this is an issue is because some devices such as wearables may have only a 1X load profile (Bluetooth), while other devices (phablets) may have a 10X or higher load profile (Bluetooth, Wi-Fi, voice, Web, media, etc.). It is relatively easy to allocate the wearable with the Bluetooth profile, but the phablet, which may use any or multiple profiles simultaneously, is much harder to allocate.
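
To make the arithmetic concrete, here is a minimal Python sketch that allocates bandwidth by load-profile weight. The weights and class names are invented, echoing the 1X wearable versus 10X phablet example above:

```python
# Sketch: a few broad load-profile classes instead of per-device algorithms.
# The weights are hypothetical, echoing the 1X Bluetooth wearable
# versus the 10X multi-radio phablet.

PROFILE_WEIGHT = {"wearable": 1, "phone": 4, "laptop": 8, "phablet": 10}
TOTAL_BANDWIDTH = 1000.0  # arbitrary units

def allocate(devices):
    """Split bandwidth proportionally to each device's class weight."""
    total = sum(PROFILE_WEIGHT.get(d, 1) for d in devices)
    return {f"{d}-{i}": TOTAL_BANDWIDTH * PROFILE_WEIGHT.get(d, 1) / total
            for i, d in enumerate(devices)}

# Allocation is simply recomputed as devices drift in and out:
print(allocate(["wearable", "phone"]))             # wearable gets 200 units
print(allocate(["wearable", "phone", "phablet"]))  # wearable drops to ~67
```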

The challenge is in developing algorithms that can be used across a wide range of platforms and variable-load-profile devices on the same network. Does the network designer develop for the worst-case scenario and write dozens or hundreds of algorithms so that each device has optimal bandwidth? Is that even practical from a network resource perspective? Or is it more practical, and better for resource conservation, to develop fewer, broader algorithms so the “bell curve” of devices gets reasonable bandwidth, while devices at the edges of the curve have performance scaled back?

The answer to that question isn’t so simple, because it affects all parts of the supply chain.

“There are a bunch of different kinds of communications protocols because you can’t hard-code with 802.11,” said Dennis Crespo, product marketing director for Tensilica’s imaging and vision division at Cadence. “There are 20 to 30 radio types, so you have to do more in the programmable domain. Our structure in the vision arena has a similar problem as networking. It has to be general-purpose enough to support known DSP types, and then you have to add instructions onto the base instruction set to get more flexibility. You’re seeing this with the emergence of software on the DSP. It’s a crossover move away from hardware to a more flexible DSP.”

On top of that, there isn’t a consensus on how to do this, which means that at least for the time being, there is no standard approach or common platform for developing algorithms. That has made it especially difficult to develop chips that aren’t quickly obsolete. Frankwell Lin, president and co-founder of Andes Technology, said companies are designing extra capabilities into chips so that if a new communication protocol is required, such as a ZigBee connection, it’s already there. “So a company can use the same chipset for an electronic wallet as an electronic shelf label,” he said. “You have to try various scenarios.”

That means not only more connectivity choices, but architectural changes, as well. Steven Woo, vice president of solutions marketing at Rambus, said fundamental changes are required in the way computing is done. “As the world’s digital data continues to grow, data transport must be minimized in order to improve performance and power efficiency,” said Woo. “Flexible hardware acceleration is also likely to play a major role in achieving this goal.”

How those devices connect can cause another set of issues. Load balancing is affected by the rate and the number of objects that join and leave the network, not just by what objects are on it at any given time. It is easy to see how much more complex it would be if a large number of high-load-profile devices were coming and going quickly, versus the same number of low-load-profile devices.

At the edge
“One of the big problems at the edge is the diversity of devices entering and leaving. There may be anything from a smart watch to a laptop that wants access to the network, so it becomes very complicated because of the different platforms and objectives of the devices,” says Yanamadala.

Signal strength is always weakest at the edge. Consequently, bandwidth is reduced. And, more often than not, the edge is a moving target, so there is a dynamic that can profoundly affect the devices working there.

Interference of various types (atmospheric, environmental, electromagnetic/radio frequency), which would have little or no effect inside the strong envelope of the network, can cause the edge to shift dramatically. Some devices can work with degraded signals, while others will lose connectivity altogether or see reduced performance. This is where the algorithms earn their money.

Even if the edge is solid, if there is a lot of activity there, such as many devices coming and going, there can be conflicts. Typically, when a device enters the network at the edge, it needs to be provisioned according to its parameters. “Device ‘A’ might have access to only certain platforms within the network and it should be kept away from other resources of the network,” notes Yanamadala. It is kind of like a repairman showing up at your door. If it is an electrical problem, he will be granted certain privileges, such as access to electrical circuits and power boxes, but he will not be allowed to do plumbing work.

If the device isn’t provisioned properly, it becomes what is termed a rogue device. And that happens a lot, because the edge is so dynamic. All of a sudden the electrician can mess with your plumbing, air conditioning, security system, or any of the other subsystems within your house. The same can occur with rogue devices on a network, where they can become a serious security risk as well as place a strain on the load-balancing algorithms.
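
A minimal sketch of that provisioning model, with invented device and resource names, might look like the following. A device that was never provisioned, or that reaches outside its provisioned scope (the electrician attempting plumbing), gets flagged as rogue:

```python
# Sketch: scope-based provisioning at the edge, with rogue detection.
# Device names, resource names, and scopes are all invented.

PROVISIONED_SCOPES = {
    "hvac-sensor-17": {"telemetry", "time-sync"},
    "vendor-laptop-3": {"guest-wifi"},
}

def request(device, resource, log):
    """Grant access only within the device's provisioned scope."""
    scope = PROVISIONED_SCOPES.get(device)
    if scope is None:
        log.append(f"ROGUE: {device} was never provisioned")
        return False
    if resource not in scope:
        log.append(f"ROGUE: {device} reached outside its scope: {resource}")
        return False
    return True

events = []
request("hvac-sensor-17", "telemetry", events)   # allowed: electrical work
request("hvac-sensor-17", "payroll-db", events)  # denied: plumbing work
request("unknown-cam-9", "telemetry", events)    # denied: never provisioned
print(events)
```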

Dynamic network security
As with any other type of network, dynamic networks need to be secured. There are some unique challenges with dynamic networks that aren’t typical of other types of networks.

Today, most network security approaches revolve around firewalls and directory services, such as Active Directory and its rules. These are generally broad group-level permissions that protect the inside perimeter of the network from outside attack vectors. This works well as long as the perimeter is known and access points are controlled.

Dynamic networks, on the other hand, do not have a fixed perimeter, per se, because the idea is that any object can enter the network, and its access to network resources is determined by its privilege level and credentials. So the old assumption that if it is on the network it can be trusted is now false. “Probably the most significant challenge involves identity and trust,” says Yanamadala. With dynamic networks, each object has to have specific rights to only the resources within the network it needs, rather than open access to all. “How you determine the identity of devices entering the network and consuming resources becomes a very challenging design to put into place,” he adds.


It is kind of like, in years past, if you had a key card to enter the building you could go just about anywhere you wanted. The new model opens the front door to anyone, but the profile they have determines which rooms they can go into and what they can work with.

Dynamic network security models assess each object based on a number of variables, such as the role of the object, what it is, where it is, even how long it is supposed to exist on the network and what its objective is. This is a tough challenge for dynamic security algorithms. Not only do they have to determine whether access is to be granted, but also what type and level. And this is even tougher at the edge, where objects are constantly entering and exiting. This also moves the attack vector. Because there is no hard perimeter, attacks can come from the inside, so a whole new set of rules must be applied.
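
A toy version of such an assessment (every attribute name and policy rule here is hypothetical) might weigh several variables at once and return a level of access rather than a simple yes or no:

```python
# Toy access decision weighing several object attributes at once.
# Attributes and policy rules are hypothetical, for illustration only.

def decide(obj):
    """Return an access level for an object, or None to deny outright."""
    if obj["lifetime_minutes"] > 480:      # long-lived objects need vetting
        return None
    if obj["role"] == "sensor" and obj["location"] == "edge":
        return "telemetry-only"            # narrow grant for edge sensors
    if obj["role"] == "staff-device":
        return "standard"
    return "guest"                         # everything else gets the minimum

visitor_watch = {"role": "wearable", "location": "edge",
                 "lifetime_minutes": 90, "objective": "notifications"}
print(decide(visitor_watch))  # 'guest': granted, but type and level are scoped
```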

There is a new security model that addresses this. It removes much of the emphasis from the network and places it on the network-to-object relationship. Each object sets up a unique, one-to-one relationship among the network, the object, and the resources the object is allowed to access, along with the specific conditions under which it may access those resources. This creates a unique “on-the-fly” digital identity. And as the object gains more and more exposure within the network, these systems build dynamic lists of the accessibility and functions these objects are allowed to use. Most importantly, the system makes the rest of the network completely invisible to the user.
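
A minimal sketch of that model, with illustrative names, might maintain a per-object allow-list that grows as the object earns exposure, while everything off the list stays invisible rather than merely denied:

```python
# Sketch: an "on-the-fly" digital identity whose allow-list grows with
# exposure; everything not on the list is invisible (default deny).
# All identifiers are illustrative.

class ObjectIdentity:
    def __init__(self, object_id):
        self.object_id = object_id
        self.allowed = set()      # resources earned so far

    def earn(self, resource):
        """Extend the one-to-one relationship as trust accumulates."""
        self.allowed.add(resource)

    def visible_resources(self, all_resources):
        """The rest of the network simply does not exist for this object."""
        return [r for r in all_resources if r in self.allowed]

ident = ObjectIdentity("badge-reader-2")
ident.earn("door-controller")
print(ident.visible_resources(["door-controller", "hr-db", "cameras"]))
# ['door-controller'] -- hr-db and cameras are invisible, not just denied
```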

Within all of these new challenges still lie some of the old issues, such as encryption of sensitive data and other cryptographic challenges. In many cases the network is dealing with large volumes of data. In some venues, sensitive data such as credit card and personal information is being exchanged at very high speed. If the load-balancing algorithms cannot effectively manage the network, then security holes can be introduced as data is held or routed through weak or insecure channels, because the load-balancing algorithms do not recognize that the data should be secured.
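
As a rough sketch of what sensitivity-aware load balancing could look like (the channel names and attributes below are invented), the balancer would constrain sensitive flows to secure channels before optimizing for load:

```python
# Sketch: load balancing that is aware of data sensitivity, so sensitive
# flows are never routed through insecure channels, even under load.
# Channel names, loads, and the 'secure' flag are hypothetical.

CHANNELS = [
    {"name": "fiber-1", "secure": True,  "load": 0.9},
    {"name": "wifi-2",  "secure": False, "load": 0.2},
]

def route(flow_is_sensitive):
    """Pick the least-loaded channel that meets the security constraint."""
    candidates = [c for c in CHANNELS
                  if c["secure"] or not flow_is_sensitive]
    return min(candidates, key=lambda c: c["load"])["name"]

print(route(flow_is_sensitive=True))   # fiber-1, despite the higher load
print(route(flow_is_sensitive=False))  # wifi-2, the lighter channel
```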

Finally, dynamic networks are challenged by factors such as the sheer volume of data, the specific security requirements of such a diverse set of data, and the security provisioning of the objects using that data. Together, these create a very high overhead factor that the network has to deal with.

A big problem is that all of these issues are interconnected: load balancing, edge dynamics, shifting network volumes, and security. “None of these can be viewed separately from the others,” says Yanamadala. “It is not a vertical tree, it is a horizontal one, in terms of the challenges of these networks.”

Missive
Dynamic networks are just beginning to come onto the scene. There will be growing pains and initial failures over the next few years as these networks roll out. There also will be a learning curve. Some of what works with static networks will work with dynamic networks, as well. But the big challenges will be with real-time metrics. There can be few golden rules. Assessments and actions will have to occur on the fly in almost all instances. That means the network will be in a constant state of flux.

The ground is fertile for the next generation of networks to spring up. In fact, with the IoE on the horizon, dynamic networks will have to be the backbone if the IoE is going to be all it can be. But taming these networks to be reliable, accurate and secure will require raising the network bar at least one order of magnitude and redefining the “intelligence” of the intelligent network.


