Open-Source NFV

Collaboration is helping make Network Functions Virtualization more efficient.


The OPNFV Summit in Beijing earlier this month brought together developers, end users, and other communities all working to advance open source Network Functions Virtualization (NFV). What’s new is an effort to make NFV more efficient.

A highlight of the event was the announcement of an exciting new platform for accelerating NFV software development, the “NFV PicoPod”. Developed in collaboration with Enea, Marvell, and PicoCluster, this is a complete, cost-effective environment that’s fully compliant with the OPNFV Pharos specification. And it all fits in a single one-cubic-foot package, compared to the 20 cubic feet required by a typical pod.

This new form factor dramatically improves the accessibility of a standard NFV test and development platform, so developers, for the first time, can try out OPNFV right on their own desktops.

What’s under the hood? The PicoPod consists of six Marvell MACCHIATObin developer boards powered by Systems-on-Chip (SoCs) based on the ARMv8-A architecture. The environment also includes a power supply and a Marvell Prestera DX Ethernet switch, and it runs the latest OPNFV Danube software release for ARM, integrated by Enea. This compact platform underscores ARM’s continuing commitment to empowering its developer ecosystem, and it was an absolute smash hit at the event. There’s no doubt that all the “cool kids” will soon have a PicoPod on their desk.
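For readers wondering how six boards line up with the Pharos specification, here is a minimal, purely illustrative Python sketch of one possible layout: a jump host plus five deployment nodes, a common Pharos split. The node names and role assignments are assumptions made for illustration, not details taken from the PicoPod or its documentation.

# Hypothetical sketch: one way the PicoPod's six MACCHIATObin boards could
# map onto Pharos-style roles. Names and role split are assumptions.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    role: str          # "jump", "controller", or "compute"
    arch: str = "aarch64"

# One jump host plus five deployment nodes (three control, two compute).
PICOPOD = (
    [Node("macchiatobin-0", "jump")]
    + [Node(f"macchiatobin-{i}", "controller") for i in (1, 2, 3)]
    + [Node(f"macchiatobin-{i}", "compute") for i in (4, 5)]
)

for node in PICOPOD:
    print(f"{node.name}: {node.role} ({node.arch})")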

“Our latest collaboration milestone utilizes the collective know-how of ARM, Enea, Marvell and PicoCluster, putting a data center on a desktop into the hands of NFV developers at an unmatched price point,” said Noel Hurley, vice president and general manager, Business Segments Group, ARM.

We also saw NXP speak about and demonstrate workload efficiency for edge applications with its ARM-based Layerscape product line, featuring offload of networking workloads. MontaVista demonstrated Service Function Chaining on Cavium ThunderX platforms, and the ARM engineering team showcased the efficiency of containerized VNFs in the OPNFV framework.

Leveraging the ecosystem
Networking requirements continue to soar, with a 100-times increase in bandwidth anticipated and up to a 50-times reduction in latency demanded by 2020. In many cases, network operators will need to deliver this from existing facilities with a fixed footprint and power envelope, so optimizing for efficiency will be crucial.

To meet these future market needs, the ARM ecosystem is uniquely positioned to deliver on what we call the Intelligent Flexible Cloud (IFC), bringing the right combination of compute, storage, and acceleration precisely where it’s required in the network. Our ecosystem will be delivering up to three times the CPU compute density, and up to ten times the workload acceleration with innovative offload capabilities that are more deterministic and power efficient to boot.

Network operators and equipment manufacturers are beginning to see the tremendous advantages and efficiencies in space, power, compute, and cost from the ARM ecosystem, contributing directly to their bottom line in this network transformation.


