by Dave Duggal  |  SDN & NFV Management (ZOOM)  |  December 9, 2014

Network functions virtualization (NFV) represents an opportunity for communications service providers to become software-defined businesses and to use DevOps to realize increased automation, service velocity and business agility. It is a business-driven initiative intended to enable carriers to reduce the time, cost and risk of bringing new products to market in support of continuous innovation and the generation of new revenue streams.

The benefits of NFV are derived from liberating network functions from physical network devices, enabling them to be dynamically and flexibly composed as software elements on any commercial off-the-shelf (COTS) hardware. Examples include firewalls, routers, switches, load-balancers, WAN optimizers, IMS, EPC and DPI. The separation of concerns also creates a new marketplace for software companies to offer network functions, encouraging innovation and healthy price competition.

Fantastic, right? OK, but now that network functions have been separated from devices, how are network services managed, while ensuring security, resilience, performance and scalability of these virtualized functions?

To date, the go-to answer has been to say, “Orchestration,” and then rapidly move on to another topic. However, if you ask five people what orchestration means, you are likely to get seven answers. In the context of software, orchestration at a high level is simply a process of connecting and coordinating things. Fair enough, but in the NFV context, orchestration has become an increasingly overloaded and misused term that needs to be examined.

Before we do that, let’s consider some of the implementation requirements for achieving NFV to better understand the problem space before attempting to describe the solution.


Implementation requirements

By separating concerns, network functions become loosely-coupled software applications with their own constraints and policies. This facilitates their composition into network services, which must also reference industry protocols, device controllers and management policies using a variety of evolving standard interfaces from a diverse set of standards-development organizations (for example, ETSI NFV, TM Forum, OASIS (TOSCA), IETF, MEF, and ONF).
The virtual network functions (VNFs) need to be distributed over the network and instantiated on infrastructure (hypervisor, container, bare metal). They will need to be remotely monitored, scaled, potentially moved and ultimately torn down to support all of the events in the full lifecycle of a network service.
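The lifecycle described above — instantiate, run, scale, tear down — can be sketched as a simple state machine. This is a minimal illustration, not any standard's model; the state names and allowed transitions are assumptions for the example.

```python
from enum import Enum, auto

class VnfState(Enum):
    """Simplified lifecycle states for a VNF instance (illustrative only)."""
    ONBOARDED = auto()
    INSTANTIATED = auto()
    RUNNING = auto()
    SCALING = auto()
    TERMINATED = auto()

# Allowed transitions covering the lifecycle events mentioned above:
# instantiation, running/monitoring, scaling, and teardown.
TRANSITIONS = {
    VnfState.ONBOARDED: {VnfState.INSTANTIATED},
    VnfState.INSTANTIATED: {VnfState.RUNNING, VnfState.TERMINATED},
    VnfState.RUNNING: {VnfState.SCALING, VnfState.TERMINATED},
    VnfState.SCALING: {VnfState.RUNNING, VnfState.TERMINATED},
    VnfState.TERMINATED: set(),
}

class VnfInstance:
    """Tracks one VNF instance's state; rejects illegal transitions."""
    def __init__(self, name: str):
        self.name = name
        self.state = VnfState.ONBOARDED

    def transition(self, target: VnfState) -> None:
        if target not in TRANSITIONS[self.state]:
            raise ValueError(
                f"{self.name}: illegal transition "
                f"{self.state.name} -> {target.name}")
        self.state = target
```

A real management system would attach actions (deploy to hypervisor, open monitoring channels, release resources) to each transition; the point here is only that every lifecycle event must be tracked and validated per instance.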
The above is complicated, but could probably be handled by conventional orchestration if there were no other requirements. Unfortunately, there are.
Network operations are highly regulated and mission-critical. They need to be resilient to change; however, network services will be compositions with many elements from multiple vendors, standards and open source projects, all of which are evolving. An NFV implementation will need to support flexible interoperability, allowing elements to be updated, upgraded or replaced without disrupting the network services those elements support.

To maintain quality of service (QoS) and quality of experience (QoE) for the service lifecycle, the network service must have visibility to the state of network resources so it can observe and react to changes in traffic and faults. This requires information be aggregated and translated into a common abstraction so management applications can respond to events with any necessary re-configuration of the infrastructure.
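The observe-and-react loop above can be sketched as a monitor that normalizes resource metrics into a common abstraction and fires reconfiguration callbacks when a threshold is breached. The metric names, thresholds and actions below are hypothetical, chosen only to make the pattern concrete.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Metric:
    """A vendor-neutral (common-abstraction) view of one resource metric."""
    resource: str   # e.g. a virtual link or VNF instance id (hypothetical)
    name: str       # e.g. "packet_loss_pct"
    value: float

class Monitor:
    """Aggregates metrics and invokes reconfiguration on threshold breach."""
    def __init__(self):
        self._rules: list[tuple[str, float, Callable[[Metric], None]]] = []

    def on_breach(self, metric_name: str, threshold: float,
                  reconfigure: Callable[[Metric], None]) -> None:
        self._rules.append((metric_name, threshold, reconfigure))

    def observe(self, metric: Metric) -> None:
        for name, threshold, reconfigure in self._rules:
            if metric.name == name and metric.value > threshold:
                reconfigure(metric)  # e.g. reroute traffic, scale out a VNF

actions = []
monitor = Monitor()
monitor.on_breach("packet_loss_pct", 1.0,
                  lambda m: actions.append(f"reroute around {m.resource}"))
monitor.observe(Metric("link-7", "packet_loss_pct", 0.2))  # below threshold
monitor.observe(Metric("link-7", "packet_loss_pct", 3.5))  # breach: reroute fires
```

The design choice that matters is the translation step: management applications subscribe to the common `Metric` abstraction rather than to each vendor's native telemetry format.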

Beyond these functional concerns, network services must also provide for non-functional concerns (for example, security/identity, business compliance, IT governance and system controls).

Lastly, there are general system design characteristics that have to be part of any real-world NFV implementation. For example, any system supporting NFV must be high-performance, high-availability, elastically-scalable, and easy to deploy and maintain.
Here’s the problem: Orchestration as currently defined can’t deliver this.


Orchestration’s limitations

The concept of orchestration comes from service-oriented architecture (SOA), where it refers to a method of integrating composite applications from web services, and now generally includes RESTful application program interfaces (APIs). Large businesses often use an enterprise service bus (ESB) to perform this function. OpenStack delivers a related capability through its Heat orchestration project and the HOT (Heat Orchestration Template) format. As the name OpenStack suggests, it is based on a stack, in which each project is conceptually a middleware component.

Conventional orchestration, as an implementation technology approach, has material limitations that make it a poor candidate for NFV:

  • Conventional orchestration distributes a problem over a network of disparate middleware components in a static, linear, stateless manner;
  • The application middleware stack introduces cost, complexity and latency, which compound with scale;
  • Related network services are only as scalable and resilient as their weakest component;
  • There is no mechanism for consistently implementing policies (security, business compliance, IT governance, system controls);
  • It is not suited for dynamic optimization that concurrently satisfies multiple constraints, such as contemplated by SDN and NFV.
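The first limitation above — a static, linear, stateless chain — can be made concrete with a deliberately simplified sketch. The step names and values are invented for illustration; the point is the shape of the workflow, not any real orchestrator's API.

```python
# Conventional orchestration as a fixed, linear chain of middleware steps:
# each step must succeed in order, no state survives between runs, and the
# service is only as reliable as the weakest step in the chain.

def allocate_compute(ctx: dict) -> dict:
    ctx["vm"] = "vm-01"          # hypothetical compute allocation
    return ctx

def configure_network(ctx: dict) -> dict:
    ctx["vlan"] = 100            # hypothetical network configuration
    return ctx

def start_vnf(ctx: dict) -> dict:
    ctx["vnf"] = f"firewall on {ctx['vm']}, vlan {ctx['vlan']}"
    return ctx

# The workflow is a static list: reordering steps, retrying one step, or
# optimizing across steps means changing the orchestrator itself.
PIPELINE = [allocate_compute, configure_network, start_vnf]

def orchestrate(ctx: dict) -> dict:
    for step in PIPELINE:
        ctx = step(ctx)          # any failure here aborts the whole chain
    return ctx
```

Nothing in this pattern can weigh multiple constraints concurrently or adapt the sequence at runtime — which is exactly the dynamic optimization that SDN and NFV contemplate.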

Orchestration, by itself, is simply not enough. If the industry is serious about NFV, it’s going to have to set its sights above interfaces to consider implementation technologies. Network operators need to take an architectural view, from the application layer down, to ensure network services can support NFV in practice, not just in theory.

Our TM Forum Catalyst, Preparing NFV for primetime, is addressing these concerns. It demonstrates how a dynamic platform based on an information fabric provides the high-level abstraction necessary for policy-controlled infrastructure and ‘network-aware’ services.

CloudNFV is a working model of ‘zero-touch orchestration, operations and management’ that is aligned with and informed by the activity of TM Forum’s ZOOM project. It provides a vehicle for dynamic implementations of the Frameworx suite of standards-based tools and best practices.


Written by Dave Duggal – Founder and Managing Director, EnterpriseWeb
