By Dave Duggal, founder & CEO, EnterpriseWeb LLC
Originally Posted: April 10, 2019.
First, kudos to LeanNFV. They received a lot of media coverage last week. Their paper will prompt some healthy reflection and industry debate.
Their critique rings true enough, but it’s not exactly a news flash that NFV has largely fallen short of its goals and is in the dreaded “trough of disillusionment”. The dam started to break around the 5th anniversary of the ETSI NFV whitepaper, when industry progress started to be publicly questioned in articles, blogs and talks (TelecomTV interview at Layer123, Oct 2017).
Also, their prescription is, well… “lean”. There aren’t many details. According to the whitepaper, “Lean NFV is neither a monolithic open-source effort nor a highly prescriptive architectural blueprint: instead, Lean NFV is an open architecture that only specifies the minimal requirements needed for interoperability.” The high-level paper offers design guidance more than meaningful detail, but that doesn’t stop them from stating unequivocally that with it the “community can create a new NFV ecosystem that achieves the best of both worlds: easy deployment and rapid innovation”.
That’s a big claim. It deserves a hard look.
Key-Value databases, plug-ins, and centralization of NFV Management – is any of that novel in 2019? More to the point, can this alternate approach help the industry realize the strategic objectives of NFV, namely Service Velocity, DevOps automation, and Business Agility? No. Here’s why –
LeanNFV echoes Kubernetes, which promotes plug-ins at the infrastructure-layer. Anyone smart and honest knows that working with Kubernetes is hard (1, 2, 3) – you end up with tons of YAML files, Plug-ins and Scripts, which become a management problem (i.e. Accidental Complexity). It’s only “simple” because they are punting complexity to developers. Without a higher-level abstraction, it’s all just code-first, one-off integration. The approach results in a tightly-coupled tool-chain that is brittle and non-transparent.
As to the Key-Value database, without a rich model providing domain semantics and normalized metadata, it will be just a long, skinny table with a relatively useless index. Like a junk drawer, the more things you store in it, the harder it becomes to search and discover them – let alone query their state and automate their use in processes. This limits the value of centralized VNF and Service Management: on what basis do the elements seamlessly interoperate? The “minimal requirements of interoperability” are not detailed in the paper.
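The junk-drawer point can be made concrete with a toy sketch (all names and schemas here are hypothetical, invented for illustration – not LeanNFV’s actual design). A flat key-value store holds opaque blobs with no shared vocabulary, so even a simple discovery query means parsing everything and guessing at field names; a normalized model makes the same query trivial and reliable:

```python
import json
from dataclasses import dataclass

# Flat KV store: values are opaque blobs with no agreed schema.
# (Entries are invented for illustration.)
flat_store = {
    "vnf-001": '{"vendor": "Acme", "type": "firewall", "version": "2.1"}',
    "vnf-002": '{"vendor": "Beta", "kind": "fw", "ver": 2}',  # inconsistent field names
}

# Finding "all firewalls" means parsing every blob and guessing at vocabulary.
firewalls = [
    key for key, blob in flat_store.items()
    if json.loads(blob).get("type") == "firewall"
    or json.loads(blob).get("kind") == "fw"
]

# With a shared model carrying normalized metadata, the same query is direct.
@dataclass
class VNFDescriptor:
    name: str
    vendor: str
    function: str   # normalized vocabulary, e.g. "firewall"
    version: str

modeled_store = {
    "vnf-001": VNFDescriptor("vnf-001", "Acme", "firewall", "2.1"),
    "vnf-002": VNFDescriptor("vnf-002", "Beta", "firewall", "2.0"),
}

modeled_firewalls = [
    d.name for d in modeled_store.values() if d.function == "firewall"
]
```

The flat query only works because the ad-hoc field names happen to be known in advance; every new vendor convention adds another special case, which is exactly the accidental complexity the store was supposed to avoid.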
Also, consider the complexity and diversity of the solution elements. The “minimum requirements” will constrain the breadth of automation and control; this is axiomatic. Caveat Emptor: If you give up on models, you give up on being model-driven. If you give up on standard models, you also give up on industry interoperability.
Caveat Emptor: If you give up on models, you give up on being model-driven.
In a narrow domain, with few elements and infrequent change, plug-in architectures can work. However, the approach doesn’t scale with the complexity of the domain. Forget about the promised VNF marketplaces, innovation and competition. They might be “open” conceptually, but in practice plug-ins will drive vendor consolidation and de facto lock-in.
This is fairly well established amongst experienced Enterprise and Cloud Architects who have to connect end-to-end solutions with many elements that are all updating regularly, at different times, in rapidly evolving industries. Hey, wait – that sounds like the NFV domain! Why? Because NFV is less an infrastructure problem and more of an application-layer problem. NFV shouldn’t be re-inventing Cloud; it should be running on top of it, abstracting infrastructure complexity.
NFV is less an infrastructure problem and more of an application-layer problem
The LeanNFV paper notes that there is a problem with implementing and keeping up with all the standards necessary to support end-to-end Network and Service management. This is true, but by definition, it’s a multi-SDO problem.
How should NFV (ETSI) relate to the historically siloed areas: OSS/BSS (TMF), interconnects between carriers (MEF), cloud services (OASIS) and evolving technologies (3GPP, IETF)? This question has spawned discussions on converging to a common industry model. However, it’s not easy given the fierce independence of the existing SDOs and their different constituencies within Communication Service Providers. Initial NFV deployments have been hard-coded hair-balls with static templates and tightly-coupled middleware stacks. This path led to the bloated and brittle implementations (VNF Interoperability Challenges, Tool-Chain Pain, Orchestration is not enough for NFV).
The advent of NFV has revealed industry silos, which now stand in the way of fully connected and automated operations. This is a real industry challenge holding back transformation initiatives, but the LeanNFV solution is to ignore it.
The question to ask is: how do Communication Service Providers relate standards in a model-of-models (i.e. a Metamodel) that is flexible, extensible and adaptable, so that they can accommodate diverse solution elements, automate operations and cope with change? This modeling capability must be easier to implement and maintain than code, while opening the door to metadata-driven configuration and zero-touch operations.
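One way to picture a model-of-models is as a mapping layer that relates each SDO’s vocabulary to a shared set of normalized concepts, so records described in different standards can be compared and processed uniformly. The sketch below is purely illustrative – the concept names and field paths are invented, not the actual ETSI, TMF or MEF schemas:

```python
# Hypothetical metamodel: each normalized concept maps to the field path
# that expresses it in a given SDO's schema (paths invented for illustration).
METAMODEL = {
    "bandwidth_mbps": {
        "ETSI_NFV": "virtual_link.bitrate",
        "TMF":      "resourceCharacteristic.bandwidth",
        "MEF":      "evc.bandwidthProfile.cir",
    },
}

def lookup(record: dict, dotted_path: str):
    """Walk a nested dict by a dotted field path."""
    for part in dotted_path.split("."):
        record = record[part]
    return record

def normalize(record: dict, sdo: str) -> dict:
    """Project an SDO-specific record onto the normalized concepts."""
    return {
        concept: lookup(record, paths[sdo])
        for concept, paths in METAMODEL.items()
        if sdo in paths
    }

# Two records, each shaped by a different standard's schema:
etsi_record = {"virtual_link": {"bitrate": 1000}}
tmf_record = {"resourceCharacteristic": {"bandwidth": 1000}}

# Both project to the same normalized view, without hard-coded translations.
assert normalize(etsi_record, "ETSI_NFV") == normalize(tmf_record, "TMF")
```

The point of the sketch: adding a new standard means adding a mapping entry (metadata), not writing another one-off integration (code) – which is what makes the capability easier to maintain than hard-coded translation logic.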
We’ve been pointing this out since 2012 and the first ETSI NFV Proof-of-Concept, aptly named “CloudNFV”, which demonstrated a model-based, event-driven, policy-controlled solution. More recently, we presented the advanced capabilities in the first ETSI Zero-touch Network and Service Management (ZSM) PoC. That industry project included AWS, Amdocs, EXFO, Fortinet and Metaswitch. We demonstrated end-to-end automated SLA management in a multi-vendor, multi-domain, multi-VIM, multi-controller scenario. The no-code solution was deployed over three empty AWS zones in two days by one person and was completely configurable by a multi-SDO model.
The industry is at a fork in the road: do we reduce the problem domain for short-term convenience and give up on strategic objectives, or do we innovate – raising abstractions over the domain to hide real-world complexity and leveraging the contributions of industry standards bodies – to enable the transformation to digital service providers?