https://www.linkedin.com/pulse/vnf-interoperability-challenges-dave-duggal/
Dave Duggal, Founder / Managing Director at EnterpriseWeb LLC
Published on September 4, 2018
This recent article on VNF interoperability really encapsulates the current state of NFV initiatives.
Six years into NFV and there is a glaring reveal – interoperability and change management are hard! Of course, these problems have always been hard, so why did Telecom punt on them rather than address them upfront?
As I’ve noted a few times, if you are planning a Moonshot, you have to address the hardest problems first. The logic is nicely captured by Astro Teller of GoogleX in this WSJ interview.
Unfortunately, the industry embarked on a critical transformation journey with a bottom-up approach, starting with things it already knew: vendors re-purposed existing orchestration products, which led to bolt-on architectures. In other words, they used the same old methods and expected new results…
As the article relates, the industry has failed to deliver a key element of the NFV vision – multi-vendor VNF marketplaces that foster innovation and competition.
Most of the industry has turned to static templates – essentially a conformance-led strategy, jamming square pegs into round holes.
This is a big problem, as it means VNF vendors must custom-code to a version of an operator’s template. This is expensive for the VNF vendors to build and maintain, and they end up with lots of custom deployments. Of course, if the template changes because of any underlying change in the environment, interoperability is broken. Some operators have taken the public position of ‘so what’ – let the vendor deal with it – but that is short-sighted, as it impacts marketplace participation and change management.
Moreover, the static template approach sets up a code-first development paradigm.
- Create a static template representing a fixed environment (versions of VNFMs, NFVOs, VIMs, Cloud Hosts) in a static schema (1st order tight-coupling)
- Custom-code VNFs to templates (2nd order tight-coupling)
- Manually-integrate VNFs into Services (3rd order tight-coupling)
- Write events and handlers (generally in Kafka) with little re-use, resulting in an explosion of artifacts (4th order tight-coupling)
- Write scripts for configurations (Ansible, Python, etc.) with little re-use, resulting in an explosion of artifacts (5th order tight-coupling)
Yikes! If anything in that chain breaks or gets updated, the service fails with no visibility. All that tight-coupling makes for an opaque, brittle solution.
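To make the coupling concrete, here is a toy Python sketch of the first two links in that chain. The class names, fields and version strings are my own illustrations, not any operator’s template or standard schema – the point is simply how a pinned environment plus custom-coded VNFs produces the brittle failure mode described above.

```python
# Illustrative only: a hypothetical "static template" pinned to one exact environment.
from dataclasses import dataclass


@dataclass(frozen=True)
class StaticTemplate:
    """1st-order coupling: the template bakes in exact component versions."""
    vnfm_version: str
    nfvo_version: str
    vim_version: str
    cloud_host: str


class FirewallVNF:
    """2nd-order coupling: the VNF is custom-coded against one template version."""

    SUPPORTED = StaticTemplate(
        vnfm_version="2.1", nfvo_version="3.0", vim_version="pike", cloud_host="kvm"
    )

    def deploy(self, env: StaticTemplate) -> str:
        # Any drift in the environment (a VIM upgrade, a new VNFM release)
        # breaks the deployment; there is no model to reason about the change.
        if env != self.SUPPORTED:
            raise RuntimeError(f"template mismatch: expected {self.SUPPORTED}, got {env}")
        return "firewall deployed"


if __name__ == "__main__":
    current_env = StaticTemplate("2.1", "3.0", "queens", "kvm")  # the VIM was upgraded
    try:
        print(FirewallVNF().deploy(current_env))
    except RuntimeError as err:
        print(f"Service failed: {err}")
```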
A lot of vendors and open-source communities claim to offer “model-based” solutions. However, if any of the above bullets are true, it is likely just a collection of models with no overarching model. As per the article, they have not automated interoperability or change.
There is a better way. In software, you extend automation by raising abstractions over the things you want to manage. The higher-level abstraction hides the inherent complexity of handling heterogeneous VNF Packages and composing arbitrary Network Services that run over diverse environments.
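Here’s a minimal sketch of what that can look like, again in Python and again with hypothetical names (the VnfPackage abstraction and the adapters are my own stand-ins, not EnterpriseWeb’s API or any standard’s): services compose against a common abstraction, and package- and environment-level differences are absorbed by adapters instead of leaking into every service definition.

```python
# Illustrative only: raising a common abstraction over heterogeneous VNF packages.
from abc import ABC, abstractmethod


class VnfPackage(ABC):
    """Higher-level abstraction that every concrete package format is adapted to."""

    @abstractmethod
    def instantiate(self, target_env: str) -> str: ...


class ToscaPackageAdapter(VnfPackage):
    """Adapts a TOSCA-described package to the common abstraction."""

    def __init__(self, name: str):
        self.name = name

    def instantiate(self, target_env: str) -> str:
        return f"{self.name} (TOSCA) instantiated on {target_env}"


class HeatPackageAdapter(VnfPackage):
    """Adapts a Heat-described package to the same abstraction."""

    def __init__(self, name: str):
        self.name = name

    def instantiate(self, target_env: str) -> str:
        return f"{self.name} (Heat) instantiated on {target_env}"


def compose_service(vnfs: list[VnfPackage], target_env: str) -> list[str]:
    """Network Services compose against the abstraction, not concrete templates."""
    return [vnf.instantiate(target_env) for vnf in vnfs]


if __name__ == "__main__":
    service = [ToscaPackageAdapter("vFirewall"), HeatPackageAdapter("vRouter")]
    for line in compose_service(service, "openstack-queens"):
        print(line)
```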
If you haven’t read it yet, I highly recommend you download the free Analysys Mason report by Caroline Chappell – Flexible, Extensible, Adaptable: Towards a Universal Template for VNF Onboarding and Lifecycle Management.
Here’s a link to my presentation at Layer 123 last year.