DevOps automation: The curse of tool-chain pain

By Dave Duggal on January 30, 2017 

Originally published on TM Forum Inform

The title might sound a bit like a Hardy Boys or Tintin adventure story, but anyone who has implemented DevOps automation has suffered from ‘tool-chain pain’.

The symptoms are imperative software development, manual integration and tight coupling, which leave solutions static and siloed. The cure – spoiler alert – is dynamic, declarative, model-driven solutions as promoted in a series of award-winning Catalysts, such as this one: Digital service marketplace: Towards same-day onboarding.

Some readers may find the following ‘too technical’, but that is exactly the problem I want to shine a spotlight on. It’s 2017 and though the world has changed dramatically, applications are still being coded in fundamentally the same way as they have been for the last 20-30 years. There has been an explosion of new technologies and the pace of innovation is accelerating, but IT can’t cope with the increased demand for interoperability. Manually coding and integrating solutions does not scale. Achieving carrier virtualization and making the transition to become a digital business requires a new approach.

To realize a solution, we first have to understand the problem, so it’s worth objectively examining the root causes of tool-chain pain. By laying out a clear case, I hope to foster healthy debate and together advance the conversation within TM Forum so the industry can progress.

While admittedly generalized, the descriptions below are consistent with how our customers and partners describe their status quo. Please note: this is one of a series of related posts: Orchestration is not enough (December 2014); Dynamic APIs (September 2015); Composable telecom (February 2016); The power of platform (May 2016); and Breaking through the cloud (November 2016).

The state of the industry

Advances in technology like the web, cloud, and now the Internet of Things have been transforming the environment for modern organizations. A radical reduction in the cost and latency of global collaboration is enabling new business models (for example, Netflix, Uber, Airbnb, Skype) and disrupting incumbents. All organizations want to be agile so they can seize opportunities, respond to threats and generally self-improve. Organizations that cannot adapt to this increasingly distributed and dynamic environment risk displacement.

A challenge for many organizations is that their existing information, communication and automation systems, in and of themselves, do not support this new mode of operations. While there are innumerable software languages and technologies that satisfy a wide variety of requirements, a significant challenge is how to flexibly and dynamically connect people, information, systems and devices for a new class of ‘smart’ processes. Applications are already rapidly evolving from centrally defined and controlled ‘monoliths’ to loosely-coupled services, APIs and microservices intended to be shared across silos, organizational boundaries, domains, protocols and technologies.

Problem #1: Who moved my cheese?

The prevailing application development practice is to build software solutions on top of discrete software components (i.e. middleware) that must be deployed and pre-integrated before application development can begin. The elements of the business solution are then integrated into the resulting stack, or tool-chain. This practice results in siloed, static and brittle solutions: “a big ball of CRUD”.

Wait! What happened to our loose coupling?

Note: The primary justification for huge investments in service-oriented architecture is that it promotes loose coupling, enabling independent scaling and re-use in a shift away from monolithic architecture. ‘Providers’ achieve that loose coupling by exposing their services, APIs and microservices, but there is no corresponding composition capability at the application layer. Instead, ‘consumers’ of these endpoints manually integrate them into their solutions. Modern architectures didn’t solve the problem; they simply kicked it to a higher level – yikes!
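To make that consumer-side gap concrete, below is a minimal sketch of the hand-coded integration the note describes. The services, URLs and payload fields are hypothetical; the point is that every sequencing and mapping decision is baked into imperative glue code rather than derived from a shared model.

```python
# A minimal sketch of consumer-side integration (hypothetical endpoints).
# Every URL, field name and ordering decision below is hand-coded.
import requests  # assumes the third-party 'requests' library is installed

CRM_URL = "https://crm.example.com/api/v2/customers"         # hard-wired endpoint
BILLING_URL = "https://billing.example.com/api/v1/invoices"  # different team, different version

def onboard_customer(name: str, plan: str) -> dict:
    # Step 1: call the CRM service and hand-translate its response format.
    crm_resp = requests.post(CRM_URL, json={"fullName": name}, timeout=5)
    crm_resp.raise_for_status()
    customer_id = crm_resp.json()["id"]

    # Step 2: call the billing service, again hand-mapping field names.
    bill_resp = requests.post(
        BILLING_URL,
        json={"customer_ref": customer_id, "plan_code": plan},
        timeout=5,
    )
    bill_resp.raise_for_status()
    return {"customer_id": customer_id, "invoice": bill_resp.json()}
```

Each provider is loosely coupled in isolation, but the consumer re-introduces tight coupling by hand: change either endpoint and this code must be found, edited and re-tested.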

Problem #2: Deaf, dumb and blind process

The components (for example, enterprise service buses, business process management suites, service orchestrators, configuration tools, API gateways, monitoring tools, etc.) provide functionality based on a linear series of message-passing: application state is communicated between component processes, with the output of one component becoming the input to the next, in an ordered series of operations (i.e. orchestration) that realizes the targeted application behavior. The orchestration is generally based on the Business Process Execution Language (BPEL) or degenerate forms like OpenStack HEAT. (Note: OpenStack teams considered BPEL before setting out to develop the domain-specific HEAT.)
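As an illustration, here is a minimal sketch of that linear pattern (the step names and message shape are hypothetical): each step sees only the message handed to it by the previous step, and the order of operations is fixed up front.

```python
# A minimal sketch of linear, message-passing orchestration.
# Each step sees only its input message, never overall application state.
def validate(order: dict) -> dict:
    order["valid"] = bool(order.get("items"))
    return order

def price(order: dict) -> dict:
    order["total"] = sum(i["qty"] * i["unit_price"] for i in order.get("items", []))
    return order

def fulfill(order: dict) -> dict:
    order["status"] = "shipped" if order["valid"] else "rejected"
    return order

PIPELINE = [validate, price, fulfill]  # the ordered series is fixed a priori

def orchestrate(order: dict) -> dict:
    message = order
    for step in PIPELINE:  # the output of one step is the input to the next
        message = step(message)
    return message

print(orchestrate({"items": [{"qty": 2, "unit_price": 10.0}]}))
```

There is no place in that loop to evaluate application state as a whole or to vary behavior per interaction; the only recourse is to bolt on another step, which is exactly the middleware workaround described below.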

The linear message-passing does not lend itself to a broad, real-time evaluation of application state, making it difficult from a processing perspective to dynamically customize application behavior per interaction, as when personalizing a user experience or optimizing a transaction. This approach to orchestration implicitly throttles solution development, constrains compute- and input-output-intensive applications, and makes consistent governance expensive. The workaround is to integrate yet more middleware (e.g. event processing, big data analytics, policy management, etc.) for more capabilities, compounding the complexity of the solution architecture.

Problem #3: Losing the thread

Components generally run as long-running synchronous processes. Since the application state related to executing functions is distributed across components, the failure of any component process, or executing thread, can lose that state, leading to a failed transaction with no direct compensation.
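To illustrate, here is a minimal sketch using hypothetical in-memory ‘components’: state accumulated mid-transaction lives only in the executing process, so when one step fails there is nothing to compensate the steps that already ran.

```python
# A minimal sketch of lost transaction state (hypothetical components).
accounts = {"src": 100, "dst": 0}

def debit(account: str, amount: int) -> None:
    accounts[account] -= amount  # component A commits its local state change

def credit(account: str, amount: int) -> None:
    raise RuntimeError("component B crashed")  # simulate a failed component process

def transfer(amount: int) -> None:
    debit("src", amount)   # succeeds
    credit("dst", amount)  # fails: the debit above is never rolled back

try:
    transfer(25)
except RuntimeError:
    pass  # the caller sees a failure but holds no compensating logic

print(accounts)  # {'src': 75, 'dst': 0} -- the transaction failed half-applied
```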

Problem #4: Don’t go changin’

All connections between applications and components, between components themselves, and between components and their underlying infrastructure (i.e. compute, storage and network resources) are established a priori, creating a tangle of fixed dependencies (i.e. ‘tight coupling’). Any change to the solution architecture may therefore cause applications to miss their desired target behavior or crash altogether. However, the elements of a business solution (for example, service, API and microservice endpoints) are likely to be updated regularly, and so are the middleware components. It is time-consuming and expensive to constantly code and re-code, integrate and re-integrate the tool-chain and the application.
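As a small, hypothetical example of how a routine provider update breaks an a-priori binding:

```python
# A consumer coded against v1 of a hypothetical billing API.
def parse_invoice_v1(payload: dict) -> float:
    return payload["amount_due"]  # the field name is fixed a priori

# The provider ships v2 with a renamed field; nothing in the consumer adapts.
v2_payload = {"amountDue": 42.0}
print(parse_invoice_v1(v2_payload))  # raises KeyError: 'amount_due'
```

One renamed field breaks the application; multiplied across every endpoint and middleware component in a stack, deferring updates becomes the rational response.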

Agile DevOps processes that support continuous integration and continuous delivery streamline IT delivery of updates, but don’t address the underlying problem. The net result is that in large organizations updates to solution stacks and applications are often deferred as long as possible because of the high risk and cost.

The bottom line

Conventional code-first application development practices have become a rate-limiter on emerging business requirements, business agility writ large, and the transformation of modern companies into digital businesses.

Without solving interoperability challenges, tool-chains are odd lots of tools, engines, open-source libraries and run-times tightly coupled together to form a Frankenstein solution bound to specific technologies, versions and distributions – for every single stack deployment. Ugh! Managing this has become the central IT problem. It is the reason that the vast majority of IT budgets goes to maintenance rather than innovation. Tool-chain pain is accidental complexity.

The industry cannot afford to be myopic. We need to re-think architecture from the needs of the business and its applications down. Components are simply other services providing middleware capabilities, which should be dynamically composed with application elements to realize smart data-driven services. This requires new abstractions and enabling technologies to realize the elusive promise of service orientation.
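To indicate the direction of travel, here is a minimal sketch of dynamic, model-driven composition. The model format, registry and step names are hypothetical illustrations of late binding, not any vendor’s actual implementation: the application declares its intent, and concrete implementations are resolved at runtime.

```python
# A minimal sketch of declarative, model-driven composition (hypothetical).
SERVICE_MODEL = {
    "onboard_customer": {
        "steps": ["create_record", "issue_invoice"],  # declared intent, not code
    }
}

# Capabilities registered by providers and resolved late, at execution time.
REGISTRY = {
    "create_record": lambda ctx: {**ctx, "customer_id": 1},
    "issue_invoice": lambda ctx: {**ctx, "invoice": "INV-001"},
}

def compose(service_name: str, context: dict) -> dict:
    """Interpret the model at runtime; re-bind a step in the registry and
    the application code itself never changes."""
    for step in SERVICE_MODEL[service_name]["steps"]:
        context = REGISTRY[step](context)
    return context

print(compose("onboard_customer", {"name": "Ada"}))
```

Because the bindings live in the model and the registry rather than in application code, swapping a provider or adding a step is a data change, not a re-coding and re-integration exercise.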

EnterpriseWeb has made contributions to TM Forum towards this end. Our metamodel for a virtual function (i.e. a rich schema that maps ETSI NFV, OASIS TOSCA and TM Forum Open API concepts) and dynamic APIs (i.e. a common machine-readable pattern for API interoperability and evolution) are currently being reviewed by both the ZOOM and API teams. Please take a look and join the conversation. I welcome any feedback.
