Cloud, microservices, and data mess? Graph, ontology, and application fabric to the rescue.

Knowledge graphs are probably the best technology we have for data integration. But what about application integration? Knowledge graphs can help there, too, argues EnterpriseWeb.

Written by George Anadiotis, Contributing Writer

Posted in Big on Data on October 27, 2021

How do you solve the age-old data integration issue? We addressed this in one of the first articles we wrote for this column back in 2016. It was a time when key terms and trends that dominate today’s landscape, such as knowledge graphs and data fabric, were under the radar at best.

Data integration may not sound as deliciously intriguing as AI or machine learning tidbits sprinkled on vanilla apps. Still, it is the bread and butter of many, the enabler of all cool things using data, and a premium use case for concepts underpinning AI, we argued back then.

The key concepts we advocated for then, federation and semantics, have since been widely recognized and adopted in their knowledge graph and data fabric guise. Today, knowledge graphs and data fabrics are top of mind; just check the latest Gartner reports.

The reason we’re revisiting that old story is not to bask in some “told you so” self-righteousness, but to add to it. Knowledge graphs and data fabrics can, and hopefully eventually will, address data integration issues. But what about application integration? Could graphs and ontologies help with that, too?

Data integration and application integration

The “99 data stores” narrative was based on the true story of how the proliferation of data sources spells trouble for the enterprise. But what about applications and APIs? That same story is playing out there, so why not use the same cure for that disease, too? That’s what Dave Duggal and EnterpriseWeb are looking to achieve.

Duggal, founder and CEO of EnterpriseWeb, has spent most of his career starting, growing, and turning around companies. What motivated him to start EnterpriseWeb was his experience building and integrating applications, and seeing how static that left operations at the companies he was running. He said:

The way that traditional software development happens even to this day is manual code, and manual integration, primarily. You code and recode, integrate and re-integrate. And, of course, that does not scale for today’s demands.

At one point, everything was on a mainframe — a big centralized monolith, but also very powerful. One of the reasons that mainframes are very powerful is that on the mainframe, data and code live together. There wasn’t this false divide between the data team and the application team.

Now we’re distributed. We have a whole host of new capabilities. But we also have a whole host of challenges. Because when we disaggregated from the mainframe and then monolithic applications, which were these tightly coupled balls of code to more service-based applications, to microservices and now serverless functions, we disaggregated without having a programming model for composition and management.

In other words, we took everything apart. Humpty Dumpty broke. All the pieces were on the floor. We failed to actually introduce a mechanism, a means or a method, for composing those things back together. And look at where we are today.

The core of Duggal’s thesis, and EnterpriseWeb’s offering, is that the same tools that can address data integration should also be able to address application integration: graphs and ontologies.

The case of SAP

Duggal described EnterpriseWeb as a no-code platform that uses graphs to model declarative relationships among solution elements, enable declarative composition of objects into services, and chain services into event-driven processes. If that sounds complicated, it’s because it is, frankly. EnterpriseWeb’s goal is to hide that complexity.

What EnterpriseWeb’s patented technology enables is something akin to automated service discovery and integration. To demonstrate this capability, Duggal was recently invited to take on an ontology challenge SAP had defined around customer order fulfillment.

SAP’s goal was to see whether they could make it easier for customers to connect and use their product portfolio. The challenge made available models and files defining the process, and provided access to endpoints. The process EnterpriseWeb followed is what they do for any scenario, Duggal said.

Everything begins with the domain; in this case, SAP’s, which is pretty big. EnterpriseWeb comes with a built-in upper ontology of generic enterprise concepts: people, places, things, orders, locations, and the like.

In that scenario, SAP’s domain model was also used, leveraging EnterpriseWeb’s capability to import domain model information from sources ranging from JSON, XML, RDF, and UML files to database and application schemas, text documents, and spreadsheets. In SAP’s case, the source was OData metadata. Ingesting those sources enables EnterpriseWeb to refine its upper ontology.
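To make the idea concrete, here is a minimal sketch of how terms from an OData-style metadata document might be folded into a simple upper ontology. The EDM snippet, the entity names, and the naive substring-based classification are all invented for illustration; they are not EnterpriseWeb’s actual mechanism.

```python
# Illustrative sketch only: folding domain terms from an OData $metadata
# document into a toy upper ontology. The schema and matching logic are
# hypothetical, not EnterpriseWeb's API.
import xml.etree.ElementTree as ET

EDM_NS = "{http://docs.oasis-open.org/odata/ns/edm}"

metadata = """
<Schema xmlns="http://docs.oasis-open.org/odata/ns/edm" Namespace="sap.odm">
  <EntityType Name="CustomerOrder">
    <Property Name="orderId" Type="Edm.String"/>
    <Property Name="customerId" Type="Edm.String"/>
  </EntityType>
</Schema>
"""

# A toy upper ontology: generic concepts mapped to known specializations.
upper_ontology = {"Order": set(), "Person": set(), "Location": set()}

def ingest(schema_xml, ontology):
    """Register each entity type in the schema under a matching concept."""
    root = ET.fromstring(schema_xml)
    for entity in root.iter(EDM_NS + "EntityType"):
        name = entity.get("Name")
        props = tuple(p.get("Name") for p in entity.iter(EDM_NS + "Property"))
        # Naive classification: match on substrings of the generic concepts.
        for concept in ontology:
            if concept.lower() in name.lower():
                ontology[concept].add((name, props))
    return ontology

ingest(metadata, upper_ontology)
print(upper_ontology["Order"])  # {('CustomerOrder', ('orderId', 'customerId'))}
```

In a real system, classification would of course rest on explicit mappings and reasoning rather than on name matching; the point is only that ingesting a schema refines the generic concepts with domain-specific specializations.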

What happens next is that EnterpriseWeb connects to the service endpoints to be integrated, and populates the ontology with domain objects and their properties, behaviors, dependencies, and constraints. 

This creates a knowledge graph of the enterprise operational domain and an indexed catalog of all its domain objects, one whose primary goal is service execution for business processes.
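The indexed catalog described above can be sketched, in a simplified way, as a graph of domain objects where each node carries its properties, behaviors, and dependencies, and an index makes them addressable by property. The object names and fields here are made up for illustration.

```python
# A minimal sketch, with invented object names, of an indexed catalog of
# domain objects: each object carries properties, behaviors, and
# dependencies, and the graph indexes objects by property for lookup.
from dataclasses import dataclass, field

@dataclass
class DomainObject:
    name: str
    properties: list = field(default_factory=list)
    behaviors: list = field(default_factory=list)
    dependencies: list = field(default_factory=list)  # names of other objects

class DomainGraph:
    def __init__(self):
        self.objects = {}  # name -> DomainObject
        self.index = {}    # property -> set of object names

    def add(self, obj):
        self.objects[obj.name] = obj
        for prop in obj.properties:
            self.index.setdefault(prop, set()).add(obj.name)

    def find_by_property(self, prop):
        """Catalog lookup: which objects expose a given property?"""
        return sorted(self.index.get(prop, set()))

graph = DomainGraph()
graph.add(DomainObject("CustomerOrder", ["orderId", "customerId"],
                       ["create", "fulfill"], ["Customer"]))
graph.add(DomainObject("Customer", ["customerId", "address"], ["register"]))

print(graph.find_by_property("customerId"))  # ['Customer', 'CustomerOrder']
```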

In SAP’s case, what drove the process was a BPMN description of the customer order fulfillment scenario. BPMN is a standard format used in business process management. BPMN was imported along with the SAP One Domain Model and then tasks, roles and integration points were extracted. The process was then transformed into event-driven dataflow, naturally modeled as a graph.
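The transformation from process description to dataflow graph can be illustrated in miniature: tasks become nodes, sequence flows become directed edges that will later carry events. The task names and flows below are invented, and a real BPMN document carries far more detail than this.

```python
# Hypothetical illustration of turning a BPMN-style process description
# into an event-driven dataflow graph. Task names and flows are invented.
bpmn_tasks = ["ReceiveOrder", "CheckCredit", "ReserveStock", "ShipOrder"]
bpmn_flows = [("ReceiveOrder", "CheckCredit"),
              ("ReceiveOrder", "ReserveStock"),
              ("CheckCredit", "ShipOrder"),
              ("ReserveStock", "ShipOrder")]

# The process is naturally a graph: each task is a node, each sequence
# flow a directed edge along which an event will later be emitted.
dataflow = {task: [] for task in bpmn_tasks}
for source, target in bpmn_flows:
    dataflow[source].append(target)

print(dataflow["ReceiveOrder"])  # ['CheckCredit', 'ReserveStock']
```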

Traditionally, Duggal noted, the use of graphs has been for analytics and recommendations, not for processing transactions and business processes. The reason is the complexity of these individual objects, he went on to add:

“Objects have lots of properties, behaviors, dependencies, constraints, affinities. All of those things are modeled in our graphs, they’re all aggregated and addressable. They’re made so that they’re hyper-efficient for processing.”

The Application Fabric

The final part of the process is orchestrating services, i.e., deploying, coordinating, and executing them in the right order and with the right parameters, wherever they may be: on-premises, in the cloud, or in containers.
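At its core, “the right order” is a dependency-ordering problem, which can be sketched with a topological sort over the service graph from the previous step. The service names, locations, and scheduling logic below are illustrative assumptions, not EnterpriseWeb’s implementation.

```python
# A sketch of orchestration as dependency ordering: services run in a
# valid order regardless of where each one lives. Names and locations
# are invented for illustration.
from graphlib import TopologicalSorter  # Python 3.9+

# service -> set of services it depends on
dependencies = {
    "ReceiveOrder": set(),
    "CheckCredit": {"ReceiveOrder"},
    "ReserveStock": {"ReceiveOrder"},
    "ShipOrder": {"CheckCredit", "ReserveStock"},
}

locations = {"ReceiveOrder": "on-prem", "CheckCredit": "cloud",
             "ReserveStock": "container", "ShipOrder": "cloud"}

def orchestrate(deps, where):
    """Yield (service, location) pairs in a valid execution order."""
    for service in TopologicalSorter(deps).static_order():
        yield service, where[service]

plan = list(orchestrate(dependencies, locations))
print(plan[0])   # ('ReceiveOrder', 'on-prem')
print(plan[-1])  # ('ShipOrder', 'cloud')
```

A production orchestrator would also handle retries, rollback, and parallel branches; the sketch only shows the ordering constraint that the graph encodes.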

That creates what Duggal called an “application fabric”, as an extension of the notion of a data fabric. The idea behind a data fabric is to provide a unified access plane for enterprise data wherever they may be. The idea behind an application fabric is to provide a unified access plane for enterprise services wherever they may be.

Granted, the process is not 100% code- and effort-free, and Duggal acknowledged that. According to his estimates, the automated ingestion process gets about 95% of the domain model, processes, and services right, and some fine-tuning is required to ensure proper mappings. However, he argues, that’s a massive shift from manual approaches, and a huge productivity boost.

Duggal referred to working with telcos on 5G use cases, which entail complex networks and infrastructure that EnterpriseWeb was able to onboard in under an hour. Mappings, or “minimum viable mappings” of common properties as Duggal called them, are a key part of this: again, a staple of data integration and ontology-driven federated approaches, used here in the context of application integration.
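A minimum viable mapping can be pictured as a small translation table covering only the common properties two systems actually exchange, with everything else passed through untouched. The field names below are invented for illustration; they are not drawn from any real telco schema.

```python
# "Minimum viable mapping" sketched as a small translation table: only
# the shared fields two services exchange are mapped. Field names are
# hypothetical.
mvm = {
    # telco inventory field -> orchestrator field
    "site_id": "locationId",
    "ne_name": "networkElementName",
}

def translate(record, mapping):
    """Rename only the mapped fields; pass everything else through."""
    return {mapping.get(key, key): value for key, value in record.items()}

record = {"site_id": "S-042", "ne_name": "core-router-1", "vendor": "acme"}
print(translate(record, mvm))
# {'locationId': 'S-042', 'networkElementName': 'core-router-1', 'vendor': 'acme'}
```

The appeal of this approach is that the mapping stays small: it grows only with the properties that actually cross a boundary, not with the full schemas on either side.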

There’s a reason for this resemblance, which is the fact that those approaches were carefully studied and considered when designing and implementing EnterpriseWeb, Duggal said. The goal was to create a no-code platform for today’s technological landscape, one that addresses what he sees as the gaping hole in the cloud-native era:

“If you look at the Cloud Native Computing Foundation’s vendor landscape, you’ll see hundreds of cloud-native tools. What do those tools represent? Very small, granular, discrete capabilities, and you have various things doing very specific functionality.

The problem with that now is who ties that all back together. Every middleware component looks fine in isolation. Hey, look at this new component. You need it for analytics. Look at this new component, you need it for events. Look at this one thing in this component.

What people forget is the N+1 problem: every time you add another component, you’re adding overhead and complexity to your system architecture, and you’re not recognizing it. When you put those things together, who’s actually caring about the consistency problem, about immutability, idempotency, asynchrony, and concurrency?”

Coming full circle, EnterpriseWeb wants to be the new smart middleware to take on that challenge, aggregating what has been disaggregated. The promise is that this time it’s going to be different, because of federation, graphs, and ontologies. It will be interesting to see how that plays out.