Bringing context and control to Generative AI
By Dave Duggal
Jul 1, 2025
Originally posted on Medium

To be clear, Generative AI Language Models and their outputs don't become deterministic, period. They don't reason in an ontological sense. Deep Learning as a field is a reaction to Symbolic AI's structured, labeled and curated models; instead, its approach is stochastic processing over unstructured, unlabeled data with unsupervised training.
No one is magically changing the nature of what an LLM or SLM is. Ironically, structured knowledge and determinism are now widely understood to be critical to the success of GenAI and Agentic automation, so that related solutions are safe, reliable and trusted. Despite the pie-in-the-sky claims, LLMs require either direct augmentation like Graph RAG (scaffolding to prop up LLMs that marginally improves accuracy and consistency and reduces hallucinations, while adding significant complexity, cost and overhead/latency to the solution architecture), or to be wrapped, abstracted and governed by a meta platform, as we have done.
Of course, some dogmatic proponents are eager to stay in the GenAI domain with Mixture of Experts, Multi-Agent and other techniques, but without structured knowledge and logic they are playing a game of compounding probabilities while doubling down on resource and energy consumption. In this approach, governance itself is reduced to guardrails and similarly weak controls.
At EnterpriseWeb, we make an ontology (a graph-based domain model) the center of the architecture, not an LLM or SLM. In this way we can use structured knowledge as a control language over external endpoints, including multi-model GenAI and hybrid AI. Instead of using a Language Model as an "intelligence layer", our platform agents leverage the ontology to interpret events, evaluate conditions, automate decisions and optimize actions deterministically for real-time, enterprise-grade, contextual automation. This means our platform can work over many Language Models (GPT, Mixtral, Granite, etc.) and rapidly onboard new ones, so organizations can evolve with advances in the models.
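To make the pattern concrete, here is a minimal sketch of an ontology-centric control layer. All names (`Ontology`, `ModelAdapter`, `classify`) are hypothetical, invented for exposition; this is not EnterpriseWeb's API, just an illustration of putting a domain model, rather than any particular LLM, at the center.

```python
# Illustrative sketch only: an ontology as a deterministic gate over
# interchangeable language models. Names here are hypothetical.
from dataclasses import dataclass, field
from typing import Callable, Dict, Set

@dataclass
class Ontology:
    """A toy graph-based domain model: concepts and their allowed values."""
    allowed_values: Dict[str, Set[str]] = field(default_factory=dict)

    def is_valid(self, concept: str, value: str) -> bool:
        # Deterministic check against domain constraints.
        return value in self.allowed_values.get(concept, set())

@dataclass
class ModelAdapter:
    """Wraps any language model behind a uniform callable interface,
    so models (GPT, Mixtral, Granite, ...) can be swapped freely."""
    name: str
    infer: Callable[[str], str]

def classify(ontology: Ontology, model: ModelAdapter, concept: str, prompt: str) -> str:
    raw = model.infer(prompt)                 # probabilistic inference
    if not ontology.is_valid(concept, raw):   # deterministic gate
        raise ValueError(f"{model.name} output {raw!r} violates ontology for {concept!r}")
    return raw

# Usage with a stub standing in for a real model:
onto = Ontology(allowed_values={"intent": {"scale_up", "scale_down", "heal"}})
stub = ModelAdapter(name="stub-llm", infer=lambda p: "scale_up")
print(classify(onto, stub, "intent", "Traffic spike detected; choose an action"))  # scale_up
```

The point of the sketch: the control logic never depends on which model sits behind `infer`, so onboarding a new model changes nothing above the adapter.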
Our platform agents dynamically compose and configure tasks into complex workflows deterministically. While executing, agents can dynamically and deterministically orchestrate tools, external data sources and federated components, as well as LLMs and algorithms. Agents use the ontology to ground LLMs in domain knowledge and real-time operational context to optimize the interactions (low-token, low-latency, low-energy), while enforcing governance (security/identity, etc.) and deterministically validating LLM outputs based on ontological constraints (domain logic).
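A toy sketch of deterministic task composition, under the assumption that tools, data sources and LLM calls can all be modeled as interchangeable step functions over a shared context. The task names and structure are invented for illustration, not drawn from the platform.

```python
# Illustrative sketch only: deterministic composition of tasks into a workflow.
# Every step (tool, data source, or LLM call) has the same shape: context -> updates.
from typing import Callable, Dict, List

def run_workflow(
    tasks: Dict[str, Callable[[dict], dict]],  # task name -> step function
    order: List[str],                          # deterministic execution order
    context: dict,
) -> dict:
    """Each task reads the shared context and returns updates merged into it."""
    for name in order:
        context = {**context, **tasks[name](context)}
    return context

# Usage: a three-step flow mixing a "tool", a decision, and an action (all stubs).
steps = {
    "fetch_metrics": lambda ctx: {"cpu": 0.92},
    "infer_intent":  lambda ctx: {"intent": "scale_up" if ctx["cpu"] > 0.8 else "hold"},
    "act":           lambda ctx: {"action_taken": ctx["intent"]},
}
final = run_workflow(steps, ["fetch_metrics", "infer_intent", "act"], {})
print(final["action_taken"])  # scale_up
```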
Our approach uses Language Models in an isolated and targeted manner for GenAI NLP and inference, shifting reasoning to our platform, where our deterministic methods are highly reflective and deeply recursive (smarter, faster, more efficient). If LLM outputs can't be validated, the agents reject them and iterate with the LLM in a background process at sub-second speed, refining the output using only 3–5 tokens at a time. In this way, the final accepted LLM outputs can be used safely as controlled inputs into managed processes.
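The reject-and-refine loop described above can be sketched as follows. This is a minimal illustration under stated assumptions, not EnterpriseWeb's implementation: `validated_inference` and its parameters are hypothetical names, and the validator stands in for real domain logic derived from an ontology.

```python
# Illustrative sketch: accept LLM output only after it passes a deterministic
# check; otherwise reject, add corrective context, and retry in the background.
from typing import Callable, Optional

def validated_inference(
    infer: Callable[[str], str],      # any LLM call behind a uniform interface
    validate: Callable[[str], bool],  # deterministic check from domain logic
    prompt: str,
    max_rounds: int = 5,
) -> Optional[str]:
    feedback = prompt
    for _ in range(max_rounds):
        candidate = infer(feedback)
        if validate(candidate):
            return candidate          # safe, controlled input for downstream processes
        # Reject and iterate: append corrective context and retry.
        feedback = f"{prompt}\nPrevious answer {candidate!r} was invalid; try again."
    return None                       # exhausted retries: escalate to a human in the loop

# Usage with a stub model that fails once, then succeeds:
answers = iter(["42%", "0.42"])
result = validated_inference(
    infer=lambda p: next(answers),
    validate=lambda s: s.replace(".", "", 1).isdigit(),  # e.g. must be a plain number
    prompt="Estimate utilization as a decimal fraction",
)
print(result)  # 0.42
```

Returning `None` rather than a best guess is the key design choice: an output that never validates is never passed downstream.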
Of course, inferences remain probabilistic in nature (they don’t become facts), but they support business use cases by filling gaps in knowledge. The key is that they are validated (or rejected). Our agents log all activity and provide a distributed trace for audit and debug. Our platform supports humans in the loop for oversight and direct control.
This isn't just a vision: we have customer deployments, partner validations, patents, blogs and webinars, as well as innumerable LinkedIn exchanges with other industry AI, graph, data and automation experts.
Back in February 2023, before GenAI broadly entered the public consciousness, EnterpriseWeb was invited by the leadership of Microsoft's Telecom group to collaborate on an industry demonstration, given our reputation for delivering advanced intelligent automation solutions. Six weeks later we premiered the world's first demonstration of "Telco-grade Generative AI for Intent-based Orchestration". That public demonstration won industry awards. More than two years on, we haven't seen anyone replicate the capability.
Last year we benchmarked and validated our systems and methods in another award-winning project, run in Intel's labs on Dell servers with Red Hat OpenShift, demonstrating "CPU-first Generative AI" and showcasing our low-token, low-latency and energy-efficient methods.
We are currently collaborating on Knowledge Planes over data warehouses, lakes and lakehouses, with game-changing results compared to the lightweight semantic layers in use today, which offer only basic metadata frameworks, table directories and metrics.
Related Links:
Demo: The Telecom Ontology: The Missing Links for Level 4 Autonomous Networking
News: theCUBE names EnterpriseWeb as a finalist for Most Innovative Networking Solution award
News: EnterpriseWeb named a 2024 Fierce Network Innovation Award finalist for the AI category
Post: Agentic Automation: Knowledge is Power
Interview: Fierce Telecom