Dave Duggal, founder and CEO, EnterpriseWeb
Pleased to report that our first generative AI presentation just won a Light Reading Leading Lights award for “Outstanding AI/ML Use Case”. To celebrate, we’ve put out a new demo showcasing our advances since May and sharing our architectural perspective on Telco-grade generative AI.
The solution features Microsoft Azure OpenAI Service (GPT-4), Microsoft Jarvis, and KDB.ai, the vector-native analytics database from KX (https://kx.com/). EnterpriseWeb’s platform provides a Telecom domain model and advanced automation capabilities.
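For readers who want a concrete picture of how components like these could be wired together, here is a minimal sketch, not EnterpriseWeb’s actual code: it uses the Azure OpenAI Python SDK for GPT-4 and embeddings, plus a hypothetical kdbai_search() helper standing in for a KDB.ai similarity query. The deployment names, environment variables and returned domain entries are all assumptions for illustration.

```python
"""Sketch: wiring Azure OpenAI (GPT-4) to a vector store such as KDB.ai.

Illustrative only: deployment names, environment variables and the
kdbai_search() helper are assumptions, not EnterpriseWeb's code.
"""
import os
from openai import AzureOpenAI  # pip install openai

# Azure OpenAI client; GPT-4 and an embedding model are assumed to be
# deployed under the names "gpt-4" and "text-embedding-ada-002".
client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
)


def embed(text: str) -> list[float]:
    """Turn a user request into a vector for similarity search."""
    resp = client.embeddings.create(model="text-embedding-ada-002", input=text)
    return resp.data[0].embedding


def kdbai_search(vector: list[float], n: int = 3) -> list[str]:
    """Hypothetical stand-in for a KDB.ai similarity query returning the
    closest entries from a Telecom domain model (e.g. service descriptors).
    Replace with real kdbai_client calls in a live setup."""
    return ["vFirewall descriptor", "SD-WAN descriptor", "5G UPF descriptor"]


def answer(user_request: str) -> str:
    """Ground the request with retrieved domain context, then ask GPT-4 once."""
    context = kdbai_search(embed(user_request))
    messages = [
        {"role": "system",
         "content": "You translate Telecom service requests. Context:\n"
                    + "\n".join(context)},
        {"role": "user", "content": user_request},
    ]
    resp = client.chat.completions.create(model="gpt-4", messages=messages)
    return resp.choices[0].message.content


if __name__ == "__main__":
    print(answer("Design a secure SD-WAN service for a retail chain."))
```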
EnterpriseWeb’s integrated solution provides a conversational interface that speeds and eases network service design, deployment and management. An effort that would typically take a team of architects and developers weeks is reduced to a few minutes. The impact is transformative for both customer experience and telecom operations.
EnterpriseWeb abstracts the technical complexity so the focus is on the business requirements and the corresponding service level agreements. In the background, EnterpriseWeb’s platform translates customer intent and handles all the implementation details, which are logged and made transparent to the Telco’s network operations teams.
While many vendors in Telecom and other industries are starting to offer generative AI capabilities, most are focused on prompting Large Language Models (LLMs) to answer questions or summarize large volumes of data. What distinguishes EnterpriseWeb’s integrated solution is that it supports natural language queries and commands, enabling complex enterprise-grade task automation, as demonstrated in the network service orchestration presentation.
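To make that distinction concrete, here is a hedged sketch of turning a natural language command into a structured automation task rather than a text answer, with the execution logged for operations transparency. The deploy_network_service tool schema, the orchestrate() stub and the audit log are hypothetical stand-ins for a platform back-end; only standard Azure OpenAI tool calling is assumed.

```python
"""Sketch: natural-language command -> structured automation task.

The tool schema, orchestrate() stub and audit log are illustrative
assumptions; only the Azure OpenAI tool-calling API is assumed.
"""
import json
import logging
import os
from openai import AzureOpenAI

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("network-ops-audit")  # visible to NetOps teams

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
)

# GPT-4 is only asked to map the utterance onto this structured schema;
# it never executes anything itself.
TOOLS = [{
    "type": "function",
    "function": {
        "name": "deploy_network_service",
        "description": "Deploy a network service to meet a customer SLA.",
        "parameters": {
            "type": "object",
            "properties": {
                "service_type": {"type": "string",
                                 "enum": ["sd-wan", "firewall", "5g-core"]},
                "customer": {"type": "string"},
                "availability_slo": {"type": "number"},
            },
            "required": ["service_type", "customer", "availability_slo"],
        },
    },
}]


def orchestrate(task: dict) -> str:
    """Stand-in for the deterministic back-end that actually deploys the
    service; here it just records the request in the audit log."""
    audit.info("deploy request: %s", task)
    return f"{task['service_type']} deployment started for {task['customer']}"


def handle_command(utterance: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": utterance}],
        tools=TOOLS,
        tool_choice={"type": "function",
                     "function": {"name": "deploy_network_service"}},
    )
    call = resp.choices[0].message.tool_calls[0]
    task = json.loads(call.function.arguments)
    return orchestrate(task)  # execution is deterministic, logged, auditable


if __name__ == "__main__":
    print(handle_command(
        "Deploy an SD-WAN for Acme Retail with 99.99% availability."))
```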
Background
Organizations around the world are racing ahead to find practical business use-cases for generative AI. At the same time, there is growing awareness of accuracy, consistency, intellectual property, cost and sustainability issues.
EnterpriseWeb mitigates these concerns by reversing the flow of control. Rather than making generative AI outputs the end-goal, it leverages generative AI as a state-of-the-art natural language interface. As with the company’s APIs and UIs, generative AI is used as a front-end to capture user input and report the platform’s responses.
The approach bridges an LLM’s probabilistic and analytical outputs with EnterpriseWeb’s deterministic and transactional methods, extending the use-cases for generative AI. With each natural language interaction, the LLM generates text representing the spoken or written request, but the processing is done on the back-end by EnterpriseWeb and KX, which minimizes the number of prompts and tokens exchanged with the LLM.
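One way to picture this bridging, purely as an assumption-laden sketch: the only thing taken from the LLM is a small piece of structured text representing the request, and everything after that is deterministic validation and a transactional write. The example below uses the Python standard library’s sqlite3 module; the field names, allowed service types and service_orders table are made up for illustration.

```python
"""Sketch: deterministic, transactional handling of an LLM-parsed request.

The intent dict mimics what a single, small LLM exchange might return;
field names and the service_orders table are illustrative assumptions.
"""
import sqlite3

ALLOWED_SERVICES = {"sd-wan", "firewall", "5g-core"}  # known domain types


def validate(intent: dict) -> dict:
    """Reject anything the probabilistic front-end got wrong before it
    can reach the transactional back-end."""
    if intent.get("service_type") not in ALLOWED_SERVICES:
        raise ValueError(f"unknown service type: {intent.get('service_type')}")
    slo = float(intent["availability_slo"])
    if not 0.0 < slo <= 100.0:
        raise ValueError(f"implausible SLO: {slo}")
    return {"service_type": intent["service_type"],
            "customer": str(intent["customer"]),
            "availability_slo": slo}


def commit(db: sqlite3.Connection, intent: dict) -> int:
    """Record the validated request atomically: either the whole order
    lands or nothing does."""
    with db:  # sqlite transaction: commits on success, rolls back on error
        cur = db.execute(
            "INSERT INTO service_orders(service_type, customer, slo) "
            "VALUES (?, ?, ?)",
            (intent["service_type"], intent["customer"],
             intent["availability_slo"]))
        return cur.lastrowid


if __name__ == "__main__":
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE service_orders("
               "id INTEGER PRIMARY KEY, service_type TEXT, "
               "customer TEXT, slo REAL)")
    # This dict stands in for the LLM's one short, structured reply;
    # no further prompts or tokens are needed to process it.
    parsed = {"service_type": "sd-wan", "customer": "Acme Retail",
              "availability_slo": 99.99}
    print("order id:", commit(db, validate(parsed)))
```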
EnterpriseWeb’s generative AI solution represents a “domain-oriented” rather than “LLM-oriented” architecture. Instead of spending time and money to continuously train and tune LLMs in hopes of better and more predictable results, EnterpriseWeb’s platform safely maintains the domain knowledge, keeping customer models and activity data private.
EnterpriseWeb grounds user requests with its internal graph knowledge base (i.e., an ontology with concepts, types and policies). The requests are events, which trigger agents that leverage the graph to contextually translate the queries and commands and dynamically construct a personalized response. The platform wraps every interaction with security and identity, reliable messaging, transaction guarantees and state management.
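As a toy illustration of what that grounding could look like (a sketch under assumptions, not EnterpriseWeb’s ontology), the snippet below models a tiny in-memory graph of concepts, types and policies, plus an agent triggered by a request event that translates the request and attaches the implied policies. The ONTOLOGY contents, SLA thresholds and RequestEvent fields are all invented for the example.

```python
"""Sketch: grounding a parsed request against a toy ontology.

The graph below is a made-up, in-memory stand-in for a knowledge base of
concepts, types and policies; the event/agent wiring is illustrative.
"""
from dataclasses import dataclass, field

# Toy ontology: concept -> its type and the policies it implies.
ONTOLOGY = {
    "sd-wan": {
        "type": "NetworkService",
        "policies": ["encrypt-overlay", "dual-homed-edges"],
    },
    "firewall": {
        "type": "NetworkFunction",
        "policies": ["default-deny", "log-all-flows"],
    },
}

# SLO threshold -> extra policies the graph attaches for stricter SLAs.
SLA_POLICIES = {
    99.99: ["geo-redundancy", "auto-failover"],
}


@dataclass
class RequestEvent:
    """A natural-language request, already parsed into structured form."""
    service_type: str
    customer: str
    availability_slo: float
    context: dict = field(default_factory=dict)


def grounding_agent(event: RequestEvent) -> RequestEvent:
    """Agent triggered by the event: translate the request using the graph
    and attach the policies that make the response contextual."""
    concept = ONTOLOGY[event.service_type]
    policies = list(concept["policies"])
    for threshold, extra in SLA_POLICIES.items():
        if event.availability_slo >= threshold:
            policies.extend(extra)
    event.context = {"type": concept["type"], "policies": policies}
    return event


if __name__ == "__main__":
    evt = RequestEvent("sd-wan", "Acme Retail", 99.99)
    print(grounding_agent(evt).context)
```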
Most processing is efficiently performed by EnterpriseWeb and KX’s low-latency, high-performance software running on a customer’s own x86 environments or cloud tenants (no GPUs necessary). This greatly reduces resource and energy consumption, while ensuring accuracy and control. EnterpriseWeb’s integrated solution is the ultimate backend for generative AI.