Blog originally posted on LinkedIn
Jinsung Choi of Deutsche Telekom recently provoked a thoughtful discussion regarding the emerging roles of generative AI in Telecom. Here's my take: I'm not saying "never", but there is no indication that Q* or any other LLM approach, even with planning modules and Reinforcement Learning, will achieve application-level determinism anytime soon. So the reality is that Generative AI deployments lean heavily on training, tuning and tokens to improve accuracy and reduce hallucinations. On top of that, "guard rails" are imposed, again, to control LLM outputs. For the immediate future, then, LLMs will still require humans in the loop, and the effort is tremendous, with questionable ROI. Are better responses and fewer hallucinations good enough?
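To make the "guard rails" point concrete, here is a minimal sketch of what such a layer does: a deterministic validator sits between the LLM and the system that acts on its output, rejecting anything off an approved list and clamping parameters to safe ranges. All names here (`validate_action`, `ALLOWED_ACTIONS`) are hypothetical, for illustration only, not any real framework's API.

```python
# Hypothetical approved actions an operator is willing to automate.
ALLOWED_ACTIONS = {"restart_cell", "rebalance_traffic", "open_ticket"}

def validate_action(llm_output: dict) -> dict:
    """Deterministic guard rail: never trust the model's raw output.

    Unrecognized actions are downgraded to a safe default, and numeric
    parameters are clamped to a sane range before anything executes.
    """
    action = llm_output.get("action")
    if action not in ALLOWED_ACTIONS:
        # A hallucinated action becomes a ticket for a human, not an execution.
        return {"action": "open_ticket", "reason": f"unrecognized action {action!r}"}
    priority = llm_output.get("priority", 3)
    llm_output["priority"] = min(max(int(priority), 1), 5)  # clamp to 1..5
    return llm_output

# A hallucinated, out-of-range proposal gets caught:
print(validate_action({"action": "reboot_entire_region", "priority": 9}))
```

Note that the guard rail itself is ordinary symbolic logic; the determinism comes from code like this, not from the model.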
Setting aside IP/copyright, cost and sustainability issues, the efficacy of Deep Learning methods then comes down to a use-case's tolerance on correctness, safety and explainability. Does "five-nines" availability still matter? Are we walking away from Telco-grade? Algorithms have cost-effectively ensured Telco reliability at scale for decades. That is the first-level trade-off to consider, because it marks the inflection between as-is and to-be.
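For readers outside Telco, "five-nines" means 99.999% availability, which allows only about 5.26 minutes of downtime per year. A quick back-of-the-envelope calculation shows how steep that bar is:

```python
# Allowed downtime per year at N nines of availability.
minutes_per_year = 365.25 * 24 * 60  # ~525,960 minutes in a year

for nines in (3, 4, 5):
    availability = 1 - 10 ** -nines          # e.g. 5 nines -> 0.99999
    downtime_min = minutes_per_year * (1 - availability)
    print(f"{availability:.5%} availability -> {downtime_min:8.2f} min/year downtime")
```

At three nines a system may be down for roughly nine hours a year; at five nines, barely five minutes. That is the gap between a "mostly right" generative system and Telco-grade.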
Choi's point about increasing complexity (number of elements, speed of analysis/decisions/actions, rate of change) is completely valid, but the networks and the Telco domain are knowable and well modeled. The problem isn't the models or the standards; it's the implementations. Telcos need to think in a far more loosely-coupled, event-driven, dynamic manner. It's remarkable that in 2023, when you lift the hood, most telcos still lean on static, imperative code that is brittle. Real-time data can be hard to come by. There are too many silos. Interoperability is limited. There is no unified abstraction. These are the true agility problems, and they are why prior AI initiatives have failed or had limited impact.
The good news is that Telco investments in consolidating inventory systems and cleaning up data serve both Deep Learning and traditional Symbolic AI, analytics and logic-based systems. They will improve reasoning across connected spaces, so we'll be able to flexibly use the right tool for the job: the optimal technology for each use-case. This way we avoid a false binary (old bad, new good).
The network vision Choi described will likely only be realized by combining Deep Learning and Symbolic AI (Neuro-Symbolic AI). That's the practical path, and the one we are pioneering in order to industrialize and operationalize conversational interfaces and autonomous systems.
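One way to picture the Neuro-Symbolic split: the neural side handles the fuzzy part (mapping free-form language to a structured intent) while a symbolic rule base makes the final, deterministic, auditable decision. This is a minimal sketch, not the actual system described above; the "model" is a stub, and all names (`neural_intent`, `RULES`, `decide`) are illustrative assumptions.

```python
def neural_intent(text: str) -> dict:
    """Stand-in for an LLM call that maps free text to a structured intent.

    In a real deployment this would be a model invocation; here it is a
    trivial keyword stub so the example runs on its own.
    """
    text = text.lower()
    if "latency" in text:
        return {"issue": "latency", "severity": "high" if "outage" in text else "medium"}
    return {"issue": "unknown", "severity": "low"}

# Symbolic layer: explicit rules with deterministic, explainable outcomes.
RULES = {
    ("latency", "high"): "reroute_and_page_oncall",
    ("latency", "medium"): "schedule_optimization",
}

def decide(text: str) -> str:
    """Neural extraction feeds a symbolic decision; unknowns fail safe."""
    intent = neural_intent(text)
    return RULES.get((intent["issue"], intent["severity"]), "open_ticket")

print(decide("Customers report latency spikes and a partial outage"))
```

The key property is that even if the neural half misreads the input, the action space and the fallback are fixed by the symbolic half, which is exactly the kind of determinism the paragraph above argues LLMs alone won't deliver soon.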