Reliance Jio has transitioned from testing AI to deploying it across its entire digital ecosystem. Gaurav Duggal, Senior Vice President of IT and Security at Jio Platforms Limited (JPL), explains how AI agents now manage core functions ranging from customer support in multiple languages to complex network operations.

AI agents are rapidly transforming how Reliance Jio runs operations across its digital ecosystem. Gaurav Duggal, Senior Vice President – IT and Security at Jio Platforms Limited (JPL), shares insights into how India’s largest telecom and digital services company has moved beyond experimentation into full-scale deployment of AI agents, the business impact achieved so far, and the governance framework that ensures control, transparency, and accountability.
From customer support to network operations, fraud detection, and supply chain forecasting, Jio is redefining enterprise AI by balancing autonomy with rigorous oversight—a model that other large enterprises could emulate.
CIO&Leader: How far along is Reliance Jio with AI agents: experimentation, pilots, or full production? Which functions are seeing real impact?
Gaurav Duggal: We are well beyond experimentation. AI agents are operating in live production across several core functions. In customer support, they handle a large percentage of Level 1 interactions across voice and chat in multiple Indian languages. In network operations, agents monitor performance, detect anomalies, and trigger corrective actions within defined boundaries.
“Our closed loop network operations agent has reduced incident resolution time by 30 to 40 percent and improved customer experience metrics such as latency and downtime.”
We are also using AI agents in fraud detection, marketing personalization, IT service management, and parts of supply chain forecasting. In enterprise sales and revenue assurance, we are running scaled pilots that are moving toward higher autonomy. The conversation internally is no longer about whether agents work. It is about how much autonomy we can responsibly and economically allow.
CIO&Leader: What is the most sophisticated AI agent you’ve deployed, and what measurable results has it achieved?
Gaurav Duggal: Our most advanced deployment is a closed loop network operations agent. It continuously monitors network telemetry, identifies anomalies, diagnoses probable root causes, simulates corrective scenarios, and executes approved configuration adjustments within guardrails.
The business impact has been significant. We have reduced incident resolution time by roughly 30 to 40 percent in certain clusters. We have lowered operational expenditure through fewer manual interventions and reduced field visits. We have also improved customer experience metrics such as latency and downtime, which correlate directly with churn reduction. The return on investment was achieved in under a year due to the scale of our network.
CIO&Leader: How do you determine which decisions AI agents can make on their own versus requiring human oversight?
Gaurav Duggal: We classify decisions into three levels. First, fully autonomous decisions such as routine network parameter tuning within defined limits, Level 1 customer issue resolution, campaign optimization, and standard IT remediation.
Second, human-in-the-loop decisions such as pricing adjustments, high-value enterprise contract modifications, and certain fraud escalations where financial exposure crosses a threshold.
“We classify decisions into three levels—fully autonomous, human-in-the-loop, and strictly human-controlled—based on financial exposure, regulatory impact, and reputational risk.”
Third, strictly human-controlled decisions including capital allocation, regulatory submissions, major pricing strategy shifts, mergers and acquisitions, and brand-sensitive matters. The line is drawn based on financial exposure, regulatory impact, and reputational risk.
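Read as a routing rule, the three-level scheme Duggal describes could be sketched roughly as follows. This is a generic illustration, not Jio's actual logic: the thresholds, field names, and risk criteria are all assumptions made for the example.

```python
from dataclasses import dataclass
from enum import Enum

class AutonomyLevel(Enum):
    FULLY_AUTONOMOUS = "fully_autonomous"    # agent acts within defined limits
    HUMAN_IN_THE_LOOP = "human_in_the_loop"  # agent proposes, human approves
    HUMAN_CONTROLLED = "human_controlled"    # humans decide; agent is advisory only

@dataclass
class Decision:
    financial_exposure: float  # estimated monetary impact (illustrative units)
    regulatory_impact: bool    # touches a regulated process?
    reputational_risk: bool    # brand-sensitive?

# Hypothetical ceilings; real limits would be set per workflow.
AUTONOMOUS_CEILING = 100_000
REVIEW_CEILING = 10_000_000

def classify(d: Decision) -> AutonomyLevel:
    # Regulatory or reputational exposure always stays human-controlled.
    if d.regulatory_impact or d.reputational_risk:
        return AutonomyLevel.HUMAN_CONTROLLED
    if d.financial_exposure <= AUTONOMOUS_CEILING:
        return AutonomyLevel.FULLY_AUTONOMOUS
    if d.financial_exposure <= REVIEW_CEILING:
        return AutonomyLevel.HUMAN_IN_THE_LOOP
    return AutonomyLevel.HUMAN_CONTROLLED
```

The ordering matters: the non-financial risk checks run first, so a low-value but brand-sensitive decision still escalates to humans.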
CIO&Leader: What level of operational or financial risk are you comfortable delegating to AI agents?
Gaurav Duggal: We are comfortable delegating low operational risk decisions and bounded financial risk decisions where predefined ceilings exist. Every autonomous workflow has transaction thresholds, loss caps, and escalation triggers embedded into the system.
We do not allow any AI agent to independently create a material balance sheet impact or take actions that could cause significant reputational harm. High regulatory and strategic risks remain fully human-governed. The principle is constrained autonomy, not blind automation.
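A minimal sketch of how transaction thresholds, loss caps, and escalation triggers might be embedded into an autonomous workflow, in the spirit of the "constrained autonomy" principle. All limits, names, and behaviors here are hypothetical, chosen only to illustrate the pattern:

```python
class GuardrailBreach(Exception):
    """Raised when a workflow's cumulative loss cap is reached."""

class BoundedAgentWorkflow:
    """Tracks cumulative exposure and escalates past predefined ceilings."""

    def __init__(self, per_txn_limit: float, loss_cap: float):
        self.per_txn_limit = per_txn_limit  # threshold for a single action
        self.loss_cap = loss_cap            # cap on cumulative exposure
        self.cumulative = 0.0

    def authorize(self, amount: float) -> str:
        if amount > self.per_txn_limit:
            # Single action exceeds the transaction threshold: escalate to a human.
            return "escalate"
        if self.cumulative + amount > self.loss_cap:
            # Cumulative exposure would breach the loss cap: halt the workflow.
            raise GuardrailBreach("loss cap reached; workflow halted")
        self.cumulative += amount
        return "approved"
```

The point of the sketch is that the ceilings live in the system itself, so the agent cannot exceed them regardless of what its model proposes.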
CIO&Leader: Could you trace and explain the decisions made by AI agents if a regulator asked tomorrow?
Gaurav Duggal: Yes. Every production AI agent must meet strict auditability standards. We maintain full decision logs, model version traceability, policy rule mapping, and documented approval chains for guardrails.
If required by the Telecom Regulatory Authority of India or any other oversight authority, we can reconstruct the sequence from input data to model output to executed action. We can also demonstrate testing records for bias, robustness, and compliance. Explainability and governance are foundational in a regulated environment like ours.
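One common way to make that kind of reconstruction possible is an append-only, hash-chained decision log, where each entry ties an action to its inputs, model version, and approver. The sketch below is a generic illustration of the technique, not Jio's implementation; all field names are assumptions.

```python
import hashlib
import json
import time

class DecisionLog:
    """Append-only log linking each agent action to its inputs and model version."""

    def __init__(self):
        self.entries = []

    def record(self, agent_id, model_version, inputs, output, action, approver=None):
        entry = {
            "ts": time.time(),
            "agent_id": agent_id,
            "model_version": model_version,  # which model produced the output
            "inputs": inputs,                # data the decision was based on
            "output": output,                # model's conclusion
            "action": action,               # what was actually executed
            "approver": approver,            # human sign-off, if any
            "prev_hash": self.entries[-1]["hash"] if self.entries else None,
        }
        # Hash chaining makes after-the-fact tampering detectable.
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True, default=str).encode()
        ).hexdigest()
        self.entries.append(entry)
        return entry["hash"]

    def trace(self, agent_id):
        """Reconstruct the input-to-output-to-action sequence for one agent."""
        return [e for e in self.entries if e["agent_id"] == agent_id]
```

Because every entry embeds the hash of its predecessor, an auditor can verify that the sequence from input data to executed action has not been rewritten.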
CIO&Leader: As AI agents connect to core enterprise systems, how do you maintain control, visibility, and accountability?
Gaurav Duggal: Agents never receive unrestricted system access. We enforce role-based access controls, zero-trust architecture, and API-mediated access instead of direct database writes. All actions are logged and monitored in real time.
We deploy sandbox simulations before allowing production-level execution. We also maintain kill switches and override capabilities. Integration is handled through a policy orchestration layer that defines what an agent can and cannot do. This ensures we do not lose control as autonomy increases.
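The control pattern described here, API-mediated access, a defined operating space per agent, and a kill switch, can be illustrated with a generic orchestration sketch. Everything below is hypothetical: the class, the agent identifiers, and the action names are invented for the example.

```python
class PolicyOrchestrator:
    """Mediates every agent action; agents never get direct system access."""

    def __init__(self, allowed_actions: dict):
        self.allowed = allowed_actions  # agent_id -> set of permitted action names
        self.killed = set()             # agents disabled via the kill switch

    def kill(self, agent_id: str) -> None:
        """Kill switch: immediately blocks all further actions by an agent."""
        self.killed.add(agent_id)

    def execute(self, agent_id: str, action: str, api_call):
        if agent_id in self.killed:
            return "blocked: kill switch engaged"
        if action not in self.allowed.get(agent_id, set()):
            return "blocked: outside permitted operating space"
        # Actions go through a mediated API call, never a direct database write.
        return api_call()
```

This is what "control becomes architectural" looks like in miniature: the permitted operating space is data the orchestrator enforces, not a review step a human performs.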
CIO&Leader: Have you ever paused or shut down AI initiatives? What were the main challenges?
Gaurav Duggal: Yes, we have paused certain initiatives. In some cases, internal trust and change management broke before the technology did. Teams were not fully prepared for autonomous decision systems.
In other cases, we halted generative agents in sensitive workflows because hallucination risks were unacceptable. We have also rejected vendor models that lacked transparency and explainability. Governance and compliance concerns have been more limiting than technical capability.
CIO&Leader: What matters most in managing AI agents today: monitoring, human override, policy rules, or strict limits? How is control evolving?
Gaurav Duggal: Policy-defined boundaries matter most. Monitoring and human override are essential, but they are secondary to designing the right constraints upfront.
“We are redefining control from approving every action to defining the permissible operating space within which agents function. Control becomes architectural rather than supervisory.”
We are redefining control from approving every action to defining the permissible operating space within which agents function. Control becomes architectural rather than supervisory. The stronger the policy framework, the more confidently we can scale autonomy.
CIO&Leader: Looking ahead, which types of decisions will AI agents take over, and which will remain human-led? How will this evolve over three years?
Gaurav Duggal: In the next three years, AI agents will reliably own high-frequency, data-intensive decisions such as network micro-optimization, real-time fraud blocking, customer service resolution for standard issues, dynamic marketing personalization, and predictive inventory management.
Human judgment will remain essential for regulatory interpretation, ethical tradeoffs, crisis management, brand positioning, and long-term capital allocation. Machines will dominate operational velocity. Humans will focus on strategic direction and accountability.
CIO&Leader: Will AI agents become a central execution layer in enterprises, or will adoption be limited by regulation, architecture, or cost? Five-year outlook?
Gaurav Duggal: Five years out, AI agents will be a core execution layer within large enterprises like Reliance Industries Limited and its digital ecosystem. They will operate much like cloud infrastructure does today, embedded and indispensable.
However, scale will depend on governance maturity, regulatory clarity, secure infrastructure, and cost efficiency of models. The constraint will not be capability. It will be leadership discipline in architecting responsible autonomy. Enterprises that institutionalize governance around intelligent systems will outperform those that treat AI as an isolated technology initiative.