Autonomous, Accountable, and Live: The New Era of AI-Driven Digital Services

In conversation with CIO&Leader, Naveen Bolalingappa, CEO, STL Digital, shares how AI agents have moved beyond pilots into live production.

CIO&Leader: Where are you today on AI agents across digital services? Are you experimenting, running pilots, or operating AI agents in live production across areas like service delivery, IT operations, customer experience, data platforms, or managed services?

Naveen Bolalingappa: We’re well past experimentation. AI agents are operating in live production across multiple service areas at STL Digital. This is no longer a sandbox proof of concept or a pilot lab exercise; the agents are integrated directly into delivery workflows. In IT operations, for example, AI agents are embedded in incident management and request workflows, where they actively interpret inputs, prioritize requests, and initiate downstream procedures rather than merely providing assistance. One of our most advanced implementations is STL Digital’s ‘AInnov™ RFQ’, an AI-powered ‘Request for Quote’ system. Previously, generating a quotation took 7 to 14 days, depending on complexity. With AInnov RFQ, as soon as the system receives a customer email, an AI engine interprets the technical specifications, maps them against configuration catalogues, and generates an aligned quotation within 15 to 30 minutes. That’s not incremental automation—that’s a structural reduction in turnaround time.

Beyond operational efficiency, we’re also developing AI-native platforms to tackle enterprise-wide problems. Third-party risk management is a good example. Continuous vendor evaluation is difficult for organizations, particularly at scale. STL Digital’s AInnov™ TPRM is an AI-powered platform that automates vendor onboarding, questionnaire evaluation, risk assessment, and lifecycle monitoring. Rather than auditors manually reviewing responses over days or weeks, the system dynamically pulls relevant vendor intelligence, pre-fills templates, assesses inputs, and generates structured reports in minutes.

We have also developed AInnov™ Talent, an AI-native hiring platform that handles the complete talent-acquisition process, from interview setup to candidate evaluation with AI proctoring. Before sending a shortlist to the hiring manager, it performs JD-to-profile matching, CV screening, interview scheduling, candidate evaluation, and structured scoring. We’re seeing a 50% reduction in hiring turnaround time while keeping decision-making authority with humans. AI accelerates the process while preserving governance.

CIO&Leader: Where do you draw the line on autonomy in client-facing environments? Which decisions can AI agents make independently today, and which client-impacting or SLA-sensitive decisions must always remain human-led?

Naveen Bolalingappa: We are very clear about where we draw the line when it comes to autonomy in client-facing settings. When it comes to anything that involves client commitments, compliance requirements, or SLA-sensitive operations, we adhere to a strict human-in-the-loop philosophy.

AI agents are initially deployed in low-risk operational domains or internal workflows where mistakes can be undone. If something goes wrong in those settings, we can roll back safely without compromising customer outcomes or our regulatory posture. That is where we begin.

Autonomy then expands based on structured risk profiling. For low-risk, reversible tasks, AI can operate with a high degree of independence. Processes like ticket classification, document parsing, data extraction, and workflow triggering, for instance, can run autonomously with monitoring in place.

In medium-risk situations, AI can produce suggestions or even carry out actions, but human approval is required before final commitment. Think of activities like vendor assessment scoring, hiring shortlists, and change approvals: the AI does the laborious work, and a human signs off.

High-risk decisions remain human-led, particularly those involving financial obligations, contractual SLAs, compliance exposure, or reputational risk. AI can summarize, analyze, or recommend, but it does not have final authority. Oversight is non-negotiable. As models mature and confidence thresholds improve, we progressively expand autonomy, but always within well-defined bounds.
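The tiered-autonomy model described above can be sketched as a simple routing function. This is an illustrative Python sketch, not STL Digital’s actual implementation; the task names and the three-tier mapping are assumptions drawn from the examples in this interview.

```python
from dataclasses import dataclass
from enum import Enum

class Risk(Enum):
    LOW = "low"        # reversible tasks, e.g. ticket classification
    MEDIUM = "medium"  # agent acts, human approves, e.g. hiring shortlists
    HIGH = "high"      # human-led, e.g. contractual SLA commitments

@dataclass
class Task:
    name: str
    risk: Risk
    reversible: bool

def route(task: Task) -> str:
    """Map a task's risk profile to the autonomy level the agent receives."""
    if task.risk is Risk.LOW and task.reversible:
        return "autonomous"       # agent executes under monitoring
    if task.risk is Risk.MEDIUM:
        return "human_approval"   # agent drafts, a human signs off
    return "human_led"            # agent may only summarize or recommend

print(route(Task("ticket_classification", Risk.LOW, True)))  # autonomous
```

Note that an irreversible task falls out of the autonomous tier even when its nominal risk is low, matching the reversibility test described above.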

CIO&Leader: What level of operational or reputational risk are you willing to delegate to AI? In practical terms, what is the maximum service disruption, data exposure, or client impact you are comfortable allowing an AI agent to carry?

Naveen Bolalingappa: We operate under what we refer to as a 5S framework: Secure by Design, Secure Access, Secure by Default, Secure Development, and Secure Operations. This framework guarantees that AI systems are designed with embedded safeguards from day one, not retrofitted later.

In practical terms, we do not allow AI agents to create uncontrolled client impact; operational risk is limited to contained environments where disruption is reversible and isolated; data exposure is controlled through strict access management, audit logging, encryption, and rollback mechanisms.

Therefore, the idea is straightforward: controlled automation rather than uncontrolled experimentation.

We are comfortable delegating efficiency-driven tasks to AI. We are not comfortable delegating irreversible consequences.
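A minimal sketch of how such safeguards might compose: a scoped capability allow-list, an audit trail for every attempted action, and a snapshot journal that keeps actions reversible. The names (`ALLOWED_ACTIONS`, `execute`, `rollback`) and the dictionary-based state are illustrative assumptions, not the 5S framework’s real code.

```python
ALLOWED_ACTIONS = {"classify_ticket", "extract_fields"}  # scoped capabilities (assumed)

def execute(action, key, value, state, journal, audit_log):
    """Run an agent action only if it is in scope; journal a snapshot first."""
    if action not in ALLOWED_ACTIONS:
        audit_log.append(("DENY", action))   # every attempt is traceable
        return False
    journal.append(dict(state))              # snapshot makes the action reversible
    state[key] = value
    audit_log.append(("ALLOW", action))
    return True

def rollback(state, journal):
    """Restore the most recent pre-action snapshot."""
    if journal:
        snapshot = journal.pop()
        state.clear()
        state.update(snapshot)

state, journal, audit_log = {}, [], []
execute("classify_ticket", "INC-101", "priority_2", state, journal, audit_log)
execute("drop_database", "-", "-", state, journal, audit_log)   # denied, logged
rollback(state, journal)                                        # undoes the first action
```

The point of the sketch is the ordering: scope check before execution, snapshot before mutation, and an audit record on both the allow and deny paths.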

CIO&Leader: Have you paused or rolled back any AI agent initiatives? What broke first: platform maturity, integration complexity, client confidence, cost justification, or compliance concerns?

Naveen Bolalingappa: Yes, we have paused certain initiatives. The primary issues were data quality gaps and insufficient business logic fidelity. Having large volumes of data is not enough; models require high-quality and contextually accurate datasets. In some cases, poor data hygiene limits model reliability.

Another challenge was hallucination or misinterpretation of complex business logic, which increased the need for manual supervision. When the level of human intervention required to stabilize the system outweighed the automation benefits, the ROI case weakened.

In such cases, we paused deployment. Our corrective strategy focuses on strengthening the data foundation. We initiate data pipeline modernization programs to ensure clean and high-quality data before redeploying AI solutions. AI without a strong data backbone does not scale sustainably.

CIO&Leader: How are you redefining ‘control’ as AI starts acting in production? What matters most today: real-time monitoring, human override for critical actions, policy-based constraints, or strict limits on what agents are allowed to execute?

Naveen Bolalingappa: In an AI-enabled production setting, control cannot imply blind trust or micromanagement. Control has to be redefined as something that is multi-layered, observable, and enforceable.

First, control starts with policy-based constraints. An agent’s decision boundaries are established before it ever goes live, in accordance with client contracts, internal governance policies, data protection laws such as GDPR, and regulatory requirements. The agent operates not in an open environment but within a precisely defined sandbox of what it is permitted to access, trigger, or alter.

Second, real-time monitoring is crucial. We treat AI systems the way we treat production infrastructure: under constant observation. We track confidence thresholds, anomaly patterns, performance signals, inputs, and outputs. If a model starts to drift, behave erratically, or exceed predetermined boundaries, alerts are triggered automatically. Runtime observability is built in; it is not an optional feature.

Third, human override is still necessary, particularly for actions that involve low confidence levels, high risk, or regulatory and compliance considerations. Even in highly autonomous workflows, there’s always a clear escalation path. People retain the authority to pause, stop, or roll back decisions that carry financial, contractual, or compliance risk.

Finally, clear execution boundaries are important. AI agents are not exposed to all system capabilities. We are careful about what they are allowed to do. Agents might, for instance, evaluate data, suggest corrective actions, or start preset processes, but they won’t approve financial commitments on their own or change configurations that are sensitive to compliance without verification.

What matters most, then? No single component alone. Control today is a combination of policy constraints, scoped execution authority, real-time observability, and human override mechanisms. That tiered model lets us scale autonomy responsibly while preserving the integrity of governance.

AI is capable of taking action, but it must always stay within predetermined bounds, be visible in real time, and be reversible when needed.
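As a sketch, the combination of policy constraints, confidence monitoring, and escalation to human override might look like the following. The names and the 0.85 confidence floor are illustrative assumptions, not STL Digital’s actual thresholds.

```python
from collections import deque
from statistics import mean

class Monitor:
    """Rolling average of model confidence; a drop below the floor flags drift."""
    def __init__(self, window=5, floor=0.85):
        self.scores = deque(maxlen=window)
        self.floor = floor

    def healthy(self, confidence):
        self.scores.append(confidence)
        return mean(self.scores) >= self.floor

def dispatch(action, confidence, allowed, monitor):
    """Execute only in-scope, high-confidence actions; otherwise escalate to a human."""
    if action not in allowed:
        return "escalate: out of scope"       # policy constraint, checked first
    if not monitor.healthy(confidence):
        return "escalate: low confidence"     # human override path
    return "execute"

monitor = Monitor()
print(dispatch("classify_ticket", 0.95, {"classify_ticket"}, monitor))  # execute
print(dispatch("approve_payment", 0.99, {"classify_ticket"}, monitor))  # escalate: out of scope
```

Even a high-confidence request is escalated when the action sits outside the agent’s scope, which mirrors the layered order described above: policy first, confidence second, execution last.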

CIO&Leader: Three-year outlook for digital services delivery: In the next three years, which service delivery or IT operations decisions do you realistically see AI agents taking on autonomously (for example, incident triage, remediation playbooks, and deployment approvals), and which will always require human judgment?

Naveen Bolalingappa: Over the next three years, AI agents are likely to take on a much more independent role in day-to-day IT operations. Tasks like incident triage, log analysis, running automated remediation playbooks, and tuning system performance will increasingly be handled with little to no human involvement. In many organizations, these systems will start to resemble digital twins of the enterprise, capable of making routine operational decisions quickly and with a high degree of reliability.

That said, not everything can (or should) be handed off. High-stakes responsibilities, such as redesigning core architecture, approving major deployments, making compliance-sensitive changes, or setting strategic financial direction, will still need experienced human oversight. AI may inform those decisions, but accountability and final judgment will remain firmly in human hands.

As governance models mature and enterprise confidence grows, adoption will accelerate. The trajectory is toward broader autonomy with structured oversight, delivering measurable ROI while preserving accountability and trust.
