Anuj Bhalla, SVP and Global Delivery Head at Cognizant, explains the shift to industrialised, composable AI. He also notes projections that 30% of enterprises will automate more than half of their network activities by 2026, with humans guiding the vision.

In this interview Anuj Bhalla, SVP and Global Delivery Head at Cognizant, discusses the shift from experimental AI to industrialised infrastructure. As India’s network demand hits new peaks, Bhalla explains how enterprises are re-engineering the relationship between cloud, edge, and human expertise to support a global, AI-native economy.
Bhalla details the transition to a “NoOps” future, highlighting that while 30% of enterprises may automate more than half of their network activities by 2026, human intuition remains essential for strategic oversight.
CIO&Leaders: How are you moving away from “bespoke” AI pilots toward an industrialised delivery model that ensures consistency regardless of the client’s geography or industry?
Anuj Bhalla: The future of AI is not bespoke; it is composable. Traditional AI approaches often struggle to scale because they are built in isolation, custom-coded for specific teams or functions, and disconnected from enterprise-wide systems and governance. As a result, many bespoke pilots remain trapped in silos and fail to translate into enterprise‑scale impact.
Moving from pilots to an industrialised AI delivery model requires three structural shifts.
First, a composable and scalable AI platform. We architect AI platforms using reusable, modular components that can be configured for different business contexts rather than rebuilt from scratch each time. This composable approach ensures interoperability, rapid deployment, and consistency across use cases, enabling us to scale intelligence without proportionally increasing cost.
Second, global AI delivery Centres of Excellence. Our delivery model is anchored in centralised global AI Centres of Excellence that enforce uniform security policies, compliance frameworks, and architectural patterns across regions and industries. By leveraging infrastructure-as-code and policy automation, we ensure that AI solutions are delivered consistently, regardless of where they are deployed.
Third, AI as a force multiplier for workforce transformation. Technology alone does not scale AI; people do. AI skill building is therefore a core focus for us. From July 2023 to the end of 2025, we upskilled more than 330,000 associates on generative AI through over 1,000 learning programs, building a global talent engine capable of delivering enterprise‑grade AI solutions at scale.
Technology alone does not scale AI; people do.
In parallel, we are embedding AI literacy across the organisation to support this industrialised delivery model. Last year, more than 53,000 Cognizant associates across 40 countries participated in our Vibe Coding initiative to explore the future of work and accelerate hands‑on AI adoption. The initiative culminated in the world’s largest online generative AI hackathon, producing 30,601 working prototype projects.
CIO&Leaders: What are the common delivery blind spots you encounter when moving AI from a controlled sandbox to a global, high-traffic environment?
Anuj Bhalla: Sandbox success is a dangerous comfort zone that many organisations are deeply entrenched in, often creating a false sense of readiness. However, when AI is scaled to global, high-traffic environments, the delivery blind spots surface quickly.
Sandbox success is a dangerous comfort zone… often creating a false sense of readiness
The transition from sandbox to production exposes four critical blind spots that organisations frequently underestimate.
The first, and most significant, blind spot is integration complexity with legacy systems. Most enterprises operate on decades-old ERP systems that were not architected for real-time intelligence. This gap becomes evident the moment AI solutions are introduced into rigid production ecosystems.
The second blind spot is data quality at scale. AI requires re-architecting data foundations, yet curated sandbox environments often mask the realities of enterprise data. In production, messy, real‑time, high‑velocity data, combined with edge cases and data drift, can quickly destabilise models and introduce systemic risk.
The third blind spot is governance and risk management. As AI systems begin influencing critical business outcomes, overlooked risks and unintended consequences become far more pronounced. The absence of well-defined accountability for bias monitoring and error handling remains a significant barrier to responsible AI adoption.
The fourth blind spot is organisational change management. Moving AI into production demands new ways of working, stronger governance, and cultural alignment. Without embedding change management from the outset, even technically sound AI deployments struggle to scale or sustain impact.
CIO&Leaders: When you take over a client’s global delivery, how do you enforce a “Security-as-Code” culture so that infrastructure and security are deployed as a single unit?
Anuj Bhalla: Infrastructure without embedded security is not deployment; it is exposure. For a broader audience, think of it this way: Today, infrastructure and security are no longer separate disciplines. They are fused, with every deployment designed to be secure from the outset rather than inspected after the fact.
Infrastructure without embedded security is not deployment; it is exposure.
When we take over a client’s global delivery, enforcing a Security-as-Code culture starts with eliminating silos, not just introducing new tools. We remove artificial boundaries between cloud, infrastructure, and security by establishing unified platform engineering teams that own the entire stack end to end. This eliminates fragmented ownership and accelerates secure delivery.
Security policies are codified directly into infrastructure pipelines. Policy-as-Code engines like Open Policy Agent (OPA) are implemented to automate enforcement and reject non-compliant deployments before resource provisioning. These embedded tools enforce guardrails for agentification across our operations.
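OPA policies are written in its own Rego language; as a language-neutral sketch of the gating idea described above, the logic can be illustrated in Python. The rule names, manifest fields, and sample resources below are hypothetical, not Cognizant's actual policies.

```python
# Illustrative Policy-as-Code gate (not OPA's Rego syntax): each rule
# inspects a deployment manifest entry, and the pipeline refuses to
# provision anything that violates a rule.

def no_public_buckets(resource):
    """Reject object stores exposed to the internet."""
    if resource.get("type") == "bucket" and resource.get("public", False):
        return "bucket must not be publicly readable"
    return None

def encryption_required(resource):
    """Reject unencrypted storage volumes."""
    if resource.get("type") == "volume" and not resource.get("encrypted", False):
        return "volume must be encrypted at rest"
    return None

POLICIES = [no_public_buckets, encryption_required]

def gate(manifest):
    """Return a list of violations; an empty list means 'safe to deploy'."""
    violations = []
    for resource in manifest:
        for policy in POLICIES:
            msg = policy(resource)
            if msg:
                violations.append(f"{resource['name']}: {msg}")
    return violations

manifest = [
    {"name": "logs", "type": "bucket", "public": True},
    {"name": "db-disk", "type": "volume", "encrypted": True},
]
print(gate(manifest))  # → ["logs: bucket must not be publicly readable"]
```

The key design point is that the gate runs before provisioning, so a violation blocks the deployment rather than being flagged in a later audit.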
Our approach is grounded in four foundational pillars: security, responsibility, explainability, and ethical AI. These principles ensure that AI-driven decisions across infrastructure operations are secure, accountable, transparent, and aligned with our values, compliance, and governance standards.
Our pipelines integrate “shift-smart” security with AI-driven, context-aware testing throughout the lifecycle. AI helps reduce false positives and prioritises risks based on business impact, allowing teams to focus on what matters most.
We also deploy immutable infrastructure, where servers are replaced rather than patched, and security policies travel with the infrastructure code itself. Combined with zero-trust networking, every service communication is authenticated regardless of location.
Security enforcement is built into the system by design, not validated later through audit.
CIO&Leaders: With CIS delivery increasingly moving to the edge, how has the definition of delivery excellence changed? How do you ensure sub-millisecond reliability when the infrastructure is no longer centralised in a single data centre?
Anuj Bhalla: Edge computing has fundamentally redefined delivery excellence from a focus on availability and throughput to latency, locality, and resilience under constraint. Applications such as autonomous systems and real-time AI inference demand sub-10 millisecond latency, which is not achievable from centralised cloud environments.
We architect edge deployments using a hub-and-spoke model that enables distributed intelligence with centralised orchestration. Edge nodes execute lightweight, resource-optimised inference, while the central platform manages model updates and policy enforcement, ensuring scalable autonomy with uniform control.
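The hub-and-spoke split described above can be sketched in a few lines: a central hub owns model versions and policy, and edge nodes pull updates but serve inference entirely locally. All class and field names here are invented for illustration.

```python
# Minimal hub-and-spoke sketch (illustrative names, not a real platform):
# the hub is the single source of truth for models and policy; edge
# nodes sync from it but answer requests without any cloud round trip.

class Hub:
    def __init__(self):
        self.model_version = 1
        self.policy = {"max_batch": 8}

    def publish(self, version, policy):
        # Centralised orchestration: one place to roll out updates.
        self.model_version, self.policy = version, policy

class EdgeNode:
    def __init__(self, name, hub):
        self.name, self.hub = name, hub
        self.model_version, self.policy = None, None

    def sync(self):
        # Pull the centrally managed model version and policy.
        self.model_version = self.hub.model_version
        self.policy = dict(self.hub.policy)

    def infer(self, x):
        # Lightweight local inference stub; runs on-device.
        return x * self.model_version

hub = Hub()
nodes = [EdgeNode(f"edge-{i}", hub) for i in range(3)]
hub.publish(version=2, policy={"max_batch": 16})
for n in nodes:
    n.sync()
print([n.infer(10) for n in nodes])  # → [20, 20, 20]
```

The design choice to note is that every node ends up with the same version and policy after a single central publish, which is what "scalable autonomy with uniform control" amounts to in practice.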
We are increasingly designing edge solutions such as Physical AI systems, where perception, reasoning, and action must occur within strict, deterministic latency bounds at the edge. To enable this, we utilise NVIDIA-accelerated edge platforms such as Jetson and IGX Orin, allowing perception and control loops to run entirely on-device. TensorRT-optimised inference pipelines execute in microseconds.
Our applications are inherently network-aware, with critical operations processed and executed locally, and only summarised data sent upstream. Predictive pre-caching dynamically aligns compute resources with usage patterns, while graceful degradation ensures continuity under network constraints. Using edge-enabled CDN platforms such as Cloudflare Workers and AWS Wavelength, we achieve near-real-time, sub-millisecond responsiveness within carrier networks.
AIOps-driven self-healing enables autonomous operations through automated failover across edge nodes, local anomaly detection triggering remediation without cloud roundtrips, and continuous chaos engineering to validate resilience.
We measure delivery excellence using advanced metrics such as P99 latency; geographic coverage, defined as the percentage of users within 10 milliseconds of an edge node; failure recovery effectiveness; and carbon efficiency per transaction.
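P99 latency is worth a brief illustration, since it captures the tail behaviour that an average hides. Below is one common convention (nearest-rank percentile) with invented sample values:

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: the smallest value such that at least
    p percent of samples are at or below it."""
    s = sorted(samples)
    k = math.ceil(p / 100 * len(s)) - 1
    return s[max(k, 0)]

# Illustrative latency samples in milliseconds, with one slow outlier.
latencies_ms = [3, 4, 4, 5, 5, 6, 7, 9, 12, 250]
print(percentile(latencies_ms, 50))  # → 5 (the median looks healthy)
print(percentile(latencies_ms, 99))  # → 250 (the tail the mean hides)
```

This is why edge SLOs are stated in terms of P99 rather than mean latency: one in a hundred requests hitting the slow path is exactly what the metric surfaces.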
CIO&Leaders: AIOps has been a focus for years, but how close are we actually to “NoOps”? What percentage of a global CIS delivery model can truly be automated today without human intervention, and where does human intuition remain non-negotiable?
Anuj Bhalla: The journey to NoOps is often misunderstood. We are neither as close to NoOps as the hype suggests nor as far as sceptics believe. While full autonomy remains distant, analyst projections indicate that close to half of global CIS operations can be agentified today. For example, Gartner notes that 30% of enterprises will automate more than half of their network activities by 2026.
The journey to NoOps is often misunderstood. We are neither as close as the hype suggests nor as far as sceptics believe.
Cognizant advocates a model of human-governed autonomous operations, where AI drives efficiency and speed while humans provide strategic direction, ethical oversight, and exception management. This Human-in-the-Loop (HITL) governance ensures that autonomy can scale without compromising accountability.
In practice, we operationalise this through agents across core IT operations, with a strong emphasis on machine-first task execution for well-defined workflows. Our AI-assisted First Call Resolution capabilities are already ahead of the curve.
While AI handles L1 and L2 tasks, this approach does not replace humans; it elevates them. Teams are moving from reactive support to designing and governing AI systems, managing exceptions, and focusing on complex, high-value problems. Engineers are increasingly transitioning into proactive system architect roles, with a stronger emphasis on governance, resilience, and strategic innovation.
CIO&Leaders: Enterprises are spending more on monitoring their clouds than ever before. How do you help CIOs find the Goldilocks zone of observability?
Anuj Bhalla: Observability costs escalate rapidly when teams default to collecting and storing everything. We help CIOs find the Goldilocks zone of observability by combining FinOps tools with intelligent data management and AIOps-driven noise reduction, ensuring they have the right signals without unnecessary spend.
The shift to GPU-intensive workloads and large language models has significantly increased both compute and monitoring costs. Enterprises are now grappling with the financial impact of high-frequency telemetry and model observability at scale.
We start at the point of ingestion. Context-aware sampling captures all errors and outliers while sampling routine successful transactions at a much lower rate. This significantly reduces ingestion volume while preserving critical signals. Next, we deploy tiered storage architectures across hot, warm, and cold tiers, automatically moving data based on access patterns, cutting storage costs substantially.
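The context-aware sampling idea can be sketched as a per-event keep/drop decision: errors and latency outliers are always kept, and routine successes are sampled down. The thresholds and field names below are illustrative assumptions, not production values.

```python
import random

def keep_event(event, success_rate=0.05, rng=random.random):
    """Context-aware sampling sketch (illustrative thresholds): keep
    every error and slow outlier; sample routine successes at a low rate."""
    if event["status"] >= 500:        # all server errors are kept
        return True
    if event["latency_ms"] > 1000:    # slow outliers are kept
        return True
    return rng() < success_rate       # successes heavily down-sampled

events = [
    {"status": 200, "latency_ms": 40},    # routine success
    {"status": 503, "latency_ms": 30},    # error
    {"status": 200, "latency_ms": 2500},  # latency outlier
]
# Using a deterministic rng here so every routine success is dropped:
kept = [e for e in events if keep_event(e, rng=lambda: 1.0)]
print(kept)  # keeps only the error and the outlier
```

The effect is the one described above: ingestion volume drops sharply, but every event carrying an anomaly signal survives.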
The third and most critical lever is smarter alerting. We use AIOps for intelligent alert correlation and noise suppression. We call this Agentic Observability, where AI agents do not just observe but actively optimise and autonomously execute decisions to manage observability costs and performance in real time. A key metric we track is the alert-to-actionable-ticket ratio, with the goal of maximising noise reduction without suppressing actionable incidents.
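The alert-to-actionable-ticket ratio is a simple quotient; the figures below are invented for illustration of how correlation and suppression move the metric:

```python
def alert_to_ticket_ratio(total_alerts, actionable_tickets):
    """Hypothetical metric: the fraction of raised alerts that become
    actionable tickets. A low ratio means most alerts are noise."""
    if total_alerts == 0:
        return 0.0
    return actionable_tickets / total_alerts

# Before AIOps correlation: 12,000 alerts yield 300 actionable tickets.
before = alert_to_ticket_ratio(12_000, 300)   # 0.025 — 97.5% noise
# After correlation/suppression: 600 alerts, 290 still actionable.
after = alert_to_ticket_ratio(600, 290)       # ~0.48
print(round(before, 3), round(after, 3))
```

Note that the actionable-ticket count stays nearly constant while the alert count collapses; that is what "maximising noise reduction without suppressing actionable incidents" looks like in the metric.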
We also prioritise golden signals, namely latency, traffic, errors, saturation, and business-aligned SLOs, instead of tracking thousands of metrics. This eliminates the need to monitor everything and focuses attention on what truly matters.
Finally, by allocating observability budgets as a defined fraction of infrastructure spend, we introduce cost discipline and ensure investments are driven by business impact rather than caution. The Goldilocks zone is achieved when monitoring spend is right-sized to the value it safeguards.
CIO&Leaders: As AI begins to handle L1/L2 support tasks, how are you reskilling your global delivery teams to become “Infrastructure Architects” rather than “Support Engineers”?
Anuj Bhalla: AI is fundamentally reshaping what it means to be valuable at work. This shift is not about job loss; it is about job evolution. As AI increasingly handles L1 and L2 operational tasks, our engineers are moving up the value chain to take on L3 and L4 responsibilities that demand deeper expertise, design thinking, and architectural judgment.
At Cognizant, we are deliberately reskilling our global delivery teams to transition from traditional support roles to infrastructure architects. Engineers who once focused on ticket resolution are now being trained to design autonomous systems, build AI agents, manage the agent lifecycle, and create self-healing infrastructure.
AI skill building is a core focus for us. From July 2023 to the end of 2025, we have upskilled more than 330,000 associates on generative AI through over 1,000 learning programs, helping our teams build the technical depth and architectural mindset required for next-generation infrastructure roles.
Within our Cloud and Infrastructure Services organisation, we also run targeted incubation programs for emerging talent where experienced architects are paired with associates to create an AI-forward deployment force multiplier. These programs accelerate hands-on learning and help engineers develop the judgment, context, and systems thinking that differentiate humans in an AI-augmented world.
Our philosophy revolves around AI handling the toil, freeing humans to think.
AI runs the tasks; humans run the vision.