“AI Is No Longer Knocking at the Door—It’s Already Inside, and Most Enterprises Have No Idea What It’s Doing”

As artificial intelligence accelerates from carefully controlled pilots into the beating heart of enterprise operations, the security frameworks built to protect businesses are struggling to keep pace. The rules have fundamentally changed — and most organisations haven’t caught up. Gone are the days when a firewall and periodic audits could contain risk. Today’s AI systems think, decide, and act autonomously across sprawling hybrid clouds, often in ways their own creators cannot fully explain or predict. In this incisive conversation, Mohan Veloo, CTO for APCJ at F5, delivers an urgent wake-up call to CIOs and CISOs navigating this treacherous new frontier — where the greatest threats aren’t break-ins, but behaviours.

Mohan Veloo
CTO, APCJ
F5



CIO & Leader: As AI moves from pilots to production, what fundamentally changes in the enterprise risk landscape?

Mohan Veloo: The biggest shift is that risk moves from being controlled and predictable to being dynamic and continuous. In traditional environments, security was largely perimeter-driven: you had defined systems, known users, and relatively stable application behavior. Risk could be assessed periodically because the environment itself was more static.

Today’s environments are highly distributed, spanning hybrid and multi-cloud architectures, with applications, APIs, and increasingly AI systems interacting in real time. The moment AI becomes part of live workflows, it starts operating on real data, making decisions, and triggering actions across systems. This fundamentally changes where risk shows up.

What we are seeing now is that risk is no longer confined to the model itself; it extends across data pipelines, APIs, and runtime interactions. AI systems don’t fail in obvious ways; they continue to function, but with behaviour that may be incorrect, manipulated, or non-compliant. That makes detection significantly harder compared to traditional systems.

So, the real shift is from securing static infrastructure to governing dynamic behavior. Security has to move from periodic checks to continuous, real-time enforcement at the point of interaction. Without that level of visibility and control, organisations are effectively scaling AI into environments they no longer fully understand.

CIO & Leader: Where are organisations most underprepared when it comes to securing AI in live, business-critical environments?

Mohan Veloo: The biggest gap is between how quickly organisations are deploying AI and how well they are able to govern it in production. Much of this comes from the fact that many are still approaching AI security with a traditional mindset, focusing on how systems are built rather than how they behave once deployed. They tend to focus heavily on the model: how it is trained, where the data comes from, and whether it is compliant. But once AI moves into production, the real risk shifts to how that system behaves in live environments.

What we’re seeing is a clear disconnect between deployment capability and operational control. Many organisations are able to deploy AI models at scale, but they lack the visibility to understand how those systems are interacting with APIs, data sources, and applications in real time. That’s where issues like prompt injection, data leakage, and unintended actions start to emerge: not in isolation, but as part of normal system behavior.

The second area of under-preparedness is around infrastructure and policy consistency. AI systems today operate across hybrid and multi-cloud environments, but security controls are often fragmented. This makes it difficult to enforce consistent policies or maintain a unified view of risk across environments.

So, the challenge is not just about new threats; it’s about the fact that organisations are trying to scale AI into environments where they don’t yet have continuous visibility, consistent policy enforcement, or runtime-level control. That’s the gap that needs to be addressed before AI can be deployed confidently in business-critical settings.

CIO & Leader: How do AI agents redefine identity, access control, and trust boundaries within enterprise architectures?

Mohan Veloo: AI agents fundamentally change how identity and access need to be understood. In traditional systems, access is tied to defined roles and predictable behavior. With agents, access becomes dynamic, driven by context, inputs, and real-time interactions across systems.

If an AI agent is connected to multiple systems and data sources, it can be influenced or manipulated through inputs to perform actions or retrieve information it was not explicitly intended to expose. In a banking scenario, for example, an agent designed to assist with customer queries could be prompted in a way that leads it to surface sensitive financial data or internal information, not because it was compromised in a traditional sense, but because the interaction itself was not adequately governed.

This is where identity and access control need to evolve. It is no longer sufficient to define permissions based on static roles. Organisations need to evaluate access based on context, intent, and the specific action being performed in real time.
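As a rough illustration of that shift from static roles to per-action evaluation, the sketch below grants or denies each request based on the action, resource, and context rather than the role alone. All names, fields, and rules here are hypothetical, not any specific product's API:

```python
from dataclasses import dataclass

@dataclass
class AgentRequest:
    """One concrete action an AI agent wants to perform right now."""
    agent_role: str   # static identity, e.g. "support-assistant"
    action: str       # e.g. "read", "export"
    resource: str     # e.g. "customer.profile", "customer.finances"
    channel: str      # where the request originated, e.g. "chat"

# Static RBAC answers only the first question: is this role ever allowed here?
ROLE_GRANTS = {
    "support-assistant": {"customer.profile", "order.status"},
}

def allow(req: AgentRequest) -> bool:
    """Evaluate the specific action in context, not just the role."""
    # 1. Role-based baseline (the traditional check).
    if req.resource not in ROLE_GRANTS.get(req.agent_role, set()):
        return False
    # 2. Contextual checks: the intended action and its origin both
    #    constrain what the role is allowed to do right now.
    if req.action != "read":      # assistants answer queries, never export
        return False
    if req.channel != "chat":     # unexpected origin -> deny
        return False
    return True

# A prompt-manipulated request fails even though the role itself is valid:
print(allow(AgentRequest("support-assistant", "export", "customer.profile", "chat")))  # False
print(allow(AgentRequest("support-assistant", "read", "customer.profile", "chat")))    # True
```

The point of the sketch is the second gate: the role check alone would have allowed the manipulated "export" request, because the role legitimately holds that resource.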

CIO & Leader: What new attack vectors, such as prompt injection or data leakage, are emerging at runtime, and how should enterprises mitigate them?

Mohan Veloo: What’s emerging now is a class of attacks that don’t target access but influence how AI systems behave during live interactions. In traditional environments, security was about preventing unauthorized access. With AI, the risk increasingly sits in how systems interpret inputs and execute actions in real time.

Prompt injection is a clear example of this. An attacker doesn’t need to break into the system; they can manipulate how the model responds, effectively changing its behavior. Similarly, data leakage is not always the result of incorrect permissions, but of how AI systems interact with enterprise data, APIs, and external services during runtime.

That is the important shift. Risk doesn’t emerge only when the system is built; it emerges when it is operating. AI systems often do not break in a traditional sense; they continue to function but behave incorrectly. They may produce unintended outputs, expose sensitive information, or trigger actions that fall outside policy.

Mitigating this requires a move away from static controls toward governing behavior in real time. Enterprises need visibility into how systems are interacting, and the ability to enforce policy at the point of execution. The challenge is no longer just securing access; it is ensuring that AI-driven actions remain within defined boundaries.
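A minimal sketch of what enforcing policy at the point of execution can look like, assuming a tool-calling agent: the model may propose any action, but a runtime gate decides what actually runs and what is allowed to leave. The allow-list, patterns, and function names are illustrative only, not a vendor API:

```python
# Gate 1 blocks disallowed tool calls however the prompt steered the model;
# gate 2 scans outbound responses for data that should not leave.
ALLOWED_TOOLS = {"lookup_order", "check_order_status"}
BLOCKED_PATTERNS = ("account_number", "ssn", "internal_only")

TOOLS = {
    "lookup_order": lambda order_id: f"order {order_id}: shipped",
}

def execute_proposed_action(tool: str, args: dict) -> str:
    # Gate 1: the proposed tool call must be on the allow-list.
    if tool not in ALLOWED_TOOLS:
        raise PermissionError(f"tool '{tool}' denied at runtime")
    result = str(TOOLS[tool](**args))
    # Gate 2: redact responses containing restricted content.
    if any(p in result.lower() for p in BLOCKED_PATTERNS):
        return "[redacted by runtime policy]"
    return result

print(execute_proposed_action("lookup_order", {"order_id": "A1"}))  # order A1: shipped
```

In a real deployment the allow-list and redaction rules would come from central policy, but the placement is the point: the check runs between the model's proposal and the action, not at design time.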

CIO & Leader: Why is runtime visibility becoming critical for governing AI systems interacting with APIs, data, and applications?

Mohan Veloo: The core challenge with AI systems is that they don’t operate in isolation; they operate through continuous interaction. They are constantly exchanging data, calling APIs, and making decisions in real time. That means the point of risk is not the system itself, but the interaction layer.

In traditional systems, you could trace decisions after the fact. With AI, that becomes much harder. Decisions are probabilistic, context-driven, and often influenced by multiple inputs across distributed environments. If you don’t have visibility at the point where those interactions happen, you lose the ability to explain, govern, or control the outcome.

This is why runtime visibility becomes critical. It allows organisations to see how inputs are being interpreted, how data is being accessed, and how actions are being executed across systems. Without that, governance becomes theoretical: you may have policies defined, but you cannot enforce or validate them in real time.

The shift is from monitoring activity to understanding behavior. Runtime visibility is what enables that. It gives enterprises the ability to move from reactive security to real-time control, which is essential when AI systems are operating across APIs, applications, and data flows at scale.
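One common way to get that interaction-layer visibility is to instrument every outbound call an AI component makes, recording which system it touched, with what inputs, and what came back. The decorator below is a simplified sketch under that assumption; in practice the events would stream to an observability pipeline rather than an in-memory list:

```python
import time
from functools import wraps

INTERACTION_LOG = []  # stand-in for a real observability pipeline

def observed(system: str):
    """Record every call a component makes at the interaction layer."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            event = {"ts": time.time(), "system": system,
                     "call": fn.__name__, "inputs": repr((args, kwargs))}
            try:
                result = fn(*args, **kwargs)
                event["outcome"] = "ok"
                return result
            except Exception as exc:
                event["outcome"] = f"error: {exc}"
                raise
            finally:
                INTERACTION_LOG.append(event)  # logged on success or failure
        return wrapper
    return decorator

@observed(system="crm-api")
def fetch_customer(cid: str) -> dict:
    # Hypothetical backend call an AI agent might make.
    return {"id": cid, "tier": "gold"}

fetch_customer("c-42")
print(INTERACTION_LOG[-1]["system"])  # crm-api
```

The design choice worth noting is the `finally` block: the interaction is recorded whether the call succeeds or fails, so failed or blocked actions remain visible rather than silently disappearing from the audit trail.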

CIO & Leader: How should enterprises rethink policy enforcement when AI systems operate autonomously across distributed environments?

Mohan Veloo: Traditional policies were designed to be defined once and enforced across relatively stable systems. That model breaks down when AI systems are operating autonomously across distributed, hybrid, and multi-cloud environments.

What changes with AI is that decision-making moves into the runtime. These systems are continuously interacting with data, APIs, and applications, which means policy can no longer be static or perimeter-based. It has to be evaluated and enforced dynamically at the point of interaction, where actions are actually being executed.

This also creates a consistency challenge. Enterprises may have policies defined centrally, but enforcement is often fragmented across environments. As a result, the same AI system can behave differently depending on where it is running, which introduces gaps in governance and control.

Policy needs to move with the workload, interpret context in real time, and enforce boundaries based on intent, not just identity. That is what allows organisations to scale AI while maintaining control in increasingly complex environments.
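The idea of policy moving with the workload can be sketched as policy expressed as portable data, evaluated by identical logic wherever the agent happens to run, so the same request gets the same answer on every cloud. The schema and names below are purely illustrative:

```python
# A policy expressed as portable data; the evaluation logic travels with
# the workload and behaves identically in every environment.
POLICY = {
    "intent_allowed": {"answer_customer_query"},  # boundary based on intent
    "max_records_per_call": 10,
    "environments": {"aws", "azure", "on-prem"},  # same rules everywhere
}

def enforce(policy: dict, *, intent: str, records: int, env: str) -> bool:
    """Identical evaluation regardless of where it executes."""
    if env not in policy["environments"]:
        return False
    if intent not in policy["intent_allowed"]:
        return False
    return records <= policy["max_records_per_call"]

# The same request yields the same decision on any cloud:
same = enforce(POLICY, intent="answer_customer_query", records=3, env="aws") \
    == enforce(POLICY, intent="answer_customer_query", records=3, env="azure")
print(same)  # True
```

This is only a toy evaluator, but it shows the contrast with fragmented per-environment controls: the boundary is defined by intent and context in one place, and the workload carries it rather than inheriting whatever the local environment enforces.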

CIO & Leader: What are the top three priorities for CIOs and CISOs to securely scale AI from experimentation to enterprise-wide deployment?

Mohan Veloo: Firstly, CIOs and CISOs need to shift the focus from model security to securing AI at runtime, where systems are interacting with data, APIs, and business workflows. That’s where real enterprise risk emerges.

Secondly, they need to embed governance and explainability from the outset, so that AI-driven decisions can be understood, validated, and held accountable in real time. Without real-time visibility into how these systems behave and how data is being accessed, governance becomes very difficult to enforce.

And thirdly, they need to design for consistency across distributed environments. AI systems today operate across hybrid and multi-cloud architectures, and without a unified approach to policy enforcement, organisations will struggle to scale without introducing fragmentation and risk.
