AI Agents Are Here. Are Enterprises Ready for the Risk?

In this episode, we dive into how AI is reshaping cybersecurity from automation to autonomous action.

Welcome to the CISO Forum Studio Talk Series. In this episode, we dive into how AI is reshaping cybersecurity from automation to autonomous action. As enterprises deploy AI agents across workflows, new risks emerge around accountability, control, and trust. Joining us is Rajiv Verma, Global CISO & VP at SRF Limited, who breaks down the real security risks of AI agents, where human oversight still matters, and how leaders can prevent autonomy from becoming a liability.

CIO&Leader: Welcome to the CISO Forum Studio Talk Series. Today, I am here with a very special guest to discuss the AI shift underway and how humans can navigate their roles more effectively.
Joining us is Mr Rajiv Verma, Global CISO and VP at SRF Limited. Welcome to the series, Rajiv. Glad you could join us.

Rajiv Verma: Thank you. Glad to be here.

CIO&Leader: Attackers are using AI to scale their attacks, and cybersecurity has become integral to every business. Enterprises are also deploying AI agents inside workflows. What new risks do AI agents introduce inside enterprises, especially as AI becomes more autonomous?

Rajiv Verma:
With autonomy comes risk. The first and biggest risk is anonymity. In large organisations, you could have thousands of agents running. Tracking who is doing what and which agent is performing which action becomes very difficult.

The second is accountability. If something goes wrong and an action is taken by an AI agent, who do you hold responsible?

The third is input injection from a security perspective. AI systems are susceptible to prompt or input injections. You can craft inputs that direct AI to perform actions it was never meant to perform. AI is designed with certain boundaries, but those boundaries can be altered through injection attacks.

When AI is taking actions, it becomes very hard for humans to intervene in real time. Under these circumstances, lateral movement by attackers inside a network can spread very quickly. Stopping it in real time is extremely difficult. These are some of the major risks AI introduces into organisations.

CIO&Leader: As security becomes more autonomous inside workflows, where should humans remain in control?

Rajiv Verma: Organisations that adopt AI will naturally come to rely on it. But human oversight in cybersecurity remains critical.

Control boundaries must be defined by humans. For example, access rights should be decided by humans, not AI. Irreversible actions are another area. Deleting a database is irreversible and can cause huge damage. Actions like that should always require human approval.

Regulatory and compliance approvals must also be handled by humans. If these are done incorrectly, the damage can be serious and long-lasting. These are decisive actions, and you cannot rely fully on automated systems to make them on behalf of the organisation.

CIO&Leader: How should enterprises test and audit AI behaviour in production?

Rajiv Verma: AI is typically tested at three levels. First, what are its boundaries? Second, what is its behaviour? Third, how does it respond to user input prompts?

Tools are available on the market to test these aspects. IBM AI360 and Microsoft’s AI tools allow you to test AI behaviour based on different inputs. When you provide prompts and observe how the AI reacts, the behaviour is logged and reported.

There is also a Google tool called What If. It generates multiple variations of prompts based on your original input. It works like a conditional chain. If not this, then what? It can generate hundreds or even thousands of prompts and then score how the AI behaves across scenarios.

There are other tools as well, but the market is still in its early stages. At least in our country, organisations are more focused on adopting AI quickly than on testing it thoroughly. It often feels easier to deploy AI and get work done than to invest time in validating its safety and reliability.

CIO&Leader: Any final thoughts on the role of human oversight as AI adoption accelerates?

Rajiv Verma: We are in a phase where organisations are optimising AI for business outcomes. But cybersecurity and production systems demand strong human oversight. It is important to stay involved in defining where AI can act and where humans must remain in control.

CIO&Leader:  Thank you for clarifying the risks and how enterprises can manage them. This was very insightful.

Rajiv Verma: Thank you.

Share on