“AI agents will become core, but governance will decide who scales.”— Ajay Ajmera, CIO, Rockman Industries  

Ajay Ajmera, CIO of Rockman Industries, outlines a governance-first approach to AI adoption, where agents are deployed in controlled pilots across key functions such as IT, finance, procurement, cybersecurity, and manufacturing.


Rockman Industries, a leading Indian auto‑components manufacturer and part of the Hero Group, is taking a measured, governance-first approach to AI adoption. Ajay Ajmera, CIO of the company, explains how AI agents are being piloted across IT, finance, procurement, cybersecurity, and manufacturing, with a strong focus on trust, explainability, and operational risk. 

Unlike organisations rushing toward full autonomy, Rockman prioritises secure integration, clearly defined risk boundaries, and embedding accountability directly into AI systems. Ajmera outlines how AI is evolving from an experimental tool to a disciplined digital execution layer that complements human judgment and strategic oversight.

CIO&Leader: Where are you today on AI agents? Are you still experimenting, running pilots, or operating AI agents in live production? Which business functions are using them today? 

Ajay Ajmera: We are currently operating in a structured pilot phase. While the broader industry conversation has sprinted toward full autonomy, we’ve intentionally chosen a more measured approach. Today, our AI agents are in controlled pilots across critical functions: IT service management, finance analytics, procurement insights, and cybersecurity monitoring. We’re also testing predictive signals and quality pattern detection in our manufacturing operations. 

“Right now, it’s not about handing over decision-making authority. It’s about evaluating behavior, accuracy, and integration complexity—and earning the trust of our teams.” 

Right now, it’s not about handing over decision-making authority. It’s about evaluating behavior, accuracy, and integration complexity, and ultimately earning the trust of our teams. We view these pilots not merely as technical proofs of concept, but as governance rehearsals. We are stress-testing our control frameworks and risk thresholds just as rigorously as the AI itself. We’re piloting with clear intent to achieve secure, sustainable production at scale. 

CIO&Leader: Where do you draw the line on autonomy? Which decisions can AI agents make on their own today, and which decisions are strictly off-limits for machines? 

Ajay Ajmera: I look at AI autonomy as a risk and accountability issue, not just a technological capability. Our framework for delegating decisions comes down to impact and reversibility. Today, agents act independently only in structured environments where their actions are measurable and correctable, such as triggering predefined workflows, prioritising cases, or executing small-value transactions. 

“If an error carries high or irreversible consequences, a human must retain final accountability. AI manages scale and volume beautifully, but true judgment and ethical context remain uniquely human.” 
 

However, decisions that carry material financial, regulatory, or human consequences remain strictly off-limits. Hiring, terminations, strategic investments, and vendor negotiations require nuanced judgment that machines simply don’t possess. Our operating principle is straightforward: if an error carries high or irreversible consequences, a human must retain final accountability. AI manages scale and volume beautifully, but true judgment and ethical context remain uniquely human. Autonomy will expand in our organisation, but only as explainability and governance mature alongside it. 
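The delegation rule Ajmera describes, where impact and reversibility decide whether an agent may act, can be sketched in a few lines. This is a minimal illustration with hypothetical thresholds and field names, not Rockman's actual policy engine:

```python
from dataclasses import dataclass

# Hypothetical threshold for illustration only; a real value would come
# from the organisation's documented risk-appetite policy.
AUTO_APPROVE_LIMIT = 50_000  # max exposure an agent may carry unaided

@dataclass
class Decision:
    description: str
    financial_impact: float  # estimated exposure in currency units
    reversible: bool         # can the action be rapidly undone?
    human_only: bool         # hiring, regulatory, strategic matters

def may_delegate(decision: Decision) -> bool:
    """True only when the decision is low-impact, reversible,
    and outside the strictly human-led domains."""
    if decision.human_only:
        return False  # hiring, terminations, filings stay with people
    if not decision.reversible:
        return False  # irreversible errors need human accountability
    return decision.financial_impact <= AUTO_APPROVE_LIMIT

# A small, correctable workflow trigger is delegable...
print(may_delegate(Decision("reorder consumables", 12_000, True, False)))    # True
# ...but an irreversible action of the same size defaults to a human.
print(may_delegate(Decision("scrap production batch", 12_000, False, False)))  # False
```

The point of the sketch is that the boundary is declarative: autonomy expands by widening the policy, not by rewriting the agent.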

CIO&Leader: What level of risk have you delegated to AI? In practical terms, what is the maximum financial, operational, or reputational impact you are comfortable allowing an AI agent to carry? 

Ajay Ajmera: We take a highly structured approach to risk delegation. At our current maturity level, we cap AI autonomy at low-to-moderate risk within tightly governed, reversible parameters. From a financial standpoint, AI operates strictly within predefined transaction thresholds; any material exposure automatically defaults to human authorisation. 

Operationally, we leverage AI to optimise workflows and prioritise cases, but we firmly restrict it from actions that could disrupt core production or alter strategic commitments. We are most conservative regarding reputational risk; public-facing decisions and regulatory submissions remain entirely human-driven. Our guiding heuristic is straightforward: if an error cannot be rapidly reversed—financially, operationally, or reputationally—it is not delegated to a machine. While these boundaries will naturally expand as our governance frameworks mature, today our delegation is cautious and continuously monitored. Ultimately, autonomy without a defined risk appetite isn’t innovation; it’s exposure. 

CIO&Leader: If asked, can you trace and explain how autonomous decisions were made and who approved the rules behind them? Could you explain your AI decisions to a regulator tomorrow? 

Ajay Ajmera: Our position is uncompromising: if an AI’s decisions cannot be explained, it does not get deployed. Even in our current pilot phase, every agent operates under strict version control, documented rules, and clear business ownership. We maintain comprehensive decision logs, ensuring we can trace not only the output of the AI, but the underlying policy and the specific human authorisation behind it. Ultimately, we treat AI decisions with the exact same level of auditability as our financial decisions. 

If a regulator walks in tomorrow, we are fully prepared to articulate the agent’s purpose, its data supply chain, the governing decision logic, and our oversight mechanisms. While no system is flawless, explainability is a foundational design principle for us. The fundamental shift for modern CIOs is realising that governance must be embedded directly into the AI lifecycle from day one. Transparency is no longer optional; it is the fundamental license to operate. 

CIO&Leader: As AI agents connect to ERP, CRM, core banking, or manufacturing systems, what safeguards ensure control, visibility, and auditability? How do you integrate agents into core systems without losing control? 

Ajay Ajmera: The moment an AI agent connects to platforms like ERP or core banking, it ceases to be an experiment and becomes a foundational part of your operational backbone. For us, secure integration is rooted in architectural control. AI agents never get unrestricted access. 

We essentially manage them as digital employees operating under robust governance layers. This means enforcing strict role-based access controls (RBAC) and ensuring they interact solely via controlled APIs. We implement hard financial caps, mandate segregation of duties, and maintain real-time monitoring dashboards with immediate override capabilities. In manufacturing, agents might trigger alerts or suggest optimisations, but they cannot independently alter production parameters beyond tightly approved tolerance bands. Every action is fully traceable—we know who configured it, what rule it followed, and what data it used. You don’t lose control by integrating AI into core systems; you only lose control when governance is treated as an afterthought. 
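The "digital employee" governance layer above, role-based permissions, hard financial caps, and a full audit trail in front of every core-system call, might look roughly like this. Role names, caps, and actions here are invented for illustration, not Rockman's configuration:

```python
from datetime import datetime, timezone

# Illustrative guardrail layer: agents reach core systems only through
# this gate, never directly. All identifiers below are hypothetical.
ROLE_PERMISSIONS = {
    "procurement_agent": {"create_po"},
    "itsm_agent": {"open_ticket", "escalate_ticket"},
}
TRANSACTION_CAPS = {"create_po": 25_000}  # hard per-action financial caps

audit_log = []  # every attempt recorded: who, what rule, what outcome

def execute(agent_role: str, action: str, amount: float = 0.0) -> bool:
    """Allow the call only if the role permits the action and the
    amount is within its cap; log the attempt either way."""
    in_role = action in ROLE_PERMISSIONS.get(agent_role, set())
    within_cap = amount <= TRANSACTION_CAPS.get(action, float("inf"))
    approved = in_role and within_cap
    audit_log.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent_role,
        "action": action,
        "amount": amount,
        "approved": approved,
    })
    return approved

print(execute("procurement_agent", "create_po", 10_000))  # True: in role, under cap
print(execute("procurement_agent", "create_po", 90_000))  # False: breaches the cap
print(execute("itsm_agent", "create_po", 1_000))          # False: outside the role
```

Because denials are enforced and logged at the gateway rather than inside each agent, traceability ("who configured it, what rule it followed") comes for free with every call.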

CIO&Leader: What matters most in your model today: monitoring, human override, policy rules, or strict limits on what agents can do? How are you redefining ‘control’ as machines start acting? 

Ajay Ajmera: While all four elements are essential, I prioritise policy boundaries and strict operational limits above all else. Overrides and monitoring are necessary, but they are fundamentally reactive. True security lies in proactive design—architecting the system so the agent simply cannot operate outside clearly defined guardrails. We establish those absolute boundaries first, and only then do we implement continuous monitoring and real-time override capabilities. 

We are fundamentally redefining ‘control’ as machines begin to act. Historically, control meant manual approvals at every procedural choke point. Today, control is architectural. It is embedded in the system through segregation of duties, policy-driven automation, and transaction caps. While the human ‘kill-switch’ remains vital, our objective is to elevate supervision from an operational task to a strategic one. Control is shifting from being procedural to being systemic. That is the true evolution for enterprise leadership today: transitioning from managing processes to engineering accountability directly into the system. 

CIO&Leader: Which decisions do you realistically see machines taking over in your industry, and which will always require human judgment? What will AI agents be trusted to decide in three years? 

Ajay Ajmera: Looking ahead three years, I anticipate AI agents confidently owning data-heavy, pattern-driven decisions. In manufacturing and enterprise operations, this translates to autonomous inventory optimisation, predictive maintenance scheduling, fraud pattern detection, and dynamic pricing within strict thresholds. These environments are rich in measurable variables and perfectly suited for continuous algorithmic optimisation. 

“AI will function as a digital execution layer, trusted to manage massive operational complexity at scale while humans manage direction and values.” 

However, decisions requiring contextual nuance, ethical weighting, and long-term strategic foresight will remain strictly human-led. Machines simply lack the situational awareness and accountability required for capital allocation, regulatory interpretation, talent management, or crisis response. In the near future, AI will effectively function as a digital execution layer, trusted to manage massive operational complexity at scale while humans manage direction and values. 

CIO&Leader: Will AI agents become a core execution layer in your enterprise, or will scale stall due to regulation, architecture limits, or cost pressures? Five years out, what’s your honest view? 

Ajay Ajmera: If I’m being completely honest, AI agents will inevitably become a core execution layer—but through a measured, incremental evolution rather than an overnight revolution. The technology itself will not be the limiting factor. The true constraints will be regulatory frameworks, data architecture readiness, the cost of large-scale inference, and internal trust. Regulation will rightfully slow reckless adoption, which is a net positive for the industry. Furthermore, legacy systems that aren’t API-ready will fundamentally struggle to integrate these autonomous layers. 

Looking five years out, I don’t foresee scale stalling; rather, I see it maturing into highly disciplined adoption. AI will orchestrate a massive portion of operational execution, freeing human leaders to focus entirely on strategy and high-stakes decisions. The ultimate winners won’t be measured by the sheer volume of deployed agents. They will be the organisations that embedded observability and accountability into their foundations from day one. AI agents will become core to the enterprise, but governance will dictate who successfully scales and who stops. 
