How Enterprises Can Build Trust as AI Agents Begin to Decide

Why guardrails, human judgement, and domain expertise will shape the next decade of work.

AI has moved beyond simply helping us. Now, it is making decisions, taking action, and managing workflows in businesses. Almost every week, new tools and agents appear, offering more speed, independence, and scale than ever before. As more companies use these technologies, a quiet worry is growing in both boardrooms and classrooms: Can we trust what we are creating? And what role do people still play?

Speaking with CIO&Leader, Dhrubabrata Ghosh Dastidar, Managing Director at Protiviti India, says the future of AI depends not just on its abilities but on the trust, rules, and human judgement we set for it.


“Technology will scale faster than people are ready to adopt it. That’s why governance and humans-in-the-loop will decide how far AI can really go.” ~ Dhrubabrata Ghosh Dastidar


The Flood of AI Tools and the Question of Trust

New AI tools, agents, and apps seem to appear every week. But not all AI is the same. Dastidar points out an important difference between the foundational models and the apps built on top of them. While companies will use these large models, the bigger risk comes from unregulated AI apps and agents that connect to business data.

Protecting data privacy, preventing exposure of personal information, tracking data sources, and checking for bias should be standard practice. Without these safeguards, companies risk losing accountability as they automate more tasks. Certification systems and new standards will help show which AI apps can be trusted in regulated industries.

“The market is moving from ‘Can AI work?’ to ‘Can AI be trusted, checked, and properly managed?’”

As agent-based AI becomes more common, we need to talk seriously about how much freedom these systems should have. Technology can grow quickly, but people will not use systems they do not understand or trust.

Dastidar suggests a balanced approach: AI agents can suggest actions and carry them out, but people should make the final decisions. This ‘human-in-the-loop’ setup helps reduce fear, keeps people responsible, and encourages more people to use AI. Even in important areas like defence, humans still oversee what AI does.

The next stage of AI use will depend more on how it is managed than on its technical features.

Coding Will Change. Judgement Will Matter More.

For young engineers worried about being replaced, the message is nuanced. Coding remains foundational, but its nature is changing. AI agents will increasingly write error-free code. The premium will shift to people who can understand systems and interpret business intent. The future will favour those who can connect coding skills with real-world context and results. Developing talent, then, is less about technical skills alone and more about good judgement and systems thinking.

The workforce will split between jobs done by automation and roles where people guide and manage the process.

The Rise of Non-Technical Power Roles

As enterprises push toward “touchless” processes, such as automated finance workflows or procurement decisions, domain experts become indispensable. AI needs a business context to make business sense. Subject-matter experts in finance, operations, compliance, and risk will define how AI agents are trained, constrained, and deployed.

This signals a shift in the job market: non-technical skills will amplify the impact of AI rather than be replaced by it.

As AI spreads, there will be more need for experts in specific fields, not just data scientists.

GPUs, Data Centres, and the Hard Reality of Infrastructure

AI’s potential depends on real-world infrastructure. Data centres and GPU clusters are now the backbone of the industry. India’s push to expand data centres and offer incentives shows a clear understanding: to have independent AI, a country needs its own computing power. How quickly India builds its infrastructure will decide how soon it can shift from using AI to creating it.

Leading in AI will depend as much on computing power as on skilled people.

The Human Advantage in an AI-First World

AI will accelerate. Tools will multiply. Agents will grow more autonomous. But human relevance will not disappear. It will migrate. The future workforce will be defined not by who writes the most code, but by who can govern systems, make design decisions, and apply intelligence to real-world outcomes.

The real challenge is not whether AI can replace tasks. The question is whether enterprises can redesign trust, skills, and accountability quickly enough to keep humans meaningfully in the loop.
