India’s new governance guidelines are shaping how organisations are expected to manage AI, and where accountability now sits.

At 5:18 AM, while the city was still asleep, the CIO of a large Indian enterprise glanced at an operations dashboard that suddenly didn’t make sense.
Overnight, one of the company’s core AI systems, responsible for thousands of automated decisions every hour, had shifted its behaviour. Approval rates changed. Routing priorities adjusted. Risk scores moved in unexpected directions.
There was no outage.
No alert. No obvious failure.
Yet the CIO knew something was wrong. In AI-driven operations, the most serious risks rarely arrive with alarms. They surface quietly.
Within minutes, data science and security teams were on the call.
“What changed?” the CIO asked.
“The model updated itself,” an engineer replied. “It may have discovered a new optimisation path.”
“Is this drift? Bias? Or something worse?”
“We don’t know yet.”
Then came the question that increasingly defines the modern CIO’s role:
“Under India’s AI governance guidelines, does this qualify as a reportable incident?”
The room went silent. Someone pulled up the guidelines. Another checked the DPDP Act. The CISO revisited CERT-In’s six-hour reporting rule. A data analyst flagged the requirement for human oversight.
No one had a clear answer. But everyone understood the stakes.
This is the new reality for Indian enterprises. AI now runs core operations, and AI governance is becoming central to accountability. But the ambiguity is intentional.
The government has deliberately avoided defining a rigid concept of “AI incident reporting”. AI risk varies by context: what is critical in banking may be routine in retail. A one-size-fits-all mandate would either overwhelm regulators with noise or discourage innovation through over-compliance.
At the same time, silence carries its own risk. Under-reporting is increasingly seen as a governance failure, not a technical oversight. For CIOs, the message is clear: AI may be automated, but accountability is not.
7 Sutras of AI Governance Guidelines (Source: India AI Governance Guidelines)

| # | Sutra | What it means |
|---|-------|---------------|
| 01 | Trust is the Foundation | Without trust, innovation and adoption will stagnate. |
| 02 | People First | Human-centric design, human oversight, and human empowerment. |
| 03 | Innovation over Restraint | All other things being equal, responsible innovation should be prioritised over cautionary restraint. |
| 04 | Fairness & Equity | Promote inclusive development and avoid discrimination. |
| 05 | Accountability | Clear allocation of responsibility and enforcement of regulations. |
| 06 | Understandable by Design | Provide disclosures and explanations that can be understood by the intended user and regulators. |
| 07 | Safety, Resilience & Sustainability | Safe, secure, and robust systems that can withstand systemic shocks and are environmentally sustainable. |
The New Governance Roadmap: Why It Should Be Taken Seriously
When India’s Ministry of Electronics and Information Technology (MeitY) recently unveiled the AI Governance Guidelines, it sent a clear signal: we know AI has risks, but we are not going to regulate blindly.
These guidelines function as quasi-regulatory signals. They do not carry penalties on day one, but they:
- Set expectations for responsible behaviour
- Shape how future laws will be interpreted
- Become reference points during audits, disputes, or investigations
If something goes wrong with an AI system, the first question regulators or courts are likely to ask is, ‘Did the company follow published government guidance?’
Ignoring these guidelines increases legal and reputational exposure, even in the absence of a dedicated AI law.
For CIOs, CTOs, and CISOs, this is not just another policy PDF. It is a new operating context. The Guidelines assume that AI will be deeply embedded in decision-making, infrastructure, and citizen-facing services and then ask:
- Who is accountable when things go wrong?
- How should risk and liability be graded across the AI value chain?
- What institutional structures and technical controls are expected inside the enterprise?
MeitY’s Guidelines prioritise how AI is used over how it is built, encouraging graded oversight based on impact rather than one-size-fits-all rules.
Existing legal framework
- DPDP Act (2023): Governs data use, consent, purpose limitation, and accuracy—forming the backbone of privacy and data governance for AI.
- IT Act & IT Rules (2021): Apply to accountability, cybersecurity, content moderation, and online harms, including risks from AI systems.
- Copyright & Consumer Protection Laws: Help address intellectual property issues and deceptive or unfair practices involving AI outputs.
What the guidelines actually expect from enterprises
Despite the absence of explicit penalties, the guidelines are unambiguous on one point: accountability rests with the organization deploying AI, not the algorithm, the vendor, or the data science team.
This marks a subtle but important shift. AI is no longer treated as an experimental technology or a specialist tool. It is positioned as an enterprise system, subject to the same expectations of oversight, control, and responsibility as core IT, cybersecurity, or financial infrastructure.
For CIOs, this reframes the question from “Is our AI compliant?” to “Can we defend our AI decisions if asked?”
The guidelines expect Indian enterprises to do four things:
- Treat AI as strategic infrastructure, not a side project.
- Classify AI systems by risk, and scale controls (testing, documentation, oversight) with that risk.
- Build formal governance structures: steering committees, ethics boards, AI Ops/MLOps, and integrated risk and grievance processes.
- Embed trust, transparency and inclusion into the design of AI products, especially those affecting livelihoods or public discourse.
Accountability is explicit, even without penalties
The guidelines make it clear that automated decision-making does not dilute responsibility. If an AI system produces biased outcomes, causes harm, or behaves unpredictably, the organisation that deployed it is expected to explain:
- Why the system was introduced
- What decisions it influences or automates
- What safeguards and oversight mechanisms exist
- Who is responsible for monitoring and intervention
The absence of a formal AI law does not create a liability vacuum. Instead, the guidelines establish a standard of reasonable care: one that regulators, auditors, and courts can reference when assessing enterprise behaviour.
For CIOs, this effectively removes the defence of “the model did it on its own.”
Risk-based thinking replaces one-size-fits-all control
Rather than prescribing uniform controls for all AI systems, the guidelines implicitly push organisations toward risk-based classification.
Not all AI is treated equally.
- A recommendation engine optimising user experience carries limited risk
- An AI system approving loans, screening candidates, or determining eligibility for benefits carries materially higher risk
The expectation is not that CIOs govern every model identically, but that they know which systems matter most and apply proportionate controls.
This approach reflects India’s broader regulatory philosophy: governance should follow impact, not technical complexity.
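As an illustration only, the sketch below shows what impact-based tiering with proportionate controls might look like in code. The tier names, classification rules, and control lists are assumptions for this example, not definitions taken from the Guidelines.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

# Hypothetical control sets that scale with risk; an enterprise would define
# its own, informed by the Guidelines and its sector regulator.
CONTROLS_BY_TIER = {
    RiskTier.LOW: ["change log", "annual review"],
    RiskTier.MEDIUM: ["change log", "quarterly review", "bias spot-checks"],
    RiskTier.HIGH: ["change log", "pre-release testing", "bias and fairness testing",
                    "human-in-the-loop sign-off", "continuous monitoring", "full audit trail"],
}

@dataclass
class AISystem:
    name: str
    affects_livelihoods: bool   # lending, hiring, benefits eligibility, pricing
    citizen_facing: bool
    fully_automated: bool       # no human review before the decision takes effect

def classify(system: AISystem) -> RiskTier:
    """Assign a tier based on impact, not technical complexity."""
    if system.affects_livelihoods and system.fully_automated:
        return RiskTier.HIGH
    if system.affects_livelihoods or system.citizen_facing:
        return RiskTier.MEDIUM
    return RiskTier.LOW

loan_scorer = AISystem("loan-approval-v3", affects_livelihoods=True,
                       citizen_facing=True, fully_automated=True)
recommender = AISystem("homepage-recommender", affects_livelihoods=False,
                       citizen_facing=True, fully_automated=True)
for s in (loan_scorer, recommender):
    tier = classify(s)
    print(s.name, tier.value, CONTROLS_BY_TIER[tier])
```

The same two examples from above fall out as expected: the loan model lands in the heavy-control tier, the recommender does not.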
Human oversight is no longer optional for high-impact AI
One of the clearest signals in the guidelines is the emphasis on human-in-the-loop mechanisms, particularly for systems that affect rights, access, or outcomes.
In practice, this means:
- AI-driven decisions must be reviewable
- Escalation paths must exist
- Overrides must be possible and documented
For CIOs, this is not merely a design principle—it is an operating model decision. AI systems that run unattended in high-impact environments increasingly represent a governance risk, not an efficiency gain.
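One way to operationalise those three requirements is to wrap high-impact model calls in a review layer that logs every decision and override. The sketch below is a minimal illustration; the threshold, the file-based log, and the function names are assumptions, not a prescribed design.

```python
import json
import time
from typing import Callable, Dict

AUDIT_LOG = "decision_audit.jsonl"   # in practice, an append-only store

def log_event(event: Dict) -> None:
    """Record every decision and override so it stays reviewable later."""
    event["timestamp"] = time.time()
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(event) + "\n")

def decide_with_oversight(case_id: str,
                          features: Dict,
                          model_score: Callable[[Dict], float],
                          review_threshold: float = 0.6) -> Dict:
    """Auto-approve only confident cases; everything else escalates to a human."""
    score = model_score(features)
    decision = {"case_id": case_id, "score": score, "source": "model"}
    decision["status"] = "auto_approved" if score >= review_threshold else "escalated_to_human"
    log_event(decision)
    return decision

def record_override(case_id: str, reviewer: str, outcome: str, reason: str) -> None:
    """Overrides must be possible and documented."""
    log_event({"case_id": case_id, "source": "human_override",
               "reviewer": reviewer, "outcome": outcome, "reason": reason})

# Example with a stand-in scoring function; a real model would sit behind the same interface.
result = decide_with_oversight("LN-1042", {"income": 54000}, lambda f: 0.42)
if result["status"] == "escalated_to_human":
    record_override("LN-1042", reviewer="credit-officer-17",
                    outcome="approved", reason="verified supplementary income documents")
```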
Transparency over perfection
The guidelines do not demand full mathematical explainability of every model. What they expect is reasonable transparency.
Can the organisation explain, at a high level:
- What the system is designed to do
- What data it relies on
- Why a particular decision or output occurred
Black-box systems with no internal understanding are strongly discouraged, especially in regulated or citizen-facing contexts. The emphasis is on defensibility, not academic purity.
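In practice, "reasonable transparency" can be as simple as attaching a plain-language explanation record to each high-impact output. The schema and reason codes below are hypothetical, purely to show the shape of the answer.

```python
from dataclasses import dataclass, asdict
from typing import List

@dataclass
class DecisionExplanation:
    """Plain-language answers to the three questions above, captured per decision."""
    system_purpose: str      # what the system is designed to do
    data_used: List[str]     # what data it relied on for this case
    reason_codes: List[str]  # why this particular output occurred

explanation = DecisionExplanation(
    system_purpose="Rank loan applications for manual underwriting",
    data_used=["declared income", "repayment history", "bureau score"],
    reason_codes=["income below product threshold", "two missed EMIs in the last 12 months"],
)
# Stored alongside the decision, this record supports audits and grievance handling.
print(asdict(explanation))
```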
Documentation becomes a control, not paperwork
One of the least visible but most consequential implications of the guidelines is the importance of documentation.
Model change logs, data lineage records, decision rationales, and risk assessments are no longer just internal best practices. They are increasingly viewed as evidence of governance maturity.
In the absence of such records, organisations may struggle to demonstrate that they exercised appropriate oversight, even if no harm was intended.
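A minimal sketch of what such evidence could look like, assuming an internal register keyed by model version; the fields and sample values are illustrative, not a schema prescribed by the Guidelines.

```python
from dataclasses import dataclass
from datetime import date
from typing import List

@dataclass(frozen=True)
class ModelChangeRecord:
    model_name: str
    version: str
    change_date: date
    change_summary: str          # what changed and why
    training_data_lineage: str   # pointer to datasets and their provenance
    risk_assessment_ref: str     # link to the pre-deployment risk review
    approved_by: str             # accountable owner, not just the data science team

REGISTER: List[ModelChangeRecord] = []

# Hypothetical entry showing the level of detail an auditor might expect.
REGISTER.append(ModelChangeRecord(
    model_name="credit-risk-scorer",
    version="2.4.1",
    change_date=date(2025, 11, 3),
    change_summary="Retrained on Q3 repayment data; recalibrated score cut-offs",
    training_data_lineage="lineage/credit/2025-q3-manifest.json",
    risk_assessment_ref="RISK-2211",
    approved_by="Head of Retail Risk",
))
```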
Where this leaves the CIO
Taken together, the AI Governance Guidelines do not impose new technology requirements. They impose new expectations of leadership.
CIOs are now expected to:
- Classify AI systems by risk and impact
- Ensure clear ownership and accountability
- Embed human oversight into critical workflows
- Prepare the organisation to explain its AI decisions
The guidelines do not ask CIOs to slow down AI adoption. They ask them to make AI governable at scale.
CIO Action Agenda

| Action | What it involves |
|--------|------------------|
| Designate AI as Strategic Infra | Treat AI like cloud or cybersecurity; long-term investment, not pilots |
| Determine AI Risk Levels | Classify systems from low to high risk; controls scale with risk |
| Deploy Governance Architecture | Steering committee; ethics & responsible AI board; AI Ops + MLOps; risk & grievance pathways |
| Design for Trust | Explainability; transparency; inclusion & public-impact sensitivity |
How that plays out looks very different in a bank branch, a newsroom, a GPU farm, or a paint factory. But the underlying mandate is the same: AI must be powerful, yes, but also explainable, auditable, and recoverable when it goes wrong.
How Kotak Balances Speed, Voice, and Trust in Digital Banking
“In banking, AI can drive speed and inclusion, but governance is what makes it deployable at scale.”
— Shankar Shukla, VP–Head, IT Infrastructure & Technology, Kotak Mahindra Bank
Shankar Shukla, who heads IT Infrastructure and Technology at Kotak Mahindra Bank, describes a landscape where AI has become the face and spine of digital banking. With around 3,000 branches and a highly digital footprint, Kotak uses voice bots and AI agents to handle routine interactions such as balance enquiries, EMI timelines, and self-service workflows, while human agents step in only at the final decision stage. That model is almost a live-action version of the Guidelines' People First and Understandable by Design principles: let machines handle the repeatable work, but ensure customers still get clarity and human intervention where it matters.
The same logic applies to inclusion. For customers in tier-3 and tier-4 towns, particularly seniors, Shukla sees voice, not chat, as the gateway to digital finance. That aligns with the national push toward multilingual, accessible AI built on platforms like Bhashini. Done right, AI becomes a way of extending financial services, not excluding those who aren't app-native.
Yet governance is never far away. Shukla is acutely aware that BFSI is a high-liability domain. Personally identifiable information is everywhere; fraud is endemic; trust is fragile. His answer leans heavily into the Guidelines' Trust is the Foundation sutra: verify the caller's identity, surface trusted entity names, announce known identifiers such as the CRN, and ensure every AI workflow has cleared GRC and risk review, including RBI outsourcing norms. It's no surprise he describes his stance as "80–20" in favour of AI: 80% for speed and scale, 20% reserved for security, compliance and governance.
How Newsrooms Are Defending Trust Against Deepfakes
“In the media, AI is no longer just a productivity tool, it’s part of the trust infrastructure that decides what is real and what is not.”
— Ninad Raje, Group CIO, Times Group
AI isn’t just summarising text or recommending articles; it’s now part of the integrity layer that separates real from fake.
For Ninad Raje, Group CIO at the Times Group, the Guidelines are a badly needed brake on what he calls AI “going out of the roof”. AI has already transformed newsrooms: analytics are faster, insights are richer, and the organisation is finally able to “harness the power of data” in ways that were simply not possible even a few years ago. But that same capability has also supercharged deepfakes and synthetic media, and this is where governance hits operational reality.
Before a clip airs or a breaking story goes live, Raje’s teams now rely on AI-driven tools to detect manipulation regardless of whether the video is ten seconds or an hour long. That is exactly the sort of techno-legal countermeasure envisaged in the Guidelines: content provenance, watermarking, automated verification, and human editorial oversight to prevent harmful deepfakes from becoming mainstream.
Raje also surfaces a tension that boards will increasingly confront: as governance sharpens, experimentation slows. If every speculative AI project carries possible regulatory or reputational risk, does the organization still encourage bold pilots? The Guidelines try to resolve this by explicitly backing Innovation over Restraint, but the cultural shift is real.
His take on jobs is a useful compass for leadership: AI will augment people, not replace them, but only for those willing to upskill and use it. The risk is not “AI taking jobs,” but organizations that fail to integrate AI responsibly falling behind those that do.
Why GPU-Scale Infrastructure Demands Responsible Design
“At GPU scale, AI governance isn’t theoretical, it’s embedded in how you design, power, and secure the infrastructure itself.”
— Rajesh Garg, President & Group CIO, Yotta Data Services
If AI is the new electricity, hyperscale data centres are its grid. Rajesh Garg, President and Group CIO at Yotta, sits on that grid, managing what he calls “the largest AI infrastructure of Asia today, almost 10,000 GPUs.”
For him, the Guidelines simply formalise what serious AI providers already knew: infrastructure at this scale demands governance. Yotta has set up four distinct bodies: an AI steering committee led by the CEO with CIO and CISO; a risk and compliance function within GRC; AI Ops as part of operations; and a responsible AI/ethics board under the CISO. It’s almost a textbook implementation of the governance scaffolding the report recommends: clear venues for decision-making, risk ownership and ethical oversight.
Garg’s other preoccupation is the physical reality of AI. A standard CPU rack might consume 6 kW. A GPU rack can easily pull 30–60 kW and needs sophisticated liquid cooling and ultra-low-latency networking. The Guidelines’ emphasis on Safety, Resilience & Sustainability is not academic; without serious planning around power density, cooling and network design, AI’s carbon and cost footprint becomes unsustainable.
From a C-suite lens, Garg’s world is a reminder that AI governance is not just about data and models; it is also about capacity planning, sustainability and resilience of the underlying infrastructure that powers national-scale AI.
When Agents Become Colleagues
“The moment AI agents start acting, not just advising, governance becomes an organisational necessity, not a technical choice.”
— Deepak Bhosale, Associate Vice President – IT, Asian Paints
In manufacturing and consumer businesses, AI is sliding into the fabric of operations in a quieter but equally profound way.
At Asian Paints, Associate Vice President Deepak Bhosale describes the shift in almost anthropological terms. AI systems now "think like humans, reason like humans, translate voice to text, analyse images and converse like humans." That, he says, means a new workforce of autonomous software agents has effectively joined the company.
When agents start interacting with employees, customers, and other agents across ecosystems, the risk picture changes. Bhosale's instinct is to reach for governance: define protocols of interaction, set boundaries for what agents can trigger, and ensure they don't have unchecked authority to make decisions that could harm business or people.
He argues that AI-related grievances should flow into the same kind of complaint mechanisms organisations already use, but with upgraded skill sets. Somebody must be able to assess whether an outcome was a model bug, a data issue, misuse of the tool, or even an unreasonable expectation of automation. Each case becomes both a risk management event and a learning loop that tightens guardrails over time.
This is exactly where the Guidelines' call for internal grievance redressal mechanisms comes alive: not as bureaucracy, but as an adaptive safety layer in an environment where machines are increasingly "acting" rather than merely "advising."
Why AI Accountability Starts with Enterprises, Not Regulators
“AI governance isn’t just about compliance—it’s about protecting customers from invisible bias as systems scale.”
- Aashish Kshetry, Vice President-Information Technology, Asian Paints
For Aashish Kshetry, the AI Governance Guidelines land squarely on two pillars: people and responsibility. He argues that organisations must invest heavily in skilling teams to understand AI and must embed clear governance structures so that models don't quietly encode or amplify bias.
For Kshetry, regulators are only one part of the story. The primary accountability sits with enterprises: they deal with customers whose data, trust, and outcomes are on the line. That means implementing processes that not only protect data but also continuously test algorithms for bias, especially in contexts like pricing, creditworthiness, eligibility, or access to services.
He also offers a realistic forecast for the near term: AI’s pace is only increasing. New use cases appear daily. That, by definition, means governance will need to tighten, mature and expand rather than relax. For C-suites, this is a signal: AI governance is not a one-time board agenda item; it is an ongoing strategic capability that must evolve with the technology and the business.
The threat is already here: Security, hallucinations and AI vs. AI
“AI can be misutilised or utilised appropriately and that can have catastrophic effects.” ~ Pradipta Patro, Head of Cyber Security & IT Platform, RPG Group.
For Pradipta Patro, who heads cyber security and the IT platform at RPG Group, governance starts with a blunt admission: AI is already woven deep into the threat landscape.
He breaks the journey into stages rather than a single solution. First, get the data right: reduce noise and the likelihood of hallucinated outputs. Next, monitor models and outcomes continuously, ensuring that each use case is tested thoroughly and domain-specifically before going into production. This stepwise, iterative approach mirrors the Guidelines' emphasis on continuous monitoring, reassessment, and incident reporting for AI systems in sensitive domains.
Patro is clear that trust in AI is not yet 100%. For now, human intelligence and AI must work together to earn that trust case by case, KPI by KPI. Some use cases will prove out faster than others; none should be considered the final word without validation.
At the same time, he acknowledges a reality most CISOs are now living with: in cybersecurity, it is increasingly AI versus AI. Attackers are using AI to probe, phish, automate and scale. Defenders are using AI to detect, correlate and respond. There is no future in which enterprises can opt out; the only strategic choice is how responsibly, effectively and ethically they leverage AI in this arms race.
Building the AI Value Chain and Taking Responsibility
“High-risk systems must face stricter validation, audits, oversight and documentation; low-risk ones can be streamlined, but everyone must maintain audit trails.” ~ Sujoy Brahmachari, CIO & CISO, Rosmerta Technologies
If the Guidelines provide macro principles, Sujoy Brahmachari offers a practical operating model for how enterprises can align.
He breaks the AI value chain into three roles:
- Developers who design and train models, ensuring accuracy, fairness, and compliance.
- Deployers who integrate these models into production, manage infrastructure, and monitor performance.
- Users who apply AI outputs, provide feedback, and follow governance guidelines.
Overlaying that with an AI Governance Committee and strong feedback loops gives the shape of an internal AIGG/AISI equivalent: strategy, risk, security, legal, and business all sharing responsibility for AI decisions. In this world, the CIO leads the technology strategy, the CISO anchors security and privacy, legal ensures compliance, and business units validate outcomes and align AI to operational goals.
On risk, Brahmachari is clear that systemic risk—large-scale, interconnected impact—is the hardest to assess and therefore the one that demands the most caution. His approach to the Guidelines’ graded liability framework is straightforward: match the intensity of controls and documentation to the risk level.
For high-risk systems affecting livelihoods or critical infrastructure, that means:
- rigorous data quality checks
- regular model evaluations and explainability assessments
- structured bias and fairness testing
- strong human-in-the-loop oversight
- and detailed post-deployment monitoring with audit trails.
For lower-risk applications, processes can be lighter but not absent. In all cases, evidence of due diligence becomes the organisation’s best defence with regulators, courts, and the public.
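What "evidence of due diligence" can look like in code: below is a minimal sketch of a structured bias check that compares approval rates across groups. The groups, threshold, and data shape are assumptions for illustration; a real programme would choose sector-appropriate fairness metrics and run them per protected attribute.

```python
from collections import defaultdict
from typing import Dict, List, Tuple

def approval_rates(decisions: List[Dict]) -> Dict[str, float]:
    """decisions: [{'group': 'urban', 'approved': True}, ...] from an evaluation run."""
    totals: Dict[str, int] = defaultdict(int)
    approved: Dict[str, int] = defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        approved[d["group"]] += int(d["approved"])
    return {g: approved[g] / totals[g] for g in totals}

def disparity_check(decisions: List[Dict], max_gap: float = 0.10) -> Tuple[bool, float]:
    """Flag the model when the gap between best- and worst-treated groups exceeds max_gap."""
    rates = approval_rates(decisions)
    gap = max(rates.values()) - min(rates.values())
    return gap <= max_gap, gap

# Tiny illustrative sample; real checks run on full evaluation sets.
sample = [{"group": "urban", "approved": True}, {"group": "urban", "approved": True},
          {"group": "rural", "approved": True}, {"group": "rural", "approved": False}]
within_tolerance, gap = disparity_check(sample)
print(f"within tolerance: {within_tolerance}, approval-rate gap: {gap:.2f}")
```

Logged alongside each model release, results like these become exactly the kind of audit trail the Guidelines reward.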
Moving Beyond Pilots to Enterprise Accountability
“If an organisation cannot explain and defend an AI decision, it has no business deploying that system.”
— Harnath Babu, Partner & CIO, KPMG India
Harnath Babu emphasizes that India's AI Governance Guidelines mark a fundamental transition: enterprises can no longer treat AI systems as isolated projects but must manage them as end-to-end lifecycles with continuous accountability. He argues that governance maturity begins with clarity of roles: developers embedding ethics and explainability, deployers enforcing operational and security controls, and users exercising judgment rather than blind reliance.
Babu stresses that explainability and bias mitigation are non-negotiable in high-impact domains such as hiring, credit decisions, and pricing models. "If you cannot defend an AI decision, you should not deploy it," he notes, reinforcing the Guidelines' stance on fairness and transparency. For him, the biggest challenge is assessing systemic risks, which arise not from single models but from intertwined digital ecosystems and supply chains.
Babu also warns that compute scalability will become a bottleneck unless organizations modernize data pipelines, adopt distributed training frameworks, and ensure secure access to India's expanding GPU capacity and national datasets. As AISI standards emerge, he believes governance teams must evolve toward engineering-grade rigour, tracking data lineage, documenting risk decisions, and demonstrating safety outcomes rather than intent.
For Babu, grievance mechanisms are the final and most human layer of trust. The ability to hear, diagnose, and resolve AI-related harm, he says, will separate responsible AI adopters from those merely chasing automation.
The CIO’s new reality: challenges and gaps
The Critical Analysis is candid: the Guidelines are ambitious but high-level, leaving CIOs and CISOs with significant interpretation work.
Key pain points include:
- Operational ambiguity: Broad definitions of “AI systems” and “risk frameworks” without detailed thresholds make it hard to know what exactly is in scope.
- Uncertain liability: Graded liability sounds good, but “due diligence” is undefined; boards can’t easily estimate legal exposure.
- Voluntary vs. mandatory: Many controls are “voluntary” today but may become mandatory later, making it hard to justify investments or design long-term architectures.
- Talent gaps: AI governance talent is scarce; training can cost ₹5–10 lakh per employee annually, and programmes currently cover only a tiny fraction of India’s IT workforce.
- Multi-regulator complexity: CIOs might need to align with MeitY, CERT-In, sector regulators, and upcoming bodies like AIGG and AISI, each with evolving expectations.
CIO Playbook: A Practical Framework for AI Governance Readiness
Translated into action, the Guidelines yield a pragmatic seven-step CIO playbook, closely aligned to the report's own practical guidance and the Critical Analysis.
1. Discover – Build an AI inventory
- Catalog all AI/ML systems – from simple scoring models to generative assistants and agentic workflows (a sample inventory record is sketched after this step).
- Map data sources, vendors, DPI integrations, and business owners.
- Tag systems that affect livelihoods (hiring, lending, pricing) or safety-critical domains.
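A catalogue entry can be a simple structured record per system; the field names below are an assumed minimum for illustration, not a mandated schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AIInventoryEntry:
    system_name: str
    description: str
    business_owner: str
    vendor: str                              # "internal" if built in-house
    data_sources: List[str]
    dpi_integrations: List[str] = field(default_factory=list)  # e.g. Bhashini, DigiLocker
    affects_livelihoods: bool = False        # hiring, lending, pricing, eligibility
    safety_critical: bool = False

inventory = [
    AIInventoryEntry(
        system_name="resume-screening-assistant",
        description="Shortlists candidates for recruiter review",
        business_owner="Head of Talent Acquisition",
        vendor="ExampleHR (hypothetical vendor)",
        data_sources=["applicant CVs", "role descriptions"],
        affects_livelihoods=True,
    ),
]

# The systems that deserve the heaviest governance attention fall out directly.
high_impact = [e.system_name for e in inventory if e.affects_livelihoods or e.safety_critical]
print(high_impact)
```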
2. Assess – Risk, fairness, bias, and security
- Classify systems using a risk lens that includes malicious use, transparency, systemic risk, and loss of control.
- Run bias and fairness assessments on high-impact models, especially in HR, credit, and healthcare.
- Conduct security reviews for data poisoning, model theft, and adversarial input risk.
3. Design – Embed explainability, oversight, auditability
- For high-risk use cases, demand explainable models or strong post-hoc interpretability.
- Design human-in-the-loop checkpoints, especially where law or ethics demand human accountability.
- Architect logging and audit trails for model decisions, inputs, and overrides.
4. Operationalise – Build processes and committees
- Set up AI steering and governance committees on the Yotta model.
- Integrate AI risk into enterprise GRC frameworks – from risk registers to board-level reporting.
- Clarify RACI (who is Responsible, Accountable, Consulted, Informed) across CIO, CISO, legal, business.
5. Monitor – Watch for drift, anomalies, and misuse
- Implement continuous monitoring for model drift, data quality degradation, and abnormal behaviour (a simple drift-score sketch follows this step).
- Use red-teaming and adversarial testing, especially for generative and public-facing models.
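As one concrete monitoring signal, the sketch below computes a Population Stability Index (PSI) between a baseline score sample and live scores. The bucket count and alert thresholds are conventional rules of thumb, not values taken from the Guidelines.

```python
import math
from collections import Counter
from typing import List

def psi(expected: List[float], observed: List[float], buckets: int = 10) -> float:
    """Population Stability Index between baseline and live score distributions.
    Common rule of thumb: < 0.1 stable, 0.1-0.25 worth watching, > 0.25 likely drift."""
    lo, hi = min(expected + observed), max(expected + observed)
    width = (hi - lo) / buckets or 1.0   # guard against a zero-width range

    def bucket_share(values: List[float]) -> List[float]:
        counts = Counter(min(int((v - lo) / width), buckets - 1) for v in values)
        n = len(values)
        # Small floor avoids log(0) for empty buckets.
        return [max(counts.get(b, 0) / n, 1e-6) for b in range(buckets)]

    e, o = bucket_share(expected), bucket_share(observed)
    return sum((oi - ei) * math.log(oi / ei) for ei, oi in zip(e, o))

baseline = [0.20, 0.30, 0.35, 0.40, 0.45, 0.50, 0.55, 0.60, 0.70, 0.80]
live     = [0.50, 0.55, 0.60, 0.65, 0.70, 0.72, 0.75, 0.80, 0.85, 0.90]
print(f"PSI = {psi(baseline, live):.3f}")   # a high value here should trigger a model review
```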
6. Report – Align with CERT-In, DPDP, sectoral rules
- Prepare incident playbooks that integrate AI harms into existing breach and outage processes.
- Design reporting workflows that can meet aggressive windows (e.g., six hours) where required (a deadline-tracking sketch follows this step).
- Ensure grievance channels exist for employees, customers, and partners – with clear SLAs and escalation.
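A minimal sketch of such a deadline tracker is shown below. Whether a given AI event is legally reportable, and to whom, is a determination for legal and compliance teams; the six-hour window follows the CERT-In rule referenced earlier, while the other windows are assumptions for illustration.

```python
from datetime import datetime, timedelta, timezone
from typing import Dict, List

# Illustrative windows only; actual obligations depend on the regulator, the sector,
# and whether the event legally qualifies as a reportable incident.
REPORTING_WINDOWS: Dict[str, timedelta] = {
    "cert_in_cyber_incident": timedelta(hours=6),
    "dpdp_personal_data_breach": timedelta(hours=72),  # assumption for illustration
    "internal_ai_harm_review": timedelta(hours=24),    # internal SLA, not a legal deadline
}

def reporting_deadlines(detected_at: datetime, categories: List[str]) -> Dict[str, datetime]:
    """Return the notification deadline for each category an incident falls under."""
    return {c: detected_at + REPORTING_WINDOWS[c] for c in categories}

detected = datetime(2025, 11, 20, 5, 18, tzinfo=timezone.utc)   # the 5:18 AM dashboard moment
for category, deadline in reporting_deadlines(
        detected, ["cert_in_cyber_incident", "internal_ai_harm_review"]).items():
    print(f"{category}: notify by {deadline.isoformat()}")
```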
7. Improve – Close the loop
- Use grievances, incident learnings, and audit findings to refine models, datasets, and processes.
- Track KPIs like reduction in bias metrics, false positives, and grievance volumes.
- Feed learnings back into training, policy updates, and board conversations.
What “Ready” Really Means for the C-Suite
Taken together, India’s AI Governance Guidelines and these CIO perspectives converge on a simple but demanding message for leadership:
Being “ready” is not about having a few pilots or a policy PDF on a shared drive. It means:
- You have a clear inventory of AI systems, mapped to business owners and risk tiers.
- You have formal governance forums—steering, ethics, risk, and AI Ops—where AI decisions are discussed, documented, and owned.
- Your organisation can explain and justify AI-assisted decisions in high-impact areas like lending, hiring, pricing, healthcare, or content.
- You treat grievances and incidents as inputs to improve models and guardrails, not as one-off crises.
- You see trust, explainability, and fairness not as compliance burdens but as differentiators in a market where customers and regulators are visibly losing patience with opaque AI.
The Guidelines set the direction. The CIOs are already moving. The question is whether your organization is willing to do the slow, sometimes uncomfortable work of turning AI from a clever tool into a governed, trusted part of how you run the business.
Because the age of “unchecked AI” in India is over. And what comes next will be defined not just by how much intelligence you deploy but by how well you govern it.