AI Governance for Business in India: From Compliance to Confidence

India is entering the phase where artificial intelligence stops being a “technology initiative” and becomes an everyday decision-maker: shaping who gets a loan, which patient gets priority, which job applicant is shortlisted, which citizen gets flagged, and which message goes viral. That shift changes the question from “Can we build it?” to “Can society trust it at scale?”

The countries moving fastest on AI are converging on one idea: governance is not anti-innovation; it is the infrastructure that makes innovation deployable. Europe is drawing legal red lines for unacceptable use. Singapore is building practical governance toolkits and assurance. The United States is largely standards-led, with a patchwork of federal signals and state-level enforcement. India, with the DPDP Rules, 2025 now notified, has an opportunity to build a governance approach that is not copy-paste but globally interoperable and locally realistic.

Why this matters to Indian business now

For Indian enterprises, the pressure is coming from three directions at once.

First, customers and citizens are raising expectations. They increasingly want to know: Was this decision made by a human or an algorithm? Why did I get rejected? Was my data used to train a model? Is this video real? Europe is making parts of this expectation enforceable through transparency and labeling rules.

Second, regulators are catching up globally. If you do business with European clients, build products used in Europe, or train models that could be deployed in EU contexts, the EU AI Act is no longer theoretical; the first prohibitions already apply.

Third, inside organizations, AI is increasingly embedded in CRMs, HR tools, manufacturing quality systems, customer support, and compliance workflows. That makes governance a board-level responsibility, not just an IT or data science concern.

What Europe Bans Outright: The “Unacceptable Risk” Line

A lot of people talk about “responsible AI” as if it’s a soft guideline. Europe is taking a hard line: it prohibits certain AI practices altogether (with narrow, defined exceptions in some cases). The EU describes eight prohibited practices, including manipulation, exploitation of vulnerabilities, social scoring, and real-time biometric identification for law enforcement in public spaces.

The AI Act Service Desk summary and the legal text behind it make the prohibited line clearer. The following are especially relevant for Indian business leaders because they map directly to real-world product and process temptations:

  1. Manipulative or deceptive AI that distorts human choice and causes harm. Think subliminal influence, or systems designed to push people into decisions they wouldn’t otherwise take—especially where the outcome can reasonably cause significant harm.
  2. AI that exploits vulnerabilities (age, disability, or socio-economic situation). This matters in India because our scale makes “segmentation” easy—and therefore misuse easy. If a model learns that a vulnerable segment is more likely to click, borrow, gamble, or share data, governance must stop “optimization” from becoming exploitation.
  3. Social scoring. This goes beyond credit scoring: ranking people over time based on social behavior or inferred characteristics and then using that score to treat them unfairly in unrelated contexts, or disproportionately.
  4. Criminal offence risk prediction “based solely on profiling” or personality traits. This is a key warning: AI should not be allowed to turn patterns into punishment, especially when the reasoning is personality-based rather than grounded in objective, verifiable facts.
  5. Untargeted scraping of faces from the internet/CCTV to build facial recognition databases. This practice is explicitly called out because it industrializes surveillance and creates irreversible consent and dignity problems.
  6. Emotion recognition in workplaces and education (except medical/safety reasons). It’s a direct signal: “productivity” is not a free pass to invade cognitive liberty at work or school.
  7. Biometric categorization to infer protected characteristics (race, political opinions, religion, sexual orientation, etc.). In India’s context, even the attempt to infer sensitive identity traits from biometrics should be treated as a governance red flag.
  8. Real-time remote biometric identification for law enforcement in public spaces—prohibited with narrow exceptions and strict safeguards. The exceptions are tightly framed (e.g., specific missing-person searches, imminent threats, serious crimes), and even then require proportionality safeguards and prior authorization rules in most cases.

    The lesson for India is not “copy the EU.” The lesson is that every country eventually has to define a non-negotiable boundary—the small set of things AI must not do, no matter how efficient or profitable.

What “GPAI” governance actually means

Here is the crucial point: governance can’t focus only on “applications.” In 2024–2026, the bigger story is general-purpose AI models (GPAI): foundation models that become the invisible engine underneath thousands of downstream tools.

Europe has introduced explicit obligations for GPAI providers. For all GPAI model providers, the EU highlights three baseline duties: technical documentation, a copyright policy, and a published summary of training content.

For GPAI models with systemic risk, the obligations expand into governance that looks like critical infrastructure: notification, risk assessment/mitigation, incident reporting, and cybersecurity protections.

This matters for India even if you don’t train frontier models, because Indian enterprises increasingly buy or integrate them. That creates a new procurement reality: you’re not just buying a tool; you’re inheriting a risk profile. A mature buyer will start demanding answers: What documentation exists? What training-data transparency exists? What incident response exists? What security testing exists?

Singapore and the US: two pragmatic governance signals India should learn from

Singapore has leaned heavily into practical governance frameworks and testing/assurance, positioning them as tools businesses can implement rather than legal texts businesses fear. IMDA has been driving the Model AI Governance Framework and a consultative approach to generative AI governance, backed by the ecosystem around AI Verify.

AI Verify itself is positioned as a governance testing framework/toolkit intended to help organizations assess implementation against recognized governance principles.

The United States is more standards-led: NIST’s AI Risk Management Framework (AI RMF) is explicitly designed as a voluntary framework to help manage AI risks across organizations and society.

Policy signals have shifted over time; for example, NIST notes that the 2023 AI Executive Order (EO 14110) was rescinded on January 20, 2025, and the White House issued a January 23, 2025 order framed around removing barriers to American AI leadership.

Meanwhile, enforcement pressure is also emerging at the state level; for instance, Colorado’s “high-risk AI” consumer protection law had its implementation delayed to June 30, 2026.

So globally, three governance archetypes are emerging:

Europe: hard prohibitions + compliance duties.

Singapore: implementable frameworks + assurance/testing.

US: risk management standards + patchwork enforcement.

India can borrow from all three—without inheriting their weaknesses.

What India needs next: a governance compact across government, public institutions, private sector, and citizens

India already has building blocks. The DPDP Rules, 2025 strengthen citizen rights and formalize accountability for digital personal data governance.

India’s national AI ecosystem conversations increasingly reference “safe and trusted AI” as a pillar of long-term adoption.

And India has published Responsible AI principles and operationalization thinking through NITI Aayog’s work.

But AI governance for business needs to move from “principles” to “predictable practice.” That requires clarity on four fronts:

  1. What government should do (the minimum viable guardrail + market confidence). India should define a shortlist of non-negotiable prohibited practices (aligned with constitutional rights and local risk realities), publish sectoral guidance for high-impact areas (finance, healthcare, employment, education, public benefits), and create interoperable compliance expectations so Indian exporters are not surprised by EU-style requirements later. Just as importantly, government procurement should reward vendors who can evidence transparency, testing, security controls, and grievance handling.
  2. What public institutions should do (trust-by-design at scale). Public bodies will deploy AI in citizen-facing contexts, where harm becomes legitimacy damage. They should normalize algorithmic impact assessments for high-impact systems, require auditability and logging, and ensure human appeal routes. If citizens cannot contest outcomes, trust collapses.
  3. What private organizations must do (governance as an operating system, not a PDF). Businesses need internal controls that resemble financial controls: clear model ownership, third-party risk checks for GPAI vendors, documented intended-use and misuse scenarios, red-teaming for safety, privacy-safe data pipelines, and transparency UX (simple explanations, disclosure when AI is used, and frictionless grievance redressal). The EU’s approach to transparency and labeling is a preview of where consumer expectations are heading anyway.
  4. What end-consumers should expect—and what organizations can reasonably expect from them. Consumers should expect disclosure when they interact with AI, protection from manipulation, data minimization, meaningful consent, and a way to challenge harmful outcomes. In return, governments and organizations can legitimately expect citizens to practice basic AI hygiene: not forwarding synthetic media blindly, reporting impersonation/deepfakes, safeguarding sensitive personal data, and understanding that “free” AI services often monetize data or attention.

The future will reward the countries that can convert AI governance into a competitive advantage: trusted products, trusted institutions, and trusted markets. India’s scale means we can either scale trust or scale harm. The next 24 months will decide which one becomes our default.

- Authored by Amman Walia, Group CIO / Digital Transformation & AI Leader (Manufacturing & Enterprise Tech) at Kanodia Group.
