Why India’s courts may have just handed legal tech its defining moment

Suppose you are a judge. You have a garden-variety property dispute. The other side objects. You reject the objection and, in support of your reasoning, refer to four judgments of the Supreme Court. The references look right. The format is correct. The case names ring a bell.

But none of those judgments exists.

That is what happened in August 2025 before an Andhra Pradesh trial court. The references had been generated by an AI tool, and the issue was never raised at the trial stage. The High Court of Andhra Pradesh subsequently took note of the matter and issued what it described as “a word of caution.” By February 2026, the issue had reached the Supreme Court of India, which described the use of AI-generated fake precedents as a matter of “considerable institutional concern” and formally took up the conduct as judicial misconduct.

That case, Gummadi Usha Rani v. Sure Mallikarjuna Rao, has increasingly come to represent a wider institutional turning point. Not because it is unique, but because it is not.

This Has Been Building for a While

The Andhra Pradesh issue did not occur in isolation.

In December 2024, the Bengaluru Income Tax Appellate Tribunal decided a ₹669 crore tax dispute in an order that cited authorities fabricated using ChatGPT. The order was withdrawn within days, once it became apparent that none of the judgments cited existed.

In January 2025, a Karnataka trial court judge even cited an AI-generated precedent that neither side had relied on. In September 2025, the Delhi High Court dismissed a petition after the opposing side showed that the quoted passages, complete with paragraph references, came from a judgment that did not exist. In January 2026, the Bombay High Court imposed costs of ₹50,000 on a litigant for submitting content the Court described as containing obvious AI hallucinations, including repetitive formatting patterns and the verification tick marks characteristic of raw AI-generated output.

The problem was not just that AI systems produced inaccurate legal material. It was that the mistakes passed through several layers of professional review undetected.

That distinction is important.

AI systems are very good at generating authoritative-sounding text. They are far less reliable at guaranteeing that the authority exists. In legal systems where institutional legitimacy depends on verifiability, the difference between plausible and accurate is fundamental, not technical.

The Judiciary Understood the Distinction Earlier Than Most

There is an important irony in the current discussion about legal AI. The Indian judiciary has been using AI systems for years, but in architectures built around assistive rather than autonomous functionality.

SUPACE, the Supreme Court Portal for Assistance in Court Efficiency, helps judges locate relevant precedents and organize factual records. It surfaces information; it does not make decisions. Judges retain total discretion over what they accept, reject, or rely on.

The Supreme Court also uses SUVAS, an AI-driven translation platform that has translated 36,271 judgments into 19 Indian languages, with human review processes to ensure that legal terminology survives translation intact.

Adalat.AI is now live in more than 3,000 courts across eight states, automating the transcription of witness depositions while retaining judicial oversight of the final record.

None of these systems works as a black box. Each is bounded in scope. Each has a human checkpoint before its output turns into action. In other words, each is defensible.

Increasingly, that word, defensible, seems to define the future of legal AI.

What Does Defensible AI Actually Mean?

Defensible AI is not a marketing slogan. It describes systems whose outputs are explainable, verifiable, traceable, and open to institutional scrutiny.

This expectation is not new to the legal profession, which already functions on standards of defensibility. Authorities must exist. Citations must be verifiable. Arguments must withstand adversarial scrutiny. Judicial reasoning must be reviewable.

The standard itself has not changed. What has changed is the scale and speed at which non-defensible outputs can enter legal workflows.

Today’s AI systems can produce realistic case summaries, credible-looking citations, and well-structured legal briefs in seconds. The efficiency gains are real. So are the risks. Generation is outpacing the verification of the generated material.

A number of safeguards are therefore becoming central to serious legal AI deployment.

First, references must be fully verifiable. Each authority that an AI system cites should be linked to retrievable and authentic legal material.

Second, audit trails are becoming a necessity. Legal AI systems are increasingly required to maintain documented records of what data was used, how outputs were produced, and where human review was performed.

Third, Retrieval-Augmented Generation (RAG) architectures are becoming a standard design pattern in serious legal AI systems. Rather than generating text purely from model parameters, a RAG system first retrieves relevant material from established legal databases and grounds its output in those sources. The goal is not only better performance but greater traceability and a lower risk of hallucination.

Finally, human review is increasingly required at high-stakes checkpoints, so that accountability ultimately rests with a person rather than a system.

These are more than technical choices. They are institutional obligations that derive directly from legal practice. A minimal sketch of how the four safeguards combine is shown below.
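
To make the four safeguards concrete, here is a minimal Python sketch of how they might fit together in a drafting pipeline. It is illustrative only: CASE_DATABASE, retrieve, and the drafting stub inside draft_submission are hypothetical stand-ins, not NYAI’s implementation or any particular product’s API.

import json
import time

# Hypothetical authoritative store mapping citations to judgment text.
# A real system would query an official law reports database.
CASE_DATABASE = {
    "(2020) 5 SCC 1": "Full text of a genuine, retrievable judgment ...",
}

def retrieve(query):
    """Safeguard 3 (RAG): fetch real sources first, so the draft is
    grounded in retrievable material rather than model memory."""
    return [cite for cite, text in CASE_DATABASE.items()
            if query.lower() in text.lower()]

def verify_citations(citations):
    """Safeguard 1: every cited authority must resolve to an authentic
    document; return any that do not."""
    return [c for c in citations if c not in CASE_DATABASE]

def audit_log(event, payload):
    """Safeguard 2: keep a documented record of inputs, outputs,
    and review decisions."""
    print(json.dumps({"time": time.time(), "event": event, **payload}))

def human_review(draft):
    """Safeguard 4: a lawyer signs off before anything is filed."""
    return input(f"Approve this draft?\n{draft}\n[y/N] ").strip().lower() == "y"

def draft_submission(query):
    sources = retrieve(query)  # ground the draft in known material
    # Drafting is stubbed here; a real system would call a language
    # model conditioned on the retrieved text.
    draft = (f"Submission on '{query}', relying on: "
             f"{', '.join(sources) or 'no authority found'}")
    unverified = verify_citations(sources)
    audit_log("draft_generated",
              {"query": query, "sources": sources, "unverified": unverified})
    if unverified:
        return None  # block anything citing a non-existent authority
    if not human_review(draft):
        return None  # human checkpoint before the output becomes action
    audit_log("draft_approved", {"query": query})
    return draft

Nothing in this sketch is sophisticated; the point is structural. Verification, logging, and the human checkpoint sit inside the pipeline itself, so a fabricated citation cannot silently reach a filing.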

Increasingly, these principles are guiding the design of legal AI systems in India. At NYAI, a legal AI platform designed for Indian legal workflows, this has meant prioritizing verifiable citations, jurisdiction-specific legal retrieval, auditability, and human-in-the-loop review mechanisms across research and drafting systems. The goal is not merely faster legal output but output that is institutionally reliable in Indian legal practice.

India’s Regulatory Direction Is Moving the Same Way

The move towards embedded accountability is increasingly visible in India’s nascent architecture of AI governance.

Both the India AI Governance Guidelines (November 2025) under the IndiaAI Mission and the white paper of the Office of the Principal Scientific Adviser (January 2026) recommended what they termed a “techno-legal” approach to governance. The fundamental premise is to build compliance mechanisms into the system architecture from the start, not to bolt them onto systems later.

The logic is similar to privacy-by-design approaches in data governance: accountability is built in, not added on later.

This has direct implications for legal AI systems. Audit logs, source traceability, explainability mechanisms, verification structures, and training provenance documentation are becoming baseline governance expectations, not product features.

The Digital Personal Data Protection Act, 2023, adds a further layer of responsibility, especially where sensitive client data is handled in legal proceedings.

Platforms that can show this level of accountability will likely be better positioned in the evolving regulatory landscape.

The Bigger Picture

India’s courts face a backlog of more than five crore pending cases. AI systems can meaningfully reduce procedural friction in research, drafting, translation, transcription, and information-management workflows.

Yet those efficiencies only arise where systems remain institutionally trustworthy.

The hallucinations surfacing in Indian courts have repercussions beyond the individual cases in which they appear. They have opened the door to broader judicial scepticism of AI-generated legal material itself. Every fake citation erodes trust not only in one output but in the reliability of the category as a whole.

That is why defensibility matters.

Defensible AI is not a premium compliance feature for highly regulated deployments. It is increasingly the minimum institutional standard in legal systems. The next phase of legal AI adoption will likely be defined by platforms that can embed explainability, verification, provenance tracking, and meaningful human oversight into their architecture.

The fundamental principle on which the legal profession has always operated is that any claim advanced before a court must, in the end, be capable of being explained and verified.

The tools may have changed. The accountability standards have not.

About the Author

Chinmay Bhosale is the Co-Founder of NYAI, India’s legal AI platform built for the complexity of the Indian legal system. NYAI is designed to serve Indian advocates, in-house counsel, and legal teams with AI tools trained on Indian law, Indian courts, and Indian practice.
