Phantom Precedents: The Rise Of AI-Generated Case Law In Indian Courts

Himanshu Mishra

17 March 2026 10:00 AM IST


    The legitimacy of common law adjudication depends on the authenticity of precedent. If fabricated authorities enter judicial reasoning, the integrity of the doctrine of stare decisis itself is compromised. Recently, however, a disruptive and highly concerning trend has emerged across global jurisdictions: the submission of phantom case law. Judicial officers reviewing written submissions are increasingly discovering perfectly formatted citations for judgments that simply do not exist. The culprit is not intentional forgery by advocates, but rather the unverified integration of generative artificial intelligence into legal research.

    The Emergence of Phantom Precedents

    An advocate might submit a brief containing a citation that looks entirely legitimate, perhaps formatted as a standard Supreme Court Cases (SCC) or All India Reporter (AIR) reference. A judge reviewing the matter attempts to locate the ratio, only to discover that the case is a phantom. This is rarely a case of intentional forgery by legal practitioners; the root cause is the unverified use of artificial intelligence platforms for legal research. In artificial intelligence research, this phenomenon is referred to as "hallucination," where a language model generates information that appears plausible but lacks any factual basis. Unlike traditional search engines, generative models do not retrieve verified documents; they construct responses probabilistically.

    The problem first drew widespread attention in the United States District Court for the Southern District of New York in Mata v. Avianca, Inc. In that matter, legal counsel submitted a brief heavily reliant on non-existent judicial opinions, complete with fabricated quotes and internal citations, generated entirely by ChatGPT. The presiding judge imposed financial sanctions on the attorneys, emphasizing that while utilizing advanced technology for research is permissible, lawyers bear absolute and non-delegable responsibility for the accuracy of their filings.

    Judicial Alarm in Indian Courts

    The Supreme Court of India has recently taken a very hard line on this issue. Just last month, in early 2026, the Supreme Court issued notices to the Attorney General and the Bar Council of India after discovering that a trial court had relied on AI-generated, non-existent rulings to pass an order. The bench made it clear that basing a decision on fake judgments is not a mere error in judicial decision-making; it is misconduct that may invite serious legal consequences. Around the same time, Chief Justice Surya Kant criticised the growing practice of filing AI-drafted petitions without verification. Justice B.V. Nagarathna specifically highlighted that she encountered a reference to a fictitious case titled Mercy v. Mankind while hearing a Public Interest Litigation (PIL).

    Nor is this an isolated blunder limited to the Supreme Court. High Courts are routinely catching litigants trying to pass off AI hallucinations as binding precedent. In January 2026, the Bombay High Court imposed costs of ₹50,000 on a party for inserting fake case law into its written submissions. The judge in that matter observed that the filing bore the telltale features of raw AI output, complete with green tick-marks and repetitive formatting.

    Similarly, in September 2025, the Delhi High Court saw a petition withdrawn in embarrassment after opposing counsel exposed its citations as completely fabricated. The petition even went so far as to invent phantom paragraphs from the landmark Raj Narain v. Indira Nehru Gandhi, (1972) 3 SCC 850 judgment, heavily quoting paragraphs 73 and 74 from a ruling that in reality contains only 27 paragraphs.

    Why Generative AI Fabricates Case Law

    To understand why a sophisticated piece of technology would confidently mislead an officer of the court, one has to look at how these platforms are built. Chatbots are not legal research engines. A common error is treating them as a smarter version of existing law reporters, on the false assumption that the software searches a hidden, verified database of Indian case law and retrieves the correct document. It does not: these systems do not retrieve documents from verified legal databases.

    When an advocate asks an AI model for case law supporting a specific argument, it calculates the most statistically likely sequence of words. It knows that an Indian legal citation requires an appellant, a respondent, a reporter volume, and a year, so it simply stitches those elements together into a mathematically probable sequence. The machine is not trying to deceive the court; it simply lacks any internal concept of truth. It generates text that looks incredibly authentic but is completely unanchored from actual jurisprudence.

    These models operate strictly as prediction engines. The underlying code maps word associations across massive datasets, meaning a prompt for case law never actually triggers a document retrieval. Requesting a legal brief merely forces the software to calculate the most probable sequence of words.
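    The slot-filling behaviour described above can be made concrete with a deliberately simplified sketch. The code below is a toy, not a real language model: it has learned only the statistical shape of an Indian citation, and every party name, year, volume, and page it emits is invented for illustration.

```python
import random

# Toy illustration (not a real language model): the program knows only the
# "shape" of an Indian citation. Every name, year, volume and page below
# is invented, exactly as a hallucinated citation would be.
PARTIES = ["Sharma", "Verma", "State of Maharashtra", "Union of India"]
REPORTERS = ["SCC", "AIR"]

def fabricate_citation(rng: random.Random) -> str:
    # Fill each slot of the citation template with a plausible value.
    # Crucially, there is no retrieval step: nothing checks whether the
    # resulting case actually exists in any law report.
    appellant, respondent = rng.sample(PARTIES, 2)
    year = rng.randint(1970, 2025)
    volume = rng.randint(1, 12)
    page = rng.randint(1, 999)
    return f"{appellant} v. {respondent}, ({year}) {volume} {rng.choice(REPORTERS)} {page}"

print(fabricate_citation(random.Random()))  # perfectly formatted, entirely fictitious
```

    Every output of this toy passes a visual plausibility test, which is precisely why a reader who does not pull the actual report cannot tell it is fiction.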

    This structural limitation creates serious professional risks within legal practice. The obligation of advocates to verify primary sources remains absolute. Delegating this critical task to AI models directly contravenes the fundamental standards of advocacy. Submitting fabricated judgments wastes judicial time and risks severe disciplinary action for professional misconduct under the rules framed by the Bar Council.

    Professional Responsibility and Ethical Duties of Advocates

    Under the Indian legal framework, slipping a phantom judgment into a written submission goes beyond a simple administrative error. It strikes at the core of statutory ethical mandates. The Bar Council of India (BCI) Rules, specifically Part VI, Chapter II, comprehensively outline an advocate's Duty to the Court. Rule 3 explicitly mandates that an advocate shall not influence the decision of a court by any illegal or improper means. More importantly, the overarching principles of professional ethics dictate that an officer of the court must never intentionally mislead the bench.

    Presenting an algorithmically generated, fabricated judgment as binding precedent breaches an advocate's duty to the court and may constitute a false statement of law. Depending on the facts, particularly the advocate's knowledge or recklessness and the material effect on the proceeding, such conduct can amount to criminal contempt and will also attract disciplinary action under Section 35 of the Advocates Act, 1961, which empowers State Bar Councils to refer complaints to disciplinary committees and impose suspension or removal.

    The Bar Council's Standards of Professional Conduct expressly prohibit attempts to influence the court by illegal or improper means. Courts and bar associations may soon need to develop clear guidelines governing the use of artificial intelligence in legal drafting. Mandatory verification protocols, disclosure requirements, and professional training on AI tools could help prevent the submission of fabricated authorities while still allowing lawyers to benefit from technological assistance.
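    A verification protocol of this kind need not be elaborate. The sketch below is a hypothetical illustration, not an actual court or Bar Council tool: it merely flags any reporter citation in a draft that does not appear in a list of authorities the advocate has personally confirmed. All citations in it are placeholders.

```python
import re

# Hypothetical pre-filing check, sketched for illustration only. A real
# protocol would also verify party names and read the judgment itself;
# this step just catches citations nobody has confirmed.
CITATION = re.compile(r"\(\d{4}\) \d+ (?:SCC|AIR) \d+")

def flag_unverified(draft: str, verified: set[str]) -> list[str]:
    # Return every reporter citation in the draft that is absent from the
    # advocate's personally verified list, in order of appearance.
    return [c for c in CITATION.findall(draft) if c not in verified]

# Placeholder draft and a verified list containing only one authority.
draft = ("As held in A v. B, (2019) 4 SCC 123, and affirmed in "
         "C v. D, (2021) 2 AIR 456, the principle applies.")
verified = {"(2019) 4 SCC 123"}
print(flag_unverified(draft, verified))  # ['(2021) 2 AIR 456']
```

    Even a check this crude would have caught the phantom citations in the episodes described above, since a fabricated case can never appear on a list of judgments the advocate has actually read.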

    Artificial intelligence certainly provides immense value for streamlining administrative workloads or synthesizing dense policy research. However, treating these systems as autonomous legal researchers is fundamentally flawed. Until developers manage to firmly anchor generative models to verified legal repositories, every algorithmically produced citation requires intense scrutiny. Algorithmic efficiency can never replace the rigor of reading the actual judgment.

    Author is a PhD Candidate at National Law University, Delhi. Views are personal.
