Artificial Intelligence (AI) tools are increasingly being used by lawyers in Indian courts. Their unbridled growth in courtrooms has sparked a fundamental debate: is reliance on AI in legal practice a legitimate evolution, or does it undermine the ethical foundations of advocacy?
AI, in essence, draws upon vast repositories of human knowledge recorded digitally across the web. Its initial entry into the legal domain came through search engines that provided case citations, tools that complemented, rather than conflicted with, the craft of legal reasoning. However, AI has since evolved into systems capable of generating arguments, synthesizing authorities, and even mimicking legal reasoning. In doing so, it has also exposed a critical vulnerability: the tendency to fabricate information, or 'hallucinate.'
This risk is no longer hypothetical. In India, the Supreme Court itself witnessed an instance where a lawyer cited a non-existent case, Mercy versus Mankind, prompting Justice B.V. Nagarathna to highlight the dangers of unverified reliance on AI. Similar instances have surfaced across High Courts, where pleadings contained fictitious citations and distorted legal propositions. In an earlier case, the Supreme Court had exposed a lawyer from the Andhra Pradesh High Court who had relied on AI tools and cited fake cases, obtaining an order in favour of his client.
A particularly striking example emerged before the Delhi High Court, where a petition was found to be based entirely on fabricated case law and quotations generated using AI tools. The Court rejected the plea and underscored the seriousness of such conduct. Likewise, the Bombay High Court imposed costs for reliance on false AI-generated citations, warning that such practices waste judicial time and erode the integrity of proceedings.
The concern has reached the highest level. The Supreme Court of India has described AI-generated hallucinations as an issue of “institutional concern,” even indicating that reliance on fabricated citations may amount to professional misconduct. Indian courts have also begun dealing with AI-related harms beyond legal research.
These developments mirror a global pattern. In the United States, the case of Mata v. Avianca, Inc. marked a turning point: lawyers were sanctioned for submitting briefs containing fictitious AI-generated citations. It resulted in exemplary fines on the erring attorneys, and the story was featured on the front page of The New York Times.
Similarly, in United States v. Michael Cohen, fabricated citations generated through AI tools found their way into court submissions, reinforcing that responsibility lies with the lawyer, not the technology.
Many District Courts and Courts of Appeals in the United States now require lawyers to certify that AI-assisted filings have been independently verified. These measures collectively assert a clear principle: AI is a tool, not an authority.
At present, there are dozens of major AI tools globally, alongside hundreds of niche platforms. Enterprise solutions such as Westlaw AI, Lexis+ AI, and CoCounsel, along with emerging platforms like Harvey, have transformed legal workflows. General-purpose tools like ChatGPT and Claude are also widely adapted for legal tasks. These systems span research, drafting, contract analysis, litigation support, compliance, and risk management.
In India, adoption has been comparatively slower, due in part to language barriers, cost constraints, and accessibility challenges. Nonetheless, a growing ecosystem of domestic tools has emerged. Platforms such as Vidur AI, Jhana AI, NyaySaathi, and Draft Bot Pro assist in research and drafting, while CaseMine, LegitQuest, Manupatra, SCC Online, and Indian Kanoon provide AI-enhanced legal research and analytics. Contract-focused platforms like SpotDraft and Simply Contract further expand the landscape.
Yet no single Indian tool currently satisfies the full spectrum of legal requirements. A practical approach, therefore, is to use a combination, or "stack," of tools: dedicated research platforms for case law, specialized drafting tools for pleadings, and general-purpose AI systems for quick queries.
A comparative gap remains evident. Global leaders like Harvey AI and CoCounsel increasingly function as full-fledged AI legal associates, capable of cross-jurisdictional reasoning and synthesis. Indian tools, while strong in domestic research and citation tracking, are still evolving in advanced reasoning and contextual analysis. For now, they function more as research assistants than as autonomous legal reasoning systems.
The nature of Indian judicial writing presents an additional challenge. Judgments often contain extensive reasoning, multiple citations, and wide-ranging references, from classical literature to philosophical texts, before arriving at the operative conclusion. While these observations may be integral to the judicial process, AI tools frequently struggle to distinguish between core legal reasoning and peripheral commentary, increasing the risk of misinterpretation.
At the same time, the legal profession must confront its own inefficiencies. Lengthy, repetitive pleadings burden the judiciary and complicate both human and machine analysis. There is a compelling case for concise, structured submissions that enhance clarity and facilitate faster adjudication. Similarly, the lack of uniformity in court procedures and documentation, even within the same jurisdiction, adds unnecessary complexity. Standardization could significantly improve efficiency for both courts and AI-assisted workflows.
Beyond efficiency lies a deeper concern: accountability. If an AI tool generates a false citation, who bears responsibility? Courts across jurisdictions have answered this unequivocally: the lawyer does. The duty of verification is absolute and cannot be delegated to a machine.
Equally critical are issues of confidentiality and bias. The use of AI tools often involves inputting sensitive client data, raising concerns about privilege and data security. Moreover, AI systems trained on historical judicial data may inadvertently replicate existing biases, embedding them into future legal outputs.
Yet, it would be misplaced to view AI solely through the lens of risk. Properly deployed, AI has the potential to democratize access to legal knowledge, assist smaller practitioners, and reduce the backlog of cases that burdens judicial systems. In a country like India, the future of legal AI may well depend on the development of robust, multilingual platforms capable of operating across diverse linguistic and socio-economic contexts.
Artificial Intelligence, therefore, should neither be uncritically embraced nor reflexively demonized. It must be treated as a powerful but imperfect assistant, one that enhances human capability while remaining subject to human judgment.
The message from courts in India and across the world is unequivocal: technology may evolve, but responsibility does not. AI can assist, accelerate, and even impress, but it cannot be trusted blindly. The burden of accuracy, integrity, and accountability will always rest with the lawyer.
Let us be candid: AI is already part of our professional toolkit. But the difference between assistance and abdication lies in control. A machine can generate answers at astonishing speed; it cannot distinguish truth from fabrication unless compelled to do so. It has no stake in justice, no sense of consequence, and no duty to the court.
The future of law will not be decided by how advanced our tools become, but by how disciplined we remain in using them. In the end, the courtroom will not judge the machine; it will judge the mind that chose to rely on it.
The author is an Advocate practising at the Supreme Court of India and the High Court at Calcutta. Views are personal.