Truth, Trust, and Technology: The Legal Profession in the Age of AI Hallucinations
The advent of Generative Artificial Intelligence (GenAI) in the form of Large Language Models (LLMs) has revolutionised the way humans interact with technology. The immense popularity of models like ChatGPT, Gemini, and Grok across various segments of the population underscores the personal and societal impact these models have on our daily lives. People from diverse backgrounds now rely heavily on them to perform a wide range of tasks. According to OpenAI, as of mid-2025, ChatGPT has more than 700 million weekly active users. Its uses range from seeking personal advice to higher-order problem-solving. It is becoming increasingly clear that GenAI has a positive impact on productivity, time management, and innovation.
However, as the hackneyed adage goes, all that glitters is not gold. In the case of GenAI systems as powerful as LLMs, the glitter can sometimes be just another mirage. The mirage in question is the phenomenon of AI hallucination. Like human beings who have lost touch with reality, LLMs increasingly hallucinate, that is, generate fake, bogus, and distorted information. While this issue negatively affects every profession, I firmly believe that AI hallucinations pose a unique threat to the legal fraternity. This post will therefore examine how AI hallucinations can harm the legal profession by analysing some key judicial decisions, and will then explore how the legal fraternity can navigate an age of hallucinating AI.
Does an AI Hallucinate or Confabulate?
Although there is no universally agreed-upon definition of AI hallucination, multiple studies have explored the nature and scope of the phenomenon. One paper defines an AI hallucination as a situation wherein an AI chatbot produces “fictional, erroneous, or unsubstantiated information in response to queries”. On this view, an AI that hallucinates lies, or presents false statements as truth. Another paper, however, treats hallucination as a form of bullshitting. Its authors argue that lies and truth belong to the realm of philosophy, and that attributing a human-like quality such as hallucination to a machine is yet another form of anthropomorphism. Rejecting this form of anthropomorphism, they instead adopt the term “confabulation”. In human psychology, confabulation “occurs when someone's memory has a gap and the brain convincingly fills in the rest without intending to deceive others”. The authors contend that this better explains misleading information generated by AI, since a “creative gap-filling principle is at work”.
Another researcher, meanwhile, calls for replacing the word 'hallucination' with 'fabrication', as the latter more accurately captures what the AI is actually doing. Notwithstanding the ongoing debate over the correct terminology, for the purposes of this post an AI can be said to hallucinate, bullshit, or confabulate when it generates false, erroneous, or distorted information without any verification and with complete confidence. Distorted information is, therefore, the hallmark of an AI that has hallucinated. If AI can hallucinate, bullshit, or confabulate, what does that mean for the legal profession?
Distorted Information in the Legal Profession
The legal profession relies heavily on information retrieval. Every member of the legal fraternity engages daily in extracting information from both online and offline resources. While the legal fraternity has always leveraged information technology (online resources such as Westlaw and LexisNexis) to its advantage, the GenAI revolution has proved both a boon and a bane for legal professionals. As a boon, LLMs like ChatGPT have enabled lawyers to generate large amounts of case-related information in a short span of time and have served as helpful research assistants in summarising complex case law, building legal arguments, and editing petitions. As a bane, however, they have given rise to incidents of lawyers submitting fake and bogus cases and citing distorted bibliographies in petitions. The legal landscape currently faces a unique challenge with respect to GenAI: the prevalence of fake and bogus citations.
Judiciary's Approach to AI Hallucinations
One of the earliest rulings on lawyers' duty to verify AI-generated information was delivered in 2023 by the United States District Court for the Southern District of New York. In Mata v. Avianca, Inc. (2023), the court was confronted with fake case citations in an attorney's pleadings. Although the attorney had relied entirely on ChatGPT to generate those citations, the judge did not consider it improper for attorneys to use a popular and reliable artificial intelligence tool. That does not mean, however, that the judge gave lawyers a free pass to use GenAI as they like. According to the judge, counsel have a duty to ensure the accuracy of their filings, a gatekeeping role assigned to lawyers. The court also highlighted the negative fallout of attorneys simply copying hallucinated information: wasted time, reputational damage to the authors and judges who were falsely quoted, and the promotion of “cynicism about the legal profession”.
In another US ruling on AI-generated fake case citations, delivered in 2025, the District Court for the Eastern District of Oklahoma held that a lawyer's duty is to file pleadings with human conviction, observing, somewhat poetically, that “Generative technology can produce words, but it cannot give them belief. It cannot attach courage, sincerity, truth, or responsibility to what it writes. That remains the sacred duty of the lawyer who signs the page”. The judge then identified fabricated case law, incorrect citations, quotations of fictitious law, and even the drafting of misleading legal statements as instances of unethical GenAI use. From this case, it is clear that the court did not treat the use of AI as wrongful in itself; rather, it treated the attorney's admission of having relied fully on ChatGPT without verification as evidence of a failure to apply the human mind.
In both cases, the judges ruled similarly, ordering monetary penalties as sanctions for the subjective bad faith the lawyers showed in withholding information about their use of GenAI to create fake case law. In the 2025 decision, however, the judge specifically held that the sanctions were purely restorative in nature and not intended to punish the attorneys. Further, the judge observed that the attorneys had violated Rule 11(b) of the Federal Rules of Civil Procedure, under which, by presenting a pleading to the court, “an attorney or unrepresented party certifies that to the best of the person's knowledge, information, and belief, formed after an inquiry reasonable under the circumstances”, the filing's legal contentions are warranted and its factual contentions have evidentiary support. Reading Rule 11(b), the judge rightly observed that a reasonable inquiry cannot be delegated to an artificial entity; it must be carried out through human judgment. Going one step further, the judge also ordered the attorneys to amend their pleadings, to verify and review the accuracy of every cited case and statute, and to file a signed certificate of verification.
The Road Ahead: Deterrence, Verification and Eternal Vigilance
In India, the negative impact of artificial intelligence has now been felt in courtrooms as well. In three cases, High Courts were startled to find petitions quoting case law that never actually existed, a clear instance of AI hallucination. The Indian judiciary has likewise reminded advocates not to trust AI-generated information unquestioningly and has urged them to cross-check and validate its authenticity. One online database currently reports a total of 508 AI hallucination litigations worldwide. Even though LLMs like ChatGPT warn users of their proclivity to make errors, it appears evident that AI-hallucination litigation will continue to increase in the future.
For the legal profession and all those involved in it, this calls for urgent introspection. It must begin with lawyers, who must adhere wholeheartedly to their duty towards their clients and the courts. In the age of AI hallucinations, the duty to be truthful and to exercise due diligence must be consciously practised by lawyers, especially when drafting petitions and making statements in the courtroom. The State and regulatory bodies, especially the Bar Councils, must proactively assist the legal fraternity by drafting policies and amending laws that specifically address the consequences of relying on hallucinated information. Unless regulatory bodies and courts impose express sanctions, there will be no deterrent effect. Although various US courts have reprimanded lawyers and even imposed monetary sanctions on them for failing to verify hallucinated information, in countries like India the courts have so far not proceeded firmly against erring lawyers. Similarly, law schools in India must adapt their curricula to incorporate AI-enabled teaching and learning methods, and professors must equip students with the skills needed to distinguish accurately between hallucinated and genuine AI-generated content.
Overall, the most appropriate antidote to AI hallucination for legal professionals is eternal vigilance. As Justice H R Khanna reminded us years ago, “Eternal vigilance is the price of liberty and in the final analysis, its only keepers are the people”. In the AI age, eternal vigilance, along with thorough verification, is the price we must pay for outsourcing tasks to AI. GenAI can undoubtedly generate information far more quickly than humans; still, it is human beings, possessing intuition, conscience, and the capacity for contemplation, who must stand as gatekeepers and verify every piece of information AI generates.
The author is an Assistant Professor at the School of Law, Christ University, Bengaluru.
Views Are Personal.