Need For An Ethical Framework On Usage Of Generative AI In The Legal Realm

Update: 2023-03-30 07:52 GMT


Generative AI chatbots such as ChatGPT and Bing Chat are among the most remarkable advancements in language processing applications. With an estimated user base of 100 million just two months after its launch, ChatGPT is also one of the most quickly adopted innovations in history. The alluring feature of generating detailed text outputs from nothing but a prompt, or a gentle nudge in the form of a short text input, has invited appreciation and provoked fear in equal measure. These applications are increasingly relied upon by lawyers, judges, academicians, and law students. For example, lawyers are using them in different aspects of legal services, such as drafting pleadings and patent applications, reviewing documents, the e-discovery process, and legal research in general. As some recent news reports indicate, these chatbots are also making their way into the decision-making process, and one High Court judge has mentioned, with caution, that he used ChatGPT to “assess the worldwide view” on a finer point of law. Law students and academicians are also using these tools widely for diverse purposes, including research and the preparation of manuscripts.

Most uses of AI tools in the legal arena are happening without an exhaustive assessment of their potential consequences and limitations, including inherent biases, lack of transparency, and lack of awareness about the cut-off date of the underlying training dataset. While the fast adoption of AI tools raises many ethical questions that need extensive deliberation in their specific institutional contexts, we intend to highlight just one dimension - the need to at least initiate a broader discussion within the Indian legal fraternity on the potential consequences of the use of AI tools. For instance, while generative AI applications have made inroads into the Indian legal arena, not much attention has been given to the plethora of concerns related to the privacy rights of clients and lawyers’ fiduciary obligations. One recent incident may serve as a case study to illustrate the complexity of this issue. On March 20, certain users of ChatGPT found themselves able to view the headings of conversations that other users had with the chatbot, rather than those from their own chat history. This episode also exposed payment-related information of a few subscribers active at that time, leading to the application being taken offline for a few hours. OpenAI attributed the incident to a bug in an open-source library it uses.

The privacy policy of ChatGPT lists instances where users’ personal information may be disclosed without notice, including disclosures to vendors and service providers, business transfers, affiliates, and for addressing legal requirements. Further, the privacy policy provides: “You should take special care in deciding what information you send to us via the Service or email. In addition, we are not responsible for circumvention of any privacy settings or security measures contained on the Service, or third party websites.”

As is clear from this privacy policy, OpenAI will not take responsibility for privacy breaches arising from the use of an AI tool like ChatGPT. Lawyers making use of augmented drafting services owe their clients a duty of confidentiality with respect to the clients’ information, and it is evident that the use of tools like ChatGPT can violate it. The common law conception of attorney-client privilege and the statutory mandate under Section 126 of the Indian Evidence Act bar legal practitioners from disclosing any information related to a case during the period of their employment, with the right to waive such privilege vesting exclusively with the client.

Yet another example worth highlighting is patent drafting. AI tools have made strong inroads in the drafting of patent applications. But the use of generative AI chatbots by patent agents to draft patent claims opens up the possibility of a data breach, thereby threatening the novelty requirement, an essential prerequisite for the grant of a patent. Such uses may even result in the disclosure of vital information to the patent applicant’s competitors before the application is submitted to the patent office.

Hence, a lawyer must ensure that any AI application or tool they use comes with a sound privacy policy that guarantees the security of clients’ data. In the Indian context, we should also be mindful that India lacks a robust and comprehensive data privacy legislation. This poses grave concerns, as AI applications thrive on the processing of sensitive user data. Seeking assistance from language processing applications invites threats in the form of unauthorised gathering of personal information by the AI tools, and there may be limited remedies available within the current legal framework. While some of these concerns also arise with cloud-based technology solutions in general, generative AI applications amplify them because their very function is content generation.

While the incorporation of technological solutions to provide efficient and better legal services may prompt more lawyers and law firms to adopt AI tools in their day-to-day work, it is imperative that this happens within a robust regulatory framework. The Bar Council of India (BCI) and the various lawyers’ associations may initiate discussions in this regard and craft a holistic framework for regulating the use of AI in legal tech applications. The BCI Rules, especially the section on the “duty to the client” under Chapter II (Standards of professional conduct and etiquette), need to be updated to match the current progress in the field of AI and its impact on law practice. Judges also need to be made aware of the potential consequences and limitations of AI tools, particularly with the help of legal and technological experts working in the area.

One can already see initiatives being taken in other major jurisdictions. For example, the American Bar Association (ABA) has amended its Model Rules of Professional Conduct (hereinafter “MPC”) to craft rules in accordance with the evolving facets of AI. The competence rule of the MPC requires lawyers to stay abreast of evolving technologies, along with the promises and pitfalls associated with their use in the legal field. The confidentiality rule of the MPC emphasises that lawyers must take reasonable care to safeguard client information from disclosure when availing of technological assistance. In the UK, too, the Solicitors Regulation Authority (SRA) discussed the benefits and limitations associated with the adoption of technology in legal services in a report released in 2021. The report also sheds light on the ethical responsibility of lawyers and the liability associated with the use of such technologies.

It is evident that the current professional rules of conduct in India require a thorough review and overhaul to account for advances in technology. But, as many scholars have flagged, there is an urgent need to acknowledge the potential consequences of deploying AI without an appropriate review and regulatory framework.

[Arul George Scaria is an associate professor of law at National Law School of India University, Bengaluru and Vidya Subramanian is a researcher at National Law University, Delhi. Authors are part of an interdisciplinary research project titled ‘Algorithmic Bias and the Law: An exploratory investigation in the Indian context’, jointly funded by IIT Delhi and NLU Delhi. Views expressed are personal.] 
