Use Of AI In Court Adjudication Presents Both Opportunities & Challenges : CJI DY Chandrachud

Anmol Kaur Bawa

13 April 2024 12:54 PM GMT


    Chief Justice of India Dr DY Chandrachud, in his welcome address at the Indo-Singapore Judicial Conference, spotlighted the intervention of Artificial Intelligence (AI) in the realm of legal practice and judicial decision-making. Stressing the need to embrace technological developments and the onset of AI in the legal domain, the CJI also cautioned against 'high-risk' AI tools which may lead to unexplained bias and indirect discrimination when applied in the field of judicial adjudication.

    Turning to the key challenges posed by AI when used in the adjudication process, the CJI highlighted how its integration into the legal domain raises ethical concerns and practical challenges.

    "The integration of AI in modern processes including court proceedings raises complex ethical, legal, and practical considerations that demand a thorough examination. The use of AI in court adjudication presents both opportunities and challenges that warrant nuanced deliberation"

    The CJI further noted how the use of AI is no longer limited to administrative efficiency but has also seeped into legal research and judicial brainstorming. While legal firms of all sizes are leveraging AI to enhance their research efforts, AI is increasingly being experimented with in judicial adjudication across the globe. He illustrated this with the instance of a judge in Colombia using ChatGPT (an AI chatbot) in 2023 to clarify questions of law in a case concerning insurance claims for an autistic child.

    " In his groundbreaking decision, Justice Padilla incorporated a unique element into his judgment: a conversation with the ChatGPT bot. The court sought clarity on whether an autistic minor should be exempt from paying fees for their therapies by engaging in ChatGPT. Based on this exchange, he rendered a verdict affirming that, under Colombian law, an autistic child is indeed exempt from such fees." 

    The Colombian judge also clarified that AI in no way replaces judges or their independent and rational thinking. Recalling a similar incident in the Indian courts, the CJI mentioned how the Punjab & Haryana High Court in Jaswinder Singh v. State of Punjab relied upon ChatGPT to get a broader outlook on bail jurisprudence across the globe.

    "However, it is crucial to clarify that ChatGPT's input was not considered by the High Court when assessing the case's merits. Instead, it was intended to offer a broader understanding of bail jurisprudence, particularly in cases involving cruelty as a factor."  

    Recalling how the outbreak of COVID-19 led to the adoption of digital proceedings, the CJI described the pandemic period as one of greatly increased reliance on technology by the courts. Drawing a parallel with the United Kingdom's 'Digital Case System', he mentioned the Indian courts' use of the Case Management System for the efficient management of cases from the filing stage up until disposal.

    Lauding the creative use of AI in the administrative wing of the justice delivery mechanism, the CJI gave the example of the live transcription services recently introduced by the Supreme Court, as well as the AI-based 'Supreme Court Vidhik Anuvaad Software', which makes legal proceedings and judgments easier to understand in Hindi and 18 regional languages, thus bridging the gap between the common man and the judiciary.

    On The Worries Of The AI Wonder: Addressing Concerns Of Bias & Discrimination 

    Stressing the flip side of the coin, the CJI noted that several drawbacks persist even while making good use of AI. He mentioned how, in one case in the US, a lawyer submitted AI-generated bogus decisions in a legal document in an attempt to mislead the court. This, he highlighted, was an example of "hallucinations" - the AI generating false narratives or misleading responses.

    Without robust auditing mechanisms in place, instances of “hallucinations” – where AI generates false or misleading responses – may occur, leading to improper advice and, in extreme cases, miscarriages of justice. A cautionary tale from the US serves as a stark reminder of the risks, where a New York lawyer submitted a legal brief containing fabricated judicial decisions. 

    The element of bias is an important issue when dealing with AI. The CJI explained that AI systems may mete out unfair treatment without intending to do so, thus bringing in an element of 'indirect discrimination'. To illustrate this, he gave the example of the European Court of Human Rights' decision in DH & Ors v. Czech Republic, where ostensibly neutral tests evaluating the intellectual capability of children led to a disproportionate number of minority Roma children being sent to special schools, because the tests did not account for their cultural and linguistic differences.

    The CJI further analysed how such discrimination may occur: first, when an AI system learns from data that is inaccurate or incomplete; and second, when it makes decisions through opaque methods, leaving the reasoning behind such unfairness impossible for the human mind to trace. Such processes are referred to as 'black-box algorithms'.

    "In the realm of AI, indirect discrimination can manifest in two crucial stages. Firstly, during the training phase, where incomplete or inaccurate data may lead to biased outcomes. Secondly, discrimination may occur during data processing, often within opaque “black-box” algorithms that obscure the decision-making process from human developers. A "black box" refers to algorithms or systems where the internal workings are hidden from users or developers, making it difficult to understand how decisions are made or why certain outcomes occur."  

    Another example of such indirect discrimination can be seen in automated hiring systems, where even the developers may not know why the algorithm favoured certain individuals over others.

    "This lack of transparency raises concerns about accountability and the potential for discriminatory outcomes."

    In light of this, the CJI referred to the European Commission's proposal to regulate 'high-risk' AI in judicial settings owing to its 'black box' nature.

    A manifestation of such high-risk AI, as illustrated by the CJI, can be found in facial recognition technology (FRT), a tool that identifies or verifies an individual merely by detecting their face. Its identification capabilities, however, may impinge upon the privacy of individuals.

    "Facial recognition technology (FRT) serves as a prime example of high-risk AI, given its inherently intrusive nature and potential for misuse. Defined as a "probabilistic technology that can automatically recognize individuals based on their face in order to authenticate or identify them," FRT is the most widely used type of biometric identification." 

    Such a privacy violation was recently seen in Glukhin v. Russia, where the European Court of Human Rights considered the case of a man in Russia who was found guilty after being identified by FRT at a peaceful protest. The court held that such use of FRT is deeply intrusive and that strict rules are needed to prevent misuse.

    CJI Pushes For Capacity Building & Training In AI: The Solution To Maximise Effective Utilization Of AI  

    In terms of solutions, the CJI stressed that capacity building and training are the bedrock of responsible AI use. It was essential, he said, that legal professionals be equipped to navigate the challenges that AI poses.

    "By investing in education and training programs, we can equip professionals with the knowledge and skills needed to navigate the complexities of AI, identify biases, and uphold ethical standards in their use of AI systems. Additionally, capacity-building initiatives can foster a culture of responsible innovation, where stakeholders prioritize the ethical implications of AI development and deployment" 

    The CJI lauded the endeavours of the Council of Europe in bringing forth the AI Regulations and drafting a Framework Convention on Artificial Intelligence, Human Rights, Democracy, and the Rule of Law that reflects a commitment to developing global standards for AI governance. 

    Looking At The Brighter Side 

    The CJI raised concerns about a social divide and the deepening of class discrimination when it comes to equal access to AI and its benefits in the future. Referring to the famous American abolitionist Frederick Douglass, the CJI voiced the worry that, in tapping the benefits of AI, the rich may be enriched at the cost of the poor, especially in the matter of seeking justice.

    "The American abolitionist Frederick Douglass once said, “Where justice is denied, where poverty is enforced, where ignorance prevails, and where any one class is made to feel that society is an organized conspiracy to oppress, rob and degrade them, neither persons nor property will be safe.” This quote resonates deeply with the concerns surrounding the potential implications of increased reliance on AI in the legal domain. There is a fear that the adoption of AI may lead to the emergence of two-tiered systems, where access to quality legal assistance becomes stratified based on socioeconomic status. The poor may find themselves relegated to inferior AI-driven assistance, while only affluent individuals or high-end law firms can effectively harness the capabilities of legal AI. Such a scenario risks widening the justice gap and perpetuating existing inequalities within the legal system."

    However, the CJI urged us to look at the brighter side. He referred to the observation of Harvard Law Professor David Wilkins "that technology also disrupts established power structures, as seen in the music industry's transformation".

    It was highlighted how online streaming platforms like Spotify and Apple Music have challenged the dominance of traditional record labels, breaking down barriers to entering the music industry and showcasing one's talent irrespective of social background. Drawing a parallel with the legal industry, the CJI explained how the use of advanced technology can bring the underprivileged to the forefront and improve access to justice. The Supreme Court's virtual hearing facility was a case in point.

    "The adoption of hybrid-mode hearings by the Supreme Court of India represents a significant shift in the country's judicial landscape, with far-reaching implications for access to justice and the legal profession. By embracing hybrid-mode hearings, which allow virtual participation, the Supreme Court has removed geographical barriers. Now, lawyers from any corner of the country can represent their clients before the highest court without the need for expensive and time-consuming travel to the capital. This democratized access to the Supreme Court of India." 

    The CJI concluded that while the advancement of technology is inevitable, the fair use of AI can only be achieved by embracing collaboration within the international regulatory regime.

    "As we navigate the integration of AI into the legal domain, it is imperative that we remain vigilant in addressing the systemic challenges and ensuring that AI technologies serve to enhance, rather than undermine, the pursuit of justice for all. By embracing collaboration and fostering international cooperation, we establish a framework that promotes the responsible and ethical use of AI technologies across borders. This paves the way for a future where technology empowers and uplifts every member of society, fostering inclusivity, innovation, and progress. Together, we shape a world where the promise of AI is realized for the betterment of humanity."

