Byte The Ballot: Elections In The Age Of Artificial Intelligence

Dhruv Singhal

27 Feb 2024 1:55 PM GMT

  • Byte The Ballot: Elections In The Age Of Artificial Intelligence
    Listen to this Article

    “Constitutional guarantees cannot be subjected to the vicissitudes of technology.” - Justice K.S. Puttaswamy v. Union of India

    On 16th February 2024, 20 technology giants including Google, Microsoft, OpenAI, Amazon, and Meta signed a “Tech Accord to Combat Deceptive Use of AI in 2024 Elections.” Portrayed as a voluntary framework of 7 principles, this Accord comes in the context of the increasingly dangerous challenge posed to electoral democracy by Artificial Intelligence (“AI”) in an exceptional year riddled with elections in more than 60 nations.

    Traditionally, AI has been understood to mean systems capable of comprehending complex datasets and producing logical predictions based on that data. With innovation, Generative AI has become capable of creating new data based on the dataset it is fed. On the one hand, the threat posed by AI to electoral democracy stems from the ability of AI to micro-target voters and use their data to influence their political choices and electoral outcomes on a large scale, as exemplified by the Cambridge-Analytica scandal. On the other hand, AI accelerates the spread of misinformation through deceptive content, especially deep fakes which can subtly mimic or alter the appearance, voice, or actions of politicians, election officials, and key stakeholders in the electoral system to misinform voters.

    In several countries including the United States, Canada, South Korea, Argentina, and even in the Russia-Ukraine war deceptive videos or audios of politicians have emerged making statements that are not factually attributable to them. In some cases, these could have potentially affected the voter perception of these politicians. Closer home, in February 2024, fellow party members of the former Prime Minister of Pakistan, Imran Khan who is currently incarcerated, released an AI-generated victory speech for him, just hours after Nawaz Sharif also claimed to have won the 2024 General Elections. In India, AI-generated videos of the late M. Karunanidhi, K. Chandrasekhar Rao, and Manoj Tiwari have been doing rounds.

    Artificial Influence: Hijacking the Voters

    Across the globe, it has been recognized that the increased deployment of AI has detrimental consequences for the fundamental rights of citizens at large. By making hyper-visible certain groups, and entrenching the invisibility of the marginalized, it could potentially violate the right to equality. Concerns have also been raised regarding the large-scale processing of personal data, which is often retrieved non-consensually or through 'dark patterns' which could violate the right to privacy. Similarly, the increased deployment of AI by political parties and private actors could deprive voters of a meaningful exercise of their freedom to vote.

    Free and fair elections have been held to constitute the unamendable basic structure of the Indian Constitution.[1] Well-established jurisprudence of the Indian judiciary, summarized in Union of India v. Association for Democratic Reforms (2002) has identified voting as a constitutionally protected form of expression under Article 19(1)(a).[2] This judgment further recognized the importance of ancillary aspects necessary for the effective exercise of the freedom to vote like knowledge about the criminal antecedents of the candidates standing in elections.[3] Therefore, having adequate information for voting to be a meaningful exercise is as much a part of the constitutional scheme as the freedom itself. The Court held that “freedom of speech and expression includes the right to impart and receive information which includes freedom to hold opinions.”[4]

    Algorithmically driven feeds along with deceptive AI-generated content fueling our confirmation bias have serious consequences for our abilities to receive unbiased information, think independently, form opinions, and engage freely in public life which is essential to advancing democratic values. Increased use of AI for content generation and dissemination leads to the establishment of hierarchies of algorithmic control over a person's worldview as well as their ability to make autonomous decisions. This is especially concerning because of the low levels of education in a nation where social media is a prime source of information.

    This makes for not only a policy argument in favor of regulating AI but also a constitutional mandate for the Government to address the question of deep fakes. Along similar lines, the Delhi High Court recently admitted a Public Interest Litigation (Chaitanya Rohilla v. Union of India) which sought guidelines from the Central Government to regulate AI and deep fakes.

    Charting a Course: How to Regulate?

    The Indian Government has hitherto maintained a light-touch approach to AI regulation to avoid excessive restrictions on its growth. Mr. Ashwini Vaishnav, Minister of Electronics and Information Technology recently informed the Parliament that the Central Government was not planning the promulgation of any law on AI for the time being, but acknowledged the ethical concerns and associated risks.

    The Ministry of Electronics & Information Technology has even warned social media intermediaries to take steps against deep fakes and misinformation, lest their safe harbor immunity be lost. However, tackling deceptive AI-generated content is more often than not, a race against time. Even before such content can be identified, flagged, and removed from a platform, it ends up doing its damage. Therefore, to substantially address this challenge, comprehensive legislation aimed at regulating AI software is the need of the hour.

    The need for such legislation has been recognized across various jurisdictions, including the European Union, the United States, and China. The European Union's AI Act, for instance, addresses the AI conundrum through a risk-based approach with different regulatory measures for AI systems with minimal risk, high risk, 'unacceptable' risk, and transparency risks. The yardstick for what constitutes risk is whether the deployment of the AI system can potentially create an adverse impact on user safety, or their fundamental rights. The Act is to apply to both technology providers (who are to conduct appropriate conformity assessment procedures) and deployers (who are to ensure that the required documentation of the assessment is available).

    Currently, the Act has been passed but not implemented, and therefore its ramifications are yet to be apparent. So, while the Act provisionally agreed to mandate the tracing of AI-generated content, it is to be seen how such measures are to operate. One solution is mandatory watermarking of AI-generated content which would entail the embedding of an identifiable and recognizable signature that is not visible to the human eye but 'algorithmically detectable,' which would not only prevent copyright infringement but also aid the identification of AI-generated misinformation, fake news, and deep fakes. However such measures have been stalled because of lack of consensus, and non-standardization of watermarking techniques.

    U.S. President, Joe Biden in October 2023 signed an executive order to develop effective mechanisms for not only watermarking but also red-teaming for generative AI entailing high risks. Red teaming for generative AI includes suggesting malicious prompts to produce offensive and harmful content, including outright lying to produce derogatory stereotypes to violent pictures. The U.S. Administration even hosted a “red-teaming” public assessment event where thousands of hackers assessed the models of leading AI companies. However, in light of the legal, documentation, and technical costs of red-teaming, policymakers ought only to introduce such regulations for high-risk systems, to not create barriers to entry for start-ups & quell innovation.

    Along with legislation, there have also been several recommendations to designate a nodal authority for AI regulation. NITI Aayog recommended sectoral regulators like the Telecom Regulatory Authority of India (“TRAI”) to regulate AI technologies in their sector, and recommended the constitution of a Council for Ethics and Technology. Recently, TRAI also suggested the establishment of a domestic statutory body called the Artificial Intelligence and Data Authority of India (“AIDAI”) that would be independent and would take a risk-based approach to regulating AI. TRAI also suggested the formation of a Multi-Stakeholder Body to act as an advisory body for AIDAI. But none of these proposals have yet been fructified.

    Until legislation can be framed or an authority constituted, there is a case to be made for interim measures and increased vigilance on the part of the ECI in regulating complaints of deepfakes and misinformation campaigns by developing protocols for swift grievance redressal. This can be achieved by appropriate capacity-building measures for the ECI officials. A prohibition on deepfakes can be imposed through the Model Code of Conduct, which though legally unenforceable, acts as a yardstick of conscience for political parties. Notably, laws from California and Texas have prohibited the dissemination of deceptive content featuring political candidates for a certain period before elections. There should also be a simultaneous emphasis on digital and media literacy and more prominently, AI literacy as a part of electoral awareness campaigns hosted by ECI.

    The confounding non-realities of AI, force us to grapple with the post-truth world in which we live. In a year with heavy political undertones, comprehensive AI legislation based on internationally recognized best practices, and an independent regulator are indispensable. Until that is achieved, it is incumbent upon the ECI to ensure that the legitimacy of the elections is not tarnished by the ominous influence of AI.

    The Author is a student at NLU, Jodhpur. Views Are Personal.

    1. Kesavananda Bharati v. State of Kerala, (1973) 4 SCC 225.

    2. 5 SCC 294, ¶54.

    3. Gautam Bhatia, Offend, Shock, Or Disturb: Free Speech Under the Indian Constitution (Oxford University Press, 2016)

    4. Id. at ¶34


    Next Story