Deepfakes And Breach Of Personal Data – A Bigger Picture

    On November 07, 2023, a day after a licentious video of actor Rashmika Mandanna surfaced on several social media platforms, she publicly denounced it as fake. The actor’s face had been superimposed, without her knowledge or consent, onto the body of a British-Indian influencer.

    This is an example of what is called a ‘deepfake’ video, so named for the artificial intelligence tools used to manipulate the underlying images and videos. The images thus created are a form of disinformation; whether they are objectionable or not depends on the context and on how they are perceived. Celebrities are an easy target, and objectionable videos of them are a highly marketable commodity.

    How are deepfakes created?

    Deepfakes are synthetic videos that use deep learning and image-manipulation techniques to depict events that never happened, typically in order to spread disinformation. They are most commonly produced using Generative Adversarial Networks (GANs), a class of machine-learning (ML) models in which two neural networks, a generator and a discriminator, are trained against each other.

    That is how the late actor Paul Walker was digitally resurrected for Fast & Furious 7. During the 2020 Delhi Legislative Assembly elections, a speech by politician Manoj Tiwari delivered in English was manipulated and disseminated in the Haryanvi dialect. The creator of a deepfake video must first train a neural network on many hours of real video footage of the person, to give the network a realistic understanding of how that person looks and moves. The trained network is then combined with computer-graphics techniques to superimpose a copy of the person onto a different actor.

    Deepfake imagery may imitate a face, body, voice, speech or environment, or any other piece of personal information, manipulated to create an impersonation.
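    The adversarial training described above can be illustrated with a toy, numpy-only sketch (a hypothetical illustration, not any production deepfake tool): a generator learns to transform random noise until its output distribution fools a simple discriminator, the same dynamic that GAN-based deepfake software applies to video frames.

    ```python
    import numpy as np

    # Toy illustration of GAN training: a generator learns to mimic "real"
    # data (here, a 1-D Gaussian with mean 4) well enough to fool a
    # discriminator. Real deepfake pipelines use deep convolutional
    # networks on images; this sketch keeps both networks linear.

    rng = np.random.default_rng(0)
    REAL_MEAN, LR, STEPS, BATCH = 4.0, 0.05, 2000, 64

    a, b = 1.0, 0.0   # generator: x_fake = a*z + b, starts far from target
    w, c = 0.0, 0.0   # discriminator: d(x) = sigmoid(w*x + c)

    def sigmoid(t):
        return 1.0 / (1.0 + np.exp(-t))

    for _ in range(STEPS):
        z = rng.standard_normal(BATCH)
        real = REAL_MEAN + rng.standard_normal(BATCH)
        fake = a * z + b

        # Discriminator step: push d(real) toward 1 and d(fake) toward 0.
        d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
        grad_w = np.mean(-(1 - d_real) * real + d_fake * fake)
        grad_c = np.mean(-(1 - d_real) + d_fake)
        w -= LR * grad_w
        c -= LR * grad_c

        # Generator step: push d(fake) toward 1 (non-saturating GAN loss).
        d_fake = sigmoid(w * fake + c)
        a -= LR * np.mean(-(1 - d_fake) * w * z)
        b -= LR * np.mean(-(1 - d_fake) * w)

    # After training, the generator's offset b has drifted toward REAL_MEAN.
    print(f"generator offset after training: {b:.2f}")
    ```

    The same back-and-forth, scaled up to deep networks and hours of real footage, is what lets a trained generator output frames the discriminator (and a human viewer) cannot distinguish from genuine video.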

    Greatest risks

    Gender Inequity

    The term ‘deepfake’ was coined in 2017 by a Reddit user who created a forum dedicated to the creation and use of deep-learning software for synthetically face-swapping female celebrities into pornographic videos. According to a report by Deeptrace, an Amsterdam-based cybersecurity firm, deepfake pornographic videos overwhelmingly target women rather than men, thereby deepening gender inequity.[1] Women make up 90% of the victims of crimes such as revenge porn and other forms of non-consensual pornography and harassment, and deepfakes are one more addition to that list.

    Political Risks

    In March 2022, a video message of Ukrainian President Volodymyr Zelenskyy surfaced on social media platforms in which the President appeared to implore Ukrainians to lay down their arms and surrender. The President’s office immediately disavowed the video, identifying it as a deepfake. It was the first high-profile use of a deepfake during an armed conflict and marked a turning point in information operations.

    Financial Risks

    Deepfakes are not limited to imagery and videos; as noted above, AI tools also exist to clone individuals’ voices in order to execute financial scams. About 47% of Indian adults have experienced, or know someone who has experienced, some form of AI voice scam. According to McAfee’s report on AI voice scams, around 83% of Indian victims said they lost money, with 48% losing over INR 50,000.

    Laws regulating Deepfakes in India

    The Ministry of Electronics and Information Technology, in its latest advisory dated November 07, 2023, directed significant social media intermediaries to:

    • Exercise due diligence and make reasonable efforts to identify misinformation and deepfakes, in particular information that violates the provisions of rules and regulations and/or user agreements;
    • Act expeditiously against such cases, well within the timeframes stipulated under the IT Rules, 2021;
    • Ensure that users do not host such information/content/deepfakes;
    • Remove any such content within 36 hours of its being reported; and
    • Disable access to the content/information expeditiously, well within the timeframes stipulated under the IT Rules, 2021.

    The Information Technology Act, 2000

    The advisory further reiterated that any failure to act as per the relevant provisions of the Information Technology Act, 2000 (the “IT Act”) and Rule 7 of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (the “IT Rules”) would render organisations liable to lose the safe-harbour protection available under Section 79(1) of the IT Act.

    Section 79(1) of the IT Act exempts online intermediaries from liability for any third-party information, data or communication link made available or hosted by them. Rule 7 of the IT Rules empowers aggrieved individuals to take platforms to court under the provisions of the Indian Penal Code.

    Section 66E of the IT Act prescribes punishment for violation of an individual’s privacy through publishing or transmitting an image of the private area of that person without his or her consent: imprisonment of up to three years, a fine of up to INR 2 lakh, or both.

    Sections 67, 67A and 67B of the IT Act prohibit and prescribe punishments for publishing or transmitting, in electronic form, obscene material, material containing sexually explicit acts, and material depicting children in sexually explicit acts, respectively.

    In cases of impersonation in electronic form, including artificially morphed images of an individual, social media companies have been advised to take action within 24 hours of receiving a complaint about such content. In this connection, Section 66D of the IT Act prescribes imprisonment of up to three years and a fine of up to one lakh rupees for anyone who, by means of any communication device or computer resource, cheats by personation.

    Advisory for the Aggrieved

    The Union Ministry further encouraged the aggrieved persons to file First Information Reports (FIRs) at their nearest police station and avail remedies provided under the IT Rules.

    Global Regulation on AI

    The Bletchley Declaration – A collective effort in a collaborative spirit

    Twenty-eight countries, including the US, Canada, Australia, China, Germany and India, along with the European Union, have joined together to prevent the ‘catastrophic harm, either deliberate or unintentional’ that may arise from the ever-increasing use of AI.

    The Declaration lays down a step forward for nations to cooperate and collaborate on the existing and potential risks of AI, and sets an agenda aimed at:

    1. Identifying risks in the arena of AI; and
    2. Building risk-based policies across countries, aimed at increasing transparency among private players developing frontier AI capabilities.

    Countries that have taken proactive steps towards curbing the menace of deepfakes

    The UK government plans to introduce national guidelines for the AI industry and is evaluating legislation that would require clear labelling of AI-generated photos and videos.

    The European Union has brought into force the Digital Services Act, which obliges social media platforms to adhere to labelling obligations, enhancing transparency and helping users determine the authenticity of media.

    South Korea has passed a law making it illegal to distribute deepfakes that could harm the public interest, with offenders facing up to five years’ imprisonment or fines of up to 50 million won (approximately USD 43,000).

    In January 2023, China’s Cyberspace Administration, together with the Ministry of Industry and Information Technology and the Ministry of Public Security, stressed that deepfakes must be clearly labelled in order to prevent public confusion.

    The United States has tasked the Department of Homeland Security (DHS) with establishing a task force to address digital content forgeries, also known as “deepfakes”. Many states have also enacted their own legislation to combat deepfakes.

    Lawsuits that paved the way against Deepfakes – India

    Closer to home, Bollywood actor Anil Kapoor filed a lawsuit after finding AI-generated deepfake content using his likeness and voice to create GIFs, emojis, ringtones and even sexually explicit material. In that lawsuit, Anil Kapoor v. Simply Life India and Ors.,[2] the Delhi High Court granted protection to the actor’s individual persona and personal attributes against misuse, specifically through AI tools for creating deepfakes. The Court granted an ex parte injunction restraining sixteen (16) entities from utilising the actor’s name, likeness or image, and from employing technological tools such as AI, for financial gain or commercial purposes. Along the same lines, in Amitabh Bachchan v. Rajat Negi and Ors.,[3] the legendary actor Amitabh Bachchan was granted an ad interim in rem injunction against the unauthorised use of his personality rights and personal attributes, such as his voice, name, image and likeness, for commercial purposes.

    Ways to combat

    Technological Solutions - Use of Blockchain in combating deepfakes

    Axon Enterprise Inc., the leading maker of US police body cameras, has upgraded its devices in ways that could help discredit deepfake videos. Footage from Axon’s Body 3 camera has emerged as crucial evidence in cases of alleged police misconduct, after defence lawyers questioned the integrity of police videos, citing noticeable edits made to shorten a scene or adjust a timestamp. The upgraded camera introduces an additional layer of security that, by default, renders captured footage inaccessible for playback, download or editing without a form of authentication, such as a password.
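    The tamper-evidence idea behind such devices, and behind the blockchain-based approaches this section’s heading alludes to, can be sketched as a hash chain: each frame’s digest folds in the previous digest, so altering any frame invalidates every record that follows it. The sketch below is a hypothetical illustration of the principle, not Axon’s actual implementation.

    ```python
    import hashlib

    # Hash-chain sketch of tamper-evident footage: each frame's SHA-256
    # digest incorporates the previous digest, so editing one frame breaks
    # the chain for all later frames. Names and the seed are illustrative.

    def chain_digests(frames, seed=b"camera-serial-0001"):
        """Return the running hash chain over raw frame bytes."""
        digests, prev = [], hashlib.sha256(seed).digest()
        for frame in frames:
            prev = hashlib.sha256(prev + frame).digest()
            digests.append(prev)
        return digests

    def verify(frames, digests, seed=b"camera-serial-0001"):
        """True only if every frame still matches the recorded chain."""
        return chain_digests(frames, seed) == digests

    footage = [b"frame-0", b"frame-1", b"frame-2", b"frame-3"]
    recorded = chain_digests(footage)      # stored at capture time

    tampered = list(footage)
    tampered[1] = b"frame-1-edited"        # a deepfake-style substitution

    print(verify(footage, recorded))       # True  - original footage intact
    print(verify(tampered, recorded))      # False - the edit is detected
    ```

    In a blockchain deployment the recorded digests would be anchored on a distributed ledger at capture time, so that neither the footage nor the reference chain could be silently rewritten after the fact.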

    Responsibility and Accountability of social media platforms

    Pictures and images are sensitive personal data capable of identifying an individual, as defined under the Digital Personal Data Protection Act, 2023. Deepfakes are, thus, a breach of personal data and a violation of an individual’s right to privacy. Publicly available data may not fall under the law, but social media giants will still have to take responsibility if the information hosted on their sites can be mined for the purpose of creating misinformation. Further, since such disinformation is also disseminated through social media channels, controls have to be put in place there as well. YouTube has recently announced measures requiring creators to disclose whether content has been created with AI tools. The need is for a uniform standard that all channels can adhere to and that is common across borders.

    The Union Minister of Electronics and Information Technology has announced that the government is likely to unveil, on November 24, 2023, a framework to deal with the misuse of AI and deepfake technology.


    Authors: Vikrant Rana (Managing Partner), Anuradha Gandhi (Managing Associate) and Rachita Thakur (Associate Advocate) At S.S. Rana & Co. Views are personal.




    1. Deeptrace, The State of Deepfakes: Landscape, Threats, and Impact (2019), https://regmedia.co.uk/2019/10/08/deepfake_report.pdf

    2. Anil Kapoor v. Simply Life India and Ors., 2023 LiveLaw (Del) 857.

    3. Amitabh Bachchan v. Rajat Negi and Ors., 2022 SCC OnLine Del 4110.

