Three Hours To Comply: India's New Rules For AI-Generated Content And Deepfakes
Aayushman Gaikwad & Smruti Mishra
21 Feb 2026 12:29 PM IST

The February 2026 amendment to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 represents India's most assertive regulatory intervention into AI-generated content to date. By compressing takedown timelines, mandating technical traceability, and redefining intermediary obligations, the government has shifted from reactive moderation toward proactive algorithmic governance.
A major update emerged from India's tech policy landscape on February 10, when new rules took effect under the authority of the Ministry of Electronics and Information Technology.[1] These revisions mark a significant shift, aimed squarely at controlling how AI-made material spreads online. Instead of broad oversight, the focus zeroes in on manipulated videos and fake audio clips flooding social networks.
While earlier versions of the guidelines touched lightly on such issues, this round digs deeper. With digital fakery growing harder to spot, regulators moved to impose clearer responsibilities on platforms hosting user content. Notably, the burden now includes proactive detection measures. Rather than waiting for complaints, companies may need systems ready before harm occurs.
The amendment introduces four key structural changes:
- A statutory definition of “deepfake.”
- A drastic reduction in the takedown timeline from 36 hours to three hours.
- Mandatory technical disclosure and traceability measures for AI-generated content.
- Expanded compliance and dispute resolution obligations for significant intermediaries.
Together, these measures signal a regulatory model centered on speed, traceability, and enforceable accountability.
What Counts as a Deepfake?
Now, the guidelines define a "deepfake" as content generated using algorithmic or computational techniques to produce sound, visuals, or both, so convincingly that it could pass as an authentic representation of real individuals.[2] Such material includes pictures made by artificial intelligence, replicated voices, or video clips crafted to mislead an audience.
However, the definition excludes routine creative work. Standard video corrections, noise removal, image format conversions, and accessibility improvements do not qualify as deepfakes.[3] Similarly, PDFs and textual materials with illustrations remain outside the definition's scope, as do forged or false documents.
Perhaps the most striking shift comes with the takedown timeline. Earlier versions had required intermediaries to remove unlawful content within 36 hours of receiving notice. The new amendment slashes that to merely 3 hours—a 92% reduction in response time, reflecting government concern about how quickly misinformation spreads on social media.[4]
This shortened period is triggered by either of two forms of notice: actual knowledge through a court order, or a written communication issued by an authorized officer not below the rank of Joint Secretary, specifying the legal grounds and identifying the exact web address of the material in question. Notably, all such notices must undergo a three-level review before being acted upon.
If especially delicate material appears, such as non-consensual private photos, altered pictures depicting terrorist actions, or anything likely to provoke harm, platforms must report it straightaway. When children are involved, as defined under the Bharatiya Nyaya Sanhita 2023 and the Protection of Children from Sexual Offences Act, 2012[5], information has to reach the National Commission for Protection of Child Rights within one day. Crucially, these reports should keep evidence intact, avoiding intrusive markings such as visible stamps.
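By way of illustration only, the sketch below shows how a platform's compliance tooling might track these compressed deadlines; the notice fields, the window constants, and the compliance_deadlines helper are assumptions made for this example, not anything the Rules prescribe.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical illustration: the Rules set the time limits,
# not any particular data model or implementation.
TAKEDOWN_WINDOW = timedelta(hours=3)       # removal after a valid notice
CHILD_REPORT_WINDOW = timedelta(hours=24)  # report to NCPCR when children are involved

@dataclass
class TakedownNotice:
    url: str                 # exact web address identified in the notice
    received_at: datetime    # when the court order or officer's notice arrived
    involves_children: bool = False

def compliance_deadlines(notice: TakedownNotice) -> dict:
    """Return the latest permissible times for removal and, where relevant, reporting."""
    deadlines = {"removal": notice.received_at + TAKEDOWN_WINDOW}
    if notice.involves_children:
        deadlines["ncpcr_report"] = notice.received_at + CHILD_REPORT_WINDOW
    return deadlines

notice = TakedownNotice(
    url="https://example.com/clip",
    received_at=datetime(2026, 2, 21, 9, 0, tzinfo=timezone.utc),
    involves_children=True,
)
print(compliance_deadlines(notice))
# removal due by 12:00 UTC the same day; NCPCR report by 09:00 UTC the next day
```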
Detection and Transparency
To enable users to recognize and assess AI-generated content, platforms must now deploy technical disclosure measures.[6] This marks India's first regulatory step beyond basic content moderation.
Specifically, intermediaries must embed metadata or use other unique technical markers that allow tracing AI-generated content back to its original source. Unlike basic watermarks that users can easily see, these markers are designed to be harder to strip with simple editing tools. They help verify authenticity while combating coordinated disinformation campaigns.[7]
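The amendment does not prescribe a particular marking scheme. Purely as a sketch, and assuming a sidecar provenance record keyed to the file's hash rather than an embedded watermark, a minimal traceability marker could be built from a keyed hash; the provenance_record and verify helpers and the platform-held key below are illustrative assumptions.

```python
import hashlib
import hmac
import json

# Hypothetical sketch: bind a generated file to its originating model/account
# with a keyed hash, so the record can later be verified but not trivially forged.
SECRET_KEY = b"platform-held-signing-key"   # assumed; key management is out of scope

def provenance_record(content: bytes, generator_id: str, user_ref: str) -> dict:
    content_hash = hashlib.sha256(content).hexdigest()
    payload = json.dumps(
        {"sha256": content_hash, "generator": generator_id, "user": user_ref},
        sort_keys=True,
    )
    signature = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def verify(record: dict, content: bytes) -> bool:
    expected = hmac.new(SECRET_KEY, record["payload"].encode(), hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, record["signature"])
            and json.loads(record["payload"])["sha256"] == hashlib.sha256(content).hexdigest())

record = provenance_record(b"<ai generated video bytes>", "model-x", "creator-123")
print(verify(record, b"<ai generated video bytes>"))   # True
print(verify(record, b"<edited video bytes>"))         # False: content no longer matches
```

A keyed record of this kind supports the traceability aim because any edit to the underlying file breaks verification, although, unlike an embedded watermark, the record must travel with or be looked up alongside the content.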
Significantly, once usage exceeds internally declared thresholds, automated systems must check content against an accuracy benchmark of at least 50%, and any content that fails this threshold must be flagged. The amendment also tightens the language from the earlier draft's 'endeavor to use' to a plain 'must', so intermediaries can no longer rely on a showing of good-faith effort.[8] The detection systems must be functionally deployed and reviewed at least every three months, with audits available upon request.
However, the amendment does not clearly define how “accuracy” is to be measured — whether through machine confidence scoring, human verification, or third-party auditing — leaving operational ambiguity that may complicate enforcement.
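One plausible operational reading, treating the 50% figure as a minimum confidence score from an automated classifier, is sketched below; the authenticity_score field and the flag_for_review routine are hypothetical illustrations and are not drawn from the Rules.

```python
# Hypothetical gate: any item whose verification confidence falls below the
# declared threshold is flagged for human review rather than passed through silently.
ACCURACY_THRESHOLD = 0.5   # the amendment's 50% figure, read here as a confidence score

def flag_for_review(items: list) -> list:
    """items: [{'id': ..., 'authenticity_score': float}, ...]"""
    return [item for item in items if item["authenticity_score"] < ACCURACY_THRESHOLD]

batch = [
    {"id": "a1", "authenticity_score": 0.92},   # confidently verified, published
    {"id": "b2", "authenticity_score": 0.31},   # below threshold, flagged
]
print(flag_for_review(batch))   # [{'id': 'b2', ...}] queued for manual verification
```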
These rules apply to specific categories of intermediaries: significant social media intermediaries as defined under Rule 3(1)(w) and community-facing online gaming intermediaries, rather than passive infrastructure hosts.
Dispute resolution must now be completed within seven days, down from the previous 15-day window.[9] Rather than relying on voluntary self-regulatory relief mechanisms, this codifies a default dispute protocol. The safe harbour provisions tied to due diligence remain intact, though they notably fail to specify what happens if these procedures go awry and produce erroneous removals, potentially exposing intermediaries to liability and provoking a chilling effect.
This approach represents a hybrid regulatory model. The rules aim to strike a balance between government oversight and platform-level pre-publication structural controls, but the architecture itself pushes toward algorithmic content moderation.
Constitutional and Policy Dimensions
From a legal perspective, these rules engage Article 19(1)(a) of the Indian Constitution, which guarantees freedom of speech and expression.[10] Any restriction on this fundamental right must satisfy strict constitutional tests: Under the Supreme Court's proportionality framework — articulated in Modern Dental College v. State of Madhya Pradesh (2016)[11] and reaffirmed in K.S. Puttaswamy v. Union of India (2017)[12] — any restriction must pursue a legitimate aim, adopt rational means, be necessary in the absence of less restrictive alternatives, and include adequate procedural safeguards.
Critics argue the three-hour takedown mandate imposes burdens, implementing what amounts to a prior restraint, and avoids meaningful procedural safeguards—all of which could be construed as censorship and could create a chilling effect on free expression.[13] Satirical content, for instance, may become more difficult to distinguish from genuine deepfakes, raising concerns if enforcement lacks nuance and subjects material to pre-emptive removal.
If enforced strictly as written, the rules' vagueness will invite constitutional challenge over whether they are narrowly tailored and adequately calibrated. Meeting the filing demands requires significant effort, while redress turns on layered prior analysis: proportionality assessments come first, followed by the four-step nexus between means and ends, arguments of necessity, and the availability of milder, graduated penalties. After-the-event scrutiny offers minimal safeguarding, reducing questions of personal privacy or societal harm to broad regulatory assurances without clear boundaries.
Looking Ahead
India joins the European Union, California, China, and others in developing comprehensive legal frameworks to address deepfakes. Unlike the European Union's risk-classification model under the AI Act, India's framework is timeline-centric and compliance-driven. China mandates visible watermarking of synthetic media, while California primarily targets election-related deepfakes. India's approach blends rapid takedown enforcement with technical traceability requirements, positioning it between speech regulation and structural platform governance.
Small and medium-sized platforms will need to build specialized verification tools, and keeping them deployed will remain imperative as both misinformation risks and encryption protocols evolve.
More challenges seem inevitable. The category of "AI-generated content" will continue to expand in scale and complexity as tools become more sophisticated and widely available, driven by rapidly advancing technologies and commercial incentives. This evolution will intensify conflicts with end-to-end encryption and freedom of online expression. Balancing public safety needs against preserving the foundations of democratic solidarity will require continuous refinement.
The February 2026 amendment marks a decisive shift from intermediary neutrality to algorithmic accountability. Whether this framework strengthens democratic resilience or constrains expressive freedom will ultimately depend on judicial interpretation, administrative restraint, and the evolving balance between technological innovation and constitutional liberty.
1. Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, Ministry of Electronics and Information Technology Notification, G.S.R. 120(E) (India), https://www.meity.gov.in.
2. Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, Rule 2(1)(d), G.S.R. 120(E), Ministry of Electronics and Information Technology (India).
3. Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, Rule 2(1)(d) Explanation, G.S.R. 120(E) (India).
4. Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, Rule 3(1)(b)(ii), G.S.R. 120(E) (India).
5. Bharatiya Nyaya Sanhita, 2023, No. 45 of 2023, §§ 69-72 (India); Protection of Children from Sexual Offences Act, 2012, No. 32 of 2012 (India).
6. Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, Rule 4(4)(a), G.S.R. 120(E) (India).
7. Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, Rule 4(4)(b), G.S.R. 120(E) (India).
8. Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, Rule 4(4)(c), G.S.R. 120(E) (India).
9. Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, Rule 4(2)(d), G.S.R. 120(E) (India).
10. India Const. art. 19(1)(a).
11. Modern Dental College & Research Centre v. State of Madhya Pradesh, 2016 INSC 408 : (2016) 7 SCC 353.
12. Justice K.S. Puttaswamy (Retd.) v. Union of India, 2017 INSC 937 : (2017) 10 SCC 1.
13. Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, Rule 3(1)(b)(ii), G.S.R. 120(E) (India); see also Internet Freedom Foundation, "Analysis of IT Amendment Rules 2026" (Feb. 2026), https://internetfreedom.in.
Views are personal
