Centre Notifies IT Rules Amendment To Regulate AI-Generated Content; Platforms Must Take Down Illegal Content Within 3 Hrs

Update: 2026-02-10 14:16 GMT

The Central Government has notified sweeping amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, significantly tightening the regulatory framework governing synthetically generated and artificial intelligence-based content on digital platforms.

The amendments, notified by the Ministry of Electronics and Information Technology in exercise of powers under Section 87 of the Information Technology Act, 2000, will come into force on February 20, 2026.

Legal Recognition Of Synthetic Content

For the first time, the Rules formally define and regulate “synthetically generated information”, covering audio, visual and audio-visual content that is artificially or algorithmically created or altered in a manner that appears real or indistinguishable from natural persons or real-world events.

The definition expressly targets deepfakes and AI-generated impersonations, while carving out exceptions for routine editing, accessibility enhancements, academic and training materials, and good-faith formatting or technical corrections that do not materially alter content.

The amendment clarifies that any reference to “information” used for unlawful acts under the IT Rules will now explicitly include synthetically generated information.

Mandatory Labelling And Metadata Requirements

Intermediaries offering tools that enable creation or dissemination of synthetic content are now required to ensure prominent labelling of such content. Visual content must carry clear on-screen labels, while audio content must include prefixed audio disclosures.

Further, platforms must embed permanent metadata or provenance markers, including unique identifiers, to trace the intermediary resource used to generate such content, to the extent technically feasible. Intermediaries are expressly barred from allowing removal or suppression of such labels or metadata.

Prohibition On Illegal AI Content

The Rules require intermediaries to deploy automated and technical safeguards to prevent the generation or dissemination of synthetic content that violates existing laws. This includes content involving child sexual abuse material, non-consensual intimate imagery, obscenity, impersonation, false electronic records, and content related to explosives, arms or ammunition.

Synthetic content that falsely depicts individuals or events in a deceptive manner is specifically prohibited.

User Declaration And Platform Verification Obligations

Significant social media intermediaries are now required to obtain a declaration from users on whether uploaded content is synthetically generated. Platforms must also deploy appropriate technical tools to verify the accuracy of such declarations.

Where content is confirmed to be synthetically generated, intermediaries must ensure clear and prominent disclosure before publication. Failure to act on such content may result in the platform being deemed to have failed in exercising due diligence.

Faster Takedown And Compliance Timelines

The amendment substantially shortens response timelines for intermediaries. Takedown or disabling of access pursuant to lawful orders must now be undertaken within three hours, as against the earlier 36-hour window.

Grievance redressal timelines have also been shortened, with initial acknowledgements reduced from 15 days to 7 days, and certain content removal obligations reduced to as little as two hours.

Enhanced Reporting And Law Enforcement Coordination

Platforms must now inform users that violations involving synthetic content may lead to account suspension, content removal, disclosure of user identity to victims, and mandatory reporting to law enforcement authorities where required under criminal law, including offences under child protection and election laws.

The amendment also updates statutory references by replacing the Indian Penal Code with the Bharatiya Nyaya Sanhita, 2023, in line with recent criminal law reforms.

Safe Harbour Protection Clarified

The Rules clarify that removal or disabling of access to unlawful or synthetic content by intermediaries, including through automated tools, will not amount to a violation of safe harbour protections under Section 79 of the IT Act, provided due diligence obligations are met.
