Introduction
The Union Government has amended the Information Technology (IT) Rules to mandate clear and prominent labelling of AI-generated synthetic content, particularly photorealistic images, audio, and videos. The amendments, set to come into effect from February 20, represent one of India’s most direct regulatory interventions in the age of generative AI. As deepfakes and hyper-realistic digital fabrications proliferate, the government has moved to strengthen platform accountability, reduce misinformation risks, and protect individuals from reputational and privacy harms.
Why the Amendment Was Necessary
The rapid advancement of generative AI tools has dramatically lowered the barrier to creating convincing synthetic media.
Deepfakes can now:
- Replicate faces and voices
- Fabricate events that never occurred
- Produce non-consensual intimate imagery
Such content poses multiple risks:
- Electoral misinformation
- Reputational damage
- Financial fraud
- Online harassment

The absence of visible disclosure often leads viewers to assume authenticity and share content without verification.
The amendment seeks to address this transparency gap and reinforce accountability in digital ecosystems.
Provisions of the Amended IT Rules
1. Mandatory Labelling
Platforms must ensure that AI-generated content is:
- Clearly and prominently labelled
- Identifiable as synthetic or artificially created

The objective is to prevent viewers from mistaking fabricated media for real events. This provision particularly targets photorealistic media, where deception is most plausible.
2. Definition of Synthetic Content
Synthetic content includes audio, visual, and audio-visual material. It applies when such content is:
- Artificially created, modified, or altered using computer-based tools
- Presented in a manner that appears real or authentic
The final definition is narrower than earlier drafts, focusing specifically on content that could mislead audiences.
3. Shorter Takedown Timelines
The amendments significantly tighten response timelines:
- Content identified by courts or the government as illegal must be removed within 3 hours (earlier 24–36 hours).
- Highly sensitive content, such as non-consensual nudity and deepfakes, must be removed within 2 hours.
This shift signals urgency in preventing viral spread of harmful synthetic media.
4. User Disclosure Obligations
- Platforms must require users to disclose when content is AI-generated.
- This reinforces shared responsibility between content creators and intermediary platforms.
5. Synthetic Content Treated as “Information”
- AI-generated content will be treated as “information” under the IT Rules.
- This ensures that synthetic media falls within the existing legal framework governing unlawful content.
Safe Harbour and Platform Liability
A central enforcement mechanism lies in the doctrine of safe harbour.
- Safe harbour protects digital platforms from being treated as publishers of user-generated content.
- If platforms fail to comply with labelling and takedown obligations:
  - They risk losing safe harbour protection.
  - This could expose them to direct legal liability for hosted content.

By linking compliance to liability protection, the amendment increases the regulatory pressure on intermediaries.
Important Clarifications
- Minor automatic enhancements by smartphone cameras, such as lighting adjustments or filters, are exempted from labelling requirements.
- The final definition avoids overbreadth by excluding trivial modifications and concentrating on deceptive synthetic content.
This distinction attempts to balance innovation with regulation.
Broader Implications
The amendment marks a significant step in India’s digital governance trajectory.
- It signals a shift toward proactive regulation of AI-driven misinformation and a greater emphasis on transparency in digital content.
- It reinforces faster grievance redressal and platform accountability.
- It may also influence future frameworks on AI ethics and data governance, and shape debates around content moderation and free expression.
At the same time, implementation will be critical. Overbroad enforcement or inconsistent interpretation could raise concerns about censorship or excessive regulatory control.
Conclusion
By mandating clear labelling of photorealistic AI-generated content and tightening takedown timelines, the government has responded to the escalating risks posed by deepfakes and synthetic media. The amendments reflect an attempt to adapt existing IT regulations to a rapidly evolving technological environment. While the framework emphasises transparency and accountability, its success will depend on careful enforcement that balances misinformation control with digital freedoms. In the era of generative AI, clarity about what is real and what is synthetic is becoming not just a technical issue, but a cornerstone of democratic trust.
