The Ministry of Electronics and Information Technology, by notification dated 10 February 2026, has amended the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, with effect from 20 February 2026. The amendment introduces a structured regulatory framework for AI-generated content and significantly strengthens the compliance obligations of intermediaries.

At a broad level, the amendment reflects a clear shift towards regulating emerging risks arising from artificial intelligence, particularly deepfakes and manipulated digital content. Intermediaries are now required not only to act upon unlawful content but also to proactively prevent misuse of AI tools, ensure transparency, and maintain traceability of such content.

1. Statutory Definition: Synthetically Generated Information (SGI)

The amended Rules formally define SGI as audio, visual, or audio-visual information that is created or altered algorithmically in a manner that makes it appear real or authentic.

  • Included: Deepfakes, voice cloning, and AI-generated realistic images.
  • Excluded: "Good-faith" editing (such as colour correction, noise reduction, or routine formatting) that does not materially alter the meaning of the content.

2. The "3-Hour" Takedown & Rapid Response

India now prescribes some of the fastest statutory response timelines for AI-related harms:

  • Court/Government Orders: Unlawful AI-generated content must be removed within 3 hours of receipt of the order (down from 36 hours).
  • High-Risk Content: Material involving impersonation, morphed images, or non-consensual content must be removed within 2 hours of user reporting.
  • General Grievances: Other complaints must be resolved within 7 days.

3. Mandatory Transparency & Labelling

For platforms or entities using AI tools, the following are now mandatory:

  • Prominent Labelling: All AI-generated content must be clearly identified as such.
  • Immutable Metadata: Content must carry permanent identifiers or metadata indicating its origin, which cannot be removed or altered.
  • Quarterly User Notifications: Platforms must inform users every three months about the legal consequences of misuse of AI tools.

Further, significant social media intermediaries must now implement a pre-publication compliance mechanism. This involves obtaining user declarations as to whether content is AI-generated, verifying those declarations through reasonable technical measures, and ensuring appropriate labelling before such content is made publicly available. Failure to comply may result in loss of the due diligence (safe harbour) protection available to intermediaries under the law.

Additionally, intermediaries are now required to deploy appropriate technological tools, including automated systems, to detect and prevent unlawful content. This marks a shift from a reactive, complaint-driven framework to a proactive compliance model in which platforms are expected to actively monitor and regulate content on their systems.

Conclusion

The 2026 amendment represents a significant development in India's digital regulatory landscape, particularly in relation to AI-generated content. It introduces clear definitions, strict timelines, and enhanced obligations around monitoring, labelling, and traceability. Overall, the framework places increased responsibility on intermediaries to prevent misuse of AI technologies while ensuring transparency and accountability in the digital ecosystem.

Note:

The contents of this article are for informational purposes only and should not be interpreted as soliciting, advertisement or legal advice. Specialist advice should be sought for specific circumstances.