MeitY Implements New Deepfake Disclosure Rules Under IT Regulations

New Delhi – In a significant step toward regulating synthetic media, the Ministry of Electronics and Information Technology (MeitY) has unveiled new rules addressing the rise of AI-generated content, including deepfakes. Effective February 20, 2026, the amended IT Rules require clear labeling, traceability, and disclosure of synthetically generated media. This regulatory move positions India among the global leaders in tackling the growing risks posed by deepfake technology. However, experts warn that these measures may prove far harder to implement in practice than they appear on paper.

Challenges in Detection and Compliance

One of the most pressing issues highlighted by experts is the widening gap between the speed at which deepfake content is created and the ability to detect it. Generative AI tools are now capable of producing highly realistic deepfake videos in seconds. Meanwhile, detection systems remain reliant on probabilistic models that struggle to keep pace with rapidly advancing synthetic techniques. Experts have observed that detection tools require constant updates and retraining to address the emergence of new models.

Another challenge lies in ensuring compliance across the diverse tech ecosystem. Large social media platforms may have the resources to adopt sophisticated solutions such as watermarking, provenance tagging, and AI moderation systems. However, smaller startups and open-source platforms often lack the financial and technical capacity to implement such measures. This disparity raises questions about how regulatory compliance will be enforced equitably across the industry.
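To make the watermarking idea concrete, the sketch below shows one of the simplest possible schemes: hiding a short disclosure tag in the least-significant bits of raw media bytes. This is purely illustrative; production systems use far more robust, compression-resistant watermarks, and the function names here are invented for the example.

```python
# Illustrative least-significant-bit (LSB) watermarking sketch.
# Each bit of a short tag is hidden in the lowest bit of one byte
# of the media payload. Real platform watermarks are far more robust.

def embed_tag(pixels: bytes, tag: bytes) -> bytes:
    """Overwrite the lowest bit of the first len(tag)*8 bytes with the tag."""
    bits = [(byte >> i) & 1 for byte in tag for i in range(8)]  # LSB-first
    out = bytearray(pixels)
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & 0xFE) | bit
    return bytes(out)

def extract_tag(pixels: bytes, tag_len: int) -> bytes:
    """Read the hidden tag back out of the low bits."""
    bits = [pixels[i] & 1 for i in range(tag_len * 8)]
    out = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for j in range(8):
            byte |= bits[i + j] << j
        out.append(byte)
    return bytes(out)

media = bytes(range(256)) * 4        # stand-in for raw pixel data
tagged = embed_tag(media, b"AI")     # hide a 2-byte disclosure marker
assert extract_tag(tagged, 2) == b"AI"
```

Even this toy version hints at the cost asymmetry the article describes: embedding and verifying tags at scale requires engineering investment that small platforms may struggle to afford.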

Traceability and Cross-Platform Breakdowns

Even with metadata markers embedded in synthetically generated content, traceability weakens significantly once content is transferred between platforms. According to global studies on digital watermarking, the process of downloading, compressing, or re-uploading content often strips away crucial metadata. This creates gaps in the provenance chain, making it difficult to reliably trace the origins of manipulated media.
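The fragility described above can be simulated in a few lines. In this hypothetical sketch, a dict stands in for a media file, and a "re-upload" step mimics a platform pipeline that transcodes the payload and keeps only the pixels; the origin string `"gen-model-x"` and all function names are invented for illustration.

```python
# Illustrative only: shows how re-encoding can sever a provenance chain.
import hashlib

def make_asset(pixels: bytes, origin: str) -> dict:
    """Package payload bytes with container-level provenance metadata."""
    return {
        "pixels": pixels,
        "metadata": {
            "synthetic": True,                              # disclosure label
            "origin": origin,                               # generating model
            "sha256": hashlib.sha256(pixels).hexdigest(),   # integrity hash
        },
    }

def re_upload(asset: dict) -> dict:
    """Mimic a platform pipeline: lossy transcode, metadata not copied over."""
    recompressed = bytes(b & 0xF0 for b in asset["pixels"])  # crude lossy step
    return {"pixels": recompressed, "metadata": {}}

original = make_asset(b"\x12\x34\x56\x78" * 8, "gen-model-x")
shared = re_upload(original)

assert original["metadata"]["synthetic"] is True
assert "synthetic" not in shared["metadata"]   # disclosure label is gone
```

Because the lossy step also alters the pixel bytes, even re-deriving the hash downstream would no longer match the original, so both the label and the integrity link are lost in one hop.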

Bad actors further complicate the issue. The regulations assume voluntary compliance by users in disclosing synthetic content, but those seeking to distribute malicious or misleading deepfakes are unlikely to self-label such material. Additionally, cross-border content flow introduces jurisdictional challenges. Deepfakes created outside India but consumed domestically create enforcement complexities that are difficult to address without international cooperation.

Operational Pressures on Intermediaries

The new rules also demand faster content takedown timelines, adding significant operational pressure on intermediaries. This requirement necessitates the deployment of advanced AI moderation tools and the creation of skilled review teams capable of identifying and removing deepfake content promptly. Experts caution that these operational demands may strain resources, particularly for smaller organizations.

A Long Road Ahead

While the new deepfake disclosure rules represent a decisive regulatory action by MeitY, experts agree that long-term success will hinge on overcoming several hurdles. Among the key factors determining the effectiveness of these measures will be technological advancements, economic feasibility, and broader digital literacy among users. Implementation, as experts have noted, will not be an overnight achievement but a "marathon – not a sprint."

India’s move to address the challenges posed by deepfake content underscores the importance of proactive governance, yet it also highlights the complexities of regulating a rapidly evolving technological landscape. Whether the new rules can meet their intended goals remains to be seen, but they mark a critical step in grappling with the realities of AI-driven media.
