India’s multi-layered defence against deepfakes

Digvijaysinh Chudasama, Partner, Deloitte India, talks about how deepfakes remain a major global threat as AI-generated content grows ever more sophisticated.

The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, are set for major reform to address the growing risks of the evolving digital world. The amendments proposed by the Ministry of Electronics and Information Technology (MeitY) will make India safer and more resilient, helping the country counter deepfake threats and keep its digital ecosystem secure.

Deepfakes and AI-generated media are powerful tools for committing fraud, manipulating elections and causing major financial losses. The proposed changes will strengthen accountability and transparency. They are part of a larger effort to secure technology, uphold human oversight and raise public awareness of digital challenges.

The changes to the Information Technology Rules, 2021, aim to prevent the misuse of synthetic media. The main provisions include a new clause defining artificially created, modified or altered content that appears authentic; the updated rules classify such content as unlawful. Section 79(2) of the IT Act will continue to protect platforms that make reasonable efforts, or act on user complaints, to remove or block fake content.

Intermediaries must label synthetic content with permanent metadata and visible identifiers, which should cover at least 10 percent of the content so that users can recognise it immediately. Significant Social Media Intermediaries (SSMIs) are required to collect user disclosures regarding synthetic content, verify these declarations using appropriate technical tools and clearly display labels or notices that identify such content.
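
The rules do not prescribe a file format or embedding technique. As a rough illustration only, the sketch below uses Python with the Pillow imaging library (both assumptions, not anything named in the draft) to stamp a visible banner over roughly 10 percent of an image and write a machine-readable flag into the file's own metadata.

```python
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

def label_synthetic_image(src_path: str, dst_path: str) -> None:
    """Stamp a visible 'AI-generated' banner over the bottom tenth of an
    image and embed a machine-readable flag in the saved PNG."""
    img = Image.open(src_path).convert("RGB")
    w, h = img.size

    # Visible identifier: a banner covering at least 10 percent of the image.
    banner_h = max(1, h // 10)
    draw = ImageDraw.Draw(img)
    draw.rectangle([0, h - banner_h, w, h], fill=(0, 0, 0))
    draw.text((10, h - banner_h + banner_h // 4),
              "AI-GENERATED CONTENT", fill=(255, 255, 255))

    # "Permanent" metadata: a flag written into the file itself.
    meta = PngInfo()
    meta.add_text("synthetic-media", "true")
    img.save(dst_path, "PNG", pnginfo=meta)

label_synthetic_image("generated.png", "generated_labelled.png")
```

Plain metadata fields like the one above can be stripped when a file is re-encoded, which is why provenance standards such as C2PA bind such declarations to the content cryptographically rather than as loose tags.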

These amendments introduce clear definitions, visible labels and metadata embedding, helping users identify deepfakes while preserving safe-harbour protection for compliant platforms.

Tech-driven safeguards

Implementing these changes requires a strong technological system in which platforms can detect, verify and trace synthetic content. Machine learning algorithms can pick up telltale signs of synthetic media, such as unnatural facial expressions or mismatched audio. Content signing and blockchain-based tracking are two technologies that make it easier and faster to establish where content originated and whether it is genuine.
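
Content signing can be illustrated in a few lines of public-key cryptography. The sketch below, in Python with the `cryptography` package (an assumption; the key handling and workflow are deliberately simplified and do not describe any specific standard), signs a hash of the media at publication time so that any later alteration fails verification.

```python
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The publisher holds the private key; verifiers hold the public key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

def sign_content(media: bytes) -> bytes:
    # Sign a fixed-size digest of the media rather than the raw bytes.
    return private_key.sign(hashlib.sha256(media).digest())

def verify_content(media: bytes, signature: bytes) -> bool:
    try:
        public_key.verify(signature, hashlib.sha256(media).digest())
        return True    # bytes are exactly what the publisher signed
    except InvalidSignature:
        return False   # content was altered after signing

original = b"...raw video bytes..."
sig = sign_content(original)
print(verify_content(original, sig))           # True
print(verify_content(original + b"x", sig))    # False: tampering detected
```

Production systems additionally anchor the public key in a certificate or a tamper-evident ledger, which is where the blockchain-based tracking mentioned above comes in.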

Platforms can use forensic tools to assess high-risk content such as political messaging or viral misinformation. This involves examining its source, methods of alteration and patterns of circulation. Detection systems should be treated as a critical part of the digital infrastructure. They must have features such as version control, auditing and quick patching to prevent misuse.
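
To make the auditing requirement concrete, here is a minimal, hypothetical Python model of an append-only detection log in which each entry chains the hash of the previous one, so that any retroactive tampering with moderation history shows up on review. All field names and values are illustrative assumptions, not anything specified by the rules.

```python
import hashlib
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

@dataclass
class DetectionRecord:
    content_id: str
    model_version: str   # which detector version produced the verdict
    verdict: str         # e.g. "synthetic", "authentic", "uncertain"
    confidence: float
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

class AuditLog:
    """Append-only log: each entry stores the hash of the previous entry,
    so any retroactive edit breaks the chain and is detectable."""

    def __init__(self) -> None:
        self.entries: list[dict] = []
        self._last_hash = "genesis"

    def append(self, record: DetectionRecord) -> None:
        entry = asdict(record)
        entry["prev_hash"] = self._last_hash
        self._last_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = self._last_hash
        self.entries.append(entry)

log = AuditLog()
log.append(DetectionRecord("vid-001", "detector-v2.3", "synthetic", 0.94))
log.append(DetectionRecord("img-042", "detector-v2.3", "uncertain", 0.51))
```

Recording the model version alongside each verdict is what lets a platform tie a disputed decision back to the exact detector that made it, supporting both version control and quick patching.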

Human oversight and governance

Under these amendments, users must declare whether their content is synthetically created, and these declarations will be verified using technical checks. Advanced technology can identify synthetic content, but human oversight remains necessary in high-impact situations. Platforms should assemble teams of legal experts, technologists and content moderators to review flagged content and make informed decisions. Keeping records of moderation decisions, and making them available for review, builds public confidence and holds platforms accountable.
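
One way to combine automated detection with this kind of human oversight is a simple triage rule: clear-cut, low-stakes cases are labelled automatically, while high-impact categories or low-confidence verdicts are escalated to the review team. The categories and threshold in the Python sketch below are illustrative assumptions, not values from the rules.

```python
# Assumed taxonomy and threshold -- the rules do not prescribe these values.
HIGH_IMPACT_CATEGORIES = {"political", "electoral", "financial"}

def route_flagged_content(category: str, detector_confidence: float,
                          auto_threshold: float = 0.98) -> str:
    """Decide whether an automated verdict can stand on its own or must go
    to a human review team of legal, technical and moderation experts."""
    if category in HIGH_IMPACT_CATEGORIES:
        return "human_review"   # high-impact content is always reviewed
    if detector_confidence < auto_threshold:
        return "human_review"   # low-confidence verdicts are escalated
    return "auto_label"         # clear-cut cases are labelled automatically

print(route_flagged_content("political", 0.99))  # human_review
print(route_flagged_content("parody", 0.99))     # auto_label
```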


Pilot programmes in high-signal environments enable improvement of detection models and moderation workflows before full-scale deployment. ~ Digvijaysinh Chudasama


Telling “reel” from “real”

Awareness is key to helping citizens recognise synthetic media. Entertainment media and platforms can play an important role in educating users on how to detect deepfakes, interpret labels and verify sources. Platforms will need to embed educational cues, such as pop-ups and verification links, into user experiences. Digital literacy programmes will help people spot fake content and reduce misinformation online.

India’s path forward

The reforms to the IT Rules, 2021, will act as a digital shield against deepfakes; however, protecting citizens and maintaining trust in the digital world calls for a layered approach. Clear legal standards, AI detection capabilities, human oversight and public awareness will help users tell the difference between real and fake, making internet use safer and more accountable. 
