Deepfake Regulation in India: Legal Challenges & Solutions
This article is written by Disha Hirwani, a second-semester LL.B. student at Aishwarya College of Education and Law. She also serves as an author at Lexful Legal.
Introduction
Artificial intelligence has given rise to deepfake technology, which has moved rapidly from being a niche, high-end tool to a powerful weapon for misinformation, harassment, and fraud. This development raises serious concerns about individual rights, public order, and national integrity in India. At present, the Information Technology Act, 2000, the Bharatiya Nyaya Sanhita (BNS), 2023, and the Digital Personal Data Protection (DPDP) Act, 2023 offer only piecemeal legal tools against deepfakes. Without a dedicated framework, however, these laws remain far from effective.
The time has come for India to build an effective, comprehensive, and forward-looking legal framework that specifically addresses deepfakes: one that criminalises their misuse, supports victims, and keeps pace with the technology, while respecting fundamental rights such as privacy and free speech.
The Deepfake Threat in India:
A deepfake is an AI-generated or altered hyper-realistic audio, video, or image created to fool viewers and listeners into believing that a person has said or done something they never did. In India, deepfakes are increasingly being used in the following ways:
Gendered abuse: Fake intimate images, commonly referred to as “deepfake porn”, overwhelmingly target women. Synthetic sexual content created with a woman’s likeness causes psychological trauma, reputational harm, and in some cases extortion.
Political manipulation: Fabricated videos showing politicians making provocative remarks or engaging in corruption can distort public opinion, provoke unrest, and compromise electoral integrity, especially in a democracy as large and diverse as India.
Financial scams and impersonation: Voice deepfakes impersonating relatives, bank officials, or celebrities are used to trick victims into sharing personal data or transferring money.
Defamation and reputational damage: Deepfakes can be used to harm the reputation of an individual, public figure, or institution by placing them in a false light, damaging their social and professional standing.
Recent deepfake videos involving the actor Suniel Shetty and the spiritual leader Sadhguru have shown how rapidly AI-generated content can spread, prompting courts to issue urgent injunctions and takedown orders. Deepfakes are therefore not a problem of the future but an issue that demands an immediate, structured legal response.
Gaps in the Existing Legal Framework
India currently relies on a patchwork of laws to address deepfakes, but none of them deals specifically with synthetic media, leaving several scenarios unregulated:
1. No clear definition of “deepfake”: Neither the Information Technology Act, 2000 nor the Bharatiya Nyaya Sanhita (BNS), 2023 defines “deepfake” or “synthetically generated content”, compelling courts and police to fit such cases under broader offences like impersonation, cheating, or obscenity. As a result, it is unclear exactly what is prohibited, how it must be proved, and what protection a victim can claim.
2. Heavy reliance on general offences: Deepfakes are currently prosecuted under the following provisions:
- Sections 66C (identity theft) and 66D (cheating by personation using a computer resource) of the Information Technology Act, punishable with imprisonment of up to three years and a fine.
- Section 66E of the Information Technology Act (violation of privacy), covering the capture or distribution of private images without consent, punishable with imprisonment of up to three years or a fine of up to ₹2 lakh, or both.
- Sections 67 and 67A of the Information Technology Act, which penalise the publication or transmission of obscene and sexually explicit material in electronic form.
- Section 356 of the BNS (defamation), for defamatory or pornographic deepfakes.
- Sections 353 (statements conducing to public mischief) and 111 (organised crime, which extends to cyber-crimes) of the BNS, for deepfakes that endanger public order or national security.
These provisions can be invoked, but they were not designed for AI-generated content and do not address deepfake-specific issues such as proving who created the content, with what intent, and that the media is in fact synthetic.
3. Weak protection for victims: Victims of deepfake abuse typically face long delays in takedowns, slow proceedings, difficulty in identifying perpetrators, and very limited remedies. The absence of a specific statutory right to one’s image and voice in Indian law makes it even harder to obtain damages for violations of privacy.
4. Unclear intermediary liability rules: Under Section 79 of the IT Act and the Intermediary Guidelines, platforms enjoy “safe harbour” protection if they exercise due diligence, but the rules have not expressly required platforms to detect deepfakes or clearly label AI-generated content, which leads to inconsistent enforcement.
Recent Regulatory Steps and Their Limits
The government has acknowledged these deficiencies and taken some significant measures accordingly:
MeitY’s IT Rules amendment of October 2025: The Ministry of Electronics and Information Technology has amended the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 to expressly cover “synthetically generated information”, defined as content that is “artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that appears to be reasonably authentic or true.”
Labelling requirement: Significant Social Media Intermediaries (SSMIs) are required to label AI-generated content clearly and permanently (e.g., “AI-Generated”) and to attach unique metadata, so that users can easily distinguish synthetic content from real content (an illustrative sketch follows below).
Speedier removal: Platforms and intermediaries are expected to act quickly on deepfake complaints, particularly those concerning women, elections, or public order, in order to retain safe harbour protection and safeguard users’ privacy.
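To make the labelling requirement concrete, the following is a minimal Python sketch (using the Pillow imaging library) of how a platform might stamp a visible “AI-Generated” notice on an image and embed a machine-readable metadata tag. The metadata key names and the generator identifier are hypothetical illustrations; the amended Rules do not prescribe this particular technical format.

```python
# Illustrative sketch only: a visible "AI-Generated" label plus embedded metadata.
# The metadata key names and generator identifier below are hypothetical.
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

def label_ai_image(src_path: str, dst_path: str) -> None:
    """Stamp a visible 'AI-Generated' notice and attach provenance metadata."""
    img = Image.open(src_path).convert("RGB")
    draw = ImageDraw.Draw(img)

    # Visible, prominent label in the top-left corner of the image.
    draw.rectangle([(10, 10), (170, 40)], fill="black")
    draw.text((18, 18), "AI-Generated", fill="white")

    # Machine-readable metadata stored as PNG text chunks.
    meta = PngInfo()
    meta.add_text("synthetic-content", "true")
    meta.add_text("generator", "example-model-v1")  # hypothetical identifier

    img.save(dst_path, format="PNG", pnginfo=meta)

if __name__ == "__main__":
    label_ai_image("generated.jpg", "generated_labelled.png")
```

In practice, platforms would likely rely on standardised, cryptographically signed provenance credentials rather than plain text chunks, since the latter can be stripped trivially; the sketch only illustrates the idea of pairing a visible label with machine-readable metadata.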
While these steps are significant, they remain rule-based and procedural. They neither create a separate criminal offence for making or distributing harmful deepfakes nor provide victims with a clear civil cause of action.
Need for New Laws on Deepfakes
India requires dedicated legislation, such as a Deepfake Prevention and Criminalisation Act, which should:
1. Provide a clear and unambiguous definition of deepfakes and synthetic media: The law should define “deepfake” and “synthetically generated content” in a technology-neutral manner, covering audio, video, and images that are AI-created or altered to misrepresent a person’s identity, speech, or actions.
2. Establish specific criminal offences:
- Criminalise the creation, distribution, or use of deepfakes without the consent of the person depicted (the “data principal” under the DPDP Act), especially where such acts are intended to harass, defame, humiliate, or cause financial loss to the target.
- Prescribe more severe punishment for deepfakes involving intimate content, political manipulation, or threats to national security.
- Make it an offence to alter or remove mandatory AI labels or watermarks on synthetic content.
3. Uphold civil and personality rights:
- Statutorily recognise a right to one’s own image, voice, and likeness, enabling victims to seek damages, injunctions, and de-indexing of deepfake content.
- Provide swift civil remedies (interim injunctions and mandatory takedowns), similar to the “dynamic” injunction granted in the Sadhguru case.
4. Settle platform and intermediary obligations:
- Require platforms and AI tools to maintain detection mechanisms, watermarking, and provenance-tracking systems that make it easier to trace offenders (see the sketch after this list).
- Mandate clear and permanent labelling of AI-generated content and set strict timelines for the removal of harmful deepfakes.
5. Establish institutional mechanisms:
- Form a National Deepfake and Digital Authenticity Task Force to guide policy, advances in technical detection, and standard-setting consultations.
- Create a dedicated fund to support research and development of deepfake detection tools by academia and the private sector.
6. Balance rights and innovation:
- Specify clear exceptions for parody, satire, art, education, and other legitimate uses of deepfake technology, so as not to chill free speech.
- Guarantee expeditious court proceedings.
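As a rough illustration of the platform-side obligations above (checking for synthetic-content metadata and enforcing removal timelines), the snippet below reuses the hypothetical metadata tag from the earlier labelling sketch and assumes an illustrative 36-hour takedown window; neither detail reflects the text of any enacted rule.

```python
# Illustrative compliance check: reads the hypothetical metadata tag from the
# earlier labelling sketch and computes an assumed takedown deadline.
from datetime import datetime, timedelta
from PIL import Image

TAKEDOWN_WINDOW = timedelta(hours=36)  # assumed deadline, not a statutory figure

def is_declared_synthetic(path: str) -> bool:
    """True if the uploaded image carries the machine-readable AI tag."""
    with Image.open(path) as img:
        return img.info.get("synthetic-content") == "true"

def takedown_deadline(complaint_received: datetime) -> datetime:
    """Latest time by which a reported deepfake should be removed."""
    return complaint_received + TAKEDOWN_WINDOW

if __name__ == "__main__":
    print(is_declared_synthetic("generated_labelled.png"))
    print(takedown_deadline(datetime.now()).isoformat())
```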
A dedicated deepfake law should be accompanied by:
- Public awareness campaigns teaching people how to identify and report deepfakes.
- Training for law enforcement, prosecutors, and judges on digital evidence and AI.
- Partnerships between technology companies and the academic community to develop detection tools and standards tailored to the Indian context.
Conclusion:
Deepfake technology has become a serious and evolving threat to privacy, dignity, and public trust in India. Existing laws such as the Information Technology Act, the BNS, and the DPDP Act offer some remedies, but they are not enough to address the harms caused by AI-generated synthetic media. A dedicated deepfake law is now not an option but a necessity: one that defines deepfakes clearly, provides victim-centric remedies, and lays down obligations for platforms and intermediaries, all while protecting free speech and innovation. Only a comprehensive, forward-looking legal framework can ensure accountability, protect fundamental rights, and preserve digital integrity in India’s rapidly transforming technological landscape.