REVENGE PORN AND DEEPFAKE: IS INDIA READY FOR AI-BORNE SEXUAL EXPLOITATION?
"In the digital age, a woman's face can be placed on someone else's body - and the law cannot care." The rise of AI-driven tools has enabled new forms of sexual exploitation through deepfakes and revenge porn, blurring the line between consent and violation. In India, where digital access is growing rapidly, this emerging crime remains largely unaddressed by existing laws. Deepfakes can be created in minutes, causing lasting emotional, social, and professional harm to victims, especially women. High-profile cases and increasing complaints reveal a disturbing legal vacuum. While countries like the UK, US, and EU have enacted strong protections, India lacks clear legal definitions, takedown mechanisms, and victim support systems. The article urges immediate reforms, including criminalizing AI-generated sexual content, rapid takedown laws, AI platform regulation, and cybercrime sensitization of law enforcement. This issue is not just technological — it is a human rights crisis that demands urgent, victim-centric legal intervention.
Nithya Prakash D, PRESIDENCY UNIVERSITY
INTRODUCTION
The border between reality and fabrication is blurring - and for victims of AI-borne sexual abuse, the consequences are devastating. From non-consensual intimate images to hyper-realistic deepfake pornography, technology is enabling new forms of violence that are not only traumatizing but also legally invisible. India, a country with rapidly growing internet penetration and complex gender norms, now faces a silent epidemic of digital sexual abuse driven by artificial intelligence. Yet as the crisis grows, the legal framework remains largely archaic and ill-equipped to deliver justice.
This article examines the emerging danger of deepfake revenge porn, the legal vacuum in India, and a proposed roadmap for reform, supported by real-life cases and global comparisons.
UNDERSTANDING TECHNOLOGY: DEEPFAKES AND REVENGE PORN
Revenge porn traditionally involved private photos leaked maliciously.1 Today, however, with deepfake technology, perpetrators no longer need real photos. Deepfakes are synthetic media in which AI algorithms superimpose one person's face onto another's body, typically creating hyper-realistic pornographic content that is devastatingly believable. What makes deepfakes terrifying is their accessibility. Free apps and online tools can now produce forged content in minutes, targeting celebrities, students, teachers, activists - anyone. The trauma, social stigma, and career ruin caused by these digital violations are beyond measure.2
PSYCHOLOGICAL AND SOCIAL IMPACT OF DEEPFAKE MISUSE ON VICTIMS
Deepfake-enabled sexual abuse is not just a technical offense; it is a deeply personal violation that leaves emotional scars on its victims. While the legal system is still catching up with this modern threat, the psychological and social consequences for those targeted are already disastrous - and largely ignored.

Victims often suffer severe emotional trauma. Anxiety, depression, and panic attacks are common reactions upon discovering their likeness used in explicit content. Unlike other forms of digital harm, deepfakes are hyper-realistic, often irreversible, and endlessly circulated. The idea that their "body" is online, even if digitally manipulated, creates lasting feelings of violation. Many victims report losing sleep, withdrawing socially, and even contemplating self-harm. As one survivor of deepfake abuse said, "I felt that I was violated a thousand times, and every stranger who saw that video was part of it."

Social stigma adds another layer of trauma, especially in a culturally conservative society like India. Instead of support, victims often face judgment, doubt, or silence. They are blamed for having public profiles or sharing personal photos. In many cases, family or colleagues believe the content is real, leading to character assassination and social exclusion. The psychological toll deepens when victims are forced to leave schools, switch jobs, or relocate to escape harassment.

Professional consequences can be just as damaging. Even if the content is proven fake, the stigma remains. For women professionals, students, journalists, or social media influencers, deepfakes can result in reputation loss, missed opportunities, or workplace investigations. In digital spaces where perception is everything, the presence of manipulated content, no matter how false, can permanently harm one's credibility. Many suffer in silence because there is no clear legal definition of the abuse, making justice feel out of reach.

To make matters worse, India lacks specialized support systems for deepfake victims. Most do not report cases for fear of re-traumatization or out of the belief that authorities will not understand the issue. There is no guaranteed psychological support, no fast-track takedown mechanism, and limited digital literacy resources. This isolation breeds helplessness and mistrust in the justice system - a dangerous mix that discourages victims from speaking out.
GROUND REALITIES: VICTIMS TRAPPED IN LEGAL SILENCE
▪ Case 1: Rashmika Mandanna Deepfake (2023)3
A hyper-realistic deepfake video of popular actress Rashmika Mandanna went viral on social media, depicting her in sexually suggestive content. Despite national outrage, no one was arrested, largely because the law does not recognize AI-generated impersonation as a specific offence.
▪ Case 2: Delhi College Student Case4
In 2023, a college student in Delhi discovered a pornographic video circulating in WhatsApp groups of her university, created from her Instagram photos using a deepfake app. The police allegedly refused to register an FIR, citing the lack of "real" nudity.
▪ Case 3: Rising Complaints from North-East India5
The Mizoram Police Cyber Cell received more than 50 complaints within three months about a deepfake app generating nude images of women from their profile pictures; most cases ended only with warnings.
These are not just legal failures. They are moral betrayals, where survivors are left alone to face humiliation while perpetrators roam free.
HOW THE WORLD IS REACTING: A COMPARATIVE VIEW
Other jurisdictions have begun to wake up to the dangers of deepfake misuse. India can learn from the following models:
❖ United Kingdom6
• Platforms are required to rapidly detect and remove such material.
• Even the intent to cause distress, not only the act of sharing, is punishable.
❖ United States
• States such as California, Virginia, and Texas have passed laws criminalizing the creation, sharing, or possession of non-consensual deepfake sexual content.
• Federal bills such as the DEEPFAKES Accountability Act are in progress.
❖ European Union
• The Digital Services Act (DSA) mandates rapid takedown of harmful material by technology platforms.
• The AI Act treats the misuse of generative AI tools as a serious offence, especially in contexts of sexual abuse.
These progressive steps show how legal innovation can keep pace with technological development - something India should adopt immediately.7
6 Online Safety Act 2023 (UK), c. 41, Part 10 – Offences relating to intimate images.
7 European Commission, Proposal for a Regulation laying down harmonized rules on artificial intelligence (AI Act), COM/2021/206 final, Annex III – High-risk AI systems.

THE ROAD AHEAD: WHAT SHOULD INDIA DO?
To protect its citizens, especially women, from digital sexual violence, India needs immediate and bold reforms:
➢ 1. Define deepfakes and AI misuse in law
A new provision under the IT Act or IPC should criminalize the creation, distribution, and possession of deepfake sexual material, with enhanced punishment where the intent is harassment.8
➢ 2. Fast-track takedown mechanism
Like the European Union's DSA, India must create a 24-hour emergency takedown system through intermediaries such as Meta, X, and Telegram, with heavy fines for non-compliance.
➢ 3. Strengthen victim support
Provide free legal aid, mental health assistance and digital evidence assistance through a National Cyber Cell Task Force.
➢ 4. Regulate AI tools and platforms
Mandate watermarking or AI-disclosure labels for content generated using synthetic tools. Restrict unregulated apps that promote sexually explicit deepfake generation.9
➢ 5. Police and judicial training
Start a cybercrime sensitisation program for police and judicial officers on handling AI-related sexual abuse cases.10
CONCLUSION
AI-generated sexual abuse, especially through deepfakes, is redefining the threat landscape for women in India. When a fake video can destroy someone’s dignity, career, and mental health, the absence of specific legal protection becomes a grave injustice. The current legal system,
rooted in pre-digital thinking, fails to capture the severity of this emerging crime. India must urgently adopt tech-forward, victim-centric laws, establish faster redressal mechanisms, and recognize deepfakes as a serious violation of consent and privacy. The longer we ignore this
threat, the more we normalize digital violence. 11
“Justice in the age of AI must be fast, smart, and more humane; otherwise, injustice will only run deeper.”
11 Inspired by the growing discourse on AI and justice delivery. See Suresh, Anushka. "Algorithmic Bias, AI Ethics, and the Future of Justice." Economic and Political Weekly, vol. 58, no. 10, 2023. Also see UN Special Rapporteur Report on Technology and Violence Against Women, A/HRC/47/25, 2021.
References
1 Salter, M. (2019). "Revenge Pornography and Victim Blaming in the Digital Age." Criminology & Criminal Justice, 19(1), 58–75. Unfortunately, technology has evolved faster than legal protections, leaving a chilling gap between harassment and justice.
2 Ministry of Electronics and Information Technology (MeitY), Government of India. (2023). Report of the Committee on Deepfake Technology and Misinformation. Also see: Kaur, H. (2021). "Legal Challenges of Deepfake Pornography in India." Journal of Cyber Law Studies, Vol. 3, Issue 2.
3 "Rashmika Mandana Deepfake Sparks Outrage, Raises Legal Questions on AI Abuse", The Indian Express, November 7, 2023. Available at: https://indianexpress.com/article/entertainment/bollywood/rashmika- mandanna-deepfake-video-legal-vacuum-9019224/
4 "Deepfake Video of Delhi College Girl Goes Viral; Police Reluctant to File FIR", The Quint, December 2023. Available at: https://www.thequint.com/news/india/delhi-college-girl-deepfake-police-fir
5 "North East Women Targeted with Deepfake Apps, Mizoram Police Receive 50+ Complaints", East Mojo, October 2023. Available at: https://www.eastmojo.com/mizoram/2023/10/11/deepfake-nightmare-northeast-
8 Ministry of Electronics and Information Technology (MeitY), Advisory on Deepfakes and Harmful Content, Government of India, 2023
9 NITI Aayog, National Strategy for Artificial Intelligence – #AIForAll, 2018; See also OECD Principles on AI, 2019.
10 Bureau of Police Research and Development (BPR&D), Training Module on Cyber Crimes Against Women and Children, 2021.