Sexual deepfakes, images or videos generated or manipulated by artificial intelligence, are no longer a merely theoretical danger. They have become a very real form of cyberviolence, particularly against minors. Recently, a case involving a dozen middle-school girls highlighted the scale of the phenomenon: their faces were used to create realistic fake intimate images, which were then shared on social media by other adolescents.
This particularly serious episode shows how the accessibility of AI tools is transforming the landscape of school bullying and cyberharassment, and underscores the urgent need for a collective response commensurate with the stakes.
When Artificial Intelligence Becomes a Weapon Among Teenagers
AI-based image generators are now accessible with just a few clicks. Simple applications allow a face to be placed onto a body in realistic scenes, sometimes sexual in nature. While these technologies can have legitimate uses, they become extremely dangerous when misused.
This is exactly what happened in a case where middle-school girls were featured, without their knowledge, in sexualized fabrications created by their peers. Although the images were fictional, they mimicked reality closely enough to deceive students and seriously harm the victims’ reputations and psychological safety.
Behind the supposedly ‘joking’ intent, the consequences are severe: anxiety, shame, isolation, fear of returning to school… For these adolescents, the impact is immense.
A Crucial Report: The Importance of Anti-Bullying Measures
What allowed the case to be revealed quickly was largely the courage of the victims and the existence of an internal system designed to prevent bullying. Thanks to this reporting mechanism, students were able to alert the authorities promptly, before the situation escalated further.
Such a tool proves essential in a context where images circulate at lightning speed on social media. Without a clear and easily accessible procedure for reporting abuse, many victims do not dare to speak out or simply do not know whom to turn to.
Schools are therefore increasingly encouraged to implement mechanisms that allow students to report any humiliating, violent, or non-consensual content.
Twelve Victims and an Open Investigation: An Institutional Awakening
In this case, the authorities confirmed that an investigation had been opened. Legal orders were issued to identify the creator(s) of the edits, as well as those who participated in their dissemination.
A striking element is that some of the victims were not even enrolled in the same school as the perpetrators. This highlights a worrying characteristic of sexual deepfakes: perpetrators can target anyone, without necessarily having any direct connection to the person.
Prosecutors and law enforcement are now facing a new type of cybercrime, still relatively recent but rapidly growing. Investigations involving sexual deepfakes require specific technical expertise, and the legal framework is still adapting to these new forms of digital violence.
Rising Sexual Deepfakes: An Underestimated Phenomenon
Human rights organizations and associations have been warning for several years about the rise of sexual deepfakes. The victims are primarily women, increasingly often minors.
The French National Consultative Commission on Human Rights (CNCDH) recently emphasized that the response from digital platforms and public authorities remains insufficient. Social networks do not yet have fully effective systems to detect and remove this type of content, and takedown procedures are often too slow.
This gap between the speed of dissemination and the speed of response constitutes one of the main challenges.
Educate, Prevent, Hold Accountable: An Absolute Necessity
Faced with these dangers, a major question arises: how can adolescents be protected in a world where AI can create dangerous synthetic content in seconds?
Several approaches are essential:
Digital education
Adolescents need to understand:
- what AI is and how it can be misused,
- the legal risks involved (creation, dissemination, sharing),
- what digital consent means,
- the real impact an image, even a fake one, can have on a person.
The central role of families
Parents need support to better communicate with their children about online practices, the apps they use, and risky behaviors.
Strengthening school mechanisms
Tools like internal reporting platforms play a key role. Their implementation should be widespread and reinforced.
Adapting the legal framework
The law must continue to evolve to firmly punish non-consensual deepfakes, especially when minors are involved.
Conclusion
The recent case involving a dozen middle-school girls shows that deepfakes are no longer just a technological issue: they have become a real societal concern.
The ease with which students were able to misuse AI to harass and humiliate their peers should alert parents, teachers, public decision-makers, and digital platforms.
Protecting young people from these new forms of violence requires coordinated action: education, prevention, vigilance, and a clear legal framework.
AI must not become a weapon in the hands of adolescents — and this responsibility falls on all of us.
External source: CNEWS