European Union lawmakers have backed a ban on artificial intelligence tools that create sexually explicit deepfakes without consent, marking a significant step toward tightening safeguards within the bloc’s digital regulatory framework.
The proposed restriction targets applications capable of generating explicit images of individuals without their approval, reflecting growing concern over the misuse of rapidly advancing AI technologies.
The development follows a similar vote by EU member states, which backed the proposal ahead of upcoming negotiations between lawmakers and national governments. These discussions are set to address broader revisions to the EU AI Act, as the European Commission considers easing certain provisions to ensure Europe remains competitive in the global technology landscape.
On March 13, ambassadors representing EU countries reached a preliminary agreement supporting a bloc-wide ban on AI systems that generate explicit imagery without the subject’s consent. The agreement, confirmed by a spokesperson for the Cypriot Presidency of the Council of the EU, indicates a growing consensus among member states on the need for stronger protections against harmful applications of AI.
Momentum for the proposal has been fueled by controversy surrounding Grok, an AI assistant linked to Elon Musk’s platform X. Earlier this year, the chatbot faced backlash after users generated large volumes of explicit images depicting real individuals without their consent, including minors. The incident intensified scrutiny of generative AI systems and exposed gaps in existing safeguards.
In response to the backlash, X announced that it had adopted a “zero-tolerance” policy toward sexually explicit deepfakes involving women and children. The company also introduced measures to curb the creation of such content. However, reports indicated that certain controversial features remained accessible on Grok’s platform, raising further concerns among regulators.
The European Commission subsequently opened an investigation into the AI assistant and later incorporated the proposed ban into its broader effort to refine and simplify AI legislation. The move highlights the EU’s attempt to balance innovation with accountability as it adapts its regulatory framework to evolving technological risks.
Michael McNamara, who is leading negotiations with member states, emphasized the public expectation for decisive action, stating that “a proposal to ban so-called nudification apps is something that our citizens expect of the co-legislators.”
The European Parliament is scheduled to vote on the proposal on March 26, 2026. Following the vote, negotiations between lawmakers and EU governments will determine the final shape of the legislation. Any agreed changes will then be implemented as part of the evolving AI regulatory framework, potentially setting a precedent for how governments worldwide address the growing challenge of AI-generated deepfakes.
Tackling AI Deepfakes and Sexual Exploitation on Social Media

The rise of artificial intelligence has brought tremendous innovation, but it has also created new avenues for abuse.
Across Europe and beyond, AI-generated sexual deepfakes, images and videos that manipulate real people into explicit content without consent, pose serious ethical, legal, and social challenges. Recognizing the growing threat, policymakers and organizations are taking decisive action to curb the misuse of these technologies.
Support for the ban has also been voiced at the national level. Belgian Digitization Minister Vanessa Matz announced her backing for the proposal, noting a shift in Belgium’s usual stance.
“Belgium usually abstains due to a lack of internal consensus, but this time we are joining the countries that are most ambitious in terms of protecting citizens, particularly girls and women, who are more often the victims of this kind of abuse,” Matz said.
Similarly, Sergey Lagodinsky emphasized the broader implications of the issue, arguing that “it is not just about individual scandals such as Grok. It is about how much power we are prepared to give AI to humiliate people.”
While the European Union focuses on legal enforcement, the United Nations International Children’s Fund (UNICEF) is tackling the issue from a global child protection perspective. In collaboration with End Child Prostitution, Child Pornography and Trafficking of Children for Sexual Purposes (ECPAT) and the International Criminal Police Organization (INTERPOL), UNICEF has documented a dramatic rise in AI-generated sexualized content involving children.
According to a press statement, a UNICEF, ECPAT, and INTERPOL study across 11 countries found that at least 1.2 million children reported having had their images manipulated into sexually explicit deepfakes in the past year.
UNICEF has classified such AI-generated content as child sexual abuse material, stating that “deepfake abuse is abuse, and there is nothing fake about the harm it causes.” It added, “The harm from deepfake abuse is real and urgent. Children cannot wait for the law to catch up.”