By Andrew Romanchik, Co-Editor-in-Chief

Europe’s digital regulatory framework encodes an ambitious normative project — but one that systematically underprotects women. The EU’s Digital Services Act (DSA), General Data Protection Regulation (GDPR), and AI Act, alongside the Council of Europe’s Istanbul Convention and Reykjavík Declaration, collectively purport to discipline the digital sphere in the name of rights, safety, and equality. The DSA in particular imposes on Very Large Online Platforms (VLOPs) affirmative obligations under Article 34 to assess and mitigate “systemic risks,” expressly including gender-based violence — yet its complaint mechanism routes users through national Digital Services Coordinators whose capacity and political will vary dramatically across Member States. The aggregate effect of these instruments reveals a structural gap: gendered harms are acknowledged in legal text but routinely left unredressed in practice, producing a regulatory order that formalizes equality without enforcing it.
The Istanbul Convention’s foundational commitment to eliminating gender-based violence stands in stark contrast to the drafting priorities of the 2023 Reykjavík Declaration, exposing the hierarchy of concern embedded in Council of Europe instruments. The Istanbul Convention obliges state parties to criminalize psychological violence, stalking, and sexual harassment under Articles 33–40, and to ensure that victim protection is not contingent on the perpetrator’s relationship to the victim — a provision directly relevant to anonymous online abuse. Yet a textual analysis of the Reykjavík Declaration is instructive: the document references Russia forty times and the word “values” sixteen times, while women appear only five times and girls twice. This drafting disparity reflects a legal architecture in which gendered harm is subordinated to geopolitical and institutional concerns. Turkey’s 2021 withdrawal from the Istanbul Convention — executed by presidential decree without parliamentary approval and subsequently upheld by Turkey’s Council of State — further illustrates how treaty commitments on gender-based violence remain uniquely vulnerable to political erosion in ways that other Council of Europe instruments are not.
Domestic enforcement regimes compound this structural weakness by setting evidentiary and mens rea thresholds that effectively immunize a wide range of gendered digital harm from prosecution. In the United Kingdom, the offence of non-consensual intimate image sharing — first criminalized by the Criminal Justice and Courts Act 2015 and only recently restructured by the Online Safety Act 2023 — long required proof that the defendant intended to cause distress, a threshold that excluded a substantial category of image-based abuse where such intent could not be demonstrated, such as sharing driven by entertainment, status, or control. The broader cyberharassment framework under the Protection from Harassment Act 1997, while extended to online conduct, was designed for pre-digital patterns of behavior and struggles to capture the networked, anonymous, and cross-jurisdictional character of contemporary online abuse. As Open University Professor Olga Jurasz has observed, this produces a legally significant category of content that is “harmful but not illegal” — a space in which cyber incitement to hatred operates without meaningful legal consequence. The 2013 campaign to feature women on British banknotes, which generated coordinated violent threats on Twitter directed at the women who led it, exemplifies precisely this enforcement vacuum: the conduct was widely documented, the targets identifiable, and the harm manifest — yet prosecution reached only a handful of the many perpetrators involved.
The EU’s proposed Directive on combating violence against women and girls (2022/0066/COD), stalled in part over disagreement on whether to include a harmonized definition of online violence, illustrates that the legislative impasse is itself a form of policy choice that leaves existing gaps intact. Where enforcement thresholds consistently fall short of the harms experienced by women online, the law does not merely fail to remedy inequality — it ratifies it. As European digital law increasingly functions as a global regulatory template, exported through adequacy decisions, bilateral trade frameworks, and platform governance standards, this normalized gap travels with it. The risk is not that European law overreaches or underreaches in the abstract, but that it exports a legal architecture in which gendered digital harm is formally recognized and substantively tolerated — and that by the time other jurisdictions adopt these frameworks, the structural inadequacies will have been laundered into legitimacy.
This piece is intended for general informational and academic discussion purposes only and does not constitute legal advice. The views expressed are those of the author alone and do not represent the positions of the University of Pittsburgh or any affiliated institution. Legislative references and treaty citations should be independently verified, as legal frameworks discussed may have been amended or updated since the time of writing.
For any comments, email ulawreviewpitt@gmail.com
