Generative AI and deepfakes: a human rights approach to tackling harmful content

Research output: Contribution to journal › Article › peer-review



The EU's Artificial Intelligence Act (AIA) introduces necessary deepfake regulations. However, these could infringe on the rights of AI providers and deployers or users, potentially conflicting with privacy and free expression under Articles 8 and 10 of the European Convention on Human Rights, and with the General Data Protection Regulation (EU) 2016/679 (GDPR). This paper critically examines how an unmodified AIA could enable voter manipulation, blackmail, and the generation of sexually abusive content, facilitating misinformation and potentially harming millions, both emotionally and financially. Through analysis of the AIA's provisions, the GDPR, relevant case law, and academic literature, the paper identifies risks for both AI providers and users. While the AIA's yearly review cycle is important, the immediacy of these threats demands swifter action. This paper proposes two key amendments: 1) mandate structured synthetic data for deepfake detection, and 2) classify AI intended for malicious deepfakes as ‘high-risk’. These amendments, alongside clear definitions and robust safeguards, would ensure effective deepfake regulation while protecting fundamental rights. The paper urges policymakers to adopt these amendments during the next review cycle to protect democracy, individual safety, and children. Only then will the AIA fully achieve its aims while safeguarding the freedoms it seeks to uphold.

Original language: English
Number of pages: 31
Journal: International Review of Law, Computers & Technology
Publication status: Published - 29 Mar 2024


  • Deepfake regulation
  • Human rights
  • Generative AI
  • Political disinformation
  • GDPR

