Generative AI and deepfakes: adopting a human rights-based approach to tackling harmful content

Research output: Contribution to specialist publication › Article

Abstract

This paper critically assesses to what extent, under the EU AI Act (the Act), the provisions governing the use of deepfakes could be implemented in a way which is compatible with the right of AI providers and users to privacy and freedom of expression under Articles 8 and 10 of the European Convention on Human Rights (ECHR), and with the General Data Protection Regulation (EU) 2016/679 (GDPR). The analysis draws on the Act’s deepfake provisions, the case-law of the Strasbourg and Luxembourg courts, and the academic literature. It critically examines the compatibility of the deepfake provisions with the European Court of Human Rights’ (ECtHR) three-part, non-cumulative test for determining whether the obligations the Act imposes on AI providers and users can be adopted: firstly, that the interference is ‘in accordance with the law’; secondly, that it pursues one or more of the legitimate aims listed in Articles 8(2) and 10(2) of the Convention; and thirdly, that it is ‘necessary’ and ‘proportionate’. The paper addresses a significant gap in the literature. It proposes that the Act be amended to introduce new obligations for AI providers, obliging them to deploy structured synthetic data to detect deepfakes, and, in addition to electoral disinformation, to explicitly classify AI systems intended to be used for sextortion and AI-generated child pornography as high-risk AI. It concludes that unless the proposals in the paper for procedural safeguards are implemented, following Article 7(1), which empowers the Commission to amend the Act, its deepfake provisions will violate Articles 8 and 10 ECHR and the GDPR.

Keywords

  • Generative AI
  • Deepfakes
  • Human Rights
  • Harmful content
