TY - JOUR
T1 - Generative AI and deepfakes: a human rights approach to tackling harmful content
AU - Romero-Moreno, Felipe
N1 - © 2024 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/).
PY - 2024/3/29
Y1 - 2024/3/29
N2 - The EU's Artificial Intelligence Act (AIA) introduces necessary deepfake regulations. However, these could infringe on the rights of AI providers and deployers or users, potentially conflicting with privacy and free expression under Articles 8 and 10 of the European Convention on Human Rights, and the General Data Protection Regulation (EU) 2016/679 (GDPR). This paper critically examines how an unmodified AIA could enable voter manipulation, blackmail, and the generation of sexually abusive content, facilitating misinformation and potentially harming millions, both emotionally and financially. Through analysis of the AIA's provisions, the GDPR's regulations, relevant case law, and academic literature, the paper identifies risks for both AI providers and users. While the AIA's yearly review cycle is important, the immediacy of these threats demands swifter action. This paper proposes two key amendments: 1) mandate structured synthetic data for deepfake detection, and 2) classify AI intended for malicious deepfakes as ‘high-risk’. These amendments, alongside clear definitions and robust safeguards, would ensure effective deepfake regulation while protecting fundamental rights. The paper urges policymakers to adopt these amendments during the next review cycle to protect democracy, individual safety, and children. Only then will the AIA fully achieve its aims while safeguarding the freedoms it seeks to uphold.
AB - The EU's Artificial Intelligence Act (AIA) introduces necessary deepfake regulations. However, these could infringe on the rights of AI providers and deployers or users, potentially conflicting with privacy and free expression under Articles 8 and 10 of the European Convention on Human Rights, and the General Data Protection Regulation (EU) 2016/679 (GDPR). This paper critically examines how an unmodified AIA could enable voter manipulation, blackmail, and the generation of sexually abusive content, facilitating misinformation and potentially harming millions, both emotionally and financially. Through analysis of the AIA's provisions, the GDPR's regulations, relevant case law, and academic literature, the paper identifies risks for both AI providers and users. While the AIA's yearly review cycle is important, the immediacy of these threats demands swifter action. This paper proposes two key amendments: 1) mandate structured synthetic data for deepfake detection, and 2) classify AI intended for malicious deepfakes as ‘high-risk’. These amendments, alongside clear definitions and robust safeguards, would ensure effective deepfake regulation while protecting fundamental rights. The paper urges policymakers to adopt these amendments during the next review cycle to protect democracy, individual safety, and children. Only then will the AIA fully achieve its aims while safeguarding the freedoms it seeks to uphold.
KW - Deepfake regulation
KW - Human rights
KW - Generative AI
KW - Political disinformation
KW - GDPR
UR - http://www.scopus.com/inward/record.url?scp=85189609444&partnerID=8YFLogxK
U2 - 10.1080/13600869.2024.2324540
DO - 10.1080/13600869.2024.2324540
M3 - Article
SN - 1360-0869
JO - International Review of Law, Computers & Technology
JF - International Review of Law, Computers & Technology
ER -