Abstract
The rapid advancement of generative artificial intelligence has enabled the creation of increasingly convincing deepfakes, posing an unprecedented threat to trust and security in an ever more digital world. This paper examines the intricate challenges of deepfake fraud detection, scrutinising the limitations of existing legal frameworks in addressing this evolving threat. Guided by UN Resolution A/78/L.49 and grounded in the principles of Article 114 of the Treaty on the Functioning of the European Union, it critically analyses the intersection of deepfake fraud detection tools with human rights and evolving laws, including the EU AI Act, the General Data Protection Regulation, and the Digital Services Act. The research explores the efficacy of various detection methods, encompassing artifact-based detection, physiological signal analysis, deep learning, and blockchain-based approaches, while also examining the role of the Coalition for Content Provenance and Authenticity (C2PA) in bolstering media authenticity. Crucially, this paper proposes a comprehensive strategy for combating deepfake fraud, integrating legal, ethical, and technical considerations to develop robust and responsible detection methods with the potential for global application. This multifaceted approach is vital to safeguarding trust in generative AI while upholding fundamental rights and data protection principles in the face of this rapidly evolving technology.
| Original language | English |
| --- | --- |
| Number of pages | 47 |
| Specialist publication | SSRN |
| Publication status | Published - 23 Nov 2024 |
Keywords
- Generative AI
- Deepfake Detection
- Trust & Security