Deepfake detection in generative AI: A legal framework proposal to protect human rights  

Research output: Contribution to journal › Article › peer-review

Abstract

Deepfakes, exploited for financial fraud, political misinformation, non-consensual imagery, and targeted harassment, represent a rapidly evolving threat to global information integrity, demanding immediate and coordinated intervention. This research undertakes technical and comparative legal analyses of deepfake detection methods. It examines key mitigation strategies—including AI-powered detection, provenance tracking, and watermarking—highlighting the pivotal role of the Coalition for Content Provenance and Authenticity (C2PA) in establishing media authentication standards. The study investigates deepfakes’ complex intersections with the admissibility of legal evidence, non-discrimination, data protection, freedom of expression, and copyright, questioning whether existing legal frameworks adequately balance advances in detection technologies with the protection of individual rights. As national strategies become increasingly vital amid geopolitical realities and fragmented global governance, the research advocates for a unified international approach grounded in UN Resolution 78/265 on safe, secure, and trustworthy AI. It calls for a collaborative framework that prioritizes interoperable technical standards and harmonized regulations. The paper critiques legal frameworks in the EU, US, UK, and China—jurisdictions selected for their global digital influence and divergent regulatory philosophies—and recommends developing robust, accessible, adaptable, and internationally interoperable tools to address evidentiary reliability, privacy, freedom of expression, copyright, and algorithmic bias. Specifically, it proposes enhanced technical standards; regulatory frameworks that support the adoption of explainable AI (XAI) and C2PA; and strengthened cross-sector collaboration to foster a trustworthy deepfake ecosystem.
Original language: English
Article number: 106162
Pages (from-to): 1-34
Number of pages: 34
Journal: Computer Law & Security Review
Volume: 58
Publication status: Published - 23 Jun 2025

Keywords

  • Deepfake Detection
  • Generative AI
  • Explainable AI (XAI)
  • Coalition for Content Provenance and Authenticity (C2PA)
  • Human Rights

