TY - JOUR
T1 - From Information Fiduciaries to AI
T2 - Minding the Gap of Trust
AU - Unver, Mehmet Bilal
PY - 2025/12/20
Y1 - 2025/12/20
N2 - This paper revisits fiduciary relationships in the context of digital platforms and artificial intelligence (AI), evaluating whether fiduciary duties retain relevance within an AI-driven socio-technical environment. It outlines the core features of fiduciary law and examines how interpersonal and institutional trust are reshaped by digital platforms, drawing on the EU Digital Services Act (DSA) (2022) and the UK Online Safety Act (OSA) (2023). Building on Balkin’s (2016) theory of information fiduciaries, the paper highlights and analyses a broader transformation of trust across platforms and AI systems. It argues that AI systems may widen the trust gap as users increasingly over-rely on AI and move further away from traditional fiduciary relationships. While the DSA and OSA seek to enhance user trust through strengthened transparency and accountability duties, a distinct regulatory shift emerges in the EU AI Act (2024). By emphasising AI trustworthiness, the Act risks decoupling trust from its moral foundations, potentially fostering misplaced trust (O’Neill 2018), distributed trust (Botsman 2017) or lazy trust (Myskja & Steinsbekk 2020). In conclusion, the paper critiques the EU AI Act for deprioritising human trust and advocates enhanced individual rights and citizen participation in AI governance to mitigate trust gaps and the declining role of fiduciary relationships.
AB - This paper revisits fiduciary relationships in the context of digital platforms and artificial intelligence (AI), evaluating whether fiduciary duties retain relevance within an AI-driven socio-technical environment. It outlines the core features of fiduciary law and examines how interpersonal and institutional trust are reshaped by digital platforms, drawing on the EU Digital Services Act (DSA) (2022) and the UK Online Safety Act (OSA) (2023). Building on Balkin’s (2016) theory of information fiduciaries, the paper highlights and analyses a broader transformation of trust across platforms and AI systems. It argues that AI systems may widen the trust gap as users increasingly over-rely on AI and move further away from traditional fiduciary relationships. While the DSA and OSA seek to enhance user trust through strengthened transparency and accountability duties, a distinct regulatory shift emerges in the EU AI Act (2024). By emphasising AI trustworthiness, the Act risks decoupling trust from its moral foundations, potentially fostering misplaced trust (O’Neill 2018), distributed trust (Botsman 2017) or lazy trust (Myskja & Steinsbekk 2020). In conclusion, the paper critiques the EU AI Act for deprioritising human trust and advocates enhanced individual rights and citizen participation in AI governance to mitigate trust gaps and the declining role of fiduciary relationships.
KW - Artificial intelligence (AI)
KW - Information fiduciaries
KW - Trust
U2 - 10.1080/13600869.2025.2602111
DO - 10.1080/13600869.2025.2602111
M3 - Article
SN - 1360-0869
JO - International Review of Law, Computers & Technology
JF - International Review of Law, Computers & Technology
ER -