University of Hertfordshire

Original language: English
Pages (from-to): 411-429
Journal: Philosophy and Technology
Journal publication date: 2013
Early online date: 23 Aug 2013
Publication status: Published - 2013


Many accounts of online trust are based upon mechanisms for building
reputation. Trust is portrayed as desirable, and handing off trust is easier if trust is modelled to be transitive. But in the analysis of cyber-security protocols, trust is usually used as a substitute for certain knowledge: it follows that if there is no residual risk, then there is no need for trust. On this grimmer understanding, the less that users are required to trust, the better. Involuntary transitivity of trust becomes corrosive, because it prevents participants from having control—or even knowledge—of the risks to which their trust assumptions expose them. In this paper, we take the stance that controlling the transitivity of trust requires us to recognise trust as a non-referentially transparent modality, similar to but significantly weaker than the epistemic modalities, and to accept the
corollary that imaginary (indeed, even impossible) threats can have real consequences that adversely affect online security. An apparently paradoxical outcome is that the desire of principals to keep their trust assumptions private can actually assist the design of systems to satisfy multiple security agendas. However, this approach requires agents to have the capability to predicate accurately about states of affairs that are logically inconsistent with their beliefs, and consequently, designing systems in this way becomes more akin to diplomacy than engineering.
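The corrosive effect of involuntary transitivity described above can be made concrete with a toy computation: if trust edges compose transitively, a principal's real exposure is the reachable set of the trust graph, not just her explicit assumptions. The principals and trust edges below are invented for illustration only, not taken from the paper:

```python
# Hypothetical direct trust relations among principals (invented names).
direct_trust = {
    "Alice": {"Bob"},
    "Bob": {"CertAuthority"},
    "CertAuthority": {"OffshoreMirror"},
    "OffshoreMirror": set(),
}

def transitive_exposure(graph, principal):
    """Return every principal reachable via trust edges: the set whose
    compromise can affect `principal` once trust is treated as transitive."""
    seen, stack = set(), [principal]
    while stack:
        node = stack.pop()
        for trustee in graph.get(node, ()):
            if trustee not in seen:
                seen.add(trustee)
                stack.append(trustee)
    return seen

print(sorted(direct_trust["Alice"]))
# → ['Bob']                  (Alice's explicit trust assumption)
print(sorted(transitive_exposure(direct_trust, "Alice")))
# → ['Bob', 'CertAuthority', 'OffshoreMirror']   (her actual exposure)
```

Alice consented only to the first set; the gap between the two sets is precisely the risk she has neither control over nor, in general, knowledge of.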


© The Author(s) 2013. This article is published with open access at
