Abstract
Many accounts of online trust are based upon mechanisms for building reputation. Trust is portrayed as desirable, and handing off trust is easier if trust is modelled as transitive. But in the analysis of cyber-security protocols, trust is usually used as a substitute for certain knowledge: it follows that if there is no residual risk, then there is no need for trust. On this grimmer understanding, the less that users are required to trust, the better. Involuntary transitivity of trust becomes corrosive, because it prevents participants from having control over, or even knowledge of, the risks to which their trust assumptions expose them. In this paper, we take the stance that controlling the transitivity of trust requires us to recognise trust as a non-referentially transparent modality, similar to but significantly weaker than the epistemic modalities, and to accept the corollary that imaginary (indeed, even impossible) threats can have real consequences that adversely affect online security. An apparently paradoxical outcome is that the desire of principals to keep their trust assumptions private can actually assist the design of systems to satisfy multiple security agendas. However, this approach requires agents to be able to predicate accurately about states of affairs that are logically inconsistent with their beliefs, and consequently designing systems in this way becomes more akin to diplomacy than to engineering.
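To make the intended contrast concrete, the following is a minimal sketch in modal-logic notation; the trust operator $T_a$ and both schemas are our illustration of the two properties the abstract names, not a formalism taken from the paper:

```latex
% T_a \varphi reads "principal a trusts that \varphi holds".
% The operator and both schemas below are illustrative only.

% Referential opacity: provable equivalence of contents does NOT
% license substitution under T_a, i.e. this rule is rejected:
\[
  \frac{\vdash \varphi \leftrightarrow \psi}
       {\vdash T_a\,\varphi \leftrightarrow T_a\,\psi}
  \qquad \text{(not admitted)}
\]

% Non-transitivity: a's trust in b, together with b's trust in c,
% does not automatically commit a to trusting c:
\[
  T_a\,\mathit{trustworthy}(b) \;\wedge\; T_b\,\mathit{trustworthy}(c)
  \;\not\vdash\; T_a\,\mathit{trustworthy}(c)
\]
```

Rejecting the substitution rule is what lets "imaginary threats" matter on this reading: an agent may trust a proposition under one description while withholding trust from a provably equivalent one, and blocking the second schema is what keeps transitivity under the truster's control rather than imposed.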
| Original language | English |
|---|---|
| Pages (from-to) | 411-429 |
| Journal | Philosophy and Technology |
| Volume | 26 |
| Issue number | 4 |
| Early online date | 23 Aug 2013 |
| DOIs | |
| Publication status | Published - 2013 |
Keywords
- Counterfactual reasoning, Doxastic logic, Intensional properties, Privacy, Security, Trust