TY - JOUR
T1 - Managing Obligation Delegation: Incentive Model and Experiment
AU - Chen, Liang
AU - Zeng, Cheng
AU - Vidalis, Stilianos
AU - Jie, Wei
N1 - © 2024 The Author(s). Security and Privacy published by John Wiley & Sons Ltd. This is an open access article distributed under the Creative Commons Attribution License; to view a copy of the license, see: https://creativecommons.org/licenses/by/4.0/
PY - 2025/1/1
Y1 - 2025/1/1
AB - Obligations are an essential part of security policies, specifying what actions a user is required to perform in the future. One notable feature of obligations is that they are unenforceable, that is, the system cannot guarantee that each obligation will be fulfilled. Indeed, obligations go unfulfilled for a variety of reasons; for example, a user may have a family emergency that leaves her with little time to discharge her assigned obligations. We argue that delegation of obligations can be regarded as a means of providing an opportunity for obligations to be discharged. However, this opportunity is wasted if the users who receive the delegated obligations do not ultimately fulfill them. In this paper, we propose a mechanism that incentivizes users to accept and fulfill obligations for others by rewarding them with credits. The amount of credit that can be earned depends on a user's trust rating, which reflects how diligently that user has fulfilled obligations in the past. Users are thus motivated to raise their trust ratings by fulfilling obligations for others in order to earn more credits in the future. To evaluate our approach, we develop a multi-agent system that simulates a number of different agent profiles and run experiments for one-hop delegation and cascaded delegation with those agents. The experiments yield a rich set of results, one of which confirms that delegation with incentives achieves the best outcome in terms of the number of obligations fulfilled. We also implemented the modified ε-greedy algorithm, one of the most closely related existing approaches, in our experimental framework and compared its performance with ours. The results show that our approach offers greater flexibility and efficiency, as well as a higher obligation fulfillment rate.
KW - delegation of obligations
KW - incentive
KW - multi‐agent systems
KW - security policy
KW - trust
U2 - 10.1002/spy2.489
DO - 10.1002/spy2.489
M3 - Article
SN - 2475-6725
VL - 8
SP - 1
EP - 17
JO - Security and Privacy
JF - Security and Privacy
IS - 1
M1 - 489
ER -