In healthcare, enhancing patient safety depends on several interconnected factors (e.g., regulation, technology, care quality, physical environment, human factors). Artificial Intelligence (AI), with its expanding range of uses, functions as a component of the overall healthcare system from a multi-agent systems viewpoint. Far from being a stand-alone agent, AI cannot be held liable for flawed decisions in healthcare. Nor does AI have the capacity to be trusted under the most prevalent definitions of trust, because it neither possesses emotive states nor can be held responsible for its actions. A positive experience of relying on AI is therefore indicative of ‘trustworthiness’ rather than ‘trust’, with further consequences for patient safety. From a multi-agent systems viewpoint, ‘trust’ requires that all environmental, psychological and technical conditions be responsive to patient safety. It is cultivated across the overall system, in which ‘responsibility’, ‘accountability’, ‘privacy’, ‘transparency’ and ‘fairness’ must be secured for all parties involved in AI-driven healthcare, given the ethical and legal concerns and their threat to trust.
Number of pages: 6
Journal: Proceedings of the International Symposium on Human Factors and Ergonomics in Health Care
Publication status: Published - 6 Oct 2022
- Artificial Intelligence
- Healthcare ethics
- Patient safety
- Transparency, trust and accountability