Role of Trust in AI-Driven Healthcare Systems: Discussion from the Perspective of Patient Safety

Research output: Contribution to journal › Conference article › peer-review


Abstract

In the field of healthcare, enhancing patient safety depends on several interconnected factors (e.g., regulation, technology, care quality, physical environment, human factors). Artificial Intelligence (AI), with its expanding range of uses, functions as a component of the overall healthcare system from a multi-agent systems viewpoint. Far from being a stand-alone agent, AI cannot be held liable for flawed decisions in healthcare. Moreover, AI does not have the capacity to be trusted according to the most prevalent definitions of trust, because it neither possesses emotive states nor can be held responsible for its actions. A positive experience of relying on AI is therefore indicative of 'trustworthiness' rather than 'trust', with further consequences for patient safety. From a multi-agent systems viewpoint, 'trust' requires that all environmental, psychological and technical conditions be responsive to patient safety. Trust is fostered across the overall system when 'responsibility', 'accountability', 'privacy', 'transparency' and 'fairness' are secured for all parties involved in AI-driven healthcare, given the ethical and legal concerns and their threat to trust.
Original language: English
Pages (from-to): 129-134
Number of pages: 6
Journal: Proceedings of the International Symposium on Human Factors and Ergonomics in Health Care
Volume: 11
Issue number: 1
DOIs
Publication status: Published - 6 Oct 2022

Keywords

  • artificial intelligence
  • healthcare ethics
  • patient safety
  • transparency, trust and accountability
  • privacy
