Explainable AI in healthcare: Factors influencing medical practitioners’ trust calibration in collaborative tasks
dc.contributor.author | Darvish, Mahdieh
dc.contributor.author | Holst, Jan-Hendrik
dc.contributor.author | Bick, Markus
dc.date.accessioned | 2023-12-26T18:40:10Z
dc.date.available | 2023-12-26T18:40:10Z
dc.date.issued | 2024-01-03
dc.identifier.doi | 10.24251/HICSS.2023.402
dc.identifier.isbn | 978-0-9981331-7-1
dc.identifier.other | 682e9090-1f8f-4af5-baac-3a6ccc62af89
dc.identifier.uri | https://hdl.handle.net/10125/106785
dc.language.iso | eng
dc.relation.ispartof | Proceedings of the 57th Hawaii International Conference on System Sciences
dc.rights | Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri | https://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject | Decision Support for Healthcare Processes and Services
dc.subject | ai in healthcare
dc.subject | clinical decision support
dc.subject | explainable artificial intelligence
dc.subject | human-computer interaction
dc.subject | trust calibration
dc.title | Explainable AI in healthcare: Factors influencing medical practitioners’ trust calibration in collaborative tasks
dc.type | Conference Paper
dc.type.dcmi | Text
dcterms.abstract | Artificial intelligence is transforming clinical decision-making by using patient data to improve diagnosis and treatment. However, the increasingly black-box nature of AI systems makes them difficult for users to comprehend. To ensure the safe and efficient utilisation of these systems, it is essential to establish appropriate levels of trust. Accordingly, this study aims to answer the following research question: What factors influence medical practitioners' trust calibration in their interactions with AI-based clinical decision support systems (CDSSs)? Following an exploratory approach, data were collected through semi-structured interviews with medical and AI experts and examined through qualitative content analysis. The results indicate that the perceived understandability, technical competence, and reliability of the system, along with other user- and context-related factors, impact physicians’ trust calibration in AI-based CDSSs. As there is limited literature on this specific topic, our findings provide a foundation for future studies aiming to delve deeper into this field.
dcterms.extent | 10 pages
prism.startingpage | 3326 |