The Effect of Interpretable Artificial Intelligence on Repeated Managerial Decision-Making under Uncertainty
| dc.contributor.author | Altintas, Onur | |
| dc.contributor.author | Seidmann, Abraham | |
| dc.contributor.author | Gu, Bin | |
| dc.contributor.author | Mažar, Nina | |
| dc.date.accessioned | 2023-12-26T18:49:26Z | |
| dc.date.available | 2023-12-26T18:49:26Z | |
| dc.date.issued | 2024-01-03 | |
| dc.identifier.doi | https://doi.org/10.24251/HICSS.2024.752 | |
| dc.identifier.isbn | 978-0-9981331-7-1 | |
| dc.identifier.other | 1d74e6ad-84b5-419f-9f5a-f559af0de5a5 | |
| dc.identifier.uri | https://hdl.handle.net/10125/107138 | |
| dc.language.iso | eng | |
| dc.relation.ispartof | Proceedings of the 57th Hawaii International Conference on System Sciences | |
| dc.rights | Attribution-NonCommercial-NoDerivatives 4.0 International | |
| dc.rights.uri | https://creativecommons.org/licenses/by-nc-nd/4.0/ | |
| dc.subject | Digital Transformations of Business Operations | |
| dc.subject | decision-making | |
| dc.subject | human-ai interaction | |
| dc.subject | interpretability | |
| dc.subject | uncertainty | |
| dc.title | The Effect of Interpretable Artificial Intelligence on Repeated Managerial Decision-Making under Uncertainty | |
| dc.type | Conference Paper | |
| dc.type.dcmi | Text | |
| dcterms.abstract | Business decisions involving investments, healthcare, and supply chains are often made in uncertain environments. At the same time, choices that were optimal initially may seem incorrect in hindsight, which may explain why decision-makers hesitate to use AI algorithms under high uncertainty. While some studies suggest that making AI and ML applications more understandable can boost their adoption and trust, this has not been examined in uncertain conditions where decision-makers must make repetitive business decisions. Our study addresses this issue empirically by analyzing how different interpretability approaches affect AI adoption and trust under varying levels of uncertainty. Surprisingly, we find that providing interpretability does not necessarily increase AI adoption; in some cases, it may even reduce it. Interestingly, even though AI adoption was higher under high uncertainty, trust in the AI recommendations was significantly lower there than under low uncertainty, across all interpretability types. The evidence is clear that showing users the cumulative monetary performance of the AI as a benchmark, side by side with their own monetary performance, enhances trust in the AI recommendations. | |
| dcterms.extent | 10 pages | |
| prism.startingpage | 6271 | |