The challenges of providing explanations of AI systems when they do not behave like users expect
2022 (English). In: UMAP '22: Proceedings of the 30th ACM Conference on User Modeling, Adaptation and Personalization, New York: Association for Computing Machinery (ACM), 2022, p. 110-120. Conference paper, Published paper (Refereed)
Abstract [en]
Explanations in artificial intelligence (AI) ensure that users of complex AI systems understand why the system behaves as it does. The expectations that users hold about system behaviour play a role here, since they co-determine the appropriate content of explanations. In this paper, we investigate the explanation content users desire when the system behaves in unexpected ways. Specifically, we presented participants with various scenarios involving an automated text classifier and asked them to indicate their preferred explanation in each scenario. One group of participants chose the type of explanation from a multiple-choice questionnaire, while the other answered in free text.
Participants show clear agreement on the preferred type of explanation when the output matches expectations: most do not require an explanation at all, while those who do would like one that explains which features of the input led to the output (a factual explanation). When the output does not match expectations, users prefer different explanations; interestingly, there is less agreement in the multiple-choice questionnaire. The free-text responses, however, slightly favour an explanation that describes how the AI system's internal workings led to the observed output (i.e., a mechanistic explanation).
Overall, we demonstrate that user expectations are a significant variable in determining the most suitable content of explanations (including whether an explanation is needed at all). We also find different results, especially when the output does not match expectations, depending on whether participants answered via multiple choice or free text. This shows a sensitivity of results to the precise experimental setup, which may explain some of the variation in the literature.
Place, publisher, year, edition, pages
New York: Association for Computing Machinery (ACM), 2022. p. 110-120
Keywords [en]
factual, explainable AI, counterfactual, mechanistic, explanations, expectations
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:hj:diva-57996
DOI: 10.1145/3503252.3531306
Scopus ID: 2-s2.0-85135173255
ISBN: 978-1-4503-9207-5 (electronic)
OAI: oai:DiVA.org:hj-57996
DiVA, id: diva2:1683960
Conference
UMAP '22: 30th ACM Conference on User Modeling, Adaptation and Personalization, Barcelona, Spain, July 4-7, 2022
Available from: 2022-07-20 Created: 2022-07-20 Last updated: 2024-07-16
Bibliographically approved