On the Definition of Appropriate Trust and the Tools that Come with it
Löfström, Helena (Jönköping University, Jönköping International Business School). ORCID iD: 0000-0001-9633-0423
2023 (English). In: 2023 Congress in Computer Science, Computer Engineering, & Applied Computing (CSCE), Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 1555-1562. Conference paper, Published paper (Other academic).
Abstract [en]

Evaluating the efficiency of human-AI interactions is challenging, as it involves both subjective and objective quality aspects. Because evaluations centre on the human experience of the explanations, evaluations of explanation methods have become mostly subjective, making comparative evaluations almost impossible and highly dependent on the individual user. However, it is commonly agreed that one aspect of explanation quality is how effectively the user can detect whether the predictions are trustworthy and correct, i.e., whether the explanations can increase the user's appropriate trust in the model. This paper starts from the definitions of appropriate trust in the literature and compares them with model performance evaluation, showing the strong similarities between appropriate trust and model performance evaluation. The paper's main contribution is a novel approach to evaluating appropriate trust that takes advantage of these similarities. The paper offers several straightforward evaluation methods for different aspects of user performance, including a suggested method for measuring uncertainty and appropriate trust in regression.
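To make the analogy between appropriate trust and model performance evaluation concrete, the sketch below illustrates one possible operationalization. It is not taken from the paper: the function name appropriate_trust_score, the data, and the accept/override framing are assumptions. The idea is to treat the user's decision to rely on each prediction as a "prediction" of the model's actual correctness and score that agreement the way one would score a classifier.

```python
# Illustrative sketch only, not the paper's published method.
# Hypothetical setup: for each instance the user saw a prediction (plus an explanation)
# and decided whether to rely on it (1) or override it (0); we also know whether the
# model prediction was actually correct (1) or wrong (0).

import numpy as np

def appropriate_trust_score(user_relied, model_correct):
    """Score how well the user's reliance decisions track the model's actual correctness.

    Analogy to model performance evaluation: the user's reliance decisions play the role
    of predictions, and the model's correctness plays the role of ground truth.
    """
    user_relied = np.asarray(user_relied, dtype=bool)
    model_correct = np.asarray(model_correct, dtype=bool)

    agreement = user_relied == model_correct      # relied when correct, overrode when wrong
    over_trust = user_relied & ~model_correct     # relied on a wrong prediction
    under_trust = ~user_relied & model_correct    # overrode a correct prediction

    return {
        "appropriate_trust": agreement.mean(),    # overall calibration of reliance
        "over_trust_rate": over_trust.mean(),
        "under_trust_rate": under_trust.mean(),
    }

# Toy usage with made-up data: six interactions.
relied = [1, 1, 0, 1, 0, 1]
correct = [1, 0, 0, 1, 1, 1]
print(appropriate_trust_score(relied, correct))
```

Under this framing, over-trust and under-trust are the analogues of false positives and false negatives, which is what makes comparative, user-independent evaluation possible.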

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023. p. 1555-1562
Keywords [en]
Appropriate Trust, Calibrated Trust, Comparative Evaluations, Evaluation of Explanations, Explanation Methods, Metrics, XAI
National Category
Human Computer Interaction
Identifiers
URN: urn:nbn:se:hj:diva-64149
DOI: 10.1109/CSCE60160.2023.00256
Scopus ID: 2-s2.0-85191166512
ISBN: 979-8-3503-2759-5 (electronic)
OAI: oai:DiVA.org:hj-64149
DiVA, id: diva2:1856553
Conference
2023 Congress in Computer Science, Computer Engineering, and Applied Computing (CSCE 2023), Las Vegas, 24-27 July 2023
Funder
Knowledge Foundation, 20160035
Available from: 2024-05-07 Created: 2024-05-07 Last updated: 2024-05-07 Bibliographically approved

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text | Scopus | Preprint

Authority records

Löfström, Helena
