2024 (English) In: Expert Systems with Applications, ISSN 0957-4174, E-ISSN 1873-6793, Vol. 246, article id 123154. Article in journal (Refereed) Published
Abstract [en]
While local explanations for AI models can offer insights into individual predictions, such as feature importance, they are plagued by issues like instability. The unreliability of feature weights, often skewed by poorly calibrated ML models, deepens these challenges. Moreover, the critical aspect of feature importance uncertainty remains mostly unaddressed in Explainable AI (XAI). The novel feature importance explanation method presented in this paper, called Calibrated Explanations (CE), is designed to tackle these issues head-on. Built on the foundation of Venn-Abers, CE not only calibrates the underlying model but also delivers reliable feature importance explanations with an exact definition of the feature weights. CE goes beyond conventional solutions by addressing output uncertainty: it provides uncertainty quantification for both the feature weights and the model's probability estimates. Additionally, CE is model-agnostic, featuring easily comprehensible conditional rules and the ability to generate counterfactual explanations with embedded uncertainty quantification. Results from an evaluation on 25 benchmark datasets underscore the efficacy of CE, establishing it as a fast, reliable, stable, and robust solution.
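The calibration mechanism the abstract builds on, Venn-Abers, can be illustrated with a short sketch. This is a minimal illustration of an inductive Venn-Abers predictor in the Vovk and Petej formulation, not the authors' Calibrated Explanations implementation; all names and the toy data below are assumptions made for the example, and only numpy and scikit-learn calls are used.

```python
# Minimal sketch of an inductive Venn-Abers predictor (Vovk & Petej style).
# Illustrative only; not the Calibrated Explanations (CE) source code.
import numpy as np
from sklearn.isotonic import IsotonicRegression

def venn_abers_interval(cal_scores, cal_labels, test_score):
    """Return (p0, p1): lower/upper calibrated probabilities that the test
    example is positive, given calibration-set scores and 0/1 labels."""
    bounds = []
    for hypothetical_label in (0, 1):
        # Append the test point with each hypothetical label in turn and
        # refit isotonic regression; evaluating the fit at the test score
        # yields the two ends of the probability interval.
        scores = np.append(cal_scores, test_score)
        labels = np.append(cal_labels, hypothetical_label)
        iso = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
        iso.fit(scores, labels)
        bounds.append(iso.predict([test_score])[0])
    return bounds[0], bounds[1]

# Toy calibration data from a deliberately miscalibrated scorer (assumed).
rng = np.random.default_rng(0)
cal_scores = rng.uniform(0, 1, 200)
cal_labels = (rng.uniform(0, 1, 200) < cal_scores**2).astype(int)

p0, p1 = venn_abers_interval(cal_scores, cal_labels, test_score=0.7)
p = p1 / (1.0 - p0 + p1)  # common single-point regularisation of the interval
print(f"interval [{p0:.3f}, {p1:.3f}], point estimate {p:.3f}")
```

The interval width p1 - p0 is the uncertainty attached to the calibrated probability estimate; per the abstract, CE extends this style of uncertainty quantification from the probability estimates to the feature weights themselves.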
Place, publisher, year, edition, pages
Elsevier, 2024
Keywords
Explainable AI, Feature Importance, Calibrated Explanations, Venn-Abers, Uncertainty Quantification, Counterfactual Explanations
National Category
Information Systems
Identifiers
urn:nbn:se:hj:diva-62864 (URN)
10.1016/j.eswa.2024.123154 (DOI)
001164089000001 (ISI)
2-s2.0-85182588063 (Scopus ID)
HOA;;1810433 (Local ID)
HOA;;1810433 (Archive number)
HOA;;1810433 (OAI)
Funder
Knowledge Foundation, 20160035
Note
Included in doctoral thesis in manuscript form.
Available from: 2023-11-08 Created: 2023-11-08 Last updated: 2024-03-01 Bibliographically approved