Publications (10 of 11)
Löfström, T., Löfström, H., Johansson, U., Sönströd, C. & Matela, R. (2025). Calibrated explanations for regression. Machine Learning, 114(4), Article ID 100.
Calibrated explanations for regression
2025 (English). In: Machine Learning, ISSN 0885-6125, E-ISSN 1573-0565, Vol. 114, no. 4, article id 100. Article in journal (Refereed), Published
Abstract [en]

Artificial Intelligence (AI) methods are an integral part of modern decision support systems. The best-performing predictive models used in AI-based decision support systems lack transparency. Explainable Artificial Intelligence (XAI) aims to create AI systems that can explain their rationale to human users. Local explanations in XAI can provide information about the causes of individual predictions in terms of feature importance. However, a critical drawback of existing local explanation methods is their inability to quantify the uncertainty associated with a feature's importance. This paper introduces an extension of a feature importance explanation method, Calibrated Explanations, previously only supporting classification, with support for standard regression and probabilistic regression, i.e., the probability that the target is below an arbitrary threshold. The extension for regression keeps all the benefits of Calibrated Explanations, such as calibration of the prediction from the underlying model with confidence intervals and uncertainty quantification of feature importance, and it allows both factual and counterfactual explanations. Calibrated Explanations for regression provides fast, reliable, stable, and robust explanations. Calibrated Explanations for probabilistic regression provides an entirely new way of creating probabilistic explanations from any ordinary regression model, allowing dynamic selection of thresholds. The method is model-agnostic with easily understood conditional rules. An implementation in Python is freely available on GitHub and for installation using both pip and conda, making the results in this paper easily replicable.
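
The probabilistic-regression explanations build on conformal predictive systems: residuals on a calibration set turn any point prediction into a predictive distribution, from which the probability that the target falls below any chosen threshold can be read off. The sketch below illustrates only that underlying idea; it is not the API of the released package (see the GitHub repository for the actual implementation), and the function name is illustrative.

```python
# Minimal sketch of the split-conformal predictive-system idea behind probabilistic
# regression explanations. Illustrative only; not the released package's API.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

def probability_below(y_hat, threshold, cal_residuals):
    """Estimate P(y <= threshold) for a point prediction y_hat using the empirical
    distribution of residuals (y - y_hat) computed on a held-out calibration set."""
    predictive_sample = y_hat + cal_residuals  # predictive distribution for this instance
    return (np.sum(predictive_sample <= threshold) + 1) / (len(cal_residuals) + 1)

# Usage with any ordinary regression model and synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = 3 * X[:, 0] + rng.normal(size=1000)
X_train, X_cal, y_train, y_cal = train_test_split(X, y, test_size=0.25, random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_train, y_train)
residuals = y_cal - model.predict(X_cal)

y_hat = model.predict(X_cal[:1])[0]
print(probability_below(y_hat, threshold=0.0, cal_residuals=residuals))  # P(target <= 0)
```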

Place, publisher, year, edition, pages
Springer, 2025
Keywords
Explainable AI, Feature importance, Calibrated explanations, Uncertainty quantification, Regression, Probabilistic regression, Counterfactual explanations, Conformal predictive systems
National Category
Artificial Intelligence
Identifiers
urn:nbn:se:hj:diva-67398 (URN), 10.1007/s10994-024-06642-8 (DOI), 001427670500004 (), 2-s2.0-85218409420 (Scopus ID), HOA;;1004935 (Local ID), HOA;;1004935 (Archive number), HOA;;1004935 (OAI)
Funder
Knowledge Foundation
Available from: 2025-03-04. Created: 2025-03-04. Last updated: 2025-03-04. Bibliographically approved
Löfström, T., Löfström, H. & Johansson, U. (2024). Calibrated explanations for multi-class. In: Simone Vantini, Matteo Fontana, Aldo Solari, Henrik Boström & Lars Carlsson (Eds.), Proceedings of the Thirteenth Symposium on Conformal and Probabilistic Prediction with Applications. Paper presented at The 13th Symposium on Conformal and Probabilistic Prediction with Applications, 9-11 September 2024, Politecnico di Milano, Milano, Italy (pp. 175-194). PMLR, 230
Calibrated explanations for multi-class
2024 (English). In: Proceedings of the Thirteenth Symposium on Conformal and Probabilistic Prediction with Applications / [ed] Simone Vantini, Matteo Fontana, Aldo Solari, Henrik Boström & Lars Carlsson, PMLR, 2024, Vol. 230, p. 175-194. Conference paper, Published paper (Refereed)
Abstract [en]

Calibrated Explanations is a recently proposed feature importance explanation method providing uncertainty quantification. It utilises Venn-Abers to generate well-calibrated factual and counterfactual explanations for binary classification. In this paper, we extend the method to support multi-class classification. The paper includes an evaluation illustrating the calibration quality of the selected multi-class calibration approach, as well as a demonstration of how the explanations can help determine which explanations to trust.
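
Calibration quality for multi-class probability estimates is commonly summarised with the expected calibration error (ECE), which compares average confidence with accuracy within confidence bins. The snippet below is a generic, hedged illustration of that measure using equal-width binning; the paper's exact evaluation protocol and chosen calibration approach may differ.

```python
# Generic confidence-based ECE for multi-class predictions (illustrative only).
import numpy as np

def expected_calibration_error(proba, y_true, n_bins=10):
    """proba: array of shape (n_samples, n_classes); y_true: integer class labels."""
    confidences = proba.max(axis=1)                      # confidence = top predicted probability
    correct = (proba.argmax(axis=1) == y_true).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            # Weight each bin by its share of samples and add |accuracy - confidence|.
            ece += in_bin.mean() * abs(correct[in_bin].mean() - confidences[in_bin].mean())
    return ece
```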

Place, publisher, year, edition, pages
PMLR, 2024
Series
Proceedings of Machine Learning Research ; 230
National Category
Computer Sciences
Identifiers
urn:nbn:se:hj:diva-66433 (URN)
Conference
The 13th Symposium on Conformal and Probabilistic Prediction with Applications, 9-11 September 2024, Politecnico di Milano, Milano, Italy
Projects
PREMACOPAFAIRETIAI
Funder
Knowledge Foundation, 20220187, 20200223, 20230040
Available from: 2024-10-17. Created: 2024-10-17. Last updated: 2024-10-17. Bibliographically approved
Löfström, H., Löfström, T., Johansson, U. & Sönströd, C. (2024). Calibrated explanations: With uncertainty information and counterfactuals. Expert Systems with Applications, 246, Article ID 123154.
Calibrated explanations: With uncertainty information and counterfactuals
2024 (English). In: Expert Systems with Applications, ISSN 0957-4174, E-ISSN 1873-6793, Vol. 246, article id 123154. Article in journal (Refereed), Published
Abstract [en]

While local explanations for AI models can offer insights into individual predictions, such as feature importance, they are plagued by issues like instability. The unreliability of feature weights, often skewed due to poorly calibrated ML models, deepens these challenges. Moreover, the critical aspect of feature importance uncertainty remains mostly unaddressed in Explainable AI (XAI). The novel feature importance explanation method presented in this paper, called Calibrated Explanations (CE), is designed to tackle these issues head-on. Built on the foundation of Venn-Abers, CE not only calibrates the underlying model but also delivers reliable feature importance explanations with an exact definition of the feature weights. CE goes beyond conventional solutions by addressing output uncertainty. It accomplishes this by providing uncertainty quantification for both feature weights and the model’s probability estimates. Additionally, CE is model-agnostic, featuring easily comprehensible conditional rules and the ability to generate counterfactual explanations with embedded uncertainty quantification. Results from an evaluation with 25 benchmark datasets underscore the efficacy of CE, establishing it as a fast, reliable, stable, and robust solution.
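
The probability intervals that CE attaches to predictions and feature weights come from Venn-Abers calibration, which fits one isotonic regression per hypothetical label of the test instance and returns the resulting pair of calibrated probabilities as an interval. The sketch below shows that core mechanism in its simplest, computationally naive inductive form; it illustrates the idea and is not the implementation used in the paper.

```python
# Naive inductive Venn-Abers sketch: two isotonic fits per test score yield an
# interval [p0, p1] whose width reflects calibration uncertainty. Illustrative only.
import numpy as np
from sklearn.isotonic import IsotonicRegression

def venn_abers_interval(cal_scores, cal_labels, test_score):
    """cal_scores: classifier scores on a calibration set; cal_labels: 0/1 labels."""
    bounds = []
    for hypothetical_label in (0, 1):
        s = np.append(cal_scores, test_score)
        t = np.append(cal_labels, hypothetical_label)
        iso = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip").fit(s, t)
        bounds.append(float(iso.predict([test_score])[0]))
    p0, p1 = bounds
    return p0, p1, p1 / (1.0 - p0 + p1)  # interval plus a common single-probability merge
```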

Place, publisher, year, edition, pages
Elsevier, 2024
Keywords
Explainable AI, Feature Importance, Calibrated Explanations, Venn-Abers, Uncertainty Quantification, Counterfactual Explanations
National Category
Information Systems
Identifiers
urn:nbn:se:hj:diva-62864 (URN), 10.1016/j.eswa.2024.123154 (DOI), 001164089000001 (), 2-s2.0-85182588063 (Scopus ID), HOA;;1810433 (Local ID), HOA;;1810433 (Archive number), HOA;;1810433 (OAI)
Funder
Knowledge Foundation, 20160035
Note

Included in doctoral thesis in manuscript form.

Available from: 2023-11-08. Created: 2023-11-08. Last updated: 2024-03-01. Bibliographically approved
Löfström, H. & Löfström, T. (2024). Conditional Calibrated Explanations: Finding a Path Between Bias and Uncertainty. In: Explainable Artificial Intelligence: Second World Conference, xAI 2024, Valletta, Malta, July 17–19, 2024, Proceedings, Part I. Paper presented at Second World Conference, xAI 2024, Valletta, Malta, July 17–19, 2024 (pp. 332-355). Springer, 2153
Conditional Calibrated Explanations: Finding a Path Between Bias and Uncertainty
2024 (English). In: Explainable Artificial Intelligence: Second World Conference, xAI 2024, Valletta, Malta, July 17–19, 2024, Proceedings, Part I, Springer, 2024, Vol. 2153, p. 332-355. Conference paper, Published paper (Refereed)
Abstract [en]

While Artificial Intelligence and Machine Learning models are becoming increasingly prevalent, it is essential to remember that they are not infallible or inherently objective. These models depend on the data they are trained on and the inherent bias of the chosen machine learning algorithm. Therefore, selecting and sampling data for training is crucial for a fair outcome of the model. A model predicting, e.g., whether an applicant should be taken further in the job application process could produce heavily biased predictions against women if the data used to train the model mostly contained information about men. The well-known concept of conditional categories used in Conformal Prediction can be utilised to address this type of bias in the data. The Conformal Prediction framework includes uncertainty quantification methods for classification and regression. To help meet the challenges of data sets with potential bias, conditional categories were incorporated into an existing explanation method called Calibrated Explanations, relying on conformal methods. This approach allows users to try out different settings while simultaneously having the possibility to study how the uncertainty in the predictions is affected on an individual level. Furthermore, the paper evaluates how the uncertainty changes when conditional categories are based on attributes containing potential bias, showing that the uncertainty increases significantly and that fairness comes at the cost of increased uncertainty.
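
The conditional categories referred to here are the Mondrian idea from conformal prediction: calibrate separately within each category (for example, the values of a potentially sensitive attribute) so that the guarantees hold per group rather than only on average. The sketch below shows that idea for a split-conformal regression interval; the names are illustrative and this is not the paper's implementation. Because each category uses fewer calibration points, intervals typically widen, which is the uncertainty cost of fairness observed in the evaluation.

```python
# Illustrative Mondrian (conditional) split-conformal interval: the residual
# quantile is computed only within the test instance's category.
import numpy as np

def conditional_interval(y_hat, category, cal_pred, cal_y, cal_cat, alpha=0.1):
    """Return a (1 - alpha) prediction interval for y_hat, using only calibration
    points whose category matches the test instance's category."""
    mask = cal_cat == category
    residuals = np.sort(np.abs(cal_y[mask] - cal_pred[mask]))
    n = residuals.size
    # k-th smallest residual with k = ceil((n + 1) * (1 - alpha)), capped at n.
    k = min(n, int(np.ceil((n + 1) * (1 - alpha))))
    q = residuals[k - 1]
    return y_hat - q, y_hat + q
```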

Place, publisher, year, edition, pages
Springer, 2024
Series
Communications in Computer and Information Science, ISSN 1865-0929, E-ISSN 1865-0937 ; 2153
Keywords
Bias, Calibrated Explanations, Fairness, Post-hoc Explanations, Uncertainty Quantification, XAI, Learning algorithms, Machine learning, Uncertainty analysis, Artificial intelligence learning, Calibrated explanation, Conformal predictions, Machine learning models, Post-hoc explanation, Uncertainty, Uncertainty quantifications, Forecasting
National Category
Information Systems
Identifiers
urn:nbn:se:hj:diva-66010 (URN), 10.1007/978-3-031-63787-2_17 (DOI), 2-s2.0-85200761336 (Scopus ID), 978-3-031-63786-5 (ISBN), 978-3-031-63787-2 (ISBN)
Conference
Second World Conference, xAI 2024, Valletta, Malta, July 17–19, 2024
Funder
Region Jönköping County, The Swedish Stroke Association
Available from: 2024-08-20. Created: 2024-08-20. Last updated: 2024-08-20. Bibliographically approved
Johansson, U., Löfström, T., Sönströd, C. & Löfström, H. (2023). Conformal Prediction for Accuracy Guarantees in Classification with Reject Option. In: V. Torra and Y. Narukawa (Eds.), Modeling Decisions for Artificial Intelligence: 20th International Conference, MDAI 2023, Umeå, Sweden, June 19–22, 2023, Proceedings. Paper presented at the International Conference on Modeling Decisions for Artificial Intelligence, Umeå, Sweden, 19 June 2023 (pp. 133-145). Springer
Conformal Prediction for Accuracy Guarantees in Classification with Reject Option
2023 (English). In: Modeling Decisions for Artificial Intelligence: 20th International Conference, MDAI 2023, Umeå, Sweden, June 19–22, 2023, Proceedings / [ed] V. Torra and Y. Narukawa, Springer, 2023, p. 133-145. Conference paper, Published paper (Refereed)
Abstract [en]

A standard classifier is forced to predict the label of every test instance, even when confidence in the predictions is very low. In many scenarios, it would, however, be better to avoid making these predictions, maybe leaving them to a human expert. A classifier with that alternative is referred to as a classifier with reject option. In this paper, we propose an algorithm that, for a particular data set, automatically suggests a number of accuracy levels, which it will be able to meet perfectly, using a classifier with reject option. Since the basis of the suggested algorithm is conformal prediction, it comes with strong validity guarantees. The experimentation, using 25 publicly available two-class data sets, confirms that the algorithm obtains empirical accuracies very close to the requested levels. In addition, in an outright comparison with probabilistic predictors, including models calibrated with Platt scaling, the suggested algorithm clearly outperforms the alternatives.
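
The mechanism can be made concrete with a standard inductive conformal classifier: compute a p-value for every candidate label from calibration nonconformity scores and predict only when exactly one label survives at the chosen significance level, abstaining otherwise. The sketch below is that generic recipe under stated assumptions; the paper's algorithm for suggesting achievable accuracy levels is more involved.

```python
# Generic inductive conformal classification with a reject option (illustrative).
import numpy as np

def predict_or_reject(proba_cal, y_cal, proba_test, eps=0.05):
    """proba_cal/proba_test: predicted class probabilities; y_cal: integer labels.
    Returns one predicted class per test row, or None when the instance is rejected."""
    n = len(y_cal)
    alpha_cal = 1.0 - proba_cal[np.arange(n), y_cal]   # calibration nonconformity scores
    decisions = []
    for probs in proba_test:
        # p-value per candidate label: share of calibration scores at least as nonconforming.
        p_values = [(np.sum(alpha_cal >= 1.0 - p) + 1) / (n + 1) for p in probs]
        region = [label for label, p in enumerate(p_values) if p > eps]
        decisions.append(region[0] if len(region) == 1 else None)
    return decisions
```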

Place, publisher, year, edition, pages
Springer, 2023
Series
Lecture Notes in Computer Science, ISSN 2366-6323, E-ISSN 2366-6331 ; 13890
Keywords
Classification (of information), Accuracy level, Conformal predictions, Data set, Human expert, Probabilistics, Scalings, Test instances, Forecasting
National Category
Information Systems
Identifiers
urn:nbn:se:hj:diva-61450 (URN), 10.1007/978-3-031-33498-6_9 (DOI), 2-s2.0-85161105564 (Scopus ID), 978-3-031-33497-9 (ISBN)
Conference
International Conference on Modeling Decisions for Artificial Intelligence, Umeå, Sweden, 19 June 2023
Available from: 2023-06-21. Created: 2023-06-21. Last updated: 2024-02-09. Bibliographically approved
Löfström, H., Löfström, T., Johansson, U. & Sönströd, C. (2023). Investigating the impact of calibration on the quality of explanations. Annals of Mathematics and Artificial Intelligence
Investigating the impact of calibration on the quality of explanations
2023 (English). In: Annals of Mathematics and Artificial Intelligence, ISSN 1012-2443, E-ISSN 1573-7470. Article in journal (Refereed), Epub ahead of print
Abstract [en]

Predictive models used in Decision Support Systems (DSS) are often requested to explain their reasoning to users. Explanations of instances consist of two parts: the predicted label with an associated certainty and a set of weights, one per feature, describing how each feature contributes to the prediction for the particular instance. In techniques like Local Interpretable Model-agnostic Explanations (LIME), the probability estimate from the underlying model is used as a measurement of certainty; consequently, the feature weights represent how each feature contributes to the probability estimate. It is, however, well known that probability estimates from classifiers are often poorly calibrated, i.e., they do not correspond to the actual probabilities of being correct. With this in mind, explanations from techniques like LIME risk becoming misleading, since the feature weights will only describe how each feature contributes to the possibly inaccurate probability estimate. This paper investigates the impact of calibrating predictive models before applying LIME. The study includes 25 benchmark data sets, using Random Forest and Extreme Gradient Boosting (XGBoost) as learners and Venn-Abers and Platt scaling as calibration methods. Results from the study show that explanations of better calibrated models are themselves better calibrated, with the expected calibration error (ECE) and log loss of the explanations after calibration aligning more closely with the ECE and log loss of the model. The conclusion is that calibration makes both the models and the explanations represent reality more accurately.
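
The experimental setup, calibrating the classifier before explaining it, can be approximated with standard tooling: wrap the learner in a calibrator (Platt scaling below) and hand the calibrated predict_proba to the explanation method instead of the raw one. The sketch uses scikit-learn only, with synthetic data; the final comment marks where LIME would be applied, and the setup is a simplified stand-in for the study's protocol.

```python
# Sketch: Platt-scale a random forest before explaining it (simplified setup).
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

raw = RandomForestClassifier(random_state=0).fit(X_train, y_train)
platt = CalibratedClassifierCV(RandomForestClassifier(random_state=0),
                               method="sigmoid", cv=5).fit(X_train, y_train)

print("log loss, raw:       ", log_loss(y_test, raw.predict_proba(X_test)))
print("log loss, calibrated:", log_loss(y_test, platt.predict_proba(X_test)))

# An explanation method such as LIME would now be given platt.predict_proba instead
# of raw.predict_proba, so the feature weights explain the calibrated estimates.
```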

Place, publisher, year, edition, pages
Springer, 2023
Keywords
Calibration, Decision support systems, Explainable artificial intelligence, Predicting with confidence, Uncertainty in explanations, Venn Abers
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:hj:diva-60033 (URN), 10.1007/s10472-023-09837-2 (DOI), 000948763400001 (), 2-s2.0-85149810932 (Scopus ID), HOA;;870772 (Local ID), HOA;;870772 (Archive number), HOA;;870772 (OAI)
Funder
Knowledge Foundation
Available from: 2023-03-27. Created: 2023-03-27. Last updated: 2023-11-08
Löfström, H. (2023). On the Definition of Appropriate Trust and the Tools that Come with it. In: 2023 Congress in Computer Science, Computer Engineering, & Applied Computing (CSCE). Paper presented at the 2023 Congress in Computer Science, Computer Engineering, and Applied Computing (CSCE 2023), Las Vegas, 24-27 July 2023 (pp. 1555-1562). Institute of Electrical and Electronics Engineers (IEEE)
On the Definition of Appropriate Trust and the Tools that Come with it
2023 (English). In: 2023 Congress in Computer Science, Computer Engineering, & Applied Computing (CSCE), Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 1555-1562. Conference paper, Published paper (Other academic)
Abstract [en]

Evaluating the efficiency of human-AI interactions is challenging, as it involves both subjective and objective quality aspects. With the focus on the human experience of the explanations, evaluations of explanation methods have become mostly subjective, making comparative evaluations almost impossible and highly dependent on the individual user. However, it is commonly agreed that one aspect of explanation quality is how effectively the user can detect whether the predictions are trustworthy and correct, i.e., whether the explanations can increase the user's appropriate trust in the model. This paper starts with the definitions of appropriate trust from the literature and compares them with model performance evaluation, showing strong similarities between the two. The paper's main contribution is a novel approach to evaluating appropriate trust that takes advantage of these similarities. The paper offers several straightforward evaluation methods for different aspects of user performance, including a suggested method for measuring uncertainty and appropriate trust in regression.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Keywords
Appropriate Trust, Calibrated Trust, Comparative Evaluations, Evaluation of Explanations, Explanation Methods, Metrics, XAI
National Category
Human Computer Interaction
Identifiers
urn:nbn:se:hj:diva-64149 (URN), 10.1109/CSCE60160.2023.00256 (DOI), 2-s2.0-85191166512 (Scopus ID), 979-8-3503-2759-5 (ISBN)
Conference
2023 Congress in Computer Science, Computer Engineering, and Applied Computing (CSCE 2023), Las Vegas, 24-27 July 2023
Funder
Knowledge Foundation, 20160035
Available from: 2024-05-07. Created: 2024-05-07. Last updated: 2024-05-07. Bibliographically approved
Löfström, H. (2023). Trustworthy explanations: Improved decision support through well-calibrated uncertainty quantification. (Doctoral dissertation). Jönköping: Jönköping University, Jönköping International Business School
Trustworthy explanations: Improved decision support through well-calibrated uncertainty quantification
2023 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

The use of Artificial Intelligence (AI) has transformed fields like disease diagnosis and defence. Utilising sophisticated Machine Learning (ML) models, AI predicts future events based on historical data, introducing complexity that challenges understanding and decision-making. Previous research emphasizes users' difficulty in discerning when to trust predictions because of this complexity, underscoring that addressing model complexity and providing transparent explanations are pivotal for facilitating high-quality decisions.

Many ML models offer probability estimates for predictions, commonly used in methods providing explanations to guide users on prediction confidence. However, these probabilities often do not accurately reflect the actual distribution in the data, leading to potential user misinterpretation of prediction trustworthiness. Additionally, most explanation methods fail to convey whether the model’s probability is linked to any uncertainty, further diminishing the reliability of the explanations.

Evaluating the quality of explanations for decision support is challenging, and although highlighted as essential in research, there are no benchmark criteria for comparative evaluations.

This thesis introduces an innovative explanation method that generates reliable explanations, incorporating uncertainty information supporting users in determining when to trust the model’s predictions. The thesis also outlines strategies for evaluating explanation quality and facilitating comparative evaluations. Through empirical evaluations and user studies, the thesis provides practical insights to support decision-making utilising complex ML models.

Abstract [sv]

The use of Artificial Intelligence (AI) has transformed fields such as disease diagnosis and defence. Using sophisticated machine learning models, AI predicts future events based on historical data. At the same time, the complexity of the models leads to challenging decision processes when the reasons behind the predictions are difficult to comprehend. Previous research points to users' difficulty in judging the trustworthiness of predictions due to model complexity and highlights the importance of providing transparent explanations to facilitate high-quality decisions.

Many machine learning models provide probability estimates for their predictions, which are commonly used in explanation methods to guide users on how trustworthy the predictions are. However, these probabilities often do not reflect the actual distributions in the data, which can lead users to incorrectly interpret predictions as trustworthy. Moreover, most explanation methods do not convey whether the probabilities of the predictions are associated with any uncertainty, which reduces the reliability of the explanations.

Evaluating the quality of explanations for decision support is challenging, and although this has been emphasised as essential in research, there are no benchmark criteria for comparative evaluations.

This thesis introduces an innovative explanation method that generates reliable explanations which include uncertainty information to support users in determining when the model's predictions can be trusted. The thesis also proposes strategies for evaluating the quality of explanations and facilitating comparative evaluations. Through empirical evaluations and user studies, the thesis provides practical insights to support decision-making with complex machine learning models.

Place, publisher, year, edition, pages
Jönköping: Jönköping University, Jönköping International Business School, 2023. p. 72
Series
JIBS Dissertation Series, ISSN 1403-0470 ; 159
Keywords
Explainable Artificial Intelligence, Interpretable Machine Learning, Decision Support Systems, Uncertainty Estimation, Explanation Methods
National Category
Information Systems, Social aspects; Computer Sciences
Identifiers
urn:nbn:se:hj:diva-62865 (URN), 978-91-7914-031-1 (ISBN), 978-91-7914-032-8 (ISBN)
Public defence
2023-12-12, B1014, Jönköping International Business School, Jönköping, 13:15 (English)
Available from: 2023-11-08. Created: 2023-11-08. Last updated: 2023-11-08. Bibliographically approved
Löfström, H., Hammar, K. & Johansson, U. (2022). A meta survey of quality evaluation criteria in explanation methods. In: J. De Weerdt & A. Polyvyanyy (Eds.), Intelligent Information Systems: CAiSE Forum 2022, Leuven, Belgium, June 6–10, 2022, Proceedings. Paper presented at CAiSE Forum 2022, Leuven, Belgium, June 6–10, 2022 (pp. 55-63). Cham: Springer
A meta survey of quality evaluation criteria in explanation methods
2022 (English). In: Intelligent Information Systems: CAiSE Forum 2022, Leuven, Belgium, June 6–10, 2022, Proceedings / [ed] J. De Weerdt & A. Polyvyanyy, Cham: Springer, 2022, p. 55-63. Conference paper, Published paper (Refereed)
Abstract [en]

The evaluation of explanation methods has become a significant issue in explainable artificial intelligence (XAI) due to the recent surge of opaque AI models in decision support systems (DSS). Explanations are essential for bias detection and control of uncertainty since most accurate AI models are opaque with low transparency and comprehensibility. There are numerous criteria to choose from when evaluating explanation method quality. However, since existing criteria focus on evaluating single explanation methods, it is not obvious how to compare the quality of different methods.

Place, publisher, year, edition, pages
Cham: Springer, 2022
Series
Lecture Notes in Business Information Processing, ISSN 1865-1348, E-ISSN 1865-1356 ; 452
Keywords
Explanation method, Evaluation metric, Explainable artificial intelligence, Evaluation of explainability, Comparative evaluations
National Category
Computer Sciences
Identifiers
urn:nbn:se:hj:diva-57114 (URN), 10.1007/978-3-031-07481-3_7 (DOI), 978-3-031-07480-6 (ISBN), 978-3-031-07481-3 (ISBN)
Conference
CAiSE Forum 2022, Leuven, Belgium, June 6–10, 2022
Funder
Knowledge Foundation
Available from: 2022-06-13. Created: 2022-06-13. Last updated: 2023-11-08. Bibliographically approved
Löfström, H., Löfström, T. & Johansson, U. (2018). Interpretable instance-based text classification for social science research projects. Archives of Data Science, Series A, 5(1)
Interpretable instance-based text classification for social science research projects
2018 (English). In: Archives of Data Science, Series A, ISSN 2363-9881, Vol. 5, no. 1. Article in journal (Refereed), Published
Abstract [en]

In this study, two groups of respondents evaluated explanations generated by an instance-based explanation method called WITE (Weighted Instance-based Text Explanations). One group consisted of 24 non-experts who answered a web survey about the words characterising the concepts of the classes, and the other group consisted of three senior researchers and three respondents from a media house in Sweden who answered a questionnaire with open questions. The data used originates from one of the researchers' projects on media consumption in Sweden. The results from the non-experts indicate that WITE identified many words that corresponded to human understanding but also marked some insignificant or contrary words as important. The results from the expert evaluation indicate a risk that the explanations could persuade users of the correctness of a prediction, even when it is incorrect. Consequently, the study indicates that an explanation method can be seen as a new actor that is able to persuade and interact with humans and change the results of the classification of a text.

Place, publisher, year, edition, pages
KIT – Die Forschungsuniversität in der Helmholtz-Gemeinschaft, 2018
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:hj:diva-49118 (URN), 10.5445/KSP/1000087327/15 (DOI)
Available from: 2020-06-10. Created: 2020-06-10. Last updated: 2023-11-08. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0001-9633-0423
