Publications (6 of 6)
Löfström, T., Löfström, H., Johansson, U., Sönströd, C. & Matela, R. (2025). Calibrated explanations for regression. Machine Learning, 114(4), Article ID 100.
2025 (English). In: Machine Learning, ISSN 0885-6125, E-ISSN 1573-0565, Vol. 114, no. 4, article id 100. Article in journal (Refereed), Published.
Abstract [en]

Artificial Intelligence (AI) methods are an integral part of modern decision support systems. The best-performing predictive models used in AI-based decision support systems lack transparency. Explainable Artificial Intelligence (XAI) aims to create AI systems that can explain their rationale to human users. Local explanations in XAI can provide information about the causes of individual predictions in terms of feature importance. However, a critical drawback of existing local explanation methods is their inability to quantify the uncertainty associated with a feature's importance. This paper extends the feature importance explanation method Calibrated Explanations, which previously supported only classification, to standard regression and probabilistic regression, i.e., the probability that the target is below an arbitrary threshold. The extension for regression retains all the benefits of Calibrated Explanations, such as calibration of the underlying model's prediction with confidence intervals and uncertainty quantification of feature importance, and it allows both factual and counterfactual explanations. Calibrated Explanations for regression provides fast, reliable, stable, and robust explanations. Calibrated Explanations for probabilistic regression provides an entirely new way of creating probabilistic explanations from any ordinary regression model, allowing dynamic selection of thresholds. The method is model-agnostic with easily understood conditional rules. An implementation in Python is freely available on GitHub and for installation using both pip and conda, making the results in this paper easily replicable.
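The probabilistic-regression capability described above, estimating P(target ≤ threshold) for any regression model, builds on conformal predictive systems. The sketch below illustrates only that underlying idea, using a plain split-conformal residual distribution with scikit-learn and NumPy; it is a simplification, not the released package's actual API.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Proper training/calibration split: the model is trained as usual and a
# held-out calibration set supplies the residual distribution.
X, y = make_regression(n_samples=2000, n_features=10, noise=10.0, random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Sorted calibration residuals define an empirical predictive distribution
# around each new point prediction (a split-conformal predictive system).
residuals = np.sort(y_cal - model.predict(X_cal))

def prob_below(x, threshold):
    """Estimate P(y <= threshold) for a single test instance x."""
    y_hat = model.predict(x.reshape(1, -1))[0]
    # Fraction of calibration residuals consistent with y <= threshold.
    rank = np.searchsorted(residuals, threshold - y_hat, side='right')
    return rank / (len(residuals) + 1)

print(prob_below(X_test[0], threshold=50.0))
```

Because the threshold enters only at query time, it can be chosen dynamically per instance, which is the property the abstract highlights; the released implementation layers calibrated explanations and uncertainty-quantified feature weights on top of this machinery.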

Place, publisher, year, edition, pages
Springer, 2025
Keywords
Explainable AI, Feature importance, Calibrated explanations, Uncertainty quantification, Regression, Probabilistic regression, Counterfactual explanations, Conformal predictive systems
National Category
Artificial Intelligence
Identifiers
urn:nbn:se:hj:diva-67398 (URN)
10.1007/s10994-024-06642-8 (DOI)
001427670500004 (ISI)
2-s2.0-85218409420 (Scopus ID)
HOA;;1004935 (Local ID)
Funder
Knowledge Foundation
Available from: 2025-03-04. Created: 2025-03-04. Last updated: 2025-03-04. Bibliographically approved.
Löfström, H., Löfström, T., Johansson, U. & Sönströd, C. (2024). Calibrated explanations: With uncertainty information and counterfactuals. Expert Systems with Applications, 246, Article ID 123154.
2024 (English). In: Expert Systems with Applications, ISSN 0957-4174, E-ISSN 1873-6793, Vol. 246, article id 123154. Article in journal (Refereed), Published.
Abstract [en]

While local explanations for AI models can offer insights into individual predictions, such as feature importance, they are plagued by issues like instability. The unreliability of feature weights, often skewed due to poorly calibrated ML models, deepens these challenges. Moreover, the critical aspect of feature importance uncertainty remains mostly unaddressed in Explainable AI (XAI). The novel feature importance explanation method presented in this paper, called Calibrated Explanations (CE), is designed to tackle these issues head-on. Built on the foundation of Venn-Abers, CE not only calibrates the underlying model but also delivers reliable feature importance explanations with an exact definition of the feature weights. CE goes beyond conventional solutions by addressing output uncertainty. It accomplishes this by providing uncertainty quantification for both feature weights and the model's probability estimates. Additionally, CE is model-agnostic, featuring easily comprehensible conditional rules and the ability to generate counterfactual explanations with embedded uncertainty quantification. Results from an evaluation with 25 benchmark datasets underscore the efficacy of CE, establishing it as a fast, reliable, stable, and robust solution.
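CE's uncertainty quantification rests on Venn-Abers calibration, which yields a probability interval rather than a single estimate. Below is a minimal, unoptimized sketch of the basic Venn-Abers procedure using scikit-learn's IsotonicRegression and a held-out calibration set; it illustrates the idea, not the authors' implementation: for each test score, isotonic regression is fit twice on the calibration data augmented with the test point labeled 0 and then 1.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def venn_abers_interval(cal_scores, cal_labels, test_score):
    """Return the Venn-Abers probability interval [p0, p1] for one test score.

    cal_scores: uncalibrated scores for a held-out calibration set
    cal_labels: the corresponding 0/1 labels
    """
    interval = []
    for hypothetical_label in (0, 1):
        # Augment the calibration set with the test point under each
        # hypothetical label and fit an isotonic calibrator.
        s = np.append(cal_scores, test_score)
        t = np.append(cal_labels, hypothetical_label)
        iso = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds='clip')
        iso.fit(s, t)
        interval.append(iso.predict([test_score])[0])
    return interval  # [p0, p1]

p0, p1 = venn_abers_interval(np.array([0.1, 0.4, 0.35, 0.8, 0.9]),
                             np.array([0, 0, 1, 1, 1]), test_score=0.6)
print(p0, p1)
```

Intuitively, a wide [p0, p1] signals an uncertain probability estimate; this is the kind of uncertainty CE reports alongside the feature weights.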

Place, publisher, year, edition, pages
Elsevier, 2024
Keywords
Explainable AI, Feature Importance, Calibrated Explanations, Venn-Abers, Uncertainty Quantification, Counterfactual Explanations
National Category
Information Systems
Identifiers
urn:nbn:se:hj:diva-62864 (URN)
10.1016/j.eswa.2024.123154 (DOI)
001164089000001 (ISI)
2-s2.0-85182588063 (Scopus ID)
HOA;;1810433 (Local ID)
Funder
Knowledge Foundation, 20160035
Note

Included in doctoral thesis in manuscript form.

Available from: 2023-11-08. Created: 2023-11-08. Last updated: 2024-03-01. Bibliographically approved.
Johansson, U., Sönströd, C., Löfström, T. & Boström, H. (2023). Confidence Classifiers with Guaranteed Accuracy or Precision. In: H. Papadopoulos, K. A. Nguyen, H. Boström & L. Carlsson (Eds.), Proceedings of the Twelfth Symposium on Conformal and Probabilistic Prediction with Applications. Paper presented at the Twelfth Symposium on Conformal and Probabilistic Prediction with Applications, 13-15 September 2023, Limassol, Cyprus (pp. 513-533). Proceedings of Machine Learning Research (PMLR), 204
2023 (English). In: Proceedings of the Twelfth Symposium on Conformal and Probabilistic Prediction with Applications / [ed] H. Papadopoulos, K. A. Nguyen, H. Boström & L. Carlsson, Proceedings of Machine Learning Research (PMLR), 2023, Vol. 204, p. 513-533. Conference paper, Published paper (Refereed).
Abstract [en]

In many situations, probabilistic predictors have replaced conformal classifiers. The main reason is arguably that the set predictions of conformal classifiers, with the accompanying significance level, are hard to interpret. In this paper, we demonstrate how conformal classification can be used as a basis for a classifier with reject option. Specifically, we introduce and evaluate two algorithms that are able to perfectly estimate accuracy or precision for a set of test instances, in a classifier-with-reject scenario. In the empirical investigation, the suggested algorithms are shown to clearly outperform both calibrated and uncalibrated probabilistic predictors.
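The reject option described above can be grounded in standard conformal classification: predict only when the conformal prediction set at significance level ε is a singleton, and abstain otherwise. A minimal split-conformal sketch with scikit-learn follows; it illustrates the general mechanism, not the paper's two specific algorithms.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Nonconformity: one minus the predicted probability of the true class.
cal_alpha = 1 - clf.predict_proba(X_cal)[np.arange(len(y_cal)), y_cal]

def predict_or_reject(x, epsilon):
    """Return the single label whose p-value exceeds epsilon, or None (reject)."""
    probs = clf.predict_proba(x.reshape(1, -1))[0]
    p_values = [(np.sum(cal_alpha >= 1 - probs[label]) + 1) / (len(cal_alpha) + 1)
                for label in range(len(probs))]
    accepted = [label for label, p in enumerate(p_values) if p > epsilon]
    return accepted[0] if len(accepted) == 1 else None  # reject unless singleton

preds = [predict_or_reject(x, epsilon=0.1) for x in X_test]
kept = [(p, t) for p, t in zip(preds, y_test) if p is not None]
print('accuracy on accepted:', np.mean([p == t for p, t in kept]))
```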

Place, publisher, year, edition, pages
Proceedings of Machine Learning Research (PMLR), 2023
Series
Proceedings of Machine Learning Research, E-ISSN 2640-3498 ; 204
Keywords
Conformal prediction, Classification, Classification with reject option, Precision
National Category
Computer Sciences; Information Systems
Identifiers
urn:nbn:se:hj:diva-62787 (URN)
2-s2.0-85178665732 (Scopus ID)
Conference
Twelfth Symposium on Conformal and Probabilistic Prediction with Applications, 13-15 September 2023, Limassol, Cyprus
Funder
Knowledge Foundation
Available from: 2023-10-27. Created: 2023-10-27. Last updated: 2023-12-19. Bibliographically approved.
Johansson, U., Löfström, T., Sönströd, C. & Löfström, H. (2023). Conformal Prediction for Accuracy Guarantees in Classification with Reject Option. In: V. Torra and Y. Narukawa (Eds.), Modeling Decisions for Artificial Intelligence: 20th International Conference, MDAI 2023, Umeå, Sweden, June 19–22, 2023, Proceedings. Paper presented at the International Conference on Modeling Decisions for Artificial Intelligence, Umeå, Sweden, 19 June 2023 (pp. 133-145). Springer
2023 (English). In: Modeling Decisions for Artificial Intelligence: 20th International Conference, MDAI 2023, Umeå, Sweden, June 19–22, 2023, Proceedings / [ed] V. Torra and Y. Narukawa, Springer, 2023, p. 133-145. Conference paper, Published paper (Refereed).
Abstract [en]

A standard classifier is forced to predict the label of every test instance, even when confidence in the predictions is very low. In many scenarios, it would, however, be better to avoid making these predictions, maybe leaving them to a human expert. A classifier with that alternative is referred to as a classifier with reject option. In this paper, we propose an algorithm that, for a particular data set, automatically suggests a number of accuracy levels, which it will be able to meet perfectly, using a classifier with reject option. Since the basis of the suggested algorithm is conformal prediction, it comes with strong validity guarantees. The experimentation, using 25 publicly available two-class data sets, confirms that the algorithm obtains empirical accuracies very close to the requested levels. In addition, in an outright comparison with probabilistic predictors, including models calibrated with Platt scaling, the suggested algorithm clearly outperforms the alternatives.
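The validity guarantee referred to above is the standard, unconditional conformal one; stated as a sketch (not the paper's exact theorem):

```latex
% Standard conformal validity under exchangeability: for significance level
% $\varepsilon$, the prediction set $\Gamma^{\varepsilon}$ satisfies
\[
  \Pr\!\bigl( y_{\mathrm{test}} \notin \Gamma^{\varepsilon}(x_{\mathrm{test}}) \bigr)
  \le \varepsilon .
\]
% In the long run, at most a fraction $\varepsilon$ of all prediction sets
% exclude the true label; the paper's algorithm builds on this guarantee to
% suggest accuracy levels that a classifier with reject option can meet on
% the instances it accepts.
```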

Place, publisher, year, edition, pages
Springer, 2023
Series
Lecture Notes in Computer Science, ISSN 2366-6323, E-ISSN 2366-6331 ; 13890
Keywords
Classification (of information), Accuracy level, Conformal predictions, Data set, Human expert, Probabilistics, Scalings, Test instances, Forecasting
National Category
Information Systems
Identifiers
urn:nbn:se:hj:diva-61450 (URN)
10.1007/978-3-031-33498-6_9 (DOI)
2-s2.0-85161105564 (Scopus ID)
978-3-031-33497-9 (ISBN)
Conference
International Conference on Modeling Decisions for Artificial Intelligence, Umeå, Sweden, 19 June 2023
Available from: 2023-06-21. Created: 2023-06-21. Last updated: 2024-02-09. Bibliographically approved.
Löfström, H., Löfström, T., Johansson, U. & Sönströd, C. (2023). Investigating the impact of calibration on the quality of explanations. Annals of Mathematics and Artificial Intelligence
2023 (English). In: Annals of Mathematics and Artificial Intelligence, ISSN 1012-2443, E-ISSN 1573-7470. Article in journal (Refereed), Epub ahead of print.
Abstract [en]

Predictive models used in Decision Support Systems (DSS) are often requested to explain their reasoning to users. Explanations of instances consist of two parts: the predicted label with an associated certainty and a set of weights, one per feature, describing how each feature contributes to the prediction for the particular instance. In techniques like Local Interpretable Model-agnostic Explanations (LIME), the probability estimate from the underlying model is used as a measurement of certainty; consequently, the feature weights represent how each feature contributes to the probability estimate. It is, however, well-known that probability estimates from classifiers are often poorly calibrated, i.e., the probability estimates do not correspond to the actual probabilities of being correct. With this in mind, explanations from techniques like LIME risk becoming misleading since the feature weights will only describe how each feature contributes to the possibly inaccurate probability estimate. This paper investigates the impact of calibrating predictive models before applying LIME. The study includes 25 benchmark data sets, using Random Forest and Extreme Gradient Boosting (XGBoost) as learners and Venn-Abers and Platt scaling as calibration methods. Results from the study show that explanations of better calibrated models are themselves better calibrated, with the Expected Calibration Error (ECE) and log loss of the explanations after calibration aligning more closely with the model's ECE and log loss. The conclusion is that calibration makes the models and the explanations better by accurately representing reality.
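The study's pipeline, calibrate first and then explain, can be sketched with standard tooling: Platt scaling corresponds to scikit-learn's CalibratedClassifierCV with method='sigmoid', and LIME then explains the calibrated probability function. A minimal sketch follows; the paper's exact experimental setup may differ.

```python
import numpy as np
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, random_state=0)
X_train, X_cal, y_train, y_cal = train_test_split(X, y, test_size=0.3, random_state=0)

rf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Platt scaling on a held-out calibration set (cv='prefit' leaves rf as-is;
# newer scikit-learn versions prefer wrapping the model in FrozenEstimator).
calibrated = CalibratedClassifierCV(rf, method='sigmoid', cv='prefit').fit(X_cal, y_cal)

print('uncalibrated:', rf.predict_proba(X_cal[:1])[0])
print('calibrated:  ', calibrated.predict_proba(X_cal[:1])[0])

# LIME would then explain the *calibrated* probability function, e.g.
# (usage sketch, assuming the lime package is installed):
# from lime.lime_tabular import LimeTabularExplainer
# explainer = LimeTabularExplainer(X_train, mode='classification')
# explanation = explainer.explain_instance(X_cal[0], calibrated.predict_proba)
```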

Place, publisher, year, edition, pages
Springer, 2023
Keywords
Calibration, Decision support systems, Explainable artificial intelligence, Predicting with confidence, Uncertainty in explanations, Venn-Abers
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:hj:diva-60033 (URN)
10.1007/s10472-023-09837-2 (DOI)
000948763400001 (ISI)
2-s2.0-85149810932 (Scopus ID)
HOA;;870772 (Local ID)
Funder
Knowledge Foundation
Available from: 2023-03-27. Created: 2023-03-27. Last updated: 2023-11-08.
Johansson, U., Sönströd, C., Löfström, T. & Boström, H. (2022). Rule extraction with guarantees from regression models. Pattern Recognition, 126, Article ID 108554.
2022 (English). In: Pattern Recognition, ISSN 0031-3203, E-ISSN 1873-5142, Vol. 126, article id 108554. Article in journal (Refereed), Published.
Abstract [en]

Tools for understanding and explaining complex predictive models are critical for user acceptance and trust. One such tool is rule extraction, i.e., approximating opaque models with less powerful but interpretable models. Pedagogical (or black-box) rule extraction, where the interpretable model is induced using the original training instances, but with the predictions from the opaque model as targets, has many advantages compared to the decompositional (white-box) approach. Most importantly, pedagogical methods are agnostic to the kind of opaque model used, and any learning algorithm producing interpretable models can be employed for the learning step. The pedagogical approach has, however, one main problem, clearly limiting its utility. Specifically, while the extracted models are trained to mimic the opaque model, there are no guarantees that this will transfer to novel data. This potentially low test set fidelity must be considered a severe drawback, in particular when the extracted models are used for explanation and analysis. In this paper, a novel approach, solving the problem with test set fidelity by utilizing the conformal prediction framework, is suggested for extracting interpretable regression models from opaque models. The extracted models are standard regression trees, but augmented with valid prediction intervals in the leaves. Depending on the exact setup, the use of conformal prediction guarantees that either the test set fidelity or the test set accuracy will be equal to a preset confidence level, in the long run. In the extensive empirical investigation, using 20 publicly available data sets, the validity of the extracted models is demonstrated. In addition, it is shown how normalization can be used to provide individualized prediction intervals, thus providing highly informative extracted models.
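The recipe in the abstract, pedagogical extraction plus conformal intervals, can be sketched with scikit-learn: train the opaque model, fit an interpretable tree on its predictions, then use a calibration set to attach split-conformal fidelity intervals. The sketch below is simplified, using a single constant-width interval, whereas the paper describes normalized, individualized intervals in the leaves.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=2000, noise=10.0, random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

opaque = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Pedagogical step: the interpretable tree learns the opaque model's
# predictions, not the original targets.
tree = DecisionTreeRegressor(max_depth=4, random_state=0)
tree.fit(X_train, opaque.predict(X_train))

# Split-conformal step for fidelity: residuals of the tree against the
# opaque model's predictions on a held-out calibration set.
alpha = np.abs(opaque.predict(X_cal) - tree.predict(X_cal))
q = np.quantile(alpha, 0.95)  # 95% fidelity target (simplified quantile)

lo, hi = tree.predict(X_test) - q, tree.predict(X_test) + q
coverage = np.mean((opaque.predict(X_test) >= lo) & (opaque.predict(X_test) <= hi))
print('test-set fidelity coverage:', coverage)
```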

Place, publisher, year, edition, pages
Elsevier, 2022
Keywords
Conformal prediction, Explainable AI, Interpretability, Predictive regression, Rule extraction, Conformal mapping, Data mining, Extraction, Forecasting, Learning algorithms, Conformal predictions, Prediction interval, Predictive models, Regression modelling, Rules extraction, Test sets, Users' acceptance, Regression analysis
National Category
Computer Sciences
Identifiers
urn:nbn:se:hj:diva-55960 (URN)
10.1016/j.patcog.2022.108554 (DOI)
000761147800007 (ISI)
2-s2.0-85124506084 (Scopus ID)
HOA;;798114 (Local ID)
Funder
Knowledge Foundation, 20190194
Available from: 2022-03-02. Created: 2022-03-02. Last updated: 2022-03-29. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0009-0009-0404-2586
