Publications (10 of 75)
Löfström, T., Löfström, H. & Johansson, U. (2024). Calibrated explanations for multi-class. In: Simone Vantini, Matteo Fontana, Aldo Solari, Henrik Boström & Lars Carlsson (Eds.), Proceedings of the Thirteenth Symposium on Conformal and Probabilistic Prediction with Applications. Paper presented at The 13th Symposium on Conformal and Probabilistic Prediction with Applications, 9-11 September 2024, Politecnico di Milano, Milano, Italy (pp. 175-194). PMLR, 230
2024 (English). In: Proceedings of the Thirteenth Symposium on Conformal and Probabilistic Prediction with Applications / [ed] Simone Vantini, Matteo Fontana, Aldo Solari, Henrik Boström & Lars Carlsson, PMLR, 2024, Vol. 230, pp. 175-194. Conference paper, Published paper (Refereed)
Abstract [en]

Calibrated Explanations is a recently proposed feature importance explanation method providing uncertainty quantification. It utilises Venn-Abers to generate well-calibrated factual and counterfactual explanations for binary classification. In this paper, we extend the method to support multi-class classification. The paper includes an evaluation illustrating the calibration quality of the selected multi-class calibration approach, as well as a demonstration of how the explanations can help determine which explanations to trust.

Place, publisher, year, edition, pages
PMLR, 2024
Series
Proceedings of Machine Learning Research; 230
National Category
Computer Sciences
Identifiers
urn:nbn:se:hj:diva-66433 (URN)
Conference
The 13th Symposium on Conformal and Probabilistic Prediction with Applications, 9-11 September 2024, Politecnico di Milano, Milano, Italy
Project
PREMACOP, AFAIR, ETIAI
Research funder
KK-stiftelsen, 20220187, 20200223, 20230040
Available from: 2024-10-17 Created: 2024-10-17 Last updated: 2024-10-17. Bibliographically approved
Löfström, H., Löfström, T., Johansson, U. & Sönströd, C. (2024). Calibrated explanations: With uncertainty information and counterfactuals. Expert systems with applications, 246, Article ID 123154.
2024 (English). In: Expert Systems with Applications, ISSN 0957-4174, E-ISSN 1873-6793, Vol. 246, article id 123154. Article in journal (Refereed). Published
Abstract [en]

While local explanations for AI models can offer insights into individual predictions, such as feature importance, they are plagued by issues like instability. The unreliability of feature weights, often skewed due to poorly calibrated ML models, deepens these challenges. Moreover, the critical aspect of feature importance uncertainty remains mostly unaddressed in Explainable AI (XAI). The novel feature importance explanation method presented in this paper, called Calibrated Explanations (CE), is designed to tackle these issues head-on. Built on the foundation of Venn-Abers, CE not only calibrates the underlying model but also delivers reliable feature importance explanations with an exact definition of the feature weights. CE goes beyond conventional solutions by addressing output uncertainty. It accomplishes this by providing uncertainty quantification for both feature weights and the model’s probability estimates. Additionally, CE is model-agnostic, featuring easily comprehensible conditional rules and the ability to generate counterfactual explanations with embedded uncertainty quantification. Results from an evaluation with 25 benchmark datasets underscore the efficacy of CE, making it stand as a fast, reliable, stable, and robust solution.

Place, publisher, year, edition, pages
Elsevier, 2024
Keywords
Explainable AI, Feature Importance, Calibrated Explanations, Venn-Abers, Uncertainty Quantification, Counterfactual Explanations
National Category
Information Systems
Identifiers
urn:nbn:se:hj:diva-62864 (URN), 10.1016/j.eswa.2024.123154 (DOI), 001164089000001, 2-s2.0-85182588063 (Scopus ID), HOA;;1810433 (Local ID), HOA;;1810433 (Archive number), HOA;;1810433 (OAI)
Research funder
KK-stiftelsen, 20160035
Note

Included in doctoral thesis in manuscript form.

Available from: 2023-11-08 Created: 2023-11-08 Last updated: 2024-03-01. Bibliographically approved
Löfström, H. & Löfström, T. (2024). Conditional Calibrated Explanations: Finding a Path Between Bias and Uncertainty. In: Explainable Artificial Intelligence: Second World Conference, xAI 2024, Valletta, Malta, July 17–19, 2024, Proceedings, Part I. Paper presented at Second World Conference, xAI 2024, Valletta, Malta, July 17–19, 2024 (pp. 332-355). Springer, 2153
2024 (English). In: Explainable Artificial Intelligence: Second World Conference, xAI 2024, Valletta, Malta, July 17–19, 2024, Proceedings, Part I, Springer, 2024, Vol. 2153, pp. 332-355. Conference paper, Published paper (Refereed)
Abstract [en]

While Artificial Intelligence and Machine Learning models are becoming increasingly prevalent, it is essential to remember that they are not infallible or inherently objective. These models depend on the data they are trained on and the inherent bias of the chosen machine learning algorithm. Therefore, selecting and sampling data for training is crucial for a fair outcome of the model. A model predicting, e.g., whether an applicant should be taken further in the job application process, could create heavily biased predictions against women if the data used to train the model mostly contained information about men. The well-known concept of conditional categories used in Conformal Prediction can be utilised to address this type of bias in the data. The Conformal Prediction framework includes uncertainty quantification methods for classification and regression. To help meet the challenges of data sets with potential bias, conditional categories were incorporated into an existing explanation method called Calibrated Explanations, relying on conformal methods. This approach allows users to try out different settings while simultaneously having the possibility to study how the uncertainty in the predictions is affected on an individual level. Furthermore, this paper evaluated how the uncertainty changed when using conditional categories based on attributes containing potential bias. It showed that the uncertainty significantly increased, revealing that fairness came with a cost of increased uncertainty.

Place, publisher, year, edition, pages
Springer, 2024
Series
Communications in Computer and Information Science, ISSN 1865-0929, E-ISSN 1865-0937 ; 2153
Keywords
Bias, Calibrated Explanations, Fairness, Post-hoc Explanations, Uncertainty Quantification, XAI, Learning algorithms, Machine learning, Uncertainty analysis, Artificial intelligence learning, Calibrated explanation, Conformal predictions, Machine learning models, Post-hoc explanation, Uncertainty, Uncertainty quantifications, Forecasting
National Category
Information Systems
Identifiers
urn:nbn:se:hj:diva-66010 (URN), 10.1007/978-3-031-63787-2_17 (DOI), 2-s2.0-85200761336 (Scopus ID), 978-3-031-63786-5 (ISBN), 978-3-031-63787-2 (ISBN)
Conference
Second World Conference, xAI 2024, Valletta, Malta, July 17–19, 2024
Research funder
Region Jönköpings län, STROKE-Riksförbundet
Available from: 2024-08-20 Created: 2024-08-20 Last updated: 2024-08-20. Bibliographically approved
Pettersson, T., Riveiro, M. & Löfström, T. (2024). Multimodal fine-grained grocery product recognition using image and OCR text. Machine Vision and Applications, 35(4), Article ID 79.
2024 (English). In: Machine Vision and Applications, ISSN 0932-8092, E-ISSN 1432-1769, Vol. 35, no. 4, article id 79. Article in journal (Refereed). Published
Abstract [en]

Automatic recognition of grocery products can be used to improve customer flow at checkouts and reduce labor costs and store losses. Product recognition is, however, a challenging task for machine learning-based solutions due to the large number of products and their variations in appearance. In this work, we tackle the challenge of fine-grained product recognition by first extracting a large dataset from a grocery store containing products that are only differentiable by subtle details. Then, we propose a multimodal product recognition approach that uses product images with extracted OCR text from packages to improve fine-grained recognition of grocery products. We evaluate several image and text models separately and then combine them using different multimodal models of varying complexities. The results show that image and textual information complement each other in multimodal models and enable a classifier with greater recognition performance than unimodal models, especially when the number of training samples is limited. Therefore, this approach is suitable for many different scenarios in which product recognition is used to further improve recognition performance. The dataset can be found at https://github.com/Tubbias/finegrainocr.

Place, publisher, year, edition, pages
Springer, 2024
Keywords
Grocery product recognition, Multimodal classification, Fine-grained recognition, Optical character recognition
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:hj:diva-64774 (URN), 10.1007/s00138-024-01549-9 (DOI), 001243616100001, 2-s2.0-85195555790 (Scopus ID), HOA;;955228 (Local ID), HOA;;955228 (Archive number), HOA;;955228 (OAI)
Research funder
Vetenskapsrådet, 2018-05973
Available from: 2024-06-10 Created: 2024-06-10 Last updated: 2025-02-07. Bibliographically approved
Uddin, N. & Löfström, T. (2023). Applications of Conformal Regression on Real-world Industrial Use Cases using Crepes and MAPIE. In: H. Papadopoulos, K. A. Nguyen, H. Boström & L. Carlsson (Eds.), Proceedings of the Twelfth Symposium on Conformal and Probabilistic Prediction with Applications. Paper presented at Twelfth Symposium on Conformal and Probabilistic Prediction with Applications, 13-15 September 2023, Limassol, Cyprus (pp. 147-165). Proceedings of Machine Learning Research (PMLR), 204
2023 (English). In: Proceedings of the Twelfth Symposium on Conformal and Probabilistic Prediction with Applications / [ed] H. Papadopoulos, K. A. Nguyen, H. Boström & L. Carlsson, Proceedings of Machine Learning Research (PMLR), 2023, Vol. 204, pp. 147-165. Conference paper, Published paper (Refereed)
Abstract [en]

Applying conformal prediction in real-world industrial use cases is rare, and publications are often limited to popular open-source data sets. This paper demonstrates two experimental use cases where the conformal prediction framework was applied to regression problems at Husqvarna Group with the two Python-based open-source platforms MAPIE and Crepes. The paper concludes by discussing lessons learned for the industry and some challenges for the conformal prediction community to address.
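The core idea shared by the crepes and MAPIE platforms used in the paper, split conformal regression, can be sketched in a few lines of plain Python. This is an illustrative stand-in with toy data, not the Husqvarna use cases or either library's API:

```python
import math

def conformal_interval(predict, X_cal, y_cal, x_new, alpha=0.1):
    """Split conformal regression: an interval with coverage >= 1 - alpha."""
    # Nonconformity scores: absolute residuals on a held-out calibration set.
    scores = sorted(abs(y - predict(x)) for x, y in zip(X_cal, y_cal))
    n = len(scores)
    # Conformal quantile: the ceil((n + 1)(1 - alpha))-th smallest score.
    k = min(n - 1, math.ceil((n + 1) * (1 - alpha)) - 1)
    q = scores[k]
    y_hat = predict(x_new)
    return y_hat - q, y_hat + q

# Illustrative "fitted model" and calibration data.
predict = lambda x: 2.0 * x
X_cal = [float(i) for i in range(1, 101)]
y_cal = [2.0 * x + ((-1) ** i) * 0.5 for i, x in enumerate(X_cal)]
lo, hi = conformal_interval(predict, X_cal, y_cal, x_new=10.0, alpha=0.1)
# The interval is centred on the point prediction 2 * 10 = 20.
```

In crepes and MAPIE the same recipe is wrapped around arbitrary scikit-learn regressors, with refinements such as normalized scores and Mondrian categories.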

Place, publisher, year, edition, pages
Proceedings of Machine Learning Research (PMLR), 2023
Series
Proceedings of Machine Learning Research, E-ISSN 2640-3498 ; 204
Keywords
Crepes, MAPIE, conformal regression, EnbPI, demand prediction, injection molding, manufacturing analytics, supply-chain, conformal predictive system
National Category
Computer Sciences; Information Systems
Identifiers
urn:nbn:se:hj:diva-62789 (URN), 2-s2.0-85178659649 (Scopus ID)
Conference
Twelfth Symposium on Conformal and Probabilistic Prediction with Applications, 13-15 September 2023, Limassol, Cyprus
Research funder
KK-stiftelsen
Available from: 2023-10-27 Created: 2023-10-27 Last updated: 2023-12-19. Bibliographically approved
Johansson, U., Sönströd, C., Löfström, T. & Boström, H. (2023). Confidence Classifiers with Guaranteed Accuracy or Precision. In: H. Papadopoulos, K. A. Nguyen, H. Boström & L. Carlsson (Eds.), Proceedings of the Twelfth Symposium on Conformal and Probabilistic Prediction with Applications. Paper presented at Twelfth Symposium on Conformal and Probabilistic Prediction with Applications, 13-15 September 2023, Limassol, Cyprus (pp. 513-533). Proceedings of Machine Learning Research (PMLR), 204
2023 (English). In: Proceedings of the Twelfth Symposium on Conformal and Probabilistic Prediction with Applications / [ed] H. Papadopoulos, K. A. Nguyen, H. Boström & L. Carlsson, Proceedings of Machine Learning Research (PMLR), 2023, Vol. 204, pp. 513-533. Conference paper, Published paper (Refereed)
Abstract [en]

In many situations, probabilistic predictors have replaced conformal classifiers. The main reason is arguably that the set predictions of conformal classifiers, with the accompanying significance level, are hard to interpret. In this paper, we demonstrate how conformal classification can be used as a basis for a classifier with reject option. Specifically, we introduce and evaluate two algorithms that are able to perfectly estimate accuracy or precision for a set of test instances, in a classifier with reject scenario. In the empirical investigation, the suggested algorithms are shown to clearly outperform both calibrated and uncalibrated probabilistic predictors.
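The connection between conformal classification and a reject option can be sketched as follows: a conformal classifier emits a label set per instance, and a reject-option classifier answers only when that set is a singleton. This is a minimal illustration of the idea, not the paper's two algorithms; the scores and the significance level are invented for the example:

```python
def p_value(cal_scores, score):
    # Conformal p-value: fraction of calibration scores at least as
    # nonconforming as the test score (counting the test point itself).
    n = len(cal_scores)
    return (sum(1 for s in cal_scores if s >= score) + 1) / (n + 1)

def predict_or_reject(cal_scores_by_class, test_scores_by_class, epsilon=0.2):
    # Keep every label whose p-value exceeds the significance level;
    # answer only if exactly one label survives, otherwise reject (None).
    kept = [label for label, score in test_scores_by_class.items()
            if p_value(cal_scores_by_class[label], score) > epsilon]
    return kept[0] if len(kept) == 1 else None

# Illustrative nonconformity scores (e.g., 1 - estimated class probability).
cal = {"pos": [0.1, 0.2, 0.15, 0.3, 0.25], "neg": [0.2, 0.1, 0.3, 0.25, 0.15]}
confident = predict_or_reject(cal, {"pos": 0.1, "neg": 0.9})  # only "pos" survives
ambiguous = predict_or_reject(cal, {"pos": 0.2, "neg": 0.2})  # both survive -> reject
```

The conformal validity guarantee is what makes the accuracy of the non-rejected predictions controllable, which is the property the paper's algorithms exploit.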

Place, publisher, year, edition, pages
Proceedings of Machine Learning Research (PMLR), 2023
Series
Proceedings of Machine Learning Research, E-ISSN 2640-3498 ; 204
Keywords
Conformal prediction, Classification, Classification with reject option, Precision
National Category
Computer Sciences; Information Systems
Identifiers
urn:nbn:se:hj:diva-62787 (URN), 2-s2.0-85178665732 (Scopus ID)
Conference
Twelfth Symposium on Conformal and Probabilistic Prediction with Applications, 13-15 September 2023, Limassol, Cyprus
Research funder
KK-stiftelsen
Available from: 2023-10-27 Created: 2023-10-27 Last updated: 2023-12-19. Bibliographically approved
Johansson, U., Löfström, T., Sönströd, C. & Löfström, H. (2023). Conformal Prediction for Accuracy Guarantees in Classification with Reject Option. In: V. Torra and Y. Narukawa (Eds.), Modeling Decisions for Artificial Intelligence: 20th International Conference, MDAI 2023, Umeå, Sweden, June 19–22, 2023, Proceedings. Paper presented at International Conference on Modeling Decisions for Artificial Intelligence, Umeå, Sweden, 19 June 2023 (pp. 133-145). Springer
2023 (English). In: Modeling Decisions for Artificial Intelligence: 20th International Conference, MDAI 2023, Umeå, Sweden, June 19–22, 2023, Proceedings / [ed] V. Torra and Y. Narukawa, Springer, 2023, pp. 133-145. Conference paper, Published paper (Refereed)
Abstract [en]

A standard classifier is forced to predict the label of every test instance, even when confidence in the predictions is very low. In many scenarios, it would, however, be better to avoid making these predictions, maybe leaving them to a human expert. A classifier with that alternative is referred to as a classifier with reject option. In this paper, we propose an algorithm that, for a particular data set, automatically suggests a number of accuracy levels, which it will be able to meet perfectly, using a classifier with reject option. Since the basis of the suggested algorithm is conformal prediction, it comes with strong validity guarantees. The experimentation, using 25 publicly available two-class data sets, confirms that the algorithm obtains empirical accuracies very close to the requested levels. In addition, in an outright comparison with probabilistic predictors, including models calibrated with Platt scaling, the suggested algorithm clearly outperforms the alternatives.

Place, publisher, year, edition, pages
Springer, 2023
Series
Lecture Notes in Computer Science, ISSN 2366-6323, E-ISSN 2366-6331 ; 13890
Keywords
Classification (of information), Accuracy level, Conformal predictions, Data set, Human expert, Probabilistics, Scalings, Test instances, Forecasting
National Category
Information Systems
Identifiers
urn:nbn:se:hj:diva-61450 (URN), 10.1007/978-3-031-33498-6_9 (DOI), 2-s2.0-85161105564 (Scopus ID), 978-3-031-33497-9 (ISBN)
Conference
International Conference on Modeling Decisions for Artificial Intelligence Umeå, Sweden 19 June 2023
Available from: 2023-06-21 Created: 2023-06-21 Last updated: 2024-02-09. Bibliographically approved
Johansson, U., Löfström, T. & Boström, H. (2023). Conformal Predictive Distribution Trees. Annals of Mathematics and Artificial Intelligence
2023 (English). In: Annals of Mathematics and Artificial Intelligence, ISSN 1012-2443, E-ISSN 1573-7470. Article in journal (Refereed). Epub ahead of print
Abstract [en]

Being able to understand the logic behind predictions or recommendations on the instance level is at the heart of trustworthy machine learning models. Inherently interpretable models make this possible by allowing inspection and analysis of the model itself, thus exhibiting the logic behind each prediction, while providing an opportunity to gain insights about the underlying domain. Another important criterion for trustworthiness is the model’s ability to somehow communicate a measure of confidence in every specific prediction or recommendation. Indeed, the overall goal of this paper is to produce highly informative models that combine interpretability and algorithmic confidence. For this purpose, we introduce conformal predictive distribution trees, which is a novel form of regression trees where each leaf contains a conformal predictive distribution. Using this representation language, the proposed approach allows very versatile analyses of individual leaves in the regression trees. Specifically, depending on the chosen level of detail, the leaves, in addition to the normal point predictions, can provide either cumulative distributions or prediction intervals that are guaranteed to be well-calibrated. In the empirical evaluation, the suggested conformal predictive distribution trees are compared to the well-established conformal regressors, thus demonstrating the benefits of the enhanced representation.

Place, publisher, year, edition, pages
Springer, 2023
Keywords
Conformal predictive distributions, Conformal regression, Interpretability, Regression trees
National Category
Computer Sciences
Identifiers
urn:nbn:se:hj:diva-61037 (URN), 10.1007/s10472-023-09847-0 (DOI), 000999966600001, 2-s2.0-85160848450 (Scopus ID), HOA;;884987 (Local ID), HOA;;884987 (Archive number), HOA;;884987 (OAI)
Research funder
KK-stiftelsen, 20200223
Available from: 2023-06-12 Created: 2023-06-12 Last updated: 2023-06-16
Pettersson, T., Riveiro, M. & Löfström, T. (2023). Explainable local and global models for fine-grained multimodal product recognition. Paper presented at Multimodal KDD 2023, International Workshop on Multimodal Learning, in conjunction with 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2023), August 6–10, 2023, Long Beach, CA, USA.
2023 (English). Conference paper, Published paper (Refereed)
Abstract [en]

Grocery product recognition techniques are emerging in the retail sector and are used to provide automatic checkout counters, reduce self-checkout fraud, and support inventory management. However, recognizing grocery products using machine learning models is challenging due to the vast number of products, their similarities, and changes in appearance. To address these challenges, more complex models are created by adding additional modalities, such as text from product packages. But these complex models pose additional challenges in terms of model interpretability. Machine learning experts and system developers need tools and techniques that convey interpretations to enable the evaluation and improvement of multimodal product recognition models.

In this work, we thus propose an approach to provide local and global explanations that allow us to assess multimodal models for product recognition. We evaluate this approach on a large fine-grained grocery product dataset captured from a real-world environment. To assess the utility of our approach, experiments are conducted for three types of multimodal models.

The results show that our approach provides fine-grained local explanations while being able to aggregate those into global explanations for each type of product. In addition, we observe a disparity between different multimodal models in the types of features they learn and the modality each model focuses on. This provides valuable insight to further improve the accuracy and robustness of multimodal product recognition models for grocery product recognition.

Keywords
Multimodal classification, Explainable AI, Grocery product recognition, LIME, Fine-grained recognition, Optical character recognition
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:hj:diva-62382 (URN)
Conference
Multimodal KDD 2023, International Workshop on Multimodal Learning, in conjunction with 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2023), August 6–10, 2023, Long Beach, CA, USA
Available from: 2023-09-04 Created: 2023-09-04 Last updated: 2025-02-07. Bibliographically approved
Löfström, H., Löfström, T., Johansson, U. & Sönströd, C. (2023). Investigating the impact of calibration on the quality of explanations. Annals of Mathematics and Artificial Intelligence
2023 (English). In: Annals of Mathematics and Artificial Intelligence, ISSN 1012-2443, E-ISSN 1573-7470. Article in journal (Refereed). Epub ahead of print
Abstract [en]

Predictive models used in Decision Support Systems (DSS) are often requested to explain the reasoning to users. Explanations of instances consist of two parts: the predicted label with an associated certainty and a set of weights, one per feature, describing how each feature contributes to the prediction for the particular instance. In techniques like Local Interpretable Model-agnostic Explanations (LIME), the probability estimate from the underlying model is used as a measurement of certainty; consequently, the feature weights represent how each feature contributes to the probability estimate. It is, however, well-known that probability estimates from classifiers are often poorly calibrated, i.e., the probability estimates do not correspond to the actual probabilities of being correct. With this in mind, explanations from techniques like LIME risk becoming misleading since the feature weights will only describe how each feature contributes to the possibly inaccurate probability estimate. This paper investigates the impact of calibrating predictive models before applying LIME. The study includes 25 benchmark data sets, using Random forest and Extreme Gradient Boosting (xGBoost) as learners and Venn-Abers and Platt scaling as calibration methods. Results from the study show that explanations of better calibrated models are themselves better calibrated, with the ECE and log loss of the explanations after calibration aligning more closely with the ECE and log loss of the model. The conclusion is that calibration makes the models and the explanations better by accurately representing reality.
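The ECE metric reported in the study can be computed by binning predictions by confidence and comparing per-bin accuracy with per-bin confidence. A minimal sketch with equal-width bins and invented toy numbers (not the paper's data or exact implementation):

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: weighted average of |accuracy - mean confidence| over bins."""
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        low, high = b / n_bins, (b + 1) / n_bins
        # Half-open bins (low, high]; put exact zeros in the first bin.
        idx = [i for i, c in enumerate(confidences)
               if low < c <= high or (b == 0 and c == 0.0)]
        if not idx:
            continue
        acc = sum(correct[i] for i in idx) / len(idx)
        conf = sum(confidences[i] for i in idx) / len(idx)
        ece += (len(idx) / n) * abs(acc - conf)
    return ece

# Perfectly calibrated toy case: 80% confidence, 80% observed accuracy.
confs = [0.8] * 10
hits = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0]
ece = expected_calibration_error(confs, hits)
```

A well-calibrated model drives this quantity toward zero, which is why the paper uses it to compare explanations before and after calibration.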

Place, publisher, year, edition, pages
Springer, 2023
Keywords
Calibration, Decision support systems, Explainable artificial intelligence, Predicting with confidence, Uncertainty in explanations, Venn Abers
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:hj:diva-60033 (URN), 10.1007/s10472-023-09837-2 (DOI), 000948763400001, 2-s2.0-85149810932 (Scopus ID), HOA;;870772 (Local ID), HOA;;870772 (Archive number), HOA;;870772 (OAI)
Research funder
KK-stiftelsen
Available from: 2023-03-27 Created: 2023-03-27 Last updated: 2023-11-08
Identifiers
ORCID iD: orcid.org/0000-0003-0274-9026
