Conditional Calibrated Explanations: Finding a Path Between Bias and Uncertainty
Löfström, Helena. Jönköping University, Internationella Handelshögskolan. ORCID iD: 0000-0001-9633-0423
Löfström, Tuwe. Jönköping University, Tekniska Högskolan, JTH, Avdelningen för datavetenskap, Jönköping AI Lab (JAIL). ORCID iD: 0000-0003-0274-9026
2024 (English). In: Explainable Artificial Intelligence: Second World Conference, xAI 2024, Valletta, Malta, July 17–19, 2024, Proceedings, Part I, Springer, 2024, Vol. 2153, pp. 332-355. Conference paper, published paper (peer-reviewed)
Abstract [en]

While Artificial Intelligence and Machine Learning models are becoming increasingly prevalent, it is essential to remember that they are not infallible or inherently objective. These models depend on the data they are trained on and the inherent bias of the chosen machine learning algorithm. Therefore, selecting and sampling data for training is crucial for a fair outcome of the model. A model predicting, e.g., whether an applicant should be taken further in the job application process, could create heavily biased predictions against women if the data used to train the model mostly contained information about men. The well-known concept of conditional categories used in Conformal Prediction can be utilised to address this type of bias in the data. The Conformal Prediction framework includes uncertainty quantification methods for classification and regression. To help meet the challenges of data sets with potential bias, conditional categories were incorporated into an existing explanation method called Calibrated Explanations, relying on conformal methods. This approach allows users to try out different settings while simultaneously having the possibility to study how the uncertainty in the predictions is affected on an individual level. Furthermore, this paper evaluated how the uncertainty changed when using conditional categories based on attributes containing potential bias. It showed that the uncertainty significantly increased, revealing that fairness came with a cost of increased uncertainty.
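
The sketch below is not the authors' Calibrated Explanations implementation; it is a minimal, self-contained illustration of the underlying idea of conditional categories in the Mondrian conformal prediction sense: nonconformity scores are calibrated separately per category of a potentially biased attribute, so coverage holds for each group, typically at the cost of wider (more uncertain) prediction sets for under-represented groups. The synthetic data, the sensitive column, and all names and parameters are illustrative assumptions.

```python
# Minimal sketch of conditional (Mondrian) conformal classification.
# NOT the authors' Calibrated Explanations code; data and names are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic binary task with an imbalanced "sensitive" attribute (roughly 80/20).
n = 4000
sensitive = (rng.random(n) < 0.2).astype(int)            # 1 = under-represented group
X = np.column_stack([rng.normal(size=(n, 4)), sensitive])
y = (X[:, 0] + 0.5 * sensitive + rng.normal(scale=1.0, size=n) > 0).astype(int)

X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

def nonconformity(probs, labels):
    """Hinge-style nonconformity: 1 - predicted probability of the true label."""
    return 1.0 - probs[np.arange(len(labels)), labels]

cal_scores = nonconformity(model.predict_proba(X_cal), y_cal)
cal_groups = X_cal[:, -1].astype(int)   # conditional categories = sensitive attribute

def prediction_set(x, alpha=0.1, conditional=True):
    """Conformal prediction set for one example.

    With conditional=True the calibration quantile is computed only from
    calibration examples in the same category (Mondrian conformal prediction),
    so coverage holds per group; the price is larger, i.e. more uncertain,
    prediction sets for small or noisy groups.
    """
    probs = model.predict_proba(x.reshape(1, -1))[0]
    group = int(x[-1])
    scores = cal_scores[cal_groups == group] if conditional else cal_scores
    # Finite-sample conformal quantile of the calibration scores.
    k = int(np.ceil((len(scores) + 1) * (1 - alpha)))
    q = np.inf if k > len(scores) else np.sort(scores)[k - 1]
    return [label for label in (0, 1) if 1.0 - probs[label] <= q]

# Compare average prediction-set size (a proxy for uncertainty) per group.
for conditional in (False, True):
    sizes = [len(prediction_set(x, conditional=conditional)) for x in X_test]
    for g in (0, 1):
        group_sizes = [s for s, x in zip(sizes, X_test) if int(x[-1]) == g]
        print(f"conditional={conditional}, group={g}: mean set size {np.mean(group_sizes):.2f}")
```

Running a sketch like this typically shows larger average set sizes for the under-represented group once calibration is done per category, which mirrors the abstract's point that fairness obtained through conditioning comes at the cost of increased uncertainty.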

Place, publisher, year, edition, pages
Springer, 2024. Vol. 2153, pp. 332-355
Series
Communications in Computer and Information Science, ISSN 1865-0929, E-ISSN 1865-0937 ; 2153
Keywords [en]
Bias, Calibrated Explanations, Fairness, Post-hoc Explanations, Uncertainty Quantification, XAI, Learning algorithms, Machine learning, Uncertainty analysis, Artificial intelligence learning, Calibrated explanation, Conformal predictions, Machine learning models, Post-hoc explanation, Uncertainty, Uncertainty quantifications, Forecasting
HSV category
Identifiers
URN: urn:nbn:se:hj:diva-66010
DOI: 10.1007/978-3-031-63787-2_17
Scopus ID: 2-s2.0-85200761336
ISBN: 978-3-031-63786-5 (print)
ISBN: 978-3-031-63787-2 (electronic)
OAI: oai:DiVA.org:hj-66010
DiVA id: diva2:1890740
Conference
Second World Conference, xAI 2024, Valletta, Malta, July 17–19, 2024
Funder
Region Jönköping County; The Swedish Stroke Association
Available from: 2024-08-20 Created: 2024-08-20 Last updated: 2024-08-20 Bibliographically approved

Open Access in DiVA

Full text is not available in DiVA
