Assessing Explanation Quality by Venn Prediction
School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Sweden.
Jönköping University, School of Engineering, JTH, Department of Computing, Jönköping AI Lab (JAIL).ORCID iD: 0000-0003-0412-6199
2022 (English). In: Proceedings of the Eleventh Symposium on Conformal and Probabilistic Prediction with Applications, 24-26 August 2022, Brighton, UK / [ed] U. Johansson, H. Boström, K. A. Nguyen, Z. Luo & L. Carlsson, ML Research Press, 2022, Vol. 179, p. 42-54. Conference paper, Published paper (Refereed)
Abstract [en]

Rules output by explainable machine learning techniques naturally come with a degree of uncertainty, as the complex functionality of the underlying black-box model can often be difficult to approximate by a single, interpretable rule. However, the uncertainty of these approximations is not properly quantified by current explanatory techniques. The use of Venn prediction is here proposed and investigated as a means to quantify the uncertainty of the explanations, thereby also allowing competing explanation techniques to be evaluated with respect to their relative uncertainty. A number of uncertainty-based metrics of rule explanation quality are proposed and discussed, including metrics that capture the tendency of the explanations to predict the correct outcome of a black-box model on new instances, how informative (tight) the produced intervals are, and how certain a rule is when predicting one class. An empirical investigation is presented in which explanations produced by the state-of-the-art technique Anchors are compared to explanatory rules obtained from association rule mining. The results suggest that the association rule mining approach may provide explanations with less uncertainty towards the correct label, as predicted by the black-box model, compared to Anchors. The results also show that the explanatory rules obtained through association rule mining result in tighter intervals and probabilities closer to either one or zero compared to Anchors, i.e., they are more certain towards a specific class label.
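The core idea sketched in the abstract can be illustrated with a minimal Venn predictor for a single explanatory rule. This is an illustrative sketch only, not the paper's implementation: the taxonomy (grouping calibration instances by whether the rule fires), the helper names, and the toy data are all assumptions. For each hypothesized label of a new instance, the instance is tentatively added to its category and the empirical label frequency recorded; the spread of these frequencies yields the probability interval whose tightness the paper's metrics assess.

```python
# Minimal Venn-predictor sketch for one binary explanatory rule.
# Assumptions (not from the paper): the taxonomy puts a test instance in the
# category of calibration instances on which the rule fires the same way;
# `rule_fires` and the calibration data below are purely illustrative.

def venn_interval(calibration, rule_fires, x):
    """Return a (lower, upper) probability interval for label 1.

    calibration: list of (instance, label) pairs with labels in {0, 1}
    rule_fires:  predicate mapping an instance to True/False
    x:           new instance to be explained
    """
    # Keep calibration labels from the same taxonomy category as x.
    category = [y for (z, y) in calibration if rule_fires(z) == rule_fires(x)]
    freqs = []
    for hypothetical_label in (0, 1):
        # Tentatively assign each possible label to x and record the
        # resulting empirical frequency of label 1 in the category.
        labels = category + [hypothetical_label]
        freqs.append(sum(labels) / len(labels))
    return min(freqs), max(freqs)

# Toy usage: the rule fires when the first feature is positive.
calib = [((1,), 1), ((2,), 1), ((3,), 0), ((-1,), 0)]
fires = lambda z: z[0] > 0
low, high = venn_interval(calib, fires, (5,))
print(low, high)  # -> 0.5 0.75
```

A tight interval (small `high - low`) close to 0 or 1 corresponds to the "informative" and "certain" explanations the abstract's metrics reward.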

Place, publisher, year, edition, pages
ML Research Press, 2022. Vol. 179, p. 42-54
Series
Proceedings of Machine Learning Research, E-ISSN 2640-3498 ; 179
Keywords [en]
Venn prediction, Explainable machine learning, Rule mining
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:hj:diva-58686
Scopus ID: 2-s2.0-85164705716
OAI: oai:DiVA.org:hj-58686
DiVA id: diva2:1705500
Conference
11th Symposium on Conformal and Probabilistic Prediction with Applications, 24-26 August 2022, Brighton, UK
Available from: 2022-10-24 Created: 2022-10-24 Last updated: 2023-08-17. Bibliographically approved

Open Access in DiVA

No full text in DiVA

Other links

Scopus (full-text)

Authority records

Johansson, Ulf

Search in DiVA

By author/editor
Johansson, Ulf
By organisation
Jönköping AI Lab (JAIL)
Computer Sciences

Search outside of DiVA

Google / Google Scholar
