Well-Calibrated and Sharp Interpretable Multi-Class Models
2021 (English). In: Lecture Notes in Computer Science: Modeling Decisions for Artificial Intelligence / [ed] V. Torra & Y. Narukawa, Springer Science and Business Media Deutschland GmbH, 2021, Vol. 12898, p. 193-204. Conference paper, Published paper (Refereed)
Abstract [en]
Interpretable models make it possible to understand individual predictions and are, in many domains, considered mandatory for user acceptance and trust. When coupled with communicated algorithmic confidence, interpretable models become even more informative, since the confidence expressed by the models in different predictions can be assessed and compared. To earn a user's appropriate trust, however, the communicated algorithmic confidence must also be well-calibrated. In this paper, we suggest a novel way of extending Venn-Abers predictors to multi-class problems. The approach is applied to decision trees, providing well-calibrated probability intervals in the leaves. The result is a single interpretable model with valid and sharp probability intervals, ready for inspection and analysis. In the experiments, the proposed method is evaluated on 20 publicly available data sets, showing that the generated models are indeed well-calibrated.
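For intuition, the sketch below implements a plain multi-class Venn predictor that uses decision-tree leaves as the Venn taxonomy. This is a minimal illustration of the general idea, not the authors' exact Venn-Abers construction: the test object is tentatively assigned every possible class label in turn, and the spread of the resulting empirical label distributions in its leaf yields a lower/upper probability per class. All identifiers, the data set, and the train/calibration split are illustrative assumptions.

```python
# Minimal sketch of a multi-class Venn predictor with decision-tree
# leaves as the Venn taxonomy. Hypothetical illustration only; the
# paper's multi-class Venn-Abers method may differ in its details.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

tree = DecisionTreeClassifier(min_samples_leaf=20, random_state=0).fit(X_train, y_train)
classes = np.unique(y_train)

# Assign each calibration example to a leaf (its Venn category).
cal_leaves = tree.apply(X_cal)

def venn_intervals(x):
    """Lower/upper probability per class for one test object x."""
    leaf = tree.apply(x.reshape(1, -1))[0]
    in_leaf = y_cal[cal_leaves == leaf]
    lower = np.ones(len(classes))
    upper = np.zeros(len(classes))
    # Tentatively label the test object with each possible class and
    # recompute the empirical label distribution inside its leaf; the
    # interval per class spans the min and max over these distributions.
    for y_hyp in classes:
        labels = np.append(in_leaf, y_hyp)
        dist = np.array([(labels == c).mean() for c in classes])
        lower = np.minimum(lower, dist)
        upper = np.maximum(upper, dist)
    return lower, upper

lo, hi = venn_intervals(X_test[0])
for c, l, h in zip(classes, lo, hi):
    print(f"class {c}: [{l:.3f}, {h:.3f}]")
```

In this construction the interval width in a leaf with n calibration examples is roughly 1/(n+1), so better-populated leaves yield sharper (narrower) intervals, which gives a rough sense of the validity/sharpness trade-off the abstract refers to.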
Place, publisher, year, edition, pages
Springer Science and Business Media Deutschland GmbH, 2021. Vol. 12898, p. 193-204
Keywords [en]
Algorithmics, Data set, Individual prediction, Multi-class models, Multiclass problem, Probability intervals, Users' acceptance, Decision trees
National Category
Computer and Information Sciences
Identifiers
URN: urn:nbn:se:hj:diva-54806
DOI: 10.1007/978-3-030-85529-1_16
Scopus ID: 2-s2.0-85115844466
ISBN: 9783030855284 (print)
ISBN: 9783030855291 (electronic)
OAI: oai:DiVA.org:hj-54806
DiVA, id: diva2:1600198
Conference
18th International Conference on Modeling Decisions for Artificial Intelligence (MDAI 2021), Umeå, Sweden, September 27–30, 2021
Funder
Knowledge Foundation, DATAKIND 20190194
Available from: 2021-10-04. Created: 2021-10-04. Last updated: 2021-10-04. Bibliographically approved.