Model-agnostic nonconformity functions for conformal classification
Jönköping University, School of Engineering, JTH, Computer Science and Informatics, JTH, Jönköping AI Lab (JAIL). Department of Information Technology, University of Borås, Sweden. ORCID iD: 0000-0003-0412-6199
Department of Information Technology, University of Borås, Sweden.
Jönköping University, School of Engineering, JTH, Computer Science and Informatics, JTH, Jönköping AI Lab (JAIL). Department of Information Technology, University of Borås, Sweden. ORCID iD: 0000-0003-0274-9026
Department of Computer and Systems Sciences, Stockholm University, Sweden.
2017 (English). In: Proceedings of the International Joint Conference on Neural Networks, IEEE, 2017, p. 2072-2079. Conference paper, Published paper (Refereed)
Abstract [en]

A conformal predictor outputs prediction regions; for classification, these are label sets. The key property of all conformal predictors is that they are valid, i.e., their error rate on novel data is bounded by a preset significance level. Thus, the key performance metric for evaluating conformal predictors is the size of the output prediction regions, where smaller (more informative) prediction regions are said to be more efficient. All conformal predictors rely on nonconformity functions, measuring the strangeness of an input-output pair, and the efficiency depends critically on the quality of the chosen nonconformity function. In this paper, three model-agnostic nonconformity functions, based on well-known loss functions, are evaluated with regard to how they affect efficiency. In experiments on 21 publicly available multi-class data sets, both single neural networks and ensembles of neural networks are used as underlying models for conformal classifiers. The results show that the choice of nonconformity function has a major impact on efficiency, but also that different nonconformity functions should be used depending on the exact efficiency metric. For a high fraction of single-label predictions, a margin-based nonconformity function was the best option, while a nonconformity function based on the hinge loss obtained the smallest label sets on average.
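To make the setup in the abstract concrete, the sketch below (not taken from the paper) shows how hinge- and margin-style nonconformity functions can be computed from an underlying model's class-probability estimates and plugged into a standard inductive conformal classifier that includes a label in the prediction set whenever its p-value exceeds the preset significance level. All function names and the epsilon parameter are illustrative assumptions, not the authors' implementation; the paper's third nonconformity function is not named in the abstract and is therefore omitted here.

```python
import numpy as np

def hinge_nonconformity(probs, y):
    # probs: (n, k) class-probability estimates; y: (n,) integer labels.
    # Hinge-style score: 1 minus the probability assigned to the (hypothesised) true class.
    return 1.0 - probs[np.arange(len(y)), y]

def margin_nonconformity(probs, y):
    # Margin-style score: best competing class probability minus the true-class probability.
    p_true = probs[np.arange(len(y)), y]
    competing = probs.copy()
    competing[np.arange(len(y)), y] = -np.inf
    return competing.max(axis=1) - p_true

def prediction_sets(cal_probs, cal_y, test_probs, nonconformity, epsilon=0.1):
    # Inductive conformal classification: a label enters the prediction set
    # when its p-value against the calibration scores exceeds epsilon.
    cal_scores = nonconformity(cal_probs, cal_y)
    n_cal = len(cal_scores)
    sets = []
    for probs in test_probs:
        labels = []
        for label in range(probs.shape[0]):
            score = nonconformity(probs[None, :], np.array([label]))[0]
            p_value = (np.sum(cal_scores >= score) + 1) / (n_cal + 1)
            if p_value > epsilon:
                labels.append(label)
        sets.append(labels)
    return sets
```

Validity follows from the p-value construction regardless of which nonconformity function is used; what the paper compares is efficiency, e.g., the average label-set size and the fraction of single-label predictions produced by each choice.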

Place, publisher, year, edition, pages
IEEE, 2017. p. 2072-2079
Keywords [en]
Classification, Conformal prediction, Neural networks, Efficiency, Forecasting, Classification labels, Conformal predictions, Conformal predictors, Label predictions, Loss functions, Performance metrics, Significance levels, Single neural, Classification (of information)
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:hj:diva-38112
DOI: 10.1109/IJCNN.2017.7966105
ISI: 000426968702043
Scopus ID: 2-s2.0-85031028048
ISBN: 9781509061815 (print)
OAI: oai:DiVA.org:hj-38112
DiVA, id: diva2:1163950
Conference
2017 International Joint Conference on Neural Networks, IJCNN 2017, 14 May 2017 through 19 May 2017
Available from: 2017-12-08. Created: 2017-12-08. Last updated: 2019-08-22. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Johansson, Ulf; Löfström, Tuwe
