Are Traditional Neural Networks Well-Calibrated?
Jönköping University, School of Engineering, JTH, Department of Computing, Jönköping AI Lab (JAIL). ORCID iD: 0000-0003-0412-6199
Dept. of Information Technology, University of Borås, Sweden.
2019 (English). In: Proceedings of the International Joint Conference on Neural Networks, IEEE, 2019, Vol. July, article id 8851962. Conference paper, Published paper (Refereed).
Abstract [en]

Traditional neural networks are generally considered to be well-calibrated. Consequently, the established best practice is not to try to improve the calibration using general techniques like Platt scaling. In this paper, it is demonstrated, using 25 publicly available two-class data sets, that both single multilayer perceptrons and ensembles of multilayer perceptrons are in fact often poorly calibrated. Furthermore, the experimental results make it clear that calibration can be significantly improved by using either Platt scaling or Venn-Abers predictors. These results stand in sharp contrast to the standard recommendations for using neural networks as probabilistic classifiers. The empirical investigation also shows that for bagged ensembles it is beneficial to calibrate on the out-of-bag instances, even though this means using substantially smaller ensembles for the predictions. Finally, a direct comparison between Platt scaling and Venn-Abers predictors shows that the latter most often produced significantly better calibration, especially when calibrated on out-of-bag instances.
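
For readers who want to try the general idea, the following is a minimal sketch (not the authors' experimental code) of Platt scaling applied to a multilayer perceptron using scikit-learn, where CalibratedClassifierCV with method='sigmoid' implements Platt scaling on held-out folds. The synthetic data set, network size, and fold count are illustrative assumptions, not the paper's setup; Venn-Abers predictors are not part of scikit-learn and are omitted here.

```python
# Sketch: Platt scaling of an MLP classifier with scikit-learn.
# Assumes a synthetic two-class data set as a stand-in for the
# paper's 25 publicly available data sets.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.calibration import CalibratedClassifierCV
from sklearn.metrics import brier_score_loss

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Uncalibrated MLP as the baseline.
mlp = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=0)
mlp.fit(X_train, y_train)

# Platt scaling: method='sigmoid' fits a logistic calibrator on
# out-of-fold predictions (cv=5), then averages the calibrated models.
calibrated = CalibratedClassifierCV(
    MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=0),
    method='sigmoid', cv=5)
calibrated.fit(X_train, y_train)

# Brier score: lower values indicate better-calibrated probabilities.
print('raw MLP     :', brier_score_loss(y_test, mlp.predict_proba(X_test)[:, 1]))
print('Platt-scaled:', brier_score_loss(y_test, calibrated.predict_proba(X_test)[:, 1]))
```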

Place, publisher, year, edition, pages
IEEE, 2019. Vol. July, article id 8851962
Keywords [en]
Bagging, Calibration, Classification, Multilayer perceptrons, Probabilistic prediction, Venn-Abers predictors, Classification (of information), Multilayer neural networks, Multilayers, Best practices, Empirical investigation, Probabilistic classifiers, Sharp contrast
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:hj:diva-46689
DOI: 10.1109/IJCNN.2019.8851962
ISI: 000530893802023
Scopus ID: 2-s2.0-85073208584
ISBN: 9781728119854 (electronic)
OAI: oai:DiVA.org:hj-46689
DiVA id: diva2:1365758
Conference
2019 International Joint Conference on Neural Networks, IJCNN 2019, Budapest, Hungary, 14-19 July 2019
Funder
Knowledge Foundation, 20150185; Knowledge Foundation, 20170182
Available from: 2019-10-25. Created: 2019-10-25. Last updated: 2021-03-15. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Johansson, Ulf
