Efficient conformal regressors using bagged neural nets
Department of Information Technology, University of Borås, Sweden. ORCID iD: 0000-0003-0412-6199
Department of Information Technology, University of Borås, Sweden.
Department of Information Technology, University of Borås, Sweden.
2015 (English). In: Proceedings of the International Joint Conference on Neural Networks, IEEE, 2015. Conference paper, Published paper (Refereed).
Abstract [en]

Conformal predictors use machine learning models to output prediction sets. For regression, a prediction set is simply a prediction interval. All conformal predictors are valid, meaning that the error rate on novel data is bounded by a preset significance level. The key performance metric for conformal predictors is their efficiency, i.e., the size of the prediction sets. Inductive conformal predictors utilize real-valued functions, called nonconformity functions, and a calibration set, i.e., a set of labeled instances not used for the model training, to obtain the prediction regions. In state-of-the-art conformal regressors, the nonconformity functions are normalized, i.e., they include a component estimating the difficulty of each instance. In this study, conformal regressors are built on top of ensembles of bagged neural networks, and several nonconformity functions are evaluated. In addition, the option to calibrate on out-of-bag instances instead of setting aside a calibration set is investigated. The experiments, using 33 publicly available data sets, show that normalized nonconformity functions can produce smaller prediction sets, but the efficiency is highly dependent on the quality of the difficulty estimation. Specifically, in this study, the most efficient normalized nonconformity function estimated the difficulty of an instance by calculating the average error of neighboring instances. These results are consistent with previous studies using random forests as underlying models. Calibrating on out-of-bag instances did, however, lead to more efficient conformal predictors only on smaller data sets, which is in sharp contrast to the random forest study, where out-of-bag calibration was significantly better overall.
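The setup described in the abstract can be sketched in code. The following is a minimal, hedged illustration of inductive conformal regression with a normalized nonconformity function, where difficulty is estimated as the average absolute error of the k nearest reference instances. It is a sketch under stated assumptions, not the paper's implementation: the paper uses ensembles of bagged neural networks, while here a deliberately weak mean predictor stands in for the model, and the parameters k and beta are illustrative.

```python
import numpy as np

def knn_difficulty(X_ref, abs_err_ref, X_query, k=5, beta=0.5):
    """Difficulty estimate sigma for each query: mean absolute error of the
    k nearest reference instances (Euclidean), plus beta to keep sigma > 0."""
    d = np.linalg.norm(X_query[:, None, :] - X_ref[None, :, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :k]
    return abs_err_ref[idx].mean(axis=1) + beta

def icp_intervals(y_cal, y_cal_pred, y_test_pred, significance=0.1,
                  sigma_cal=None, sigma_test=None):
    """Inductive conformal regression intervals.
    Uses the normalized nonconformity alpha_i = |y_i - yhat_i| / sigma_i
    when difficulty estimates are given, plain absolute error otherwise."""
    if sigma_cal is None:
        sigma_cal = np.ones_like(y_cal, dtype=float)
        sigma_test = np.ones_like(y_test_pred, dtype=float)
    alphas = np.abs(y_cal - y_cal_pred) / sigma_cal
    n = len(alphas)
    # the ceil((1 - significance) * (n + 1))-th smallest score (0-based index)
    k = int(np.ceil((1.0 - significance) * (n + 1))) - 1
    alpha_s = np.sort(alphas)[min(k, n - 1)]
    half = alpha_s * np.asarray(sigma_test)
    return y_test_pred - half, y_test_pred + half

# Toy check on synthetic data: validity holds even for a weak model.
rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 3))
y = X[:, 0] + rng.normal(scale=0.5, size=3000)
X_tr, y_tr = X[:1000], y[:1000]
X_cal, y_cal = X[1000:2000], y[1000:2000]
X_te, y_te = X[2000:], y[2000:]

pred = lambda X_: np.full(len(X_), y_tr.mean())  # stand-in "model"
abs_err_tr = np.abs(y_tr - pred(X_tr))
sig_cal = knn_difficulty(X_tr, abs_err_tr, X_cal)
sig_te = knn_difficulty(X_tr, abs_err_tr, X_te)

lo, hi = icp_intervals(y_cal, pred(X_cal), pred(X_te),
                       significance=0.1, sigma_cal=sig_cal, sigma_test=sig_te)
coverage = np.mean((y_te >= lo) & (y_te <= hi))
print(f"empirical coverage at 90% target: {coverage:.3f}")
```

Because validity is guaranteed by the conformal construction rather than by model quality, the empirical coverage stays close to the 90% target even with this crude predictor; normalization only changes how the total interval width is distributed across easy and hard instances.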

Place, publisher, year, edition, pages
IEEE, 2015.
Keywords [en]
Bagging, Conformal prediction, Neural networks, Regression, Artificial intelligence, Calibration, Decision trees, Efficiency, Learning systems, Conformal predictions, Conformal predictors, Difficulty estimations, Machine learning models, Performance metrics, Real-valued functions, Forecasting
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:hj:diva-38116
DOI: 10.1109/IJCNN.2015.7280763
Scopus ID: 2-s2.0-84951091178
ISBN: 9781479919604 (print)
OAI: oai:DiVA.org:hj-38116
DiVA, id: diva2:1163944
Conference
International Joint Conference on Neural Networks, IJCNN 2015, 12–17 July 2015
Available from: 2017-12-08. Created: 2017-12-08. Last updated: 2018-01-13. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text (Scopus)

Authority records

Johansson, Ulf
