Handling small calibration sets in Mondrian inductive conformal regressors
Department of Information Technology, University of Borås, Borås, Sweden. ORCID iD: 0000-0003-0412-6199
Drug Safety and Metabolism, AstraZeneca Innovative Medicines and Early Development, Mölndal, Sweden.
Department of Systems and Computer Sciences, Stockholm University, Stockholm, Sweden.
Drug Safety and Metabolism, AstraZeneca Innovative Medicines and Early Development, Mölndal, Sweden.
2015 (English). In: Statistical Learning and Data Sciences, Springer, 2015, pp. 271-280. Conference paper, published paper (refereed)
Abstract [en]

In inductive conformal prediction, calibration sets must contain an adequate number of instances to support the chosen confidence level. This problem is particularly prevalent when using Mondrian inductive conformal prediction, where the input space is partitioned into independently valid prediction regions. In this study, Mondrian conformal regressors, in the form of regression trees, are used to investigate two problematic aspects of small calibration sets. If there are too few calibration instances to support the significance level, we suggest using either extrapolation or altering the model. In situations where the desired significance level is between two calibration instances, the standard procedure is to choose the more nonconforming one, thus guaranteeing validity, but producing conservative conformal predictors. The suggested solution is to use interpolation between calibration instances. All proposed techniques are empirically evaluated and compared to the standard approach on 30 benchmark data sets. The results show that while extrapolation often results in invalid models, interpolation works extremely well and provides increased efficiency with preserved empirical validity.
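The sketch below, not taken from the paper, illustrates the two calibration-quantile strategies the abstract contrasts for inductive conformal regression with absolute-error nonconformity scores: the standard approach that rounds up to the more nonconforming calibration score (and can only return an infinite interval when the calibration set is too small for the chosen confidence level), and interpolation between the two neighbouring calibration scores. The function name and the exact rank arithmetic are illustrative assumptions.

```python
import numpy as np

def icp_halfwidth(cal_scores, epsilon, interpolate=False):
    """Half-width of a symmetric inductive conformal regression interval.

    cal_scores  -- nonconformity scores |y - y_hat| from the calibration set
    epsilon     -- significance level (1 - confidence)
    interpolate -- if True, interpolate linearly between the two neighbouring
                   calibration scores instead of rounding up to the more
                   nonconforming one (the standard, conservative choice)
    """
    scores = np.sort(np.asarray(cal_scores, dtype=float))   # ascending order
    q = len(scores)
    rank = (1.0 - epsilon) * (q + 1)        # target position among q + 1 values
    if rank > q:
        # Too few calibration instances to support the confidence level:
        # the standard procedure can only return an infinite interval.
        return np.inf
    if not interpolate:
        # Standard: take the more nonconforming neighbour (guarantees validity).
        return scores[int(np.ceil(rank)) - 1]
    # Interpolated: weight the two surrounding scores by the fractional rank.
    lo = max(int(np.floor(rank)) - 1, 0)
    hi = min(lo + 1, q - 1)
    frac = rank - np.floor(rank)
    return (1.0 - frac) * scores[lo] + frac * scores[hi]
```

A toy comparison: with 20 calibration scores and epsilon = 0.1, the standard rule picks the 19th smallest score, while interpolation blends the 18th and 19th, typically giving a somewhat tighter interval.

```python
rng = np.random.default_rng(0)
cal = np.abs(rng.normal(size=20))                 # toy calibration scores
print(icp_halfwidth(cal, 0.1))                    # standard, conservative
print(icp_halfwidth(cal, 0.1, interpolate=True))  # interpolated, tighter
```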

Place, publisher, year, edition, pages
Springer, 2015. pp. 271-280
Series
Lecture Notes in Computer Science, ISSN 0302-9743; 9047
Keywords [en]
Extrapolation, Forecasting, Interpolation, Benchmark data, Confidence levels, Conformal predictions, Conformal predictors, Input space, Regression trees, Significance levels, Standard procedures, Calibration
National subject category
Computer Sciences
Identifiers
URN: urn:nbn:se:hj:diva-38120
DOI: 10.1007/978-3-319-17091-6_22
Scopus ID: 2-s2.0-84949798529
Local ID: 0;0;miljJAIL
ISBN: 9783319170909 (print)
OAI: oai:DiVA.org:hj-38120
DiVA, id: diva2:1163928
Conference
3rd International Symposium on Statistical Learning and Data Sciences, SLDS 2015; Egham; United Kingdom; 20 April 2015 through 23 April 2015
Available from: 2017-12-08 Created: 2017-12-08 Last updated: 2019-08-23 Bibliographically reviewed

Open Access in DiVA

Full text is not available in DiVA

Other links

Publisher's full text
Scopus

Person

Johansson, Ulf
