Overproduce-and-Select: The Grim Reality
Johansson, Ulf; Löfström, Tuve
Högskolan i Borås, Institutionen Handels- och IT-högskolan. ORCID iD: 0000-0003-0412-6199
Högskolan i Borås, Institutionen Handels- och IT-högskolan. ORCID iD: 0000-0003-0274-9026
Högskolan i Borås, Institutionen Handels- och IT-högskolan.
2013 (English). Conference paper, published paper (Refereed)
Abstract [en]

Overproduce-and-select (OPAS) is a frequently used paradigm for building ensembles. In static OPAS, a large number of base classifiers are trained, before a subset of the available models is selected to be combined into the final ensemble. In general, the selected classifiers are supposed to be accurate and diverse for the OPAS strategy to result in highly accurate ensembles, but exactly how this is enforced in the selection process is not obvious. Most often, either individual models or ensembles are evaluated, using some performance metric, on available and labeled data. Naturally, the underlying assumption is that an observed advantage for the models (or the resulting ensemble) will carry over to test data. In the experimental study, a typical static OPAS scenario, using a pool of artificial neural networks and a number of very natural and frequently used performance measures, is evaluated on 22 publicly available data sets. The discouraging result is that although a fairly large proportion of the ensembles obtained higher test set accuracies, compared to using the entire pool as the ensemble, none of the selection criteria could be used to identify these highly accurate ensembles. Despite only investigating a specific scenario, we argue that the settings used are typical for static OPAS, thus making the results general enough to question the entire paradigm.
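The sketch below is only an illustration of the static OPAS workflow described in the abstract (overproduce a pool of neural networks, then select a subset using labeled data before combining), not the paper's experimental setup: the pool size, bootstrap training, the greedy selection criterion (validation-set accuracy) and the synthetic data are all assumptions made for the example.

```python
# Minimal static OPAS sketch (illustrative assumptions, not the authors' exact setup).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.RandomState(0)
X, y = make_classification(n_samples=1500, n_features=20, random_state=0)
X_tmp, X_test, y_tmp, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_tmp, y_tmp, test_size=0.3, random_state=0)

# Overproduce: train a pool of neural networks on bootstrap samples.
pool = []
for i in range(20):
    idx = rng.choice(len(X_train), size=len(X_train), replace=True)
    net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=500, random_state=i)
    pool.append(net.fit(X_train[idx], y_train[idx]))

def ensemble_accuracy(members, X_, y_):
    """Majority-vote accuracy of a set of pool members (binary labels assumed)."""
    votes = np.mean([m.predict(X_) for m in members], axis=0)
    return np.mean((votes >= 0.5).astype(int) == y_)

# Select: greedily grow the ensemble as long as validation accuracy improves.
selected, remaining = [], list(pool)
while remaining:
    best = max(remaining, key=lambda m: ensemble_accuracy(selected + [m], X_val, y_val))
    if selected and ensemble_accuracy(selected + [best], X_val, y_val) <= ensemble_accuracy(selected, X_val, y_val):
        break
    selected.append(best)
    remaining.remove(best)

# The paper's point: a validation-based advantage like this need not carry over to test data.
print("selected ensemble, test acc:", ensemble_accuracy(selected, X_test, y_test))
print("entire pool,       test acc:", ensemble_accuracy(pool, X_test, y_test))
```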

Place, publisher, year, edition, pages
IEEE, 2013.
Keywords [en]
Ensembles, Neural networks, Overproduce-and-select, Data mining, Machine Learning
HSV category
Identifiers
URN: urn:nbn:se:hj:diva-38094
DOI: 10.1109/CIEL.2013.6613140
ISI: 000335317800008
Local ID: 0;0;miljJAIL
OAI: oai:DiVA.org:hj-38094
DiVA, id: diva2:1163318
Conference
IEEE Symposium on Computational Intelligence and Ensemble Learning (CIEL), 16-19 April 2013, Singapore
Note

Sponsorship: Swedish Foundation for Strategic Research through the project High-Performance Data Mining for Drug Effect Detection (ref. no. IIS11-0053)

Available from: 2017-12-06 Created: 2017-12-06 Last updated: 2019-08-23 Bibliographically approved

Open Access in DiVA

fulltext (199 kB), 124 downloads
File information
File: FULLTEXT01.pdf, File size: 199 kB, Checksum: SHA-512
b41444d3c4201bf9469d3ff306c925ab4c3001c3d5a882a0549546dfac9caf25a9714464d1f22fc44796cf3ed49d35dd2053230da803724b8ec8e22a72ada932
Type: fulltext, Mimetype: application/pdf
