König, Rikard
Publications (6 of 6)
Radon, A., Johansson, P., Sundström, M., Alm, H., Behre, M., Göbel, H., . . . Wallström, S. (2016). What happens when retail meets research?: Special session. Paper presented at ANZMAC Conference 2016 - Marketing in a Post-Disciplinary Era, Christchurch, 5-7 December, 2016.
What happens when retail meets research?: Special session
2016 (English) Conference paper, Oral presentation with published abstract (Other academic)
Abstract [en]

Special session information

We are witnessing the beginning of a seismic shift in retail due to digitalization. However, what is meant by digitalization is less clear. Sometimes it is understood as a means of automatization, sometimes it is regarded as equal to e-commerce, and sometimes it is considered to be both automatization and e-commerce through new technology. In recent years there has been an increase in Internet and mobile device usage within the retail sector, and e-commerce is growing, encompassing both large and small retailers. Digital tools, such as new applications for searching for information about products based on price, health, environmental and ethical considerations, and for facilitating payments, are developing rapidly. The fixed store settings are also changing due to digitalization, and at an overall level digitalization will lead to existing business models being reviewed, challenged and ultimately changed. More specifically, digitalization has consequences for all parts of the physical store, including the customer interface, knowledge creation, sustainability performance and logistics. As with all major shifts, digitalization comprises both opportunities and challenges for retail firms and employees, and these need to be empirically studied and systematically analysed. The Swedish Institute for Innovative Retailing at the University of Borås is a research centre with the aim of identifying and analysing the emerging trends that digitalization brings for the retail industry.

National Category
Economics and Business
Research subject
Business and IT
Identifiers
urn:nbn:se:hj:diva-45796 (URN)
Conference
ANZMAC Conference 2016 - Marketing in a Post-Disciplinary Era, Christchurch, 5-7 December, 2016
Available from: 2019-09-06 Created: 2019-09-06 Last updated: 2019-09-06. Bibliographically approved
König, R., Johansson, U., Löfström, T. & Niklasson, L. (2010). Improving GP Classification Performance by Injection of Decision Trees. Paper presented at WCCI 2010 IEEE World Congress on Computational Intelligence, CEC 2010. IEEE.
Improving GP Classification Performance by Injection of Decision Trees
2010 (English) Conference paper, Published paper (Refereed)
Abstract [en]

This paper presents a novel hybrid method combining genetic programming and decision tree learning. The method starts by estimating a benchmark level of reasonable accuracy, based on decision tree performance on bootstrap samples of the training set. Next, a normal GP evolution is started with the aim of producing an accurate GP. At even intervals, the best GP in the population is evaluated against the accuracy benchmark. If the GP has higher accuracy than the benchmark, the evolution continues normally until the maximum number of generations is reached. If the accuracy is lower than the benchmark, two things happen. First, the fitness function is modified to allow larger GPs, able to represent more complex models. Secondly, a decision tree with increased size and trained on a bootstrap of the training data is injected into the population. The experiments show that the hybrid solution of injecting decision trees into a GP population gives synergetic effects producing results that are better than using either technique separately. The results, from 18 UCI data sets, show that the proposed method clearly outperforms normal GP, and is significantly better than the standard decision tree algorithm.
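As a rough illustration of the injection scheme described in this abstract (a minimal sketch, not the authors' implementation; the GP population object and its evolve_one_generation / best_individual / inject methods, as well as the size_limit attribute, are hypothetical placeholders, while the benchmark and the injected individuals use scikit-learn decision trees):

```python
# Sketch of the decision-tree-injection heuristic: only the benchmark and
# injection logic is spelled out; the GP machinery is a hypothetical stand-in.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.utils import resample

def accuracy_benchmark(X, y, n_bootstraps=10, seed=0):
    """Median accuracy of decision trees trained on bootstrap samples,
    used as the 'reasonable accuracy' level the GP is expected to reach."""
    rng = np.random.RandomState(seed)
    scores = []
    for _ in range(n_bootstraps):
        Xb, yb = resample(X, y, random_state=rng)
        scores.append(DecisionTreeClassifier().fit(Xb, yb).score(X, y))
    return float(np.median(scores))

def evolve_with_injection(population, X, y, generations=100, check_every=10):
    benchmark = accuracy_benchmark(X, y)
    inject_depth = 3                                 # depth of injected trees, grown on each injection
    for g in range(1, generations + 1):
        population.evolve_one_generation()           # hypothetical GP step
        if g % check_every == 0:
            best = population.best_individual()      # hypothetical
            if best.accuracy(X, y) < benchmark:      # hypothetical: training accuracy of best GP
                # 1) relax the size penalty so larger, more complex GPs are allowed
                population.size_limit = int(population.size_limit * 1.5)   # hypothetical attribute
                # 2) inject a larger decision tree trained on a fresh bootstrap
                inject_depth += 2
                Xb, yb = resample(X, y)
                population.inject(DecisionTreeClassifier(max_depth=inject_depth).fit(Xb, yb))
    return population.best_individual()
```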

Place, publisher, year, edition, pages
IEEE, 2010
Series
CFP10ICE-DVD
Keywords
genetic programming, tree induction, Machine Learning
National Category
Computer Sciences; Computer and Information Sciences
Identifiers
urn:nbn:se:hj:diva-45802 (URN), 10.1109/CEC.2010.5585988 (DOI), 978-1-4244-6909-3 (ISBN)
Conference
WCCI 2010 IEEE World Congress on Computational Intelligence, CEC 2010
Available from: 2019-09-06 Created: 2019-09-06 Last updated: 2019-09-06. Bibliographically approved
Johansson, U., König, R., Löfström, T. & Niklasson, L. (2010). Using Imaginary Ensembles to Select GP Classifiers. In: A.I. Esparcia-Alcazar et al. (Ed.), Genetic Programming: 13th European Conference, EuroGP 2010, Istanbul, Turkey, April 7-9, 2010, Proceedings. Paper presented at 13th European Conference, EuroGP 2010, Istanbul, Turkey, April 7-9, 2010 (pp. 278-288). Springer.
Using Imaginary Ensembles to Select GP Classifiers
2010 (English) In: Genetic Programming: 13th European Conference, EuroGP 2010, Istanbul, Turkey, April 7-9, 2010, Proceedings / [ed] A.I. Esparcia-Alcazar et al., Springer, 2010, p. 278-288. Conference paper, Published paper (Refereed)
Abstract [en]

When predictive modeling requires comprehensible models, most data miners will use specialized techniques producing rule sets or decision trees. This study, however, shows that genetically evolved decision trees may very well outperform the more specialized techniques. The proposed approach evolves a number of decision trees and then uses one of several suggested selection strategies to pick one specific tree from that pool. The inherent inconsistency of evolution makes it possible to evolve each tree using all data, and still obtain somewhat different models. The main idea is to use these quite accurate and slightly diverse trees to form an imaginary ensemble, which is then used as a guide when selecting one specific tree. Simply put, the tree classifying the largest number of instances identically to the ensemble is chosen. In the experimentation, using 25 UCI data sets, two selection strategies obtained significantly higher accuracy than the standard rule inducer J48.
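A small, self-contained sketch of the selection strategy described above (my illustration, not the paper's code): the "imaginary ensemble" is the per-instance majority vote over the pool of evolved trees, and the tree agreeing with that vote on the most instances is the one selected.

```python
# Pick the single evolved tree whose predictions agree most often with the
# majority vote of the whole pool (the "imaginary ensemble").
import numpy as np

def select_tree_by_ensemble_agreement(predictions):
    """predictions: array of shape (n_trees, n_instances) holding the class
    label each evolved tree predicts for each instance. Returns the index
    of the selected tree."""
    predictions = np.asarray(predictions)
    n_trees, n_instances = predictions.shape
    # Imaginary ensemble: per-instance majority vote over all trees.
    ensemble_vote = np.array([
        np.bincount(predictions[:, i]).argmax() for i in range(n_instances)
    ])
    # Fraction of instances on which each tree matches the ensemble vote.
    agreement = (predictions == ensemble_vote).mean(axis=1)
    return int(agreement.argmax())

# Example: three trees predicting four instances; tree 0 matches the vote everywhere.
preds = [[0, 1, 1, 0],
         [0, 1, 0, 0],
         [1, 1, 1, 0]]
print(select_tree_by_ensemble_agreement(preds))  # -> 0
```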

Place, publisher, year, edition, pages
Springer, 2010
Series
LNCS ; 6021
Keywords
classification, decision trees, ensembles, genetic programming, Machine learning
National Category
Computer Sciences; Information Systems
Identifiers
urn:nbn:se:hj:diva-45805 (URN), 978-3-642-12147-0 (ISBN)
Conference
13th European Conference, EuroGP 2010, Istanbul, Turkey, April 7-9, 2010
Note

Sponsorship:

This work was supported by the INFUSIS project (www.his.se/infusis) at the University of Skövde, Sweden, in partnership with the Swedish Knowledge Foundation under grant 2008/0502.

Available from: 2019-09-06 Created: 2019-09-06 Last updated: 2019-09-06. Bibliographically approved
Johansson, U., König, R., Löfström, T., Sönströd, C. & Niklasson, L. (2009). Post-processing Evolved Decision Trees. In: Ajith Abraham (Ed.), Foundations of Computational Intelligence (pp. 149-164). Springer.
Post-processing Evolved Decision Trees
2009 (English) In: Foundations of Computational Intelligence / [ed] Ajith Abraham, Springer, 2009, p. 149-164. Chapter in book (Other academic)
Abstract [en]

Although Genetic Programming (GP) is a very general technique, it is also quite powerful. As a matter of fact, GP has often been shown to outperform more specialized techniques on a variety of tasks. In data mining, GP has successfully been applied to most major tasks, e.g. classification, regression and clustering. In this chapter, we introduce, describe and evaluate a straightforward novel algorithm for post-processing genetically evolved decision trees. The algorithm works by iteratively, one node at a time, searching for possible modifications that will result in higher accuracy. More specifically, for each interior test, the algorithm evaluates every possible split for the current attribute and chooses the best. With this design, the post-processing algorithm can only increase training accuracy, never decrease it. In the experiments, the suggested algorithm is applied to GP decision trees, either induced directly from datasets or extracted from neural network ensembles. The experimentation, using 22 UCI datasets, shows that the suggested post-processing technique results in higher test set accuracies on a large majority of the datasets. The increase in test accuracy is statistically significant for one of the four evaluated setups, and substantial for two of the other three.
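A schematic sketch of the node-by-node post-processing idea, under an assumed minimal tree representation (the Node class and the restriction to numeric <=-splits are illustrative assumptions, not the chapter's actual data structures):

```python
# Greedy, node-by-node re-optimization of split points: for each interior
# node, try the candidate thresholds of that node's attribute and keep the
# best. Training accuracy can only stay equal or improve, never decrease.
import numpy as np

class Node:
    def __init__(self, attr=None, threshold=None, left=None, right=None, label=None):
        self.attr, self.threshold = attr, threshold
        self.left, self.right, self.label = left, right, label

def predict(node, x):
    if node.label is not None:                      # leaf
        return node.label
    child = node.left if x[node.attr] <= node.threshold else node.right
    return predict(child, x)

def training_accuracy(root, X, y):
    return np.mean([predict(root, x) == yi for x, yi in zip(X, y)])

def post_process(root, X, y):
    def interior_nodes(node):
        if node is None or node.label is not None:
            return []
        return [node] + interior_nodes(node.left) + interior_nodes(node.right)

    for node in interior_nodes(root):
        best_acc = training_accuracy(root, X, y)
        best_threshold = node.threshold
        for candidate in np.unique(X[:, node.attr]):   # every possible split point
            node.threshold = candidate
            acc = training_accuracy(root, X, y)
            if acc > best_acc:
                best_acc, best_threshold = acc, candidate
        node.threshold = best_threshold                # keep the best split found
    return root
```

The greedy acceptance rule is what guarantees that training accuracy never drops; whether test accuracy improves is the empirical question the experiments address.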

Place, publisher, year, edition, pages
Springer, 2009
Keywords
decision trees, genetic programming, Machine learning, data mining
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:hj:diva-45808 (URN), 10.1007/978-3-642-01088-0 (DOI), 978-3-642-01087-3 (ISBN)
Available from: 2015-12-17 Created: 2019-09-06. Bibliographically approved
Johansson, U., Sönströd, C., Löfström, T. & König, R. (2009). Using Genetic Programming to Obtain Implicit Diversity. Paper presented at 2009 IEEE Congress on Evolutionary Computation (CEC 2009), Trondheim, Norway. IEEE.
Using Genetic Programming to Obtain Implicit Diversity
2009 (English) Conference paper, Published paper (Refereed)
Abstract [en]

When performing predictive data mining, the use of ensembles is known to increase prediction accuracy, compared to single models. To obtain this higher accuracy, ensembles should be built from base classifiers that are both accurate and diverse. The question of how to balance these two properties in order to maximize ensemble accuracy is, however, far from solved and many different techniques for obtaining ensemble diversity exist. One such technique is bagging, where implicit diversity is introduced by training base classifiers on different subsets of available data instances, thus resulting in less accurate, but diverse base classifiers. In this paper, genetic programming is used as an alternative method to obtain implicit diversity in ensembles by evolving accurate, but different base classifiers in the form of decision trees, thus exploiting the inherent inconsistency of genetic programming. The experiments show that the GP approach outperforms standard bagging of decision trees, obtaining significantly higher ensemble accuracy over 25 UCI datasets. This superior performance stems from base classifiers having both higher average accuracy and more diversity. Implicitly introducing diversity using GP thus works very well, since evolved base classifiers tend to be highly accurate and diverse.
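To make the contrast with bagging concrete, a hedged sketch follows (evolve_gp_tree is a hypothetical stand-in for one GP run; nothing here reproduces the paper's actual GP setup): bagging gets diversity from different bootstraps, whereas the GP approach trains every member on the full data and relies on the stochasticity of evolution.

```python
# Two ways of obtaining implicit ensemble diversity.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.utils import resample

def bagging_ensemble(X, y, n_members=10):
    # Diversity from training each member on a different bootstrap sample.
    return [DecisionTreeClassifier().fit(*resample(X, y, random_state=i))
            for i in range(n_members)]

def gp_ensemble(X, y, n_members=10):
    # Diversity from the inherent inconsistency of GP itself: every member
    # sees the full training set, only the random seed differs.
    return [evolve_gp_tree(X, y, seed=i) for i in range(n_members)]  # hypothetical GP run

def majority_vote(ensemble, X):
    votes = np.array([m.predict(X) for m in ensemble]).astype(int)
    return np.array([np.bincount(votes[:, i]).argmax() for i in range(X.shape[0])])
```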

Place, publisher, year, edition, pages
IEEE, 2009
Keywords
genetic programming, bagging, ensembles, diversity, Machine learning
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:hj:diva-45809 (URN), 978-1-4244-2959-2 (ISBN)
Conference
2009 IEEE Congress on Evolutionary Computation (CEC 2009), Trondheim, Norway
Available from: 2019-09-06 Created: 2019-09-06 Last updated: 2019-09-06. Bibliographically approved
Johansson, U., König, R., Löfström, T. & Niklasson, L. (2008). Increasing Rule Extraction Accuracy by Post-processing GP Trees. In: Proceedings of the Congress on Evolutionary Computation. Paper presented at CEC 2008, Hong Kong, June 1-6, 2008 (pp. 3010-3015). IEEE.
Increasing Rule Extraction Accuracy by Post-processing GP Trees
2008 (English) In: Proceedings of the Congress on Evolutionary Computation, IEEE, 2008, p. 3010-3015. Conference paper, Published paper (Refereed)
Abstract [en]

Genetic programming (GP) is a very general and efficient technique, often capable of outperforming more specialized techniques on a variety of tasks. In this paper, we suggest a straightforward novel algorithm for post-processing of GP classification trees. The algorithm iteratively, one node at a time, searches for possible modifications that would result in higher accuracy. More specifically, for each split, the algorithm evaluates every possible constant value and chooses the best. With this design, the post-processing algorithm can only increase training accuracy, never decrease it. In this study, we apply the suggested algorithm to GP trees extracted from neural network ensembles. Experimentation, using 22 UCI datasets, shows that the post-processing results in higher test set accuracies on a large majority of the datasets. In fact, for two of the three evaluated setups, the increase in accuracy is statistically significant.
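A speculative sketch of the overall pipeline this abstract implies (the evolve_gp_tree and post_process_splits helpers are hypothetical placeholders; only the ensemble-as-oracle step is spelled out, and the ANN ensemble uses scikit-learn as a stand-in):

```python
# (1) train an ANN ensemble, (2) evolve a GP tree that mimics the ensemble's
# predictions on the training data (pedagogical rule extraction), (3) post-
# process that tree's split constants against the true labels.
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_ann_ensemble(X, y, n_members=5):
    return [MLPClassifier(hidden_layer_sizes=(10,), max_iter=500,
                          random_state=i).fit(X, y) for i in range(n_members)]

def extract_and_postprocess(X, y, ensemble):
    votes = np.array([m.predict(X) for m in ensemble]).astype(int)
    ensemble_labels = np.array([np.bincount(votes[:, i]).argmax()
                                for i in range(X.shape[0])])
    gp_tree = evolve_gp_tree(X, ensemble_labels)   # hypothetical: GP tree fitted to ensemble output
    return post_process_splits(gp_tree, X, y)      # hypothetical: per-split constant search
```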

Place, publisher, year, edition, pages
IEEE, 2008
Keywords
genetic programming, rule extraction, Computer Science, Machine Learning, data mining
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:hj:diva-45814 (URN), 978-1-4244-1823-7 (ISBN)
Conference
CEC 2008, Hong Kong, June 1-6, 2008
Note

Sponsorship:

This work was supported by the Information Fusion Research Program (University of Skövde, Sweden) in partnership with the Swedish Knowledge Foundation under grant 2003/0104 (URL: http://www.infofusion.se).

Available from: 2019-09-06 Created: 2019-09-06 Last updated: 2019-09-06. Bibliographically approved