1 - 7 of 7
  • 1.
    García Martín, Eva
    et al.
    Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik.
    Lavesson, Niklas
    Jönköping University, School of Engineering, JTH, Computer Science and Informatics, JTH, Jönköping AI Lab (JAIL). Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik.
    Grahn, Håkan
    Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik.
    Casalicchio, Emiliano
    Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik.
    Boeva, Veselka
    Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik.
    Hoeffding Trees with nmin adaptation (2018). In: The 5th IEEE International Conference on Data Science and Advanced Analytics (DSAA 2018), IEEE, 2018, p. 70-79. Conference paper (Refereed).
    Abstract [en]

    Machine learning software accounts for a significant amount of the energy consumed in data centers. These algorithms are usually optimized towards predictive performance, i.e. accuracy, and scalability. This is the case for data stream mining algorithms. Although these algorithms adapt to the incoming data, their parameters are fixed from the beginning of the execution. We have observed that fixed parameters lead to unnecessary computations, making the algorithm energy inefficient. In this paper we present the nmin adaptation method for Hoeffding trees. This method adapts the value of the nmin parameter, which significantly affects the energy consumption of the algorithm. The method reduces unnecessary computations and memory accesses, thus reducing the energy consumption, while accuracy is only marginally affected. We experimentally compared VFDT (Very Fast Decision Tree, the first Hoeffding tree algorithm) and CVFDT (Concept-adapting VFDT) with VFDT-nmin (VFDT with nmin adaptation). The results show that VFDT-nmin consumes up to 27% less energy than the standard VFDT, and up to 92% less energy than CVFDT, trading off a few percent of accuracy on a few datasets.
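
    To make the role of nmin concrete, the following is a minimal sketch of the Hoeffding-bound split check and one plausible adaptive grace-period rule. It illustrates the general idea only, not the authors' implementation; the function names and the nmin_floor default are ours.

    import math

    def hoeffding_bound(r, delta, n):
        """Hoeffding bound epsilon for a statistic with range r after n observations."""
        return math.sqrt((r * r * math.log(1.0 / delta)) / (2.0 * n))

    def should_split(best_gain, second_gain, n, n_classes, delta=1e-7, tau=0.05):
        """Split when the gain difference exceeds the bound, or the bound falls below the tie threshold."""
        r = math.log2(n_classes)                  # range of information gain
        eps = hoeffding_bound(r, delta, n)
        return (best_gain - second_gain) > eps or eps < tau

    def adapt_nmin(best_gain, second_gain, n, n_classes, delta=1e-7, nmin_floor=200):
        """Illustrative adaptation rule (not the paper's exact formula): estimate how many
        more instances are needed before the bound could be satisfied, and use that as the
        next grace period instead of a fixed nmin."""
        gap = best_gain - second_gain
        if gap <= 0:
            return nmin_floor
        r = math.log2(n_classes)
        n_required = (r * r * math.log(1.0 / delta)) / (2.0 * gap * gap)
        return max(nmin_floor, int(math.ceil(n_required)) - n)

    A leaf would run should_split once every nmin instances; when the test fails, adapt_nmin estimates how many more instances are needed before the bound could possibly be satisfied, which is the intuition behind skipping split evaluations that cannot change the decision.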

  • 2.
    García Martín, Eva
    et al.
    Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik.
    Lavesson, Niklas
    Jönköping University, School of Engineering, JTH, Computer Science and Informatics, JTH, Jönköping AI Lab (JAIL). Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik.
    Grahn, Håkan
    Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik.
    Casalicchio, Emiliano
    Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik.
    Boeva, Veselka
    Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik.
    How to Measure Energy Consumption in Machine Learning Algorithms (2019). In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics): ECMLPKDD 2018: European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases Workshops, Cham: Springer, 2019, p. 243-255. Conference paper (Refereed).
    Abstract [en]

    Machine learning algorithms are responsible for a significant amount of computation, and this amount is increasing with advances across machine learning fields. For example, fields such as deep learning require algorithms to run for weeks, consuming vast amounts of energy. While there is a trend towards optimizing machine learning algorithms for performance and energy consumption, there is still little knowledge on how to estimate an algorithm's energy consumption, and no straightforward cross-platform approach exists for estimating it across different types of algorithms. For that reason, well-known researchers in computer architecture have published extensive work on approaches to estimate energy consumption. This study presents a survey of methods to estimate energy consumption and maps them to specific machine learning scenarios. Finally, we illustrate our mapping suggestions with a case study in which we measure energy consumption in a big data stream mining scenario. Our ultimate goal is to bridge the current gap in estimating energy consumption in machine learning scenarios.
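
    As a concrete example of one measurement approach from the space such a survey covers, the sketch below reads Intel RAPL package energy counters through the Linux powercap sysfs interface around a function call. It assumes an x86 Linux machine with RAPL support and read access to /sys/class/powercap; it is not the measurement setup used in the paper.

    import time

    RAPL_FILE = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package-0 energy counter, in microjoules

    def read_energy_uj(path=RAPL_FILE):
        with open(path) as f:
            return int(f.read().strip())

    def measure_energy(fn, *args, **kwargs):
        """Return (result, joules, seconds) for a single call to fn.
        Assumes RAPL is available and readable; the counter wraps around, so long runs
        should also read max_energy_range_uj and correct for overflow."""
        e0, t0 = read_energy_uj(), time.time()
        result = fn(*args, **kwargs)
        e1, t1 = read_energy_uj(), time.time()
        return result, (e1 - e0) / 1e6, t1 - t0

    # Example (hypothetical model object): result, joules, seconds = measure_energy(model.fit, X, y)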

  • 3.
    García Martín, Eva
    et al.
    Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik.
    Lavesson, Niklas
    Jönköping University, School of Engineering, JTH, Computer Science and Informatics, JTH, Jönköping AI Lab (JAIL). Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik.
    Grahn, Håkan
    Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik.
    Casalicchio, Emiliano
    Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik.
    Boeva, Veselka
    Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik.
    How to Measure Energy Consumption in Machine Learning Algorithms (2018). In: Green Data Mining, International Workshop on Energy Efficient Data Mining and Knowledge Discovery: ECMLPKDD 2018: European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases Workshops, Lecture Notes in Computer Science, Cham: Springer, 2018. Conference paper (Refereed).
    Abstract [en]

    Machine learning algorithms are responsible for a significant amount of computation, and this amount is increasing with advances across machine learning fields. For example, fields such as deep learning require algorithms to run for weeks, consuming vast amounts of energy. While there is a trend towards optimizing machine learning algorithms for performance and energy consumption, there is still little knowledge on how to estimate an algorithm's energy consumption, and no straightforward cross-platform approach exists for estimating it across different types of algorithms. For that reason, well-known researchers in computer architecture have published extensive work on approaches to estimate energy consumption. This study presents a survey of methods to estimate energy consumption and maps them to specific machine learning scenarios. Finally, we illustrate our mapping suggestions with a case study in which we measure energy consumption in a big data stream mining scenario. Our ultimate goal is to bridge the current gap in estimating energy consumption in machine learning scenarios.

  • 4.
    Westphal, Florian
    et al.
    Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik.
    Grahn, Håkan
    Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik.
    Lavesson, Niklas
    Jönköping University, School of Engineering, JTH, Computer Science and Informatics, JTH, Jönköping AI Lab (JAIL). Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik.
    Efficient document image binarization using heterogeneous computing and parameter tuning (2018). In: International Journal on Document Analysis and Recognition, ISSN 1433-2833, E-ISSN 1433-2825, Vol. 21, no 1-2, p. 41-58. Article in journal (Refereed).
    Abstract [en]

    In the context of historical document analysis, image binarization is an important first step, which separates foreground from background despite common image degradations such as faded ink, stains, or bleed-through. Fast binarization is of great significance when analyzing vast archives of document images, since even small inefficiencies can quickly accumulate into years of wasted execution time. Efficient binarization is therefore especially relevant to companies and government institutions that want to analyze their large collections of document images. The main challenge is to improve execution performance without affecting binarization performance. We modify a state-of-the-art binarization algorithm and achieve on average a 3.5 times faster execution by correctly mapping this algorithm to a heterogeneous platform consisting of a CPU and a GPU. Our proposed parameter tuning algorithm additionally improves the execution time for parameter tuning by a factor of 1.7 compared to previous parameter tuning algorithms. For the chosen algorithm, machine learning-based parameter tuning improves the execution performance more than heterogeneous computing when comparing absolute execution times. © 2018 The Author(s)
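
    For readers unfamiliar with the tuning side of the problem, the sketch below shows the brute-force baseline: grid-searching the window size and k of a Sauvola-style local threshold against a small labelled sample, scored by F-measure. It is a simplified stand-in that assumes scikit-image and scikit-learn are available; the paper's heterogeneous CPU/GPU mapping and its learned tuning algorithm are not shown here.

    import numpy as np
    from itertools import product
    from skimage.filters import threshold_sauvola   # Sauvola local thresholding
    from sklearn.metrics import f1_score

    def binarize(image, window_size, k):
        """Foreground = pixels darker than the Sauvola local threshold (grayscale input)."""
        return image < threshold_sauvola(image, window_size=window_size, k=k)

    def tune_parameters(images, ground_truths, window_sizes=(15, 31, 63), ks=(0.2, 0.3, 0.4)):
        """Exhaustive grid search over (window_size, k); returns the best pair by mean F-measure.
        ground_truths are binary masks with foreground marked as True/1.
        Illustrative only -- a learned tuner, as studied in the paper, replaces this brute force."""
        best, best_score = None, -1.0
        for w, k in product(window_sizes, ks):
            scores = [f1_score(gt.ravel(), binarize(img, w, k).ravel())
                      for img, gt in zip(images, ground_truths)]
            score = float(np.mean(scores))
            if score > best_score:
                best, best_score = (w, k), score
        return best, best_score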

  • 5.
    Westphal, Florian
    et al.
    Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik.
    Grahn, Håkan
    Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik.
    Lavesson, Niklas
    Jönköping University, School of Engineering, JTH, Computer Science and Informatics, JTH, Jönköping AI Lab (JAIL). Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik.
    User Feedback and Uncertainty in User Guided Binarization (2018). In: International Conference on Data Mining Workshops / [ed] H. Tong, Z. Li, F. Zhu, & J. Yu, IEEE Computer Society, 2018, p. 403-410, article id 8637367. Conference paper (Refereed).
    Abstract [en]

    In a child's development, the child's inherent ability to construct knowledge from new information is as important as explicit instructional guidance. Similarly, mechanisms that produce suitable learning representations, which can be transferred and allow the integration of new information, are important for artificial learning systems. Equally important, however, are modes of instructional guidance that allow the system to learn efficiently. The challenge for efficient learning is thus to identify suitable guidance strategies together with suitable learning mechanisms.

    In this paper, we propose guided machine learning as a source of suitable guidance strategies. We distinguish between sample selection based and privileged information based strategies and evaluate three sample selection based strategies on a simple transfer learning task. The evaluated strategies are random sample selection, i.e., supervised learning; user-based sample selection based on readability; and user-based sample selection based on readability and uncertainty. We show that sampling based on readability and uncertainty tends to produce better learning results than the other two strategies. Furthermore, we evaluate the use of the learner's uncertainty for self-directed learning and find that effects similar to the Dunning-Kruger effect prevent this use case. The learning task in this study is document image binarization, i.e., the separation of text foreground from page background. The source domain of the transfer is texts written on paper in Latin characters, while the target domain is texts written on palm leaves in Balinese script.
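
    The uncertainty component of the third strategy can be sketched generically as active-learning-style sample selection: rank unlabelled patches by the entropy of the model's predicted foreground probabilities and pass the most uncertain ones to the user. The sketch below assumes a hypothetical model.predict_proba method that returns a per-pixel probability map; it is not the authors' exact selection procedure.

    import numpy as np

    def prediction_entropy(probs, eps=1e-12):
        """Mean per-pixel binary entropy of predicted foreground probabilities in [0, 1]."""
        p = np.clip(probs, eps, 1.0 - eps)
        return float(np.mean(-p * np.log2(p) - (1.0 - p) * np.log2(1.0 - p)))

    def select_uncertain_samples(model, candidates, n_select=5):
        """Rank candidate image patches by predictive entropy and return the most uncertain ones.
        Assumes (hypothetically) that model.predict_proba(patch) yields a per-pixel probability map."""
        scored = sorted(candidates,
                        key=lambda patch: prediction_entropy(model.predict_proba(patch)),
                        reverse=True)
        return scored[:n_select]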

  • 6.
    Westphal, Florian
    et al.
    Blekinge Institute of Technology, Karlskrona, Sweden.
    Lavesson, Niklas
    Jönköping University, School of Engineering, JTH, Computer Science and Informatics, JTH, Jönköping AI Lab (JAIL). Blekinge Institute of Technology, Karlskrona, Sweden.
    Grahn, Håkan
    Blekinge Institute of Technology, Karlskrona, Sweden.
    A case for guided machine learning (2019). In: Machine learning and knowledge extraction: Third IFIP TC 5, TC 12, WG 8.4, WG 8.9, WG 12.9 International Cross-Domain Conference, CD-MAKE 2019, Canterbury, UK, August 26–29, 2019, Proceedings / [ed] A. Holzinger, P. Kieseberg, A. M. Tjoa & E. Weippl, Cham: Springer, 2019, p. 353-361. Conference paper (Refereed).
    Abstract [en]

    Involving humans in the learning process of a machine learning algorithm can have many advantages, ranging from establishing trust in a particular model, to added personalization capabilities, to reduced labeling effort. While these approaches are commonly summarized under the term interactive machine learning (iML), no unambiguous definition of iML exists that clearly delimits this area of research. In this position paper, we discuss the shortcomings of current definitions of iML and propose and define the term guided machine learning (gML) as an alternative.

  • 7.
    Westphal, Florian
    et al.
    Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik.
    Lavesson, Niklas
    Jönköping University, School of Engineering, JTH, Computer Science and Informatics, JTH, Jönköping AI Lab (JAIL). Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik.
    Grahn, Håkan
    Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik.
    Document Image Binarization Using Recurrent Neural Networks (2018). In: Proceedings - 13th IAPR International Workshop on Document Analysis Systems, DAS 2018, 2018, p. 263-268. Conference paper (Refereed).
    Abstract [en]

    In the context of document image analysis, image binarization is an important preprocessing step for other document analysis algorithms, but it is also relevant on its own, as it improves the readability of images of historical documents. While historical document image binarization is challenging due to common image degradations, such as bleed-through, faded ink, or stains, achieving good binarization performance in a timely manner is a worthwhile goal, as it facilitates efficient information extraction from historical documents. In this paper, we propose a recurrent neural network based algorithm using Grid Long Short-Term Memory cells for image binarization, as well as a pseudo F-Measure based weighted loss function. We evaluate the binarization and execution performance of our algorithm for different choices of footprint size, scale factor, and loss function. Our experiments show a significant trade-off between binarization time and quality for different footprint sizes. However, we see no statistically significant difference when using different scale factors and only limited differences for different loss functions. Lastly, we compare the binarization performance of our approach with the best-performing algorithm in the 2016 handwritten document image binarization contest and show that both algorithms perform equally well.
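
    In spirit, a pseudo F-Measure based weighted loss emphasises pixels that matter most for readability. The sketch below shows a simpler stand-in: a per-pixel weighted binary cross-entropy with an illustrative weight map that decays with distance from the nearest text stroke. It is not the loss function defined in the paper, and the Grid LSTM network itself is omitted.

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def recall_style_weights(target, decay=0.5):
        """Illustrative weight map: 1 on foreground strokes, decaying with distance from them.
        target is a binary mask with foreground marked as 1."""
        dist = distance_transform_edt(target == 0)      # distance of each background pixel to the nearest stroke
        return np.where(target == 1, 1.0, np.exp(-decay * dist))

    def weighted_binary_cross_entropy(pred, target, weights, eps=1e-7):
        """Per-pixel weighted BCE.
        pred    : predicted foreground probabilities in (0, 1), shape (H, W)
        target  : ground-truth binary mask, shape (H, W)
        weights : per-pixel weights, e.g. from recall_style_weights, shape (H, W)"""
        p = np.clip(pred, eps, 1.0 - eps)
        loss = -(target * np.log(p) + (1.0 - target) * np.log(1.0 - p))
        return float(np.sum(weights * loss) / np.sum(weights))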
