Mid Sweden University

Krug, Silvia
Publications (10 of 27)
Schneider, S., Goetze, M., Krug, S. & Hutschenreuther, T. (2024). A Retrofit Streetlamp Monitoring Solution Using LoRaWAN Communications. Eng, 5(1), 513-531
2024 (English). In: Eng, ISSN 2673-4117, Vol. 5, no 1, p. 513-531. Article in journal (Refereed), Published.
Abstract [en]

Ubiquitous street lighting is essential for urban areas. While LED-based “smart lamps” are commercially available nowadays, municipalities can only switch to them in the long run due to financial constraints. Older types of lamps in particular require frequent bulb replacements to maintain the lighting infrastructure’s function. To speed up the detection of defects and enable better planning, a non-invasively retrofittable IoT sensor solution is proposed that monitors lamps for defects via visible light sensors, communicates measurement data wirelessly to a central location via LoRaWAN, and processes and visualizes the resulting information centrally. The sensor nodes are capable of automatically adjusting to shifting day- and nighttimes thanks to a second sensor monitoring ambient light. The work specifically addresses aspects of energy efficiency essential to the battery-powered operation of the sensor nodes. Besides design considerations and implementation details, the paper also summarizes the experimental validation of the system by way of an extensive field trial and expounds upon further experiences from it.
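As an illustration of the detection logic described in this abstract (not the authors' implementation), the following Python sketch shows how a node might combine an ambient-light sensor and a lamp-facing light sensor to decide whether it is night and whether the lamp appears defective before reporting over LoRaWAN. The thresholds, payload format, and function names are assumptions for illustration only.

```python
# Illustrative sketch only: thresholds, payload layout and sensor access
# are assumptions, not details taken from the paper.

AMBIENT_DARK_LUX = 10.0   # assumed threshold: below this the node treats it as night
LAMP_ON_LUX = 50.0        # assumed threshold: lamp-facing reading expected when lit

def classify_lamp(ambient_lux: float, lamp_lux: float) -> str:
    """Return a coarse lamp state derived from the two light sensor readings."""
    if ambient_lux >= AMBIENT_DARK_LUX:
        return "daytime"            # lamp is expected to be off, nothing to report
    if lamp_lux >= LAMP_ON_LUX:
        return "ok"                 # night and the lamp is lit
    return "suspected_defect"       # night but no light measured from the lamp

def build_uplink(node_id: int, state: str) -> bytes:
    """Pack a minimal two-byte LoRaWAN payload (format is an assumption)."""
    state_code = {"daytime": 0, "ok": 1, "suspected_defect": 2}[state]
    return bytes([node_id & 0xFF, state_code])

if __name__ == "__main__":
    readings = [(120.0, 0.0), (2.0, 300.0), (2.0, 1.0)]
    for ambient, lamp in readings:
        state = classify_lamp(ambient, lamp)
        print(state, build_uplink(7, state).hex())
```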

Place, publisher, year, edition, pages
MDPI AG, 2024
Keywords
energy efficient nodes, IoT, lamp monitoring, LoRaWAN, retrofit solution, smart city
National Category
Computer Engineering
Identifiers
urn:nbn:se:miun:diva-50996 (URN), 10.3390/eng5010028 (DOI), 001215798300001, 2-s2.0-85188818186 (Scopus ID)
Available from: 2024-04-03 Created: 2024-04-03 Last updated: 2024-05-17
Krug, S. & Hutschenreuther, T. (2024). Enhancing Apple Cultivar Classification Using Multiview Images. Journal of Imaging, 10(4), Article ID 94.
2024 (English). In: Journal of Imaging, ISSN 2313-433X, Vol. 10, no 4, article id 94. Article in journal (Refereed), Published.
Abstract [en]

Apple cultivar classification is challenging due to the inter-class similarity and high intra-class variations. Human experts do not rely on single-view features but rather study each viewpoint of the apple to identify a cultivar, paying close attention to various details. In this paper, we try to establish a similar multiview approach for machine-learning (ML)-based apple classification. In our previous work, we studied apple classification using a single view. While these results were promising, it also became clear that one view alone might not contain enough information in the case of many classes or cultivars. Therefore, exploring multiview classification for this task is the next logical step. Multiview classification is not new, and we use state-of-the-art approaches as a base. Our goal is to find the best approach for the specific apple classification task and to study what is achievable with the given methods towards our future goal of applying this on a mobile device without the need for internet connectivity. In this study, we compare an ensemble model with two cases that use a single network: one without view specialization, trained on all available images without view assignment, and one where we combine the separate views into a single image of one specific instance. The two latter options rely on dataset organization and preprocessing to allow the use of models that are smaller than an ensemble model in terms of stored weights and number of operations. We compare the different approaches based on our custom apple cultivar dataset. The results show that the state-of-the-art ensemble provides the best result. However, using images with combined views decreases accuracy by only 3% while requiring just 60% of the memory for weights. Thus, simpler approaches with enhanced preprocessing can open a trade-off for classification tasks on mobile devices.
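To make the "combined views" preprocessing option concrete, here is a minimal NumPy sketch that tiles several single-view images of one apple into one composite image that a single network can consume. The 2x2 layout, the fixed number of four views, and the image resolution are assumptions for illustration, not the paper's exact preprocessing.

```python
# Minimal sketch of the "combined views" idea: tile per-view images of one
# fruit into a single input image so one CNN sees all views at once.
# The 2x2 layout and 224x224 resolution are illustrative assumptions.
import numpy as np

def combine_views(views: list[np.ndarray]) -> np.ndarray:
    """Tile four HxWx3 view images into one 2Hx2Wx3 composite image."""
    assert len(views) == 4, "this sketch assumes exactly four views per apple"
    top = np.concatenate(views[:2], axis=1)
    bottom = np.concatenate(views[2:], axis=1)
    return np.concatenate([top, bottom], axis=0)

if __name__ == "__main__":
    views = [np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8) for _ in range(4)]
    composite = combine_views(views)
    print(composite.shape)  # (448, 448, 3): fed to one classifier instead of an ensemble
```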

Place, publisher, year, edition, pages
MDPI, 2024
Keywords
apple cultivar recognition, deep learning, multiview classification
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:miun:diva-51295 (URN), 10.3390/jimaging10040094 (DOI), 001210573100001, 2-s2.0-85191504409 (Scopus ID)
Available from: 2024-05-08 Created: 2024-05-08 Last updated: 2024-05-17
Krug, S. & Hutschenreuther, T. (2023). A Case Study toward Apple Cultivar Classification Using Deep Learning. AGRIENGINEERING, 5(2), 814-828
2023 (English). In: AGRIENGINEERING, ISSN 2624-7402, Vol. 5, no 2, p. 814-828. Article in journal (Refereed), Published.
Abstract [en]

Machine Learning (ML) has enabled many image-based object detection and recognition solutions in various fields and is currently the state-of-the-art method for these tasks. Therefore, it is of interest to apply this technique to different questions. In this paper, we explore whether it is possible to classify apple cultivars based on fruits using ML methods and images of the apple in question. The goal is to develop a tool that can classify the cultivar from images and that could be used in the field. This helps to draw attention to the variety and diversity in fruit growing and to contribute to its preservation. Classifying apple cultivars is a challenge in itself, as all apples look similar, while the variation within one class can be high. At the same time, there are potentially thousands of cultivars, meaning that the task becomes more challenging as more cultivars are added to the dataset. Therefore, the first question is whether an ML approach can extract enough information to correctly classify the apples. In this paper, we focus on the technical requirements and prerequisites to verify whether ML approaches are able to fulfill this task with a limited number of cultivars as a proof of concept. We apply transfer learning to popular image-processing convolutional neural networks (CNNs) by retraining them on a custom apple dataset. Afterward, we analyze the classification results as well as possible problems. Our results show that apple cultivars can be classified correctly, but the system design requires some extra considerations.
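The transfer-learning step described here can be sketched with PyTorch/torchvision as follows. The ResNet-18 backbone, the frozen feature extractor, and the class count are assumptions for illustration rather than the exact models and training setup used in the paper.

```python
# Sketch of transfer learning for cultivar classification: reuse an
# ImageNet-pretrained CNN and retrain only a new classification head.
# Backbone choice, freezing strategy and num_classes are assumptions.
import torch.nn as nn
from torchvision import models

def build_cultivar_classifier(num_classes: int = 10) -> nn.Module:
    model = models.resnet18(weights="IMAGENET1K_V1")  # pretrained backbone
    for param in model.parameters():
        param.requires_grad = False                   # freeze the feature extractor
    # Replace the final fully connected layer with a fresh head for our classes.
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

model = build_cultivar_classifier(num_classes=10)
trainable = [p for p in model.parameters() if p.requires_grad]
print(sum(p.numel() for p in trainable), "trainable parameters")
```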

Place, publisher, year, edition, pages
MDPI, 2023
Keywords
apple cultivar recognition, deep learning, challenges
National Category
Agricultural Science, Forestry and Fisheries; Computer and Information Sciences
Identifiers
urn:nbn:se:miun:diva-48921 (URN), 10.3390/agriengineering5020050 (DOI), 001013854600001, 2-s2.0-85163578973 (Scopus ID)
Available from: 2023-07-06 Created: 2023-07-06 Last updated: 2023-08-14. Bibliographically approved
Krug, S., Goetze, M., Schneider, S. & Hutschenreuther, T. (2023). A Modular Platform to Build Task-Specific IoT Network Solutions for Agriculture and Forestry. In: 2023 IEEE International Workshop on Metrology for Agriculture and Forestry (MetroAgriFor): . Paper presented at 2023 IEEE International Workshop on Metrology for Agriculture and Forestry, MetroAgriFor 2023 - Proceedings (pp. 820-825). IEEE conference proceedings
2023 (English). In: 2023 IEEE International Workshop on Metrology for Agriculture and Forestry (MetroAgriFor), IEEE conference proceedings, 2023, p. 820-825. Conference paper, Published paper (Refereed).
Abstract [en]

The Internet of Things (IoT) enables various applications by providing means to capture environmental effects and possibly control some part of the environment. This is also true for different Smart Farming applications, and as a result, various systems are available. However, when designing a measurement campaign for a certain task, it becomes obvious that current products lack the flexibility to build the best solution with a single system. This concerns the available communication options but also the ability to add further sensors for specific tasks. In this paper, we present a modular platform for IoT nodes that can be configured as needed for different applications. We first present our concept and then highlight the ability to use different communication and power options and to integrate further sensors via adaptation modules. In addition, we show two example use cases that, based on the state of the art, would require multiple parallel systems.
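To give a feel for the "configure per task" idea, here is a hedged Python sketch that describes a node build as a composition of interchangeable modules. All module names, option strings, and the two example builds are invented for illustration; they are not the platform's actual hardware or firmware interface.

```python
# Hedged sketch of the modularity idea: a node is assembled from a base board
# plus interchangeable communication, power and sensor adaptation modules.
# All module names and example builds below are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class NodeBuild:
    task: str
    communication: str                       # e.g. "LoRaWAN", "NB-IoT", "BLE"
    power: str                               # e.g. "battery", "solar+battery"
    sensor_adapters: list[str] = field(default_factory=list)

    def describe(self) -> str:
        sensors = ", ".join(self.sensor_adapters) or "none"
        return (f"{self.task}: {self.communication} uplink, "
                f"{self.power} supply, sensors: {sensors}")

# Two hypothetical use cases that would otherwise need separate systems.
vineyard = NodeBuild("vineyard microclimate", "NB-IoT", "battery",
                     ["air temperature/humidity", "leaf wetness"])
forest = NodeBuild("forest soil monitoring", "LoRaWAN", "solar+battery",
                   ["soil moisture", "soil temperature"])

for node in (vineyard, forest):
    print(node.describe())
```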

Place, publisher, year, edition, pages
IEEE conference proceedings, 2023
Keywords
Internet of Things, Modular Platform, Smart Agriculture, Task-Specific Measurement Setup
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:miun:diva-50865 (URN), 10.1109/MetroAgriFor58484.2023.10424104 (DOI), 001174612900152, 2-s2.0-85186511869 (Scopus ID), 9798350312720 (ISBN)
Conference
2023 IEEE International Workshop on Metrology for Agriculture and Forestry, MetroAgriFor 2023 - Proceedings
Available from: 2024-03-13 Created: 2024-03-13 Last updated: 2024-05-17. Bibliographically approved
Saqib, E., Sánchez Leal, I., Shallari, I., Jantsch, A., Krug, S. & O'Nils, M. (2023). Optimizing the IoT Performance: A Case Study on Pruning a Distributed CNN. In: 2023 IEEE Sensors Applications Symposium (SAS): . Paper presented at 2023 IEEE Sensors Applications Symposium, SAS 2023.
2023 (English). In: 2023 IEEE Sensors Applications Symposium (SAS), 2023. Conference paper, Published paper (Refereed).
Abstract [en]

Implementing Convolutional Neural Network (CNN)-based computer vision algorithms in Internet of Things (IoT) sensor nodes can be difficult due to strict computational, memory, and latency constraints. To address these challenges, researchers have utilized techniques such as quantization, pruning, and model partitioning. Partitioning the CNN reduces the computational burden on an individual node, but the overall system computational load remains constant, and communication energy is additionally incurred. To understand the effect of partitioning and pruning on energy and latency, we conducted a case study using a feet detection application realized with Tiny Yolo-v3 on a 12th Gen Intel CPU with an NVIDIA GeForce RTX 3090 GPU. After partitioning the CNN between the sequential layers, we apply quantization, pruning, and compression and study the effects on energy and latency. We analyze the extent to which computational tasks, data, and latency can be reduced while maintaining a high level of accuracy. After achieving this reduction, we offload the remaining partition of the model to the edge node. We found that over 90% computation reduction and over 99% data transmission reduction are possible while maintaining mean average precision above 95%. This results in up to 17x energy savings and up to 5.2x performance speed-up.
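The "partitioning between sequential layers" idea can be illustrated with a toy PyTorch model: the first few layers run on the sensor node, the intermediate feature map is the data that must be transmitted, and the rest runs on the edge. The network architecture and the chosen split point below are assumptions for illustration, not Tiny Yolo-v3 itself.

```python
# Toy sketch of partitioning a sequential CNN between an IoT node and an
# edge device. The layer stack and the split index are illustrative assumptions.
import torch
import torch.nn as nn

layers = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(32 * 56 * 56, 10),
)

split = 3                     # assumed partition point after the first pooling stage
node_part = layers[:split]    # executes on the sensor node
edge_part = layers[split:]    # executes on the edge device

x = torch.randn(1, 3, 224, 224)
feature_map = node_part(x)    # this tensor would have to be transmitted
print("intermediate tensor:", tuple(feature_map.shape),
      feature_map.numel() * feature_map.element_size(), "bytes")
logits = edge_part(feature_map)
print("edge output:", tuple(logits.shape))
```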

Keywords
CNN, IoT, Partitioning, Pruning, Quantization, Tiny YOLO-v3
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:miun:diva-49648 (URN), 10.1109/SAS58821.2023.10254054 (DOI), 2-s2.0-85174060733 (Scopus ID), 9798350323078 (ISBN)
Conference
2023 IEEE Sensors Applications Symposium, SAS 2023
Available from: 2023-10-24 Created: 2023-10-24 Last updated: 2023-10-24. Bibliographically approved
Pandey, R., Uziel, S., Hutschenreuther, T. & Krug, S. (2023). Towards Deploying DNN Models on Edge for Predictive Maintenance Applications. Electronics, 12(3), Article ID 639.
2023 (English). In: Electronics, E-ISSN 2079-9292, Vol. 12, no 3, article id 639. Article in journal (Refereed), Published.
Abstract [en]

Almost all rotating machinery in industry has bearings as a key building block, and most of these machines run 24 × 7. This makes bearing health prediction an active research area for predictive maintenance solutions. Many state-of-the-art Deep Neural Network (DNN) models have been proposed to solve this. However, most of these high-performance models are computationally expensive and have high memory requirements. This limits their use to very specific industrial applications with powerful hardware deployed close to the machinery. In order to bring DNN-based solutions into potential use in industry, we need to deploy these models on Microcontroller Units (MCUs), which are cost effective and energy efficient. However, this step is typically neglected in the literature as it poses new challenges. The primary concern when running DNN inference on MCUs is the on-chip memory of the MCU, which has to fit the model, the data, and the additional code to run the system. Almost all state-of-the-art models fail this litmus test since they feature too many parameters. In this paper, we show the challenges related to the deployment, review possible solutions, and evaluate one of them, showing how the deployment can be realized and which steps are needed. The focus is on the steps required for the actual deployment rather than on finding the optimal solution. This paper is among the first to show the deployment on MCUs for a predictive maintenance use case. We first analyze the gap between state-of-the-art benchmark DNN models for bearing defect classification and the memory constraints of two MCU variants. Additionally, we review options to reduce the model size, such as pruning and quantization. Afterwards, we evaluate a solution to deploy the DNN models by pruning them in order to fit them into microcontrollers. Our results show that most models under test can be reduced to fit MCU memory with a maximum loss of (Formula presented.) in average accuracy of the pruned models in comparison to the original models. Based on the results, we also discuss which methods are promising and which combination of model and feature works best for the given classification problem.
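As a sketch of the size-reduction step discussed above (pruning so that a model can fit MCU flash), the snippet below applies magnitude-based pruning to a small PyTorch model and estimates the remaining weight memory. The model, the sparsity level, and the one-byte-per-weight storage figure are illustrative assumptions, not the paper's models or results.

```python
# Sketch of magnitude-based pruning to shrink a model towards MCU memory limits.
# The model, pruning amount and bytes-per-weight figure are assumptions.
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 10))

for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.8)  # zero 80% smallest weights
        prune.remove(module, "weight")                            # make the pruning permanent

nonzero = sum(int((p != 0).sum()) for p in model.parameters())
total = sum(p.numel() for p in model.parameters())
# Assume 1 byte per stored weight after 8-bit quantization (assumption).
print(f"{nonzero}/{total} weights remain -> roughly {nonzero} bytes if stored as int8")
```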

Keywords
artificial intelligence (AI), edge AI, embedded AI, model deployment, predictive maintenance, pruning
National Category
Computer Systems
Identifiers
urn:nbn:se:miun:diva-47621 (URN), 10.3390/electronics12030639 (DOI), 000931965200001, 2-s2.0-85147885344 (Scopus ID)
Available from: 2023-02-21 Created: 2023-02-21 Last updated: 2023-03-16. Bibliographically approved
Sánchez Leal, I., Saqib, E., Shallari, I., Jantsch, A., Krug, S. & O'Nils, M. (2023). Waist Tightening of CNNs: A Case study on Tiny YOLOv3 for Distributed IoT Implementations. In: ACM International Conference Proceeding Series: . Paper presented at 2023 Cyber-Physical Systems and Internet-of-Things Week, CPS-IoT Week 2023, 9 May 2023 through 12 May 2023 (pp. 241-246). Association for Computing Machinery (ACM)
2023 (English). In: ACM International Conference Proceeding Series, Association for Computing Machinery (ACM), 2023, p. 241-246. Conference paper, Published paper (Refereed).
Abstract [en]

Computer vision systems in sensor nodes of the Internet of Things (IoT) based on Deep Learning (DL) are demanding because the DL models are memory- and computation-hungry, while the nodes often come with tight constraints on energy, latency, and memory. Consequently, work has been done to reduce the model size or to distribute part of the work to other nodes. However, the question then arises of how these approaches impact the energy consumption at the node and the inference time of the system. In this work, we perform a case study to explore the impact of partitioning a Convolutional Neural Network (CNN) such that one part is implemented on the IoT node, while the rest is implemented on an edge device. The goal is to explore how the choice of partition point, quantization method, and communication technology affects the IoT system. We identify possible partitioning points between layers, where we transform the feature maps passed between layers by applying quantization and compression to reduce the data sent over the communication channel between the two partitions in Tiny YOLOv3. The results show that a reduction of transmitted data by 99.8% reduces the network accuracy by 3 percentage points. Furthermore, the evaluation of various IoT communication protocols shows that the quantization of data facilitates CNN partitioning with a significant reduction in overall latency and node energy consumption.
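A minimal sketch of the "waist tightening" idea, i.e. shrinking the feature map passed between the two partitions: the 8-bit uniform quantizer and the use of zlib for lossless compression below are assumptions standing in for the paper's exact transformation pipeline.

```python
# Sketch: shrink the intermediate feature map sent from node to edge by
# 8-bit uniform quantization plus lossless compression. The quantizer and
# the use of zlib are illustrative assumptions, not the paper's pipeline.
import zlib
import numpy as np

feature_map = np.random.randn(1, 32, 52, 52).astype(np.float32)  # assumed shape

# 8-bit uniform quantization over the tensor's value range
lo, hi = feature_map.min(), feature_map.max()
scale = (hi - lo) / 255.0
quantized = np.round((feature_map - lo) / scale).astype(np.uint8)

payload = zlib.compress(quantized.tobytes())
print("float32 bytes:", feature_map.nbytes)
print("quantized+compressed bytes:", len(payload))

# Edge side: decompress and dequantize before feeding the second partition.
restored = np.frombuffer(zlib.decompress(payload), dtype=np.uint8).reshape(feature_map.shape)
dequantized = restored.astype(np.float32) * scale + lo
print("max reconstruction error:", float(np.abs(dequantized - feature_map).max()))
```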

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2023
Keywords
CNN partitioning, convolutional neural networks, intelligence partitioning, Internet of Things, smart camera
National Category
Communication Systems
Identifiers
urn:nbn:se:miun:diva-48419 (URN), 10.1145/3576914.3587518 (DOI), 001054880600042, 2-s2.0-85159789406 (Scopus ID), 9798400700491 (ISBN)
Conference
2023 Cyber-Physical Systems and Internet-of-Things Week, CPS-IoT Week 2023, 9 May 2023 through 12 May 2023
Available from: 2023-06-07 Created: 2023-06-07 Last updated: 2023-10-13. Bibliographically approved
Pandey, R., Uziel, S., Hutschenreuther, T. & Krug, S. (2023). Weighted Pruning with Filter Search to Deploy DNN Models on Microcontrollers. In: 2023 IEEE 12th International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS): . Paper presented at 2023 IEEE 12th International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS) (pp. 1077-1082). IEEE conference proceedings
2023 (English). In: 2023 IEEE 12th International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS), IEEE conference proceedings, 2023, p. 1077-1082. Conference paper, Published paper (Refereed).
Abstract [en]

Predictive Maintenance (PdM) helps to determine the condition of in-service industrial equipment and components and enables their timely replacement. This can be achieved by Artificial Intelligence (AI)-enabled information systems. AI has been used extensively in addressing condition monitoring problems. Most existing Deep Neural Network (DNN) models capable of solving PdM problems have a large memory footprint and are functional only on remote machines using cloud-based infrastructure. In order to run inference close to the process, they need to run on memory-constrained devices like microcontrollers (MCUs). In this work, we propose a weighted pruning algorithm to reduce the number of trainable parameters in the DNN model for bearing fault classification and enable its execution on the MCU. In addition to the pruning, we reduce the trainable model parameters by performing an extensive filter size search. The model size is reduced without compromising the performance of the pruned models by using the magnitude-based method. In the case of AlexNet, LeNet, and the autoencoder, we could reduce the model size by up to 89%, 39%, and 54%, respectively, with the new approach in comparison to the magnitude-based state-of-the-art approach.
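To illustrate the magnitude-based filter pruning that the proposed weighted approach builds on, this sketch ranks convolution filters by their L1 norm and zeroes the weakest ones using PyTorch's structured pruning utility. The layer and the 50% pruning ratio are assumptions, and the paper's weighting scheme and filter-size search are not reproduced here.

```python
# Sketch of structured, magnitude-based filter pruning: zero out entire
# convolution filters with the smallest L1 norm. The layer and the 50%
# pruning ratio are assumptions; the weighted scheme from the paper is not shown.
import torch.nn as nn
import torch.nn.utils.prune as prune

conv = nn.Conv2d(in_channels=16, out_channels=32, kernel_size=3)

# n=1 -> L1 norm, dim=0 -> prune whole output filters
prune.ln_structured(conv, name="weight", amount=0.5, n=1, dim=0)
prune.remove(conv, "weight")

kept = int((conv.weight.abs().sum(dim=(1, 2, 3)) > 0).sum())
print(f"{kept}/{conv.out_channels} filters remain after pruning")
```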

Place, publisher, year, edition, pages
IEEE conference proceedings, 2023
Keywords
DNN, Embedded Systems, Intelligent Systems, Predictive Maintenance, Pruning
National Category
Computer Systems
Identifiers
urn:nbn:se:miun:diva-50595 (URN), 10.1109/IDAACS58523.2023.10348867 (DOI), 2-s2.0-85184795811 (Scopus ID), 9798350358056 (ISBN)
Conference
2023 IEEE 12th International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS)
Available from: 2024-02-20 Created: 2024-02-20 Last updated: 2024-02-20. Bibliographically approved
Onus, U., Uziel, S., Hutschenreuther, T. & Krug, S. (2022). Trade-off between Spectral Feature Extractors for Machine Health Prognostics on Microcontrollers. In: CIVEMSA 2022 - IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications, Proceedings: . Paper presented at 10th IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications, CIVEMSA 2022, 15 June 2022 through 17 June 2022. IEEE
2022 (English). In: CIVEMSA 2022 - IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications, Proceedings, IEEE, 2022. Conference paper, Published paper (Refereed).
Abstract [en]

Machine learning methods have shown a high impact on machine health prognostics solutions. However, most studies stop after building a model on a server or PC, without deploying it to embedded systems close to the machinery. Bringing machine learning models to small embedded systems with a small energy budget requires adapted models and raw time series data processing to handle resource constraints while maintaining high model performance. Feature extraction plays a crucial role in this process. One of the most common features for machinery data is spectral information, which is extracted via digital filters. Calculating spectral features on microcontrollers has a great impact on the computational requirements of the overall estimation. In this paper, we analyze mel-spectrogram- and infinite impulse response (IIR)-based spectral feature extractors regarding their estimation performance and computational requirements. The goal is to evaluate possible trade-offs when selecting one feature extractor over the other. To achieve this, we study the cost of both methods theoretically and via run-time measurements after analyzing the feature design space to ensure good model performance. Our results show that, by selecting a filter appropriate to the problem, its feature space dimensionality and, consequently, its computational load can be reduced.
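The two spectral feature extractors being compared can be sketched as follows. The sampling rate, filter order, band edges, and frame parameters below are assumptions chosen only to show how the two feature spaces differ in dimensionality; they are not the configurations evaluated in the paper.

```python
# Sketch of the two feature extractor options: a log-mel spectrogram versus
# band energies from a small IIR (Butterworth) filter bank. Sampling rate,
# band edges, filter order and frame parameters are illustrative assumptions.
import numpy as np
import librosa
from scipy import signal

sr = 12_000                                   # assumed vibration sampling rate
x = np.random.randn(sr)                       # stand-in for one second of sensor data

# Option 1: log-mel spectrogram features
mel = librosa.feature.melspectrogram(y=x, sr=sr, n_fft=1024, hop_length=512, n_mels=32)
log_mel = librosa.power_to_db(mel)
print("log-mel feature shape:", log_mel.shape)            # (n_mels, frames)

# Option 2: band energies from an IIR band-pass filter bank
bands = [(50, 500), (500, 1500), (1500, 3000), (3000, 5500)]   # assumed bands in Hz
energies = []
for low, high in bands:
    sos = signal.butter(4, [low, high], btype="bandpass", fs=sr, output="sos")
    energies.append(float(np.mean(signal.sosfilt(sos, x) ** 2)))
print("IIR band energies:", np.round(energies, 4))        # one value per band per window
```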

Place, publisher, year, edition, pages
IEEE, 2022
Keywords
embedded system, filter-banks, logarithmic mel-spectrogram, machine condition estimation, machine learning, microcontroller
National Category
Computer Sciences
Identifiers
urn:nbn:se:miun:diva-46168 (URN), 10.1109/CIVEMSA53371.2022.9853642 (DOI), 000859912400002, 2-s2.0-85137787617 (Scopus ID), 9781665434454 (ISBN)
Conference
10th IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications, CIVEMSA 2022, 15 June 2022 through 17 June 2022
Available from: 2022-09-27 Created: 2022-09-27 Last updated: 2022-10-20. Bibliographically approved
Krug, S., Miethe, S. & Hutschenreuther, T. (2021). Comparing BLE and NB-IoT as communication options for smart viticulture IoT applications. In: 2021 IEEE Sensors Applications Symposium (SAS): . Paper presented at 2021 IEEE Sensors Applications Symposium, SAS 2021, 23 August 2021 through 25 August 2021.
2021 (English). In: 2021 IEEE Sensors Applications Symposium (SAS), 2021. Conference paper, Published paper (Refereed).
Abstract [en]

Choosing the appropriate communication technology for outdoor applications has been a challenge over the years and has led to many different options. This makes it difficult for designers and users to choose the best option for their setup, as each option has unique pros and cons. In this paper, we evaluate and compare Narrow Band Internet of Things (NB-IoT) and Bluetooth Low Energy (BLE) regarding their applicability to a smart viticulture scenario. We study how node density and system energy consumption vary across various configurations and are thus able to highlight challenges in deployments as well as trade-offs between the technologies.
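The kind of analytical energy comparison described in this abstract can be sketched as a simple per-message model, scaled to one day of reporting. All voltage, current, timing, and reporting-interval values below are placeholder assumptions, not the measured figures or the model from the paper.

```python
# Toy analytical energy model for comparing radio technologies:
# E_per_report = V * (I_tx * t_tx + I_idle * t_overhead), scaled to one day.
# All electrical and timing parameters are placeholder assumptions.
from dataclasses import dataclass

@dataclass
class RadioProfile:
    name: str
    voltage_v: float
    tx_current_a: float
    tx_time_s: float        # airtime per report
    overhead_time_s: float  # connection setup / synchronization per report
    idle_current_a: float

    def energy_per_report_j(self) -> float:
        return self.voltage_v * (self.tx_current_a * self.tx_time_s
                                 + self.idle_current_a * self.overhead_time_s)

# Placeholder profiles (assumptions, not measurements from the paper)
ble = RadioProfile("BLE", 3.0, 0.010, 0.005, 0.10, 0.002)
nb_iot = RadioProfile("NB-IoT", 3.6, 0.200, 0.50, 5.0, 0.005)

reports_per_day = 96  # e.g. one measurement every 15 minutes (assumption)
for radio in (ble, nb_iot):
    daily_j = radio.energy_per_report_j() * reports_per_day
    print(f"{radio.name}: {radio.energy_per_report_j() * 1000:.2f} mJ/report, "
          f"{daily_j:.2f} J/day")
```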

Keywords
Analytical evaluation, BLE, Energy consumption, IoT, NB-IoT, Smart agriculture
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:miun:diva-43353 (URN), 10.1109/SAS51076.2021.9530069 (DOI), 000755460900021, 2-s2.0-85116138563 (Scopus ID), 978-1-7281-9431-8 (ISBN)
Conference
2021 IEEE Sensors Applications Symposium, SAS 2021, 23 August 2021 through 25 August 2021
Available from: 2021-10-12 Created: 2021-10-12 Last updated: 2022-03-03. Bibliographically approved