Mid Sweden University

1 - 27 of 27
  • 1.
    Krug, Silvia
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    MatLab Functions for Cost Estimation of IoT Data Transfers, 2018. Data set
    Abstract [en]

    This zip folder contains the MatLab code enabling analytical cost estimation of various communication technologies suitable for the IoT.

  • 2.
    Krug, Silvia
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Bader, Sebastian
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Oelmann, Bengt
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Suitability of Communication Technologies for Harvester-Powered IoT-Nodes, 2019. In: IEEE International Workshop on Factory Communication Systems - Proceedings, WFCS, Institute of Electrical and Electronics Engineers (IEEE), 2019, article id 8758042. Conference paper (Refereed)
    Abstract [en]

    The Internet of Things introduces Internet connectivity to things and objects in the physical world and thus enables them to communicate with other nodes via the Internet directly. This enables new applications, for example seamless process monitoring and control in industrial environments. One core requirement is that the nodes involved in the network have a long system lifetime, despite limited access to the power grid and potentially difficult propagation conditions. Energy harvesting can provide the required energy for this long lifetime if the node is able to send its data within the available energy budget. In this paper, we therefore analyze and evaluate which common IoT communication technologies are suitable for nodes powered by energy harvesters. The comparison covers three constraints derived from different energy sources and harvesting technologies, as well as several communication technologies. Beyond identifying feasible technologies in general, we evaluate the impact of duty-cycling and different data sizes. The results in this paper give a road map for combining energy harvesting technology with IoT communication technology to design industrial sensor nodes.
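The suitability question the abstract describes reduces to an energy-budget check per reporting interval. The sketch below illustrates that idea with hypothetical parameters (the data rates, transmit powers, and overhead bytes are invented for illustration, not taken from the paper's models):

```python
# Hypothetical suitability check for harvester-powered IoT nodes:
# a technology is feasible if the energy harvested per reporting
# interval covers one duty-cycled transmission plus sleep power.

def tx_energy_j(payload_bytes, data_rate_bps, tx_power_w, overhead_bytes=0):
    """Energy for one transmission: airtime of payload plus protocol overhead."""
    airtime_s = (payload_bytes + overhead_bytes) * 8 / data_rate_bps
    return airtime_s * tx_power_w

def is_feasible(harvest_power_w, interval_s, e_tx_j, sleep_power_w=0.0):
    """Harvested energy per interval must cover one transfer plus sleeping."""
    budget_j = harvest_power_w * interval_s
    return budget_j >= e_tx_j + sleep_power_w * interval_s

# Illustrative comparison: a slow low-power link vs a faster, short-airtime radio.
e_slow = tx_energy_j(50, 5_000, 0.1, overhead_bytes=13)       # ~100 mW at 5 kbit/s
e_fast = tx_energy_j(50, 1_000_000, 0.03, overhead_bytes=40)  # ~30 mW at 1 Mbit/s
```

With made-up numbers like these, the faster radio wins per transfer despite larger overhead, because airtime dominates; the papers' actual conclusions depend on measured technology parameters.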

  • 3.
    Krug, Silvia
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Computer and Electrical Engineering (2023-). IMMS Institut für Mikroelektronik-und Mechatronik-Systeme Gemeinnützige GmbH (IMMS GmbH), Ilmenau, Germany.
    Goetze, Marco
    Schneider, Soren
    Hutschenreuther, Tino
    A Modular Platform to Build Task-Specific IoT Network Solutions for Agriculture and Forestry, 2023. In: 2023 IEEE International Workshop on Metrology for Agriculture and Forestry (MetroAgriFor), IEEE conference proceedings, 2023, p. 820-825. Conference paper (Refereed)
    Abstract [en]

    The Internet of Things (IoT) enables various applications by providing means to capture environmental effects and possibly control some part of the environment. This is also true for different Smart Farming applications, and as a result various systems are available. However, when designing a measurement campaign for a certain task, it becomes obvious that current products lack the flexibility to build the best solution using a single system. This concerns the available communication options, but also the option to add further sensors for specific tasks. In this paper, we present a modular platform for IoT nodes that can be configured as needed for different applications. We first present our concept and then highlight the ability to use different communication and power options and to integrate further sensors via adaptation modules. In addition, we show two example use cases that, based on the state of the art, would otherwise require multiple parallel systems.

  • 4.
    Krug, Silvia
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Computer and Electrical Engineering (2023-). IMMS Inst Mikroelekt & Mechatron Syst Gemeinnutzig, IMMS GmbH, Syst Design Dept, Ehrenbergstr 27, D-98693 Ilmenau, Germany..
    Hutschenreuther, Tino
    IMMS Inst Mikroelekt & Mechatron Syst Gemeinnutzig, IMMS GmbH, Syst Design Dept, Ehrenbergstr 27, D-98693 Ilmenau, Germany..
    A Case Study toward Apple Cultivar Classification Using Deep Learning, 2023. In: AgriEngineering, ISSN 2624-7402, Vol. 5, no. 2, p. 814-828. Article in journal (Refereed)
    Abstract [en]

    Machine Learning (ML) has enabled many image-based object detection and recognition solutions in various fields and is currently the state-of-the-art method for these tasks. It is therefore of interest to apply this technique to new questions. In this paper, we explore whether it is possible to classify apple cultivars from images of the fruit using ML methods. The goal is to develop a tool that can classify the cultivar from images taken in the field. This helps to draw attention to the variety and diversity in fruit growing and to contribute to its preservation. Classifying apple cultivars is a considerable challenge in itself, as all apples look similar, while the variation within one class can be high. At the same time, there are potentially thousands of cultivars, so the task becomes more challenging as more cultivars are added to the dataset. The first question is therefore whether an ML approach can extract enough information to correctly classify the apples. In this paper, we focus on the technical requirements and prerequisites to verify whether ML approaches can fulfill this task with a limited number of cultivars as a proof of concept. We apply transfer learning to popular image processing convolutional neural networks (CNNs) by retraining them on a custom apple dataset. Afterward, we analyze the classification results as well as possible problems. Our results show that apple cultivars can be classified correctly, but the system design requires some extra considerations.

  • 5.
    Krug, Silvia
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Computer and Electrical Engineering (2023-). IMMS Institut für Mikroelektronik- und Mechatronik-Systeme Gemeinnützige GmbH.
    Hutschenreuther, Tino
    Enhancing Apple Cultivar Classification Using Multiview Images, 2024. In: Journal of Imaging, ISSN 2313-433X, Vol. 10, no. 4, article id 94. Article in journal (Refereed)
    Abstract [en]

    Apple cultivar classification is challenging due to the inter-class similarity and high intra-class variations. Human experts do not rely on single-view features but rather study each viewpoint of the apple to identify a cultivar, paying close attention to various details. Following our previous work, we try to establish a similar multiview approach for machine-learning (ML)-based apple classification in this paper. In our previous work, we studied apple classification using one single view. While these results were promising, it also became clear that one view alone might not contain enough information in the case of many classes or cultivars. Therefore, exploring multiview classification for this task is the next logical step. Multiview classification is nothing new, and we use state-of-the-art approaches as a base. Our goal is to find the best approach for the specific apple classification task and study what is achievable with the given methods towards our future goal of applying this on a mobile device without the need for internet connectivity. In this study, we compare an ensemble model with two cases where we use single networks: one without view specialization trained on all available images without view assignment and one where we combine the separate views into a single image of one specific instance. The two latter options reflect dataset organization and preprocessing to allow the use of smaller models in terms of stored weights and number of operations than an ensemble model. We compare the different approaches based on our custom apple cultivar dataset. The results show that the state-of-the-art ensemble provides the best result. However, using images with combined views shows a decrease in accuracy by 3% while requiring only 60% of the memory for weights. Thus, simpler approaches with enhanced preprocessing can open a trade-off for classification tasks on mobile devices. 

  • 6.
    Krug, Silvia
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design. IMMS Institut für Mikroelektronik- und Mechatronik-Systeme gemeinnützige GmbH (IMMS GmbH), Ilmenau, Germany.
    Miethe, Sebastian
    Hutschenreuther, Tino
    Comparing BLE and NB-IoT as communication options for smart viticulture IoT applications, 2021. In: 2021 IEEE Sensors Applications Symposium (SAS), 2021. Conference paper (Refereed)
    Abstract [en]

    Choosing the appropriate communication technology for outdoor applications has been a challenge over the years and has led to many different options. This makes it difficult for designers and users to choose the best option for their setup, as each option has unique pros and cons. In this paper, we evaluate and compare Narrow Band Internet of Things (NB-IoT) and Bluetooth Low Energy (BLE) regarding their applicability to a smart viticulture scenario. We study how the node density and system energy consumption vary for various configurations and are thus able to highlight challenges in deployments as well as trade-offs between the technologies.

  • 7.
    Krug, Silvia
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    IoT Communication Introduced Limitations for High Sampling Rate Applications, 2018. In: GI/ITG KuVS Fachgespräch Sensornetze 13. & 14. September 2018, Braunschweig: Technical Report, 2018. Conference paper (Refereed)
    Abstract [en]

    Networking solutions for the Internet of Things are typically designed for applications that require low data rates and feature rare transmission events. The initial assumption leads to a system design towards minimal data transfers and packet sizes. However, this can become a challenge if applications require different traffic patterns or cooperative interaction between devices. Applications requiring a high sampling rate to capture the desired phenomenon produce larger amounts of data that need to be transported. In this paper, we present a study highlighting some of the challenging aspects for such applications and how the choice of communication technology can limit both application behavior and network structure.

  • 8.
    Krug, Silvia
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Modeling and Comparison of Delay and Energy Cost of IoT Data Transfers, 2019. In: IEEE Access, E-ISSN 2169-3536, Vol. 7, p. 58654-58675. Article in journal (Refereed)
    Abstract [en]

    Communication is often considered the most costly component of a wireless sensor node. As a result, a variety of technologies and protocols aim to reduce the energy consumption of communication, especially in the Internet of Things context. To select the most suitable technology for a given use case, a tool that allows the comparison of these options is needed. The goal of this paper is to introduce a new modular modeling framework that enables a comparison of various technologies based on analytical calculations. We chose to model the cost of a single data transfer of an arbitrary amount of application data in order to provide flexibility regarding data amounts and traffic patterns. The modeling approach covers the stack traversal of application data and thus, in comparison to other approaches, includes the required protocol overhead directly. By applying our models to different data amounts, we are able to show trade-offs between various technologies and enable comparisons for different scenarios. In addition, our results reveal the impact of design decisions that can help to identify future development challenges.
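The core of such a per-transfer model can be sketched in a few lines: application data is fragmented into link-layer frames, each frame adds header overhead, and delay and energy follow from total airtime. This is a minimal illustration with invented parameter names, not the paper's full stack-traversal framework:

```python
# Minimal analytic sketch of per-transfer delay and energy cost:
# fragment the application payload into frames, add per-frame header
# overhead, then derive delay and energy from the total airtime.
import math

def transfer_cost(app_bytes, mtu_payload, header_bytes, rate_bps, tx_power_w):
    """Return (frame count, delay in s, energy in J) for one data transfer."""
    frames = math.ceil(app_bytes / mtu_payload)       # fragmentation
    total_bytes = app_bytes + frames * header_bytes   # payload + overhead
    delay_s = total_bytes * 8 / rate_bps
    energy_j = delay_s * tx_power_w
    return frames, delay_s, energy_j
```

Because overhead scales with the frame count, small MTUs penalize large application data amounts; this is the kind of trade-off the modeling framework exposes across technologies.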

  • 9.
    Krug, Silvia
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Shallari, Irida
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    A Case Study on Energy Overhead of Different IoT Network Stacks, 2019. In: 2019 IEEE 5th World Forum on Internet of Things (WF-IoT), IEEE, 2019, p. 528-529. Conference paper (Refereed)
    Abstract [en]

    Due to the limited energy budget of sensor nodes in the Internet of Things (IoT), it is crucial to develop energy-efficient communication, among other measures. This need has led to the development of various energy-efficient protocols that consider different aspects of the energy status of a node. However, a single protocol covers only one part of the whole stack, and savings on one level might not be as efficient for the overall system if other levels are considered as well. In this paper, we analyze the energy required for an end device to maintain connectivity to the network as well as to perform application-specific tasks. By integrating the complete stack perspective, we build a more holistic view of the energy consumption and overhead of a wireless sensor node. For better understanding, we compare three different stack variants in a base scenario and add an extended study to evaluate the impact of retransmissions as a robustness mechanism. Our results show that the overhead introduced by the complete stack has a significant impact on the node's energy consumption, especially if retransmissions are required.

  • 10. Onus, Umut
    et al.
    Uziel, Sebastian
    Hutschenreuther, Tino
    Krug, Silvia
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design. IMMS Institut für Mikroelektronik- und Mechatronik-Systeme gemeinnützige GmbH (IMMS GmbH), Ilmenau, Germany.
    Trade-off between Spectral Feature Extractors for Machine Health Prognostics on Microcontrollers, 2022. In: CIVEMSA 2022 - IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications, Proceedings, IEEE, 2022. Conference paper (Refereed)
    Abstract [en]

    Machine learning methods have shown a high impact on machine health prognostics solutions. However, most studies stop after building a model on a server or PC, without deploying it to embedded systems close to the machinery. Bringing machine learning models to small embedded systems with a small energy budget requires adapted models and raw time-series data processing to handle resource constraints while maintaining high model performance. Feature extraction plays a crucial role in this process. One of the most common feature types for machinery data is spectral information, extracted via digital filters. Calculating spectral features on microcontrollers has a great impact on the computational requirements of the overall estimation. In this paper, we analyze mel-spectrogram and infinite impulse response (IIR) based spectral feature extractors regarding their estimation performance and their computational requirements. The goal is to evaluate possible trade-offs when selecting one feature extractor over the other. To achieve this, we study the cost of both methods theoretically and via run-time measurements, after analyzing the feature design space to ensure good model performance. Our results show that by selecting a filter appropriate to the problem, the feature space dimensionality and, consequently, the computational load can be reduced.

  • 11. Pandey, Rick
    et al.
    Uziel, Sebastian
    Hutschenreuther, Tino
    Krug, Silvia
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Computer and Electrical Engineering (2023-). System Design Department, IMMS Institut für Mikroelektronik- und Mechatronik-Systeme Gemeinnützige GmbH (IMMS GmbH), Ehrenbergstraße 27, 98693 Ilmenau, Germany.
    Towards Deploying DNN Models on Edge for Predictive Maintenance Applications, 2023. In: Electronics, E-ISSN 2079-9292, Vol. 12, no. 3, article id 639. Article in journal (Refereed)
    Abstract [en]

    Almost all rotating machinery in industry has bearings as a key building block, and most of these machines run 24/7. This makes bearing health prediction an active research area for predictive maintenance solutions. Many state-of-the-art Deep Neural Network (DNN) models have been proposed to solve this. However, most of these high-performance models are computationally expensive and have high memory requirements. This limits their use to very specific industrial applications with powerful hardware deployed close to the machinery. In order to bring DNN-based solutions to potential use in industry, we need to deploy these models on Microcontroller Units (MCUs), which are cost-effective and energy-efficient. However, this step is typically neglected in the literature, as it poses new challenges. The primary concern when running inference with DNN models on MCUs is the on-chip memory of the MCU, which has to fit the model, the data, and the additional code to run the system. Almost all state-of-the-art models fail this litmus test since they feature too many parameters. In this paper, we show the challenges related to the deployment, review possible solutions, and evaluate one of them, showing how the deployment can be realized and what steps are needed. The focus is on the steps required for the actual deployment rather than finding the optimal solution. This paper is among the first to show deployment on MCUs for a predictive maintenance use case. We first analyze the gap between state-of-the-art benchmark DNN models for bearing defect classification and the memory constraints of two MCU variants. Additionally, we review options to reduce the model size, such as pruning and quantization. Afterwards, we evaluate a solution to deploy the DNN models by pruning them in order to fit them into microcontrollers. Our results show that most models under test can be reduced to fit MCU memory with a bounded loss in average accuracy of the pruned models in comparison to the original models. Based on the results, we also discuss which methods are promising and which combination of model and feature works best for the given classification problem.
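The size-reduction idea reviewed in the abstract, magnitude-based pruning, can be illustrated with a generic sketch (this is a textbook illustration of the technique, not the authors' pipeline): zero out the fraction of weights with the smallest absolute value, after which the zeros can be stored sparsely so the model fits MCU memory.

```python
# Generic illustration of magnitude-based pruning: remove the fraction
# of weights with the smallest absolute value by setting them to 0.0.

def magnitude_prune(weights, sparsity):
    """Return a copy of `weights` with the smallest-|w| fraction zeroed."""
    k = int(len(weights) * sparsity)  # number of weights to remove
    if k == 0:
        return list(weights)
    # Threshold = k-th smallest absolute weight.
    threshold = sorted(abs(w) for w in weights)[k - 1]
    pruned, removed = [], 0
    for w in weights:
        if abs(w) <= threshold and removed < k:
            pruned.append(0.0)  # pruned weight
            removed += 1
        else:
            pruned.append(w)    # surviving weight
    return pruned
```

For example, pruning `[0.5, -0.1, 0.3, -0.05]` at 50% sparsity keeps the two largest-magnitude weights and zeroes the rest. Real deployments typically retrain after pruning to recover accuracy, which is why the reported accuracy loss stays bounded.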

  • 12. Pandey, Rick
    et al.
    Uziel, Sebastian
    Hutschenreuther, Tino
    Krug, Silvia
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Computer and Electrical Engineering (2023-). Imms Institut für Mikroelektronik- und Mechatronik-Systeme Gemeinnützige GmbH (IMMS GmbH), Ilmenau, Germany.
    Weighted Pruning with Filter Search to Deploy DNN Models on Microcontrollers, 2023. In: 2023 IEEE 12th International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS), IEEE conference proceedings, 2023, p. 1077-1082. Conference paper (Refereed)
    Abstract [en]

    Predictive Maintenance (PdM) helps to determine the condition of in-service industrial equipment and components and plan their timely replacement. This can be achieved by Artificial Intelligence (AI) enabled information systems, and AI has been used extensively to address condition monitoring problems. Most existing Deep Neural Network (DNN) models capable of solving PdM problems have a large memory footprint and run only on remote machines using cloud-based infrastructure. In order to run inference close to the process, they need to run on memory-constrained devices like microcontrollers (MCUs). In this work, we propose a weighted pruning algorithm to reduce the number of trainable parameters in a DNN model for bearing fault classification and enable its execution on an MCU. In addition to the pruning, we reduce the trainable model parameters through an extensive filter size search. The model size is reduced without compromising the performance of the pruned models by using the magnitude-based method. For AlexNet, LeNet, and an autoencoder, we could reduce the model size by up to 89%, 39%, and 54%, respectively, with the new approach in comparison to the magnitude-based state-of-the-art approach.

  • 13.
    Saqib, Eiraj
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Computer and Electrical Engineering (2023-).
    Sánchez Leal, Isaac
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Computer and Electrical Engineering (2023-).
    Shallari, Irida
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Computer and Electrical Engineering (2023-).
    Jantsch, Axel
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Computer and Electrical Engineering (2023-). Tu Wien, Vienna, Austria.
    Krug, Silvia
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Computer and Electrical Engineering (2023-). Imms Institut für Mikroelektronik- und Mechatronik-Systeme Gemeinnützige GmbH (IMMS GmbH), Ilmenau, Germany.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Computer and Electrical Engineering (2023-).
    Optimizing the IoT Performance: A Case Study on Pruning a Distributed CNN, 2023. In: 2023 IEEE Sensors Applications Symposium (SAS), 2023. Conference paper (Refereed)
    Abstract [en]

    Implementing Convolutional Neural Network (CNN) based computer vision algorithms in Internet of Things (IoT) sensor nodes can be difficult due to strict computational, memory, and latency constraints. To address these challenges, researchers have utilized techniques such as quantization, pruning, and model partitioning. Partitioning the CNN reduces the computational burden on an individual node, but the overall system computational load remains constant, and communication energy is incurred in addition. To understand the effect of partitioning and pruning on energy and latency, we conducted a case study using a feet detection application realized with Tiny Yolo-v3 on a 12th Gen Intel CPU with an NVIDIA GeForce RTX 3090 GPU. After partitioning the CNN between the sequential layers, we apply quantization, pruning, and compression and study the effects on energy and latency. We analyze the extent to which computational tasks, data, and latency can be reduced while maintaining a high level of accuracy. After achieving this reduction, we offloaded the remaining partitioned model to the edge node. We found that over 90% computation reduction and over 99% data transmission reduction are possible while maintaining a mean average precision above 95%. This results in up to 17x energy savings and up to 5.2x performance speed-up.

  • 14. Schneider, Sören
    et al.
    Goetze, Marco
    Krug, Silvia
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Computer and Electrical Engineering (2023-). IMMS Institut für Mikroelektronik- und Mechatronik-Systeme gemeinnützige GmbH.
    Hutschenreuther, Tino
    A Retrofit Streetlamp Monitoring Solution Using LoRaWAN Communications, 2024. In: Eng, ISSN 2673-4117, Vol. 5, no. 1, p. 513-531. Article in journal (Refereed)
    Abstract [en]

    Ubiquitous street lighting is essential for urban areas. While LED-based “smart lamps” are commercially available nowadays, municipalities can only switch to them in the long run due to financial constraints. Older types of lamps in particular require frequent bulb replacements to maintain the lighting infrastructure’s function. To speed up the detection of defects and enable better planning, a non-invasively retrofittable IoT sensor solution is proposed that monitors lamps for defects via visible light sensors, communicates measurement data wirelessly to a central location via LoRaWAN, and processes and visualizes the resulting information centrally. The sensor nodes are capable of automatically adjusting to shifting day- and nighttimes thanks to a second sensor monitoring ambient light. The work specifically addresses aspects of energy efficiency essential to the battery-powered operation of the sensor nodes. Besides design considerations and implementation details, the paper also summarizes the experimental validation of the system by way of an extensive field trial and expounds upon further experiences from it.

  • 15.
    Shallari, Irida
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design. Mid Sweden University.
    Krug, Silvia
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Example of design space reduction method using Intelligence partitioning, 2021. Data set
    Abstract [en]

    This zip folder contains the MatLab code that can be used during design space exploration to identify optimisation areas for a given design case. Based on a preliminary set of data such as an estimate of the number of operations, an energy constraint, and the intermediate data volume between the processing stages, we can use this tool to identify areas where optimisation efforts would provide the highest impact on the node energy efficiency.

  • 16.
    Shallari, Irida
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Krug, Silvia
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Architectural evaluation of node: server partitioning for people counting, 2018. In: ACM International Conference Proceeding Series, New York: ACM Digital Library, 2018, Article No. 1. Conference paper (Refereed)
    Abstract [en]

    The Internet of Things has changed the range of applications for cameras, requiring them to be easily deployed for a variety of indoor and outdoor scenarios while achieving high processing performance. As a result, future projections emphasise the need for battery-operated smart cameras capable of complex image processing tasks that also communicate with one another and with the server. Based on these considerations, we evaluate in-node and node-server configurations of image processing tasks to provide insight into how task partitioning affects the overall energy consumption. The two main energy components considered for their influence on the total energy consumption are processing and communication energy. The results from the people counting scenario show that performing background modelling, subtraction and segmentation in-node while transferring the remaining tasks to the server is the most energy-efficient configuration, optimising both processing and communication energy. In addition, the inclusion of data reduction techniques such as data aggregation and compression did not always result in lower energy consumption, as is generally assumed, and the final optimal partition did not include data reduction.
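The partitioning evaluation described in the abstract can be sketched as a search over cut points in the processing pipeline: node energy is the in-node processing energy up to the cut plus the energy to transmit the intermediate data produced there. All numbers below are hypothetical (invented for illustration, not the paper's measurements), and the task list is a simplified stand-in for the actual pipeline:

```python
# Toy cut-point search for node-server partitioning of a vision pipeline.
# (task, in-node processing energy in mJ, output data volume in kB) - all hypothetical.
PIPELINE = [
    ("background_model", 5.0, 600.0),
    ("segmentation",     3.0, 75.0),
    ("morphology",       1.0, 75.0),
    ("classification", 200.0, 0.01),
]
TX_MJ_PER_KB = 0.2  # hypothetical radio cost per kB transmitted

def node_energy(cut):
    """Node energy if tasks [0:cut] run in-node and the rest on the server."""
    proc = sum(e for _, e, _ in PIPELINE[:cut])
    # Data sent to the server: output of the last in-node task,
    # or the raw image (assumed 2000 kB) if everything is offloaded.
    out_kb = PIPELINE[cut - 1][2] if cut > 0 else 2000.0
    return proc + out_kb * TX_MJ_PER_KB

best_cut = min(range(len(PIPELINE) + 1), key=node_energy)
```

With these made-up costs, the minimum lands at an intermediate cut: early tasks are cheap to compute and shrink the data a lot, while the final heavy stage is better offloaded, which mirrors the qualitative finding above.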

  • 17.
    Shallari, Irida
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Krug, Silvia
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Communication and Computation Inter-Effects in People Counting Using Intelligence Partitioning, 2020. In: Journal of Real-Time Image Processing, ISSN 1861-8200, E-ISSN 1861-8219, Vol. 17, p. 1869-1882. Article in journal (Other academic)
    Abstract [en]

    The rapid development of the Internet of Things is affecting the requirements towards wireless vision sensor networks (WVSN). Future smart camera architectures require battery-operated devices to facilitate deployment for scenarios such as industrial monitoring, environmental monitoring and smart cities, consequently imposing constraints on the node energy consumption. This paper provides an analysis of the inter-effects between computation and communication energy for a smart camera node. Based on a people counting scenario, we evaluate the trade-off in node energy consumption for different processing configurations of the image processing tasks and several communication technologies. The results indicate that the optimal partition between the smart camera node and remote processing is with background modelling, segmentation, morphology and binary compression implemented in the smart camera, supported by Bluetooth Low Energy (BLE) version 5. The comparative assessment of these results with other implementation scenarios underlines the energy efficiency of this approach. This work changes preconceptions regarding design space exploration in WVSN, motivating further investigation into the inclusion of intermediate processing layers between the node and the cloud to interlace low-power configurations of communication and processing architectures.

  • 18.
    Shallari, Irida
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Sánchez Leal, Isaac
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Krug, Silvia
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design. IMMS Institut für Mikroelektronik-und Mechatronik-Systeme gemeinnützige GmbH (IMMS GmbH), Ehrenbergstraße 27, 98693 Ilmenau, Germany.
    Jantsch, Axel
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design. TU Wien, Karlsplatz 13, 1040 Vienna, Austria.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Design space exploration for an IoT node: Trade-offs in processing and communication (2021). In: IEEE Access, E-ISSN 2169-3536, Vol. 9, p. 65078-65090. Article in journal (Refereed)
    Abstract [en]

    Optimising the energy consumption of IoT nodes can be tedious due to the complex trade-offs involved between processing and communication. In this article, we investigate the partitioning of processing between the sensor node and a server and study the energy trade-offs involved. We propose a method that provides a trade-off analysis for a given set of constraints and allows for exploring several intelligence partitioning configurations. Furthermore, we demonstrate how this method can be used for the analysis of four design examples with traditional and CNN-based image processing systems, and we also provide a MATLAB implementation of it.

    Download full text (pdf)
    fulltext
  • 19.
    Sánchez Leal, Isaac
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Computer and Electrical Engineering (2023-).
    Saqib, Eiraj
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Computer and Electrical Engineering (2023-).
    Shallari, Irida
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Computer and Electrical Engineering (2023-).
    Jantsch, Axel
    Krug, Silvia
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Computer and Electrical Engineering (2023-). IMMS Institut für Mikroelektronik und Mechatronik-Systeme Gemeinnützige GmbH (IMMS GmbH)., Germany.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Computer and Electrical Engineering (2023-).
    Waist Tightening of CNNs: A Case Study on Tiny YOLOv3 for Distributed IoT Implementations (2023). In: ACM International Conference Proceeding Series, Association for Computing Machinery (ACM), 2023, p. 241-246. Conference paper (Refereed)
    Abstract [en]

    Computer vision systems in sensor nodes of the Internet of Things (IoT) based on Deep Learning (DL) are demanding because the DL models are memory and computation hungry, while the nodes often come with tight constraints on energy, latency, and memory. Consequently, work has been done to reduce the model size or distribute part of the work to other nodes. However, the question then arises of how these approaches impact the energy consumption at the node and the inference time of the system. In this work, we perform a case study to explore the impact of partitioning a Convolutional Neural Network (CNN) such that one part is implemented on the IoT node, while the rest is implemented on an edge device. The goal is to explore how the choice of partition point, quantization method and communication technology affects the IoT system. We identify possible partitioning points between layers in Tiny YOLOv3, where we transform the feature maps passed between layers by applying quantization and compression to reduce the data sent over the communication channel between the two partitions. The results show that a reduction of transmitted data by 99.8% reduces the network accuracy by 3 percentage points. Furthermore, the evaluation of various IoT communication protocols shows that the quantization of data facilitates CNN network partitioning with a significant reduction of overall latency and node energy consumption.
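The quantization step that this abstract relies on can be sketched in a few lines: a float feature map passed between the two CNN partitions is mapped to 8-bit integers with a per-tensor scale and offset, cutting the payload to a quarter of float32 before any entropy compression. This is a generic uniform-quantization sketch, not the paper's exact scheme, and the example feature map is invented.

```python
def quantize_uint8(feature_map):
    """Map floats to 0..255 using a per-tensor scale and minimum (offset)."""
    lo, hi = min(feature_map), max(feature_map)
    scale = (hi - lo) / 255 if hi > lo else 1.0
    # Clamp guards against rounding just past the 8-bit range.
    q = [max(0, min(255, round((v - lo) / scale))) for v in feature_map]
    return q, scale, lo

def dequantize(q, scale, lo):
    """Recover approximate float values on the receiving partition."""
    return [v * scale + lo for v in q]

fmap = [0.0, 0.5, 1.0, 2.0, 4.0]          # toy feature-map values
q, scale, lo = quantize_uint8(fmap)
recovered = dequantize(q, scale, lo)
# Each value now needs 1 byte instead of 4 (float32): a 4x reduction
# of the data crossing the node-to-edge link, before compression.
```

The worst-case reconstruction error of this scheme is half a quantization step (`scale / 2`), which is the kind of controlled accuracy loss the abstract trades against transmitted data.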

  • 20.
    Sánchez Leal, Isaac
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Shallari, Irida
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Krug, Silvia
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design. System Design Department, IMMS Institut für Mikroelektronik und Mechatronik-Systeme Gemeinnützige GmbH.
    Jantsch, Axel
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design. Institute of Computer Technology, TU Wien (Vienna University of Technology).
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Impact of input data on intelligence partitioning decisions for IoT smart camera nodes (2021). In: Electronics, E-ISSN 2079-9292, Vol. 10, no 16, article id 1898. Article in journal (Refereed)
    Abstract [en]

    Image processing systems exploit image information for a purpose determined by the application at hand. The implementation of image processing systems in an Internet of Things (IoT) context is a challenge due to the amount of data involved, which affects the three main node constraints: memory, latency and energy. One method to address these challenges is the partitioning of tasks between the IoT node and a server. In this work, we present an in-depth analysis of how the input image size and its content in conventional image processing systems affect the decision on where tasks should be implemented, with respect to node energy and latency. We focus on explaining how the characteristics of the image are transferred through the system until finally influencing partition decisions. Our results show that the image size significantly affects the efficiency of the node offloading configurations. This is mainly due to the dominant cost of communication over processing as the image size increases. Furthermore, we observed that image content has limited effect on the node offloading analysis.

    Download full text (pdf)
    fulltext
  • 21.
    Taami, Tania
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Krug, Silvia
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Experimental Characterization of Latency in Distributed IoT Systems with Cloud Fog Offloading (2019). In: IEEE International Workshop on Factory Communication Systems - Proceedings, WFCS, Institute of Electrical and Electronics Engineers (IEEE), 2019, article id 8757960. Conference paper (Refereed)
    Abstract [en]

    The Internet of Things (IoT) enables users to gather and analyze data from a large number of devices. Knowledge obtained by these systems is valuable in order to understand, control, and enhance the monitored process. The mass of information to process, however, leads to new challenges related to the resources required for both data processing and data transportation. Two critical metrics are latency and the energy consumed to complete a given task. Both metrics might be exceeded if all processing is done locally at the sensor device level. Cloud and fog computing concepts can help to mitigate this effect. However, using such offloading concepts adds complexity and overhead to the system. In this paper, we study the latency for processing and communication tasks in distributed IoT systems with respect to cloud or fog offloading and derive characteristic cost functions for the studied tasks. Our results give valuable insights into the trade-offs and constraints within our example scenario. The developed characterization methodology can, however, be applied to any kind of IoT system, thus allowing more general analyses.
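The "characteristic cost functions" mentioned in this abstract can be pictured with a minimal latency model: running locally costs only compute time on a slow node, while offloading adds a round trip and a transfer term but benefits from a fast server. The functions and every constant below are illustrative assumptions, not the cost functions derived in the paper.

```python
def latency_local(data_bytes, ops_per_byte, node_ops_per_s):
    """End-to-end latency if the task runs entirely on the sensor node."""
    return data_bytes * ops_per_byte / node_ops_per_s

def latency_offload(data_bytes, bandwidth_bps, rtt_s,
                    ops_per_byte, server_ops_per_s):
    """Latency if raw data is shipped to a cloud/fog server first:
    round trip + transfer time + (much faster) server compute."""
    transfer = data_bytes * 8 / bandwidth_bps
    compute = data_bytes * ops_per_byte / server_ops_per_s
    return rtt_s + transfer + compute

# Illustrative constants: slow node, fast server, modest uplink.
NODE_OPS, SERVER_OPS = 1e6, 1e9   # ops/s
OPS_PER_BYTE = 100                # work per input byte
BW, RTT = 1e6, 0.05               # link bandwidth (bps), round trip (s)
```

With these assumed numbers, a 100-byte task finishes faster locally (the fixed round trip dominates), while a 100 kB task finishes faster offloaded — the crossover behaviour such cost functions are meant to expose.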

  • 22.
    Troci, Jurgen
    et al.
    IMMS Institut für Mikroelektronik- und Mechatronik-Systeme gemeinnützige GmbH (IMMS GmbH), Ilmenau, Germany.
    Krug, Silvia
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design. IMMS Institut für Mikroelektronik- und Mechatronik-Systeme gemeinnützige GmbH (IMMS GmbH), Ilmenau, Germany.
    Hutschenreuther, Tino
    IMMS Institut für Mikroelektronik- und Mechatronik-Systeme gemeinnützige GmbH (IMMS GmbH), Ilmenau, Germany.
    Applying Event-Based Sending Intervals to Enable Low Energy OPC-UA on Sensor Nodes (2019). In: 2019 27th Telecommunications Forum (TELFOR), IEEE, 2019. Conference paper (Refereed)
    Abstract [en]

    Integrating typical wireless sensors into industrial control systems is a crucial task to realize the full potential of the Industrial Internet of Things (IoT). Many lightweight protocols exist for use in the IoT context, with a focus on energy-efficient data transmissions. Industrial protocols, on the other hand, ensure reliable and timely transfer of data, but typically at the cost of a higher energy consumption. In this paper, we analyze and compare one protocol from each category to determine their respective energy consumption, and then evaluate how different sending schemes can reduce the energy consumption of the industrial protocol. The goal is to reduce the energy consumption while keeping the relevant information under industrial constraints and enabling tuning to different scenarios. Our results show that these goals are achievable by applying event-based sending approaches. However, a good understanding of the process at hand is required to trade off different constraints.

  • 23.
    Vilar, Cristian
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Krug, Silvia
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design. System Design Department, IMMS Institut für Mikroelektronik- und Mechatronik-Systeme Gemeinnützige GmbH (IMMS GmbH).
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Real-World 3D Object Recognition Using a 3D Extension of the HOG Descriptor and a Depth Camera (2021). In: Sensors, E-ISSN 1424-8220, Vol. 21, no 3, article id 910. Article in journal (Refereed)
    Abstract [en]

    3D object recognition is a generic task in robotics and autonomous vehicles. In this paper, we propose a 3D object recognition approach using a 3D extension of the histogram-of-gradients object descriptor with data captured by a depth camera. The presented method uses synthetic objects to train the object classifier and classifies real objects captured by the depth camera. The preprocessing methods include operations to achieve rotational invariance as well as to maximize the recognition accuracy while at the same time reducing the feature dimensionality. By studying different preprocessing options, we show challenges that need to be addressed when moving from synthetic to real data. The recognition performance was evaluated with a real dataset captured by a depth camera, and the results show a maximum recognition accuracy of 81.5%.
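The histogram-of-gradients descriptor underlying this abstract has a simple core: within a cell, each pixel votes its gradient magnitude into a bin determined by its gradient orientation. A minimal 2D single-cell sketch (the papers above extend this idea to 3D voxel grids; the bin count and inputs here are just illustrative):

```python
import math

def hog_cell(gx, gy, bins=9):
    """Unsigned gradient-orientation histogram for one cell.

    gx, gy: per-pixel gradient components.
    Each pixel votes its gradient magnitude into one of `bins`
    orientation bins spanning [0, pi) (unsigned gradients).
    """
    hist = [0.0] * bins
    for dx, dy in zip(gx, gy):
        mag = math.hypot(dx, dy)
        ang = math.atan2(dy, dx) % math.pi  # fold direction into [0, pi)
        b = min(int(ang / math.pi * bins), bins - 1)
        hist[b] += mag
    return hist

# Two toy pixels: one horizontal gradient, one vertical.
cell = hog_cell([1, 0], [0, 1])
```

A full descriptor concatenates such histograms over many cells and block-normalizes them; production HOG implementations also interpolate votes between neighbouring bins, which this sketch omits.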

    Download full text (pdf)
    fulltext
  • 24.
    Vilar, Cristian
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Krug, Silvia
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design. IMMS Institut für Mikroelektronik- und Mechatronik-Systeme gemeinnützige GmbH, Ilmenau, Germany.
    Qureshi, Faisal Z
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design. University of Ontario Institute of Technology, Canada.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Evaluation of 2D-/3D-Feet-Detection Methods for Semi-Autonomous Powered Wheelchair Navigation (2021). In: Journal of Imaging, ISSN 2313-433X, Vol. 7, no 12, article id 255. Article in journal (Refereed)
    Abstract [en]

    Powered wheelchairs have enhanced the mobility and quality of life of people with special needs. The next step in the development of powered wheelchairs is to incorporate sensors and electronic systems for new control applications and capabilities to improve their usability and the safety of their operation, such as obstacle avoidance or autonomous driving. However, autonomous powered wheelchairs require safe navigation in different environments and scenarios, making their development complex. In our research, we propose, instead, to develop contactless control for powered wheelchairs where the position of the caregiver is used as a control reference. Hence, we used a depth camera to recognize the caregiver and measure at the same time their relative distance from the powered wheelchair. In this paper, we compared two different approaches for real-time object recognition using a 3DHOG hand-crafted object descriptor based on a 3D extension of the histogram of oriented gradients (HOG) and a convolutional neural network based on YOLOv4-Tiny. To evaluate both approaches, we constructed Miun-Feet—a custom dataset of images of labeled caregiver’s feet in different scenarios, with backgrounds, objects, and lighting conditions. The experimental results showed that the YOLOv4-Tiny approach outperformed 3DHOG in all the analyzed cases. In addition, the results showed that the recognition accuracy was not improved using the depth channel, enabling the use of a monocular RGB camera only instead of a depth camera and reducing the computational cost and heat dissipation limitations. Hence, the paper proposes an additional method to compute the caregiver’s distance and angle from the Powered Wheelchair (PW) using only the RGB data. This work shows that it is feasible to use the location of the caregiver’s feet as a control signal for the control of a powered wheelchair and that it is possible to use a monocular RGB camera to compute their relative positions.

    Download full text (pdf)
    fulltext
  • 25.
    Vilar, Cristian
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Krug, Silvia
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Thörnberg, Benny
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Processing chain for 3D histogram of gradients based real-time object recognition (2021). In: International Journal of Advanced Robotic Systems, ISSN 1729-8806, E-ISSN 1729-8814, Vol. 18, no 1, article id 1729881420978363. Article in journal (Refereed)
    Abstract [en]

    3D object recognition has been a cutting-edge research topic since the popularization of depth cameras. These cameras enhance the perception of the environment and so are particularly suitable for autonomous robot navigation applications. Advanced deep learning approaches for 3D object recognition are based on complex algorithms and demand powerful hardware resources. However, autonomous robots and powered wheelchairs have limited resources, which affects the implementation of these algorithms for real-time performance. We propose to use instead a 3D voxel-based extension of the 2D histogram of oriented gradients (3DVHOG) as a handcrafted object descriptor for 3D object recognition, in combination with a pose normalization method for rotational invariance and a supervised object classifier. The experimental goal is to reduce the overall complexity and the system hardware requirements, and thus enable a feasible real-time hardware implementation. This article compares the 3DVHOG object recognition rates with those of other 3D recognition approaches, using the ModelNet10 object data set as a reference. We analyze the recognition accuracy for 3DVHOG using a variety of voxel grid selections, different numbers of neurons (N_h) in the single hidden layer feedforward neural network, and feature dimensionality reduction using principal component analysis. The experimental results show that the 3DVHOG descriptor achieves a recognition accuracy of 84.91% with a total processing time of 21.4 ms. Despite the lower recognition accuracy, this is close to the current state-of-the-art deep learning approaches while enabling real-time performance.

    Download full text (pdf)
    fulltext
  • 26.
    Vilar, Cristian
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Krug, Silvia
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Thörnberg, Benny
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Rotational Invariant Object Recognition for Robotic Vision (2019). In: ICACR 2019 Proceedings of the 2019 3rd International Conference on Automation, Control and Robots, ACM Digital Library, 2019, p. 1-6. Conference paper (Refereed)
    Abstract [en]

    Depth cameras have significantly enhanced environment perception for robotic applications. They allow measuring true distances and thus enable a 3D measurement of the robot's surroundings. In order to enable robust robot vision, object recognition has to handle rotated data, because objects can be viewed from different dynamic perspectives when the robot is moving. Therefore, the 3D descriptors used for object recognition in robotic applications have to be rotation invariant and implementable on an embedded system with limited memory and computing resources. With the popularization of depth cameras, the Histogram of Gradients (HOG) descriptor has been extended to also recognize 3D volumetric objects (3DVHOG). Unfortunately, neither version is rotation invariant. There are different methods to achieve rotation invariance for 3DVHOG, but they significantly increase the computational cost of the overall data processing. Hence, they are infeasible to implement on a low-cost processor for real-time operation. In this paper, we propose an object pose normalization method to achieve 3DVHOG rotation invariance while reducing the number of processing operations as much as possible. Our method is based on Principal Component Analysis (PCA) normalization. We tested our method using the Princeton ModelNet10 dataset.
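The PCA pose-normalization idea in this abstract can be sketched in 2D: compute the covariance of the point set, find its principal axis, and rotate the points so that axis aligns with a fixed reference frame, making the subsequent descriptor insensitive to the original orientation. The paper works on 3D voxel data; this closed-form 2D version is only a sketch of the principle.

```python
import math

def pca_normalize_2d(points):
    """Rotate a 2D point set so its principal axis aligns with x.

    Uses the closed-form eigen-direction of the 2x2 covariance matrix
    (theta = 0.5 * atan2(2*cov_xy, var_x - var_y)); points are also
    centred on their mean, so the result is translation invariant too.
    """
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    a = sum((p[0] - mx) ** 2 for p in points) / n       # var_x
    c = sum((p[1] - my) ** 2 for p in points) / n       # var_y
    b = sum((p[0] - mx) * (p[1] - my) for p in points) / n  # cov_xy
    theta = 0.5 * math.atan2(2 * b, a - c)
    cos_t, sin_t = math.cos(-theta), math.sin(-theta)   # rotate by -theta
    return [((p[0] - mx) * cos_t - (p[1] - my) * sin_t,
             (p[0] - mx) * sin_t + (p[1] - my) * cos_t) for p in points]

# Points along a 45-degree line normalize onto the x-axis.
normalized = pca_normalize_2d([(0, 0), (1, 1), (2, 2), (3, 3)])
```

In 3D, the same normalization needs all three principal axes (a full eigendecomposition of the 3x3 covariance matrix) plus a sign-disambiguation step, which is where most of the method's remaining design choices lie.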

  • 27.
    Vilar, Cristian
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Thörnberg, Benny
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Krug, Silvia
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Evaluation of embedded camera systems for autonomous wheelchairs (2019). In: VEHITS 2019 - Proceedings of the 5th International Conference on Vehicle Technology and Intelligent Transport Systems, SciTePress, 2019, p. 76-85. Conference paper (Refereed)
    Abstract [en]

    Autonomously driving Powered Wheelchairs (PWCs) are valuable tools to enhance the quality of life of their users. In order to enable truly autonomous PWCs, camera systems are essential. Image processing enables the development of applications for both autonomous driving and obstacle avoidance. This paper explores the challenges that arise when selecting a suitable embedded camera system for these applications. Our analysis is based on a comparison of two well-known camera principles, Stereo Cameras (STCs) and Time-of-Flight (ToF) cameras, using the standard deviation of the ground plane at various lighting conditions as a key quality measure. In addition, we also consider other metrics related to both the image processing task and the embedded system constraints. We believe that this assessment is valuable when choosing between STC and ToF cameras for PWCs.
