Tiny Machine Learning for Structural Health Monitoring with Acoustic Emissions
Mid Sweden University, Faculty of Science, Technology and Media, Department of Computer and Electrical Engineering (2023-). ORCID iD: 0000-0002-8617-0435
2024 (English) Licentiate thesis, comprehensive summary (Other academic)
Abstract [en]

Acoustic Emission (AE) technology, as one of the non-destructive Structural Health Monitoring (SHM) methods, is increasingly utilized for damage prediction, classification, maintenance, and real-time monitoring of infrastructure. To address the need for low latency, low power consumption, and high portability, a novel approach has been adopted in which processing algorithms are embedded close to the sensors on the monitoring devices themselves. Continuous data monitoring and collection, coupled with data processing and interpretation comparable to human experts, are anticipated from the next generation of Internet of Things and smart sensing systems. While Machine Learning (ML) and Deep Learning (DL) have been successfully applied in a number of domains, including SHM, resource-constrained, low-power devices pose a challenge for the execution of computationally complex ML algorithms.

To explore the feasibility of deploying ML and DL algorithms on edge devices, this study first proposes a lightweight CNN model based on raw AE signals for concrete damage classification and evaluates its performance on an ultra-low-power microcontroller unit (MCU). Subsequently, to further simplify the algorithm and explore adaptability across MCU platforms, a raw AE signal-based Artificial Neural Network (ANN) model is proposed, and its deployment performance on multiple MCUs is assessed. Additionally, the study assesses the impact of feature extraction on ANN performance on MCUs compared with using raw AE signals, finding that using raw data directly is more resource- and time-efficient. Lastly, the study investigates the generalization ability of the aforementioned CNN on a carbon fiber panel AE dataset, as well as the performance of 13 traditional ML algorithms on this dataset and their final deployment performance on MCUs. Because this dataset is small, various data augmentation methods were also introduced and their impact on model robustness and accuracy was evaluated.

This thesis demonstrates for the first time that real-time inference on edge devices using AE signals for SHM is feasible, and it shows how to balance the critical trade-offs between accuracy, resource demands, and power consumption. Different MCUs and signal preprocessing methods are evaluated, and, in response to the challenge of collecting AE data, the impact of various data augmentation techniques on the accuracy and inference robustness of different ML algorithms is explored. These capabilities are crucial for the next generation of SHM devices.
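The specific augmentation techniques are detailed in the included papers; as an illustration of the kind of waveform-level augmentation referred to above, the Python sketch below applies additive Gaussian noise and a random circular time shift to a raw AE window. The window length (1024 samples), the SNR level, and the shift range are placeholders chosen for illustration, not values taken from the thesis.

```python
import numpy as np

def add_gaussian_noise(signal, snr_db=30.0):
    """Add white Gaussian noise at a given signal-to-noise ratio (dB)."""
    signal_power = np.mean(signal ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    noise = np.random.normal(0.0, np.sqrt(noise_power), size=signal.shape)
    return signal + noise

def random_time_shift(signal, max_shift=64):
    """Circularly shift the waveform by a random number of samples."""
    shift = np.random.randint(-max_shift, max_shift + 1)
    return np.roll(signal, shift)

# Example: augment one raw AE window of 1024 samples (illustrative length).
raw_window = np.random.randn(1024).astype(np.float32)  # placeholder for a real AE hit
augmented = random_time_shift(add_gaussian_noise(raw_window, snr_db=30.0))
```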

Place, publisher, year, edition, pages
Sundsvall: Mid Sweden University, 2024, p. 48
Series
Mid Sweden University licentiate thesis, ISSN 1652-8948 ; 204
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
URN: urn:nbn:se:miun:diva-51322; ISBN: 978-91-89786-69-1 (print); OAI: oai:DiVA.org:miun-51322; DiVA id: diva2:1857441
Presentation
2024-06-13, C312, Holmgatan 10, Sundsvall, 13:00 (English)
Note

At the time of the defence, the following papers were unpublished: papers 4 and 5 (submitted manuscripts).

Available from: 2024-05-14 Created: 2024-05-13 Last updated: 2024-05-14. Bibliographically approved.
List of papers
1. A Lightweight Convolutional Neural Network Model for Concrete Damage Classification using Acoustic Emissions
2022 (English) In: 2022 IEEE Sensors Applications Symposium, SAS 2022 - Proceedings, IEEE, 2022. Conference paper, Published paper (Refereed)
Abstract [en]

In this study, a convolutional neural network (CNN) model was developed for non-destructive damage classification of concrete materials based on acoustic emission techniques. The raw acoustic emission signal is used as the network model input, while the damage type is used as the output. A dataset of 15,000 acoustic emission signals was used, of which 12,000 signals were used for training, 1,500 for validation, and 1,500 for testing. Adaptive moment estimation (Adam) was used as the learning algorithm. Batch normalization and dropout layers were used to address the overfitting observed in earlier versions of the model. The proposed model achieves an accuracy of 99.70% with 20,243 parameters, a significant improvement over previous models. As a result, damage classification, and the decisions based upon it, can be improved in non-destructive structural health monitoring applications.
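The abstract does not give the full layer configuration; the sketch below is a minimal, assumed Keras layout that reflects the ingredients named above (raw 1-D AE input, batch normalization, dropout, a softmax damage-type output, and the Adam optimizer). The window length, filter counts, and number of damage classes are placeholders and will not reproduce the reported 20,243 parameters exactly.

```python
import tensorflow as tf

NUM_SAMPLES = 1024   # assumed AE window length (not stated in the abstract)
NUM_CLASSES = 3      # placeholder number of damage types

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(NUM_SAMPLES, 1)),            # raw AE waveform as input
    tf.keras.layers.Conv1D(8, kernel_size=16, strides=4, activation="relu"),
    tf.keras.layers.BatchNormalization(),                      # counters overfitting, as in the paper
    tf.keras.layers.MaxPooling1D(pool_size=4),
    tf.keras.layers.Conv1D(16, kernel_size=8, strides=2, activation="relu"),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dropout(0.5),                              # dropout layer, as in the paper
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),  # damage type as output
])

model.compile(optimizer="adam",                                # Adam, as stated in the abstract
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```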

Place, publisher, year, edition, pages
IEEE, 2022
Keywords
acoustic emission, convolutional neural network, damage classification, Non-destructive
National Category
Control Engineering
Identifiers
urn:nbn:se:miun:diva-46299 (URN); 10.1109/SAS54819.2022.9881386 (DOI); 000861380600060 (); 2-s2.0-85139088014 (Scopus ID); 9781665409810 (ISBN)
Conference
17th IEEE Sensors Applications Symposium, SAS 2022, 1 August 2022 through 3 August 2022
Available from: 2022-10-20 Created: 2022-10-20 Last updated: 2024-05-13. Bibliographically approved.
2. Leveraging Acoustic Emission and Machine Learning for Concrete Materials Damage Classification on Embedded Devices
2023 (English) In: IEEE Transactions on Instrumentation and Measurement, ISSN 0018-9456, E-ISSN 1557-9662, Vol. 72, article id 2525108. Article in journal (Refereed), Published
Abstract [en]

For the field of structural health monitoring (SHM), acoustic emission (AE) technology is important as a damage identification technique that does not cause secondary damage to concrete. To date, applications of non-destructive concrete damage identification are mostly limited to commercial software or identification algorithms running on desktop computers and have not been deployed on low-power embedded devices. In this study, a lightweight convolutional neural network (CNN) model for online non-destructive damage type recognition of concrete materials is presented and deployed on a resource-constrained microcontroller unit as a tiny machine learning (TinyML) application. The CNN model uses raw acoustic emission signals as input and the recognized damage type as output. A dataset of 15,000 acoustic emission signals is divided into training, validation, and test sets in the ratio 8:1:1. The experimental results show that an accuracy of 99.6% is achieved on the nRF52840 microcontroller (ARM Cortex-M4), with only 166.822 ms and 0.555 mJ per inference, using only 20K parameters and a 30.5 KB model size. This work demonstrates the effectiveness and feasibility of the proposed model, which achieves a trade-off between high classification accuracy and deployability on resource-constrained MCUs. Consequently, it provides strong support for online, continuous, non-destructive structural health monitoring.
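The paper's exact deployment toolchain for the nRF52840 is not stated in the abstract; a common route for this kind of TinyML deployment is full-integer quantization with TensorFlow Lite, sketched below under the assumption of a 1024-sample input window, an untrained stand-in model, and random calibration data. The resulting flatbuffer would then be compiled into the MCU firmware and executed with an on-device runtime such as TensorFlow Lite Micro.

```python
import numpy as np
import tensorflow as tf

# Minimal stand-in for the trained CNN (a real workflow would load the trained model).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(1024, 1)),
    tf.keras.layers.Conv1D(8, kernel_size=16, strides=4, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(3, activation="softmax"),
])

def representative_data_gen():
    # Placeholder calibration windows; real AE hits would be used here.
    for _ in range(100):
        yield [np.random.randn(1, 1024, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("ae_cnn_int8.tflite", "wb") as f:
    f.write(tflite_model)   # flatbuffer to be embedded in the MCU firmware
```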

Place, publisher, year, edition, pages
IEEE, 2023
Keywords
Acoustic emission, acoustic emissions, Convolution, Convolutional neural networks, damage classification, Data models, embedded systems, Monitoring, Non-destructive testing, structural health monitoring, Testing, TinyML, Training
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:miun:diva-49234 (URN); 10.1109/TIM.2023.3307751 (DOI); 001063248800019 (); 2-s2.0-85168744324 (Scopus ID)
Available from: 2023-09-05 Created: 2023-09-05 Last updated: 2024-05-13. Bibliographically approved.
3. Tiny Machine Learning for Damage Classification in Concrete Using Acoustic Emission Signals
2023 (English) In: 2023 IEEE International Instrumentation and Measurement Technology Conference (I2MTC), IEEE, 2023. Conference paper, Published paper (Refereed)
Abstract [en]

Acoustic emission (AE) is a widely used non-destructive test method in structural health monitoring applications to identify the damage type in a material. Traditionally, AE signals are analyzed using parameter-based methods. Recently, machine learning methods have shown promising results for the analysis of AE signals. However, these machine learning models are complex, slow, and consume significant amounts of energy. To address these limitations and to explore the trade-off between model complexity and classification accuracy, this paper presents a lightweight artificial neural network model to classify damage types in concrete material using raw acoustic emission signals. The model consists of one hidden layer with four neurons and is trained on a public acoustic emission signal dataset. The created model is deployed to several microcontrollers, and its performance is evaluated and compared with a state-of-the-art machine learning model. The model achieves 98.4% accuracy on the test data with only 4019 parameters. In terms of evaluation metrics, the proposed tiny machine learning model outperforms previously proposed models by a factor of 10 to 1000. The proposed model thus enables machine learning in real-time structural health monitoring applications.
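The abstract specifies the hidden layer (one layer, four neurons) but not the input length or number of classes; the sketch below assumes a 1000-sample raw AE window and three damage classes, a combination that happens to yield exactly the reported 4019 parameters, though these dimensions remain assumptions rather than values taken from the paper.

```python
import tensorflow as tf

NUM_SAMPLES = 1000   # assumed raw AE window length
NUM_CLASSES = 3      # assumed number of damage types

# One hidden layer with four neurons, as described in the paper.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(NUM_SAMPLES,)),
    tf.keras.layers.Dense(4, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()  # with these assumed dimensions: 1000*4 + 4 + 4*3 + 3 = 4019 parameters
```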

Place, publisher, year, edition, pages
IEEE, 2023
Keywords
acoustic emission, damage classification, embedded systems, IoT, machine learning, structural-health-monitoring, TinyML
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:miun:diva-49095 (URN); 10.1109/I2MTC53148.2023.10175972 (DOI); 001039259600092 (); 2-s2.0-85166377110 (Scopus ID); 9781665453837 (ISBN)
Conference
2023 IEEE International Instrumentation and Measurement Technology Conference (I2MTC)
Available from: 2023-08-17 Created: 2023-08-17 Last updated: 2024-05-13. Bibliographically approved.
4. Comparison of Tiny Machine Learning Techniques for Embedded Acoustic Emission Analysis
2024 (English) In: 2024 IEEE 10th World Forum on Internet of Things (WF-IoT), IEEE conference proceedings, 2024. Conference paper, Published paper (Refereed)
Abstract [en]

This paper compares machine learning approaches with different input data formats for the classification of acoustic emission (AE) signals. AE signals are a promising monitoring technique in many structural health monitoring applications. Machine learning has been demonstrated as an effective data analysis method, classifying different AE signals according to the damage mechanism they represent. These classifications can be performed based on the entire AE waveform or specific features that have been extracted from it. However, it is currently unknown which of these approaches is preferred. With the goal of model deployment on resource-constrained embedded Internet of Things (IoT) systems, this work evaluates and compares both approaches in terms of classification accuracy, memory requirement, processing time, and energy consumption. To accomplish this, features are extracted and carefully selected, neural network models are designed and optimized for each input data scenario, and the models are deployed on a low-power IoT node. The comparative analysis reveals that all models can achieve high classification accuracies of over 99%, but that embedded feature extraction is computationally expensive. Consequently, models utilizing the raw AE signal as input have the fastest processing speed and thus the lowest energy consumption, which comes at the cost of a larger memory requirement.
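The specific features used in the comparison are not listed in the abstract; the sketch below computes a few features commonly used in AE analysis (peak amplitude, RMS energy, threshold-crossing counts, peak position) to illustrate the per-hit feature-extraction work that the raw-input models avoid. The feature set, threshold, and window length are illustrative assumptions.

```python
import numpy as np

def extract_ae_features(window: np.ndarray, threshold: float = 0.1) -> np.ndarray:
    """Compute a few conventional AE features from one raw waveform window.

    The feature set here (peak amplitude, RMS energy, threshold crossings,
    peak position) is illustrative, not the exact set used in the paper.
    """
    peak_amplitude = np.max(np.abs(window))
    rms_energy = np.sqrt(np.mean(window ** 2))
    counts = np.sum(np.abs(window) > threshold)   # threshold crossings
    peak_index = int(np.argmax(np.abs(window)))   # sample index of the peak
    return np.array([peak_amplitude, rms_energy, counts, peak_index], dtype=np.float32)

# Raw-input path: the whole window goes to the model, with no per-hit feature computation.
window = np.random.randn(1024).astype(np.float32)  # placeholder AE hit
features = extract_ae_features(window)             # feature-input path
```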

Place, publisher, year, edition, pages
IEEE conference proceedings, 2024
Keywords
TinyML, acoustic emission, machine learning, structural health monitoring, feature extraction
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:miun:diva-51320 (URN); 10.1109/WF-IoT62078.2024.10811219 (DOI); 979-8-3503-7301-1 (ISBN)
Conference
10th IEEE World Forum on Internet of Things, WF-IoT 2024, Ottawa, Canada, 10 November - 13 November, 2024
Available from: 2025-02-11 Created: 2024-05-13 Last updated: 2025-02-11. Bibliographically approved.
5. (Record not available in DiVA.)

Open Access in DiVA

fulltext (1213 kB), 252 downloads
File information
File name: FULLTEXT02.pdf; File size: 1213 kB; Checksum: SHA-512
c5a899fd6e60860cda090dda555dd22d6c277e931242d9b99c88b10a5632fc1fb554a4f208d02650a3dcce1bf94598237c465148ac32a029bb46247a07339ee0
Type: fulltext; Mimetype: application/pdf

Authority records

Zhang, Yuxuan

Total: 253 downloads

Total: 941 hits