Mid Sweden University

ALOHA: Leveraging Additional Information to Learn Robust Representations for Human Activity Recognition
Mid Sweden University, Faculty of Science, Technology and Media, Department of Computer and Electrical Engineering (2023-).
Mid Sweden University, Faculty of Science, Technology and Media, Department of Computer and Electrical Engineering (2023-). ORCID iD: 0000-0002-8382-0359
2025 (English). In: 2025 International Conference on Activity and Behavior Computing (ABC), IEEE conference proceedings, 2025. Conference paper, published paper (refereed).
Abstract [en]

Human Activity Recognition (HAR) using wearable sensors has applications in health monitoring, entertainment, and industrial settings. However, HAR models usually perform worse in real-life settings than in the laboratory because fewer, lower-quality sensors are available in the former. Here, we propose using a suitable shared representation space to incorporate the information of additional sensors that are available only at training time. We evaluate two representation spaces, one created with Feature Agglomeration and one with Uniform Manifold Approximation and Projection (UMAP), on three datasets (Opportunity, Cooking, and PAMAP2) under three conditions chosen to probe performance and robustness: clean data, Gaussian noise, and Magnitude Warping noise. Our results consistently show that the representation spaces enhance performance relative to the conventional single-sensor method. The UMAP approach outperforms Feature Agglomeration, achieving up to a 14% improvement in F1-score on clean data. Under Gaussian noise, the UMAP representation space not only improves classification performance but is also resilient to the noise on the Opportunity and PAMAP2 datasets. Although the UMAP method is less robust to noise on the Cooking dataset, it still achieves the highest performance there. Under Magnitude Warping noise, the UMAP representation space shows varying levels of robustness across datasets but still enhances performance to some extent. Using shared representations, we leverage the larger number and higher quality of sensors available in laboratory settings for training HAR models, while relaxing the usual requirement of using the same sensors at deployment.
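The training scheme the abstract describes can be sketched in a few lines: build a shared representation from the full laboratory sensor set, learn to project the deployment sensor's features into that space, and classify activities in the shared space. The sketch below is not the authors' code; the data is synthetic, the dimensions and model choices are illustrative assumptions, and scikit-learn's FeatureAgglomeration stands in for the first of the two representation methods (the paper's UMAP variant would replace step 1 analogously).

```python
# Hedged sketch (synthetic data, illustrative dimensions) of training a HAR
# classifier in a shared representation space built from extra sensors that
# exist only at training time.
import numpy as np
from sklearn.cluster import FeatureAgglomeration
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in data: 200 windows, 12 features from all lab sensors, of which
# the first 3 come from the single sensor kept at deployment.
X_all = rng.normal(size=(200, 12))
X_dep = X_all[:, :3]
y = rng.integers(0, 4, size=200)  # 4 activity classes (synthetic labels)

# 1) Build the shared representation space from the full sensor set.
#    (The paper also evaluates UMAP for this step.)
agg = FeatureAgglomeration(n_clusters=5)
Z = agg.fit_transform(X_all)  # shared 5-dimensional representation

# 2) Learn a projection from deployment-sensor features into that space.
proj = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
proj.fit(X_dep, Z)

# 3) Train the activity classifier on the shared representation.
clf = LogisticRegression(max_iter=1000).fit(Z, y)

# At deployment only the single sensor is available: project, then classify.
Z_hat = proj.predict(X_dep)
pred = clf.predict(Z_hat)
```

The key design point the abstract argues for is that steps 1 and 3 exploit all laboratory sensors, while only the learned projection in step 2 is needed at deployment, so the deployed system carries one sensor but inherits structure learned from many.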

Place, publisher, year, edition, pages
IEEE conference proceedings, 2025.
Keywords [en]
Additional Information, Noise Robustness, Representation Space, Transfer Learning, Agglomeration, Pattern Recognition, Signal Processing, Wearable Sensors, Gaussians, Human Activity Recognition, Performance, Robustness To Noise, Shared Representations, Warpings, Gaussian Noise (electronic)
National Category
Signal Processing
Identifiers
URN: urn:nbn:se:miun:diva-55580
DOI: 10.1109/ABC64332.2025.11118576
ISI: 001567389900001
Scopus ID: 2-s2.0-105015558385
ISBN: 9798331534370 (print)
OAI: oai:DiVA.org:miun-55580
DiVA, id: diva2:2000096
Conference
2025 International Conference on Activity and Behavior Computing (ABC)
Available from: 2025-09-23. Created: 2025-09-23. Last updated: 2025-11-21. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Nguyen Phuong Vu, Quynh; Bader, Sebastian
