2025 (English). In: IEEE Access, E-ISSN 2169-3536, Vol. 13, p. 103640-103648. Article in journal (Refereed). Published.
Abstract [en]
This paper introduces an enhanced approach for deploying deep learning models on resource-constrained IoT devices by combining model partitioning, autoencoder-based compression, quantization with Time Dependent Clustering Loss (TCL) regularization, and lossless compression to reduce communication overhead and minimize latency while maintaining accuracy. The autoencoder compresses feature maps at the partitioning point before quantization, effectively reducing data size while preserving accuracy. TCL regularization clusters activations at the partitioning point so that they align with the quantization levels, minimizing quantization error and preserving accuracy even under extremely low-bit-width quantization. Our method is evaluated on classification models (ResNet-50, EfficientNetV2-S) and an object detection model (YOLOv10n) using the TinyImageNet-200 and Pascal VOC datasets. Deployed on a Raspberry Pi 4 B and a GPU, each model is tested across various partitioning points, quantization bit-widths (1-bit, 2-bit, and 3-bit), communication data rates (1 MB/s to 10 MB/s), and LZMA lossless compression. For ResNet-50 partitioned after the convolutional stem block, the speed-up is 2.33× over a server-only solution and 1.85× over the all-in-node solution, with an accuracy drop of less than one percentage point. The proposed framework offers a scalable solution for deploying high-performance AI models on IoT devices, extending the feasibility of real-time inference in resource-constrained environments.
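The core communication step described above — quantizing the intermediate feature map at the partitioning point to a few bits and then applying LZMA lossless compression before transmission — can be sketched as follows. This is a minimal illustration only: the autoencoder stage and TCL training regularization are omitted, uniform quantization stands in for the paper's trained quantization levels, and the feature-map shape, `quantize`/`dequantize` helpers, and random data are all hypothetical, not taken from the paper.

```python
import lzma
import numpy as np

def quantize(fmap: np.ndarray, bits: int = 2):
    """Uniform quantization of a feature map to 2**bits levels.

    Returns integer codes plus the (offset, step) pair the server
    needs to reconstruct approximate activation values.
    (Stand-in for the paper's TCL-aligned quantization levels.)
    """
    levels = 2 ** bits
    lo, hi = float(fmap.min()), float(fmap.max())
    step = (hi - lo) / (levels - 1) if hi > lo else 1.0
    codes = np.round((fmap - lo) / step).astype(np.uint8)
    return codes, (lo, step)

def dequantize(codes: np.ndarray, params) -> np.ndarray:
    """Map integer codes back to approximate float activations."""
    lo, step = params
    return codes.astype(np.float32) * step + lo

# Hypothetical intermediate feature map at the partitioning point
# (shape chosen for illustration, not taken from the paper).
rng = np.random.default_rng(0)
fmap = rng.standard_normal((64, 56, 56)).astype(np.float32)

# Node side: quantize to 2 bits, then LZMA-compress the codes losslessly.
codes, params = quantize(fmap, bits=2)
payload = lzma.compress(codes.tobytes())

# Server side: decompress and dequantize, then run the remaining layers.
recon = dequantize(
    np.frombuffer(lzma.decompress(payload), dtype=np.uint8).reshape(fmap.shape),
    params,
)

ratio = fmap.nbytes / len(payload)          # wire-size reduction vs. raw float32
max_err = float(np.abs(recon - fmap).max()) # bounded by step / 2
```

Because the 2-bit codes carry at most 2 bits of entropy per transmitted byte, the lossless stage recovers most of that slack, which is why the paper stacks LZMA on top of quantization rather than sending the raw codes.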
Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2025
Keywords
CNN, IoT, Partitioning, Quantization
National Category
Computer Systems
Identifiers
urn:nbn:se:miun:diva-54756 (URN)
10.1109/ACCESS.2025.3579107 (DOI)
001512606800010 ()
2-s2.0-105008273568 (Scopus ID)
Available from: 2025-06-24 Created: 2025-06-24 Last updated: 2025-09-25