Publications (4 of 4)
Shallari, I., Anwar, Q., Imran, M. & O'Nils, M. (2017). Background Modelling, Analysis and Implementation for Thermographic Images. In: PROCEEDINGS OF THE 2017 SEVENTH INTERNATIONAL CONFERENCE ON IMAGE PROCESSING THEORY, TOOLS AND APPLICATIONS (IPTA 2017): . Paper presented at Seventh International Conference on Image Processing Theory, Tools and Applications (IPTA 2017), Montreal, Canada; November 28 - December 1, 2017. IEEE
Background Modelling, Analysis and Implementation for Thermographic Images
2017 (English) In: PROCEEDINGS OF THE 2017 SEVENTH INTERNATIONAL CONFERENCE ON IMAGE PROCESSING THEORY, TOOLS AND APPLICATIONS (IPTA 2017), IEEE, 2017. Conference paper, Published paper (Refereed)
Abstract [en]

Background subtraction is one of the fundamental steps in the image-processing pipeline for distinguishing foreground from background. Most existing methods have been investigated with respect to visual images, whose challenges differ from those of thermal images. Thermal sensors are invariant to lighting changes and raise fewer privacy concerns. We propose the use of a low-pass IIR filter for background modelling in thermographic imagery, since it performs better than algorithms such as Mixture of Gaussians and k-nearest neighbour while reducing the memory requirements for implementation on embedded architectures. Based on the analysis of four image datasets, both indoor and outdoor, with and without people present, the learning rate of the filter is set to 3×10⁻³ Hz, and the proposed model is implemented on an Artix-7 FPGA.
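For illustration, a minimal software sketch of such a per-pixel first-order IIR background model is shown below (the paper targets an Artix-7 FPGA; this Python version only mirrors the arithmetic). The update coefficient, threshold and frame dimensions are placeholders, and the mapping from the stated 3×10⁻³ Hz learning rate to a per-frame coefficient depends on the frame rate, so the value used here is purely illustrative.

```python
import numpy as np

def update_background(background, frame, alpha=3e-3):
    """One step of a first-order low-pass IIR background model:
    B_t = (1 - alpha) * B_{t-1} + alpha * I_t."""
    return (1.0 - alpha) * background + alpha * frame

def foreground_mask(background, frame, threshold=10.0):
    """Mark pixels whose deviation from the background model exceeds a threshold."""
    return np.abs(frame - background) > threshold

# Illustrative use on a synthetic stream of thermal frames (placeholder sizes/values).
height, width = 120, 160
background = np.full((height, width), 25.0, dtype=np.float32)
for _ in range(100):
    frame = np.random.normal(loc=25.0, scale=0.5, size=(height, width)).astype(np.float32)
    mask = foreground_mask(background, frame)      # foreground pixels for this frame
    background = update_background(background, frame)
```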

Place, publisher, year, edition, pages
IEEE, 2017
Keywords
Infrared, visual, pedestrian detection, smart camera, architecture, surveillance
National Category
Embedded Systems
Identifiers
urn:nbn:se:miun:diva-32445 (URN) 10.1109/IPTA.2017.8310078 (DOI) 000428743900002 () 2-s2.0-85050756650 (Scopus ID) 978-1-5386-1842-4 (ISBN)
Conference
Seventh International Conference on Image Processing Theory, Tools and Applications (IPTA 2017), Montreal, Canada; November 28 - December 1, 2017
Projects
City Movements; SMART (Smart systems and services for an efficient and innovative society)
Available from: 2017-12-13 Created: 2017-12-13 Last updated: 2019-09-10 Bibliographically approved
Anwar, Q., Imran, M. & O'Nils, M. (2016). Intelligence Partitioning as a Method for Architectural Exploration of Wireless Sensor Node. In: Proceedings of the International Conference on Computational Science and Computational Intelligence (CSCI), 2016.: . Paper presented at 2016 International Conference on Computational Science and Computational Intelligence, 15-17 Dec. 2016, Las Vegas, NV, USA (pp. 935-940). IEEE Press, Article ID 7881473.
Intelligence Partitioning as a Method for Architectural Exploration of Wireless Sensor Node
2016 (English) In: Proceedings of the International Conference on Computational Science and Computational Intelligence (CSCI), 2016, IEEE Press, 2016, p. 935-940, article id 7881473. Conference paper, Published paper (Refereed)
Abstract [en]

Embedded systems with integrated sensing, processing and wireless communication are driving future connectivity concepts such as Wireless Sensor Networks (WSNs) and the Internet of Things (IoT). Because of resource limitations, a number of challenges, such as low latency and energy consumption, must still be addressed to realize these concepts to their full potential. To address and understand these challenges, we have developed and employed an intelligence partitioning method that generates different implementation alternatives by distributing the processing load across multiple nodes. The task-to-node mapping has exponential complexity, which is hard to compute for a large-scale system; our method therefore provides recommendations for handling and minimizing this complexity in large systems. Experiments on a use case show that the proposed method can identify unfavourable architecture solutions in which forward and backward communication paths exist in the task-to-node mapping. These solutions can be discarded, thus limiting the space for architectural exploration of a sensor node.
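As a rough illustration of the kind of pruning such a method enables, the sketch below enumerates assignments of a linear task chain onto a fixed node hierarchy and discards mappings that contain a backward communication path (a later task placed closer to the sensor than an earlier one). The task names, node hierarchy and pruning rule are hypothetical simplifications, not the method as published.

```python
from itertools import product

# Hypothetical processing chain and node hierarchy (sensor -> gateway -> server).
TASKS = ["sample", "filter", "extract_features", "classify"]
NODES = ["sensor", "gateway", "server"]   # index encodes distance from the sensor

def has_backward_path(mapping):
    """True if a later task is mapped closer to the sensor than an earlier one,
    forcing data to travel back towards the sensor."""
    order = [NODES.index(node) for node in mapping]
    return any(later < earlier for earlier, later in zip(order, order[1:]))

# Exhaustive enumeration: |NODES| ** |TASKS| candidate mappings.
candidates = list(product(NODES, repeat=len(TASKS)))
feasible = [m for m in candidates if not has_backward_path(m)]

print(f"{len(candidates)} candidate mappings, {len(feasible)} remain after pruning")
```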

Place, publisher, year, edition, pages
IEEE Press, 2016
Keywords
Edge computing, intelligence partitioning, embedded computing
National Category
Computer Systems
Identifiers
urn:nbn:se:miun:diva-30736 (URN) 10.1109/CSCI.2016.0180 (DOI) 000405582400172 () 2-s2.0-85017325247 (Scopus ID) STC (Local ID) 978-1-5090-5510-4 (ISBN) STC (Archive number) STC (OAI)
Conference
2016 International Conference on Computational Science and Computational Intelligence, 15-17 Dec. 2016, Las Vegas, NV, USA
Projects
ASIS; SMART (Smart systems and services for an efficient and innovative society)
Funder
Knowledge Foundation
Available from: 2017-05-16 Created: 2017-05-16 Last updated: 2019-09-09 Bibliographically approved
Malik, A. W., Thörnberg, B., Anwar, Q., Johansen, T. A. & Shahzad, K. (2015). Real Time Decoding of Color Symbol for Optical Positioning System. International Journal of Advanced Robotic Systems, 12(5)
Real Time Decoding of Color Symbol for Optical Positioning System
2015 (English) In: International Journal of Advanced Robotic Systems, ISSN 1729-8806, E-ISSN 1729-8814, Vol. 12, no. 5. Article in journal (Refereed) Published
Abstract [en]

This paper presents the design and real-time decoding of a color symbol that can be used as a reference marker for optical navigation. The designed symbol has a circular shape and is printed on paper using two distinct colors, selected for the highest achievable signal-to-noise ratio. The symbol is designed to carry eight bits of information. Real-time decoding of this symbol is performed using a heterogeneous combination of a Field Programmable Gate Array (FPGA) and a microcontroller. An image sensor with a resolution of 1600 by 1200 pixels is used to capture images of symbols against complex backgrounds. Dynamic image segmentation, component labeling and feature extraction are performed on the FPGA, and the region of interest is computed from the extracted features. Feature data belonging to the symbol is sent from the FPGA to the microcontroller. Image-processing tasks are partitioned between the FPGA and the microcontroller based on data intensity. Experiments were performed to verify the rotational independence of the symbols, and the maximum distance between camera and symbol that still allows correct detection and decoding was analyzed. Experiments were also performed to analyze the number of generated image components and the sub-pixel precision for different light sources and intensities. The proposed hardware architecture can process up to 55 frames per second for accurate detection and decoding of symbols at two-megapixel resolution. The power consumption of the complete system is 342 mW.
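A rough software analogue of the pixel-intensive stages assigned to the FPGA (one-channel segmentation, component labeling and per-component feature extraction) and of the feature-level processing left to the microcontroller is sketched below using NumPy/SciPy. The thresholds, channel choice and feature set are illustrative assumptions and do not reflect the actual hardware architecture.

```python
import numpy as np
from scipy import ndimage

def pixel_stage(image, channel=0, threshold=128):
    """FPGA-side analogue: segment one color channel, label connected components
    and extract compact features (centroid, area) per component."""
    binary = image[..., channel] > threshold            # one-channel segmentation
    labels, count = ndimage.label(binary)               # connected-component labeling
    index = range(1, count + 1)
    centroids = ndimage.center_of_mass(binary, labels, index)
    areas = ndimage.sum(binary, labels, index)
    return list(zip(centroids, areas))                  # only features leave this stage

def feature_stage(features, min_area=50):
    """Microcontroller-side analogue: keep candidate symbol regions by size."""
    return [(centroid, area) for centroid, area in features if area >= min_area]

# Illustrative use on a synthetic 1600 x 1200 RGB frame.
frame = np.random.randint(0, 256, size=(1200, 1600, 3), dtype=np.uint8)
candidates = feature_stage(pixel_stage(frame))
```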

Keywords
Indoor navigation, Reference symbol, Robotic vision
National Category
Robotics
Identifiers
urn:nbn:se:miun:diva-23168 (URN) 10.5772/59680 (DOI) 000350647600001 () 2-s2.0-84923346270 (Scopus ID) STC (Local ID) STC (Archive number) STC (OAI)
Funder
Knowledge Foundation
Available from: 2014-10-08 Created: 2014-10-08 Last updated: 2017-10-27 Bibliographically approved
Anwar, Q., Malik, A. W. & Thörnberg, B. (2013). Design of coded reference labels for indoor optical navigation using monocular camera. In: 2013 International Conference on Indoor Positioning and Indoor Navigation, IPIN 2013: . Paper presented at 2013 International Conference on Indoor Positioning and Indoor Navigation, IPIN 2013; Montbeliard-Belfort; France; 28 October 2013 through 31 October 2013; Category number CFP1309J-ART; Code 105425 (Article ID 6817925). IEEE Computer Society
Design of coded reference labels for indoor optical navigation using monocular camera
2013 (English) In: 2013 International Conference on Indoor Positioning and Indoor Navigation, IPIN 2013, IEEE Computer Society, 2013, article id 6817925. Conference paper, Published paper (Refereed)
Abstract [en]

We present a machine-vision-based indoor navigation system. The paper describes pose estimation of a machine vision system by recognizing rotationally independent, optimized color reference labels combined with a geometrical camera calibration model that determines a set of camera parameters. Each reference label carries one byte of information and can be uniquely designed for different values. More than four reference labels are used in the image to calculate the localization coordinates of the system. An algorithm has been developed in Matlab so that a machine vision system can recognize an arbitrary number of labels at any given orientation. In addition, a one-channel color technique is applied in the segmentation process; this technique significantly reduces the number of segmented image components, limiting the memory storage requirement and processing time. The algorithm for pose estimation is based on the direct linear transformation (DLT) method with a set of control reference labels in relation to the camera calibration model. From the experiments we conclude that the pose of the machine vision system can be calculated with relatively high precision in the calibrated environment of reference labels. © 2013 IEEE.
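The direct linear transformation underlying the pose estimation can be sketched generically as follows: with at least six correspondences between known 3-D reference positions and their 2-D image projections, the 3×4 projection matrix is recovered from the null space of a stacked linear system via SVD. This is a textbook DLT illustration in Python, not the paper's Matlab implementation, and it omits the camera calibration model and label decoding.

```python
import numpy as np

def dlt_projection_matrix(points_3d, points_2d):
    """Estimate the 3x4 projection matrix P from >= 6 point correspondences
    using the direct linear transformation (least squares via SVD)."""
    assert len(points_3d) == len(points_2d) >= 6
    rows = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 4)   # right singular vector for the smallest singular value

def project(P, point_3d):
    """Project a 3-D point with the estimated matrix (sanity-check helper)."""
    x = P @ np.append(point_3d, 1.0)
    return x[:2] / x[2]
```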

Place, publisher, year, edition, pages
IEEE Computer Society, 2013
Series
International Conference on Indoor Positioning and Indoor Navigation, ISSN 2162-7347 ; 2013
Keywords
DLT, label recognition, least square estimation, Machine vision, Matlab, Optical navigation, Pose, Reference labels
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:miun:diva-22611 (URN) 10.1109/IPIN.2013.6817925 (DOI) 000341663400086 () 2-s2.0-84902155063 (Scopus ID) 978-1-4799-4043-1 (ISBN)
Conference
2013 International Conference on Indoor Positioning and Indoor Navigation, IPIN 2013; Montbeliard-Belfort; France; 28 October 2013 through 31 October 2013; Category number CFP1309J-ART; Code 105425
Funder
Knowledge Foundation
Note

Export Date: 20 August 2014

Available from: 2014-10-10 Created: 2014-08-20 Last updated: 2015-09-25 Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0001-5615-7347