Mid Sweden University

Malik, Waheed
Publications (10 of 19)
Malik, A. W., Thörnberg, B., Anwar, Q., Johansen, T. A. & Shahzad, K. (2015). Real Time Decoding of Color Symbol for Optical Positioning System. International Journal of Advanced Robotic Systems, 12(5)
2015 (English) In: International Journal of Advanced Robotic Systems, ISSN 1729-8806, E-ISSN 1729-8814, Vol. 12, no. 5. Article in journal (Refereed), Published
Abstract [en]

This paper presents the design and real-time decoding of a color symbol that can be used as a reference marker for optical navigation. The designed symbol has a circular shape and is printed on paper using two distinct colors. This pair of colors is selected based on the highest achievable signal-to-noise ratio. The symbol is designed to carry eight bits of information. Real-time decoding of this symbol is performed using a heterogeneous combination of a Field Programmable Gate Array (FPGA) and a microcontroller. An image sensor with a resolution of 1600 by 1200 pixels is used to capture images of symbols in complex backgrounds. Dynamic image segmentation, component labeling and feature extraction were performed on the FPGA. The region of interest was then computed from the extracted features, and feature data belonging to the symbol was sent from the FPGA to the microcontroller. Image processing tasks are partitioned between the FPGA and the microcontroller based on data intensity. Experiments were performed to verify the rotational independence of the symbols, and the maximum camera-to-symbol distance that still allows correct detection and decoding was analyzed. Experiments were also performed to analyze the number of generated image components and the sub-pixel precision under different light sources and intensities. The proposed hardware architecture can process up to 55 frames per second for accurate detection and decoding of symbols at two-megapixel resolution. The power consumption of the complete system is 342 mW.
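As a rough illustration of the decoding idea described above, the following Python sketch reads eight bits from a circular two-color symbol by sampling angular sectors on a ring around the detected center. The sector layout, sampling radius, threshold and function names are assumptions made for illustration; the paper's actual pipeline runs on an FPGA and a microcontroller and is not reproduced here.

```python
import math
import numpy as np

def decode_circular_symbol(image, center, radius, n_bits=8):
    """Hypothetical decoder: sample one point per angular sector on a
    ring inside the symbol and map each sample to one bit via a
    two-color threshold."""
    cx, cy = center
    bits = []
    for k in range(n_bits):
        # Sample midway through sector k, on a ring at 0.7 * radius.
        theta = 2.0 * math.pi * (k + 0.5) / n_bits
        x = int(round(cx + 0.7 * radius * math.cos(theta)))
        y = int(round(cy + 0.7 * radius * math.sin(theta)))
        # Assumed intensity threshold separating the two symbol colors.
        bits.append(1 if image[y, x] > 128 else 0)
    return bits

# Usage sketch: center and radius would come from segmentation,
# component labeling and feature extraction (not shown).
frame = np.zeros((1200, 1600), dtype=np.uint8)
symbol_bits = decode_circular_symbol(frame, center=(800, 600), radius=40)
```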

Keywords
Indoor navigation, Reference symbol, Robotic vision
National Category
Robotics
Identifiers
urn:nbn:se:miun:diva-23168 (URN); 10.5772/59680 (DOI); 000350647600001 (); 2-s2.0-84923346270 (Scopus ID); STC (Local ID); STC (Archive number); STC (OAI)
Funder
Knowledge Foundation
Available from: 2014-10-08. Created: 2014-10-08. Last updated: 2017-10-27. Bibliographically approved.
Imran, M., Khursheed, K., Ahmad, N., O'Nils, M., Lawal, N. & Waheed, M. A. (2014). Complexity Analysis of Vision Functions for Comparison of Wireless Smart Cameras. International Journal of Distributed Sensor Networks, Art. no. 710685
2014 (English) In: International Journal of Distributed Sensor Networks, ISSN 1550-1329, E-ISSN 1550-1477, Art. no. 710685. Article in journal (Refereed), Published
Abstract [en]

Implementing vision systems on wireless smart cameras using embedded platforms raises a number of challenges caused by the large amount of data and the limited resources, such as memory, processing capability, energy consumption, and bandwidth. Research in this field usually focuses on developing a specific solution for a particular problem. A tool that facilitates complexity estimation and comparison of wireless smart camera systems is therefore needed in order to develop efficient generic solutions. Toward such a tool, this paper presents a complexity model based on a system taxonomy. Within this model, we investigate the arithmetic complexity and memory requirements of vision functions with the help of the system taxonomy. To demonstrate the use of the proposed model, a number of actual systems are analyzed in a case study. The complexity model, together with the system taxonomy, is used for the complexity estimation of vision functions and for a comparison of vision systems. After comparison, the systems are evaluated for implementation on a single generic architecture. The proposed approach will assist researchers in benchmarking and in proposing efficient generic solutions for the same class of problems with reduced design and development costs.
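As a loose sketch of what such a complexity model can look like, the snippet below estimates per-frame arithmetic operations and line-buffer memory for a small pipeline from per-function costs. The function names and cost figures are illustrative assumptions, not values taken from the paper.

```python
# Taxonomy-style complexity estimate: model each vision function by its
# operations-per-pixel and line-buffer cost, then sum over a pipeline.

FUNCTION_COSTS = {
    # name: (operations per pixel, line buffers of one image row)
    "background_subtract": (2, 1),
    "morphology_3x3":      (9, 2),
    "component_labeling":  (4, 1),
}

def estimate_pipeline(width, height, functions):
    """Return (arithmetic ops per frame, buffer memory in pixels)."""
    pixels = width * height
    ops = sum(FUNCTION_COSTS[f][0] for f in functions) * pixels
    mem = sum(FUNCTION_COSTS[f][1] for f in functions) * width
    return ops, mem

ops, mem = estimate_pipeline(640, 480, ["background_subtract",
                                        "morphology_3x3",
                                        "component_labeling"])
print(f"{ops / 1e6:.1f} Mops/frame, {mem} pixels of line buffering")
```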

National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:miun:diva-22002 (URN); 10.1155/2014/710685 (DOI); 000330458300001 (); 2-s2.0-84893189660 (Scopus ID); STC (Local ID); STC (Archive number); STC (OAI)
Available from: 2014-06-04. Created: 2014-05-28. Last updated: 2017-12-05. Bibliographically approved.
Malik, A. W., Thörnberg, B., Imran, M. & Lawal, N. (2014). Hardware Architecture for Real-time Computation of Image Component Feature Descriptors on a FPGA. International Journal of Distributed Sensor Networks, Art. no. 815378
2014 (English) In: International Journal of Distributed Sensor Networks, ISSN 1550-1329, E-ISSN 1550-1477, Art. no. 815378. Article in journal (Refereed), Published
Abstract [en]

This paper describes a hardware architecture for real-time image component labeling and the computation of image component feature descriptors. These descriptors are object-related properties used to describe each image component. Embedded machine vision systems demand robust performance, power efficiency and minimum area utilization, depending on the deployed application. In the proposed architecture, the hardware modules for component labeling and feature calculation run in parallel. A CMOS image sensor (MT9V032), operating at a maximum clock frequency of 27 MHz, was used to capture the images. The architecture was synthesized and implemented on a Xilinx Spartan-6 FPGA. The developed architecture is capable of processing 390 video frames per second of size 640x480 pixels. Dynamic power consumption is 13 mW at 86 frames per second.
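For readers who want the gist of the computation in software form, here is a minimal Python sketch that labels connected components and accumulates per-component descriptors (area, bounding box, centroid sums) over a binary image. It uses 4-connectivity and union-find; the paper's streaming hardware, in which labeling and feature calculation run in parallel, differs in detail.

```python
import numpy as np

def label_with_features(binary):
    """Label 4-connected components of a 0/1 image and accumulate a
    feature record per component. Software analogue, for illustration
    only, of the parallel labeling/feature hardware."""
    parent = {}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    h, w = binary.shape
    labels = np.zeros((h, w), dtype=np.int32)
    next_label = 1
    for y in range(h):
        for x in range(w):
            if not binary[y, x]:
                continue
            neighbors = [l for l in ((labels[y, x - 1] if x else 0),
                                     (labels[y - 1, x] if y else 0)) if l]
            if not neighbors:
                parent[next_label] = next_label
                labels[y, x] = next_label
                next_label += 1
            else:
                root = min(find(l) for l in neighbors)
                labels[y, x] = root
                for l in neighbors:
                    parent[find(l)] = root

    feats = {}
    for y in range(h):
        for x in range(w):
            if not labels[y, x]:
                continue
            r = find(labels[y, x])
            f = feats.setdefault(r, {"area": 0, "sx": 0, "sy": 0,
                                     "xmin": x, "xmax": x,
                                     "ymin": y, "ymax": y})
            f["area"] += 1
            f["sx"] += x
            f["sy"] += y
            f["xmin"] = min(f["xmin"], x)
            f["xmax"] = max(f["xmax"], x)
            f["ymin"] = min(f["ymin"], y)
            f["ymax"] = max(f["ymax"], y)
    return feats  # centroid of component r: (sx / area, sy / area)
```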

National Category
Electrical Engineering, Electronic Engineering, Information Engineering; Embedded Systems
Identifiers
urn:nbn:se:miun:diva-20382 (URN); 10.1155/2014/815378 (DOI); 000330042300001 (); 2-s2.0-84893832573 (Scopus ID); STC (Local ID); STC (Archive number); STC (OAI)
Funder
Knowledge Foundation
Available from: 2013-11-29. Created: 2013-11-29. Last updated: 2017-12-06. Bibliographically approved.
Malik, A. W. (2014). Real-Time Optical Position Sensing on FPGA. (Doctoral dissertation). Sundsvall: Mid Sweden University
2014 (English). Doctoral thesis, comprehensive summary (Other academic)
Place, publisher, year, edition, pages
Sundsvall: Mid Sweden University, 2014. p. 95
Series
Mid Sweden University doctoral thesis, ISSN 1652-893X ; 176
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:miun:diva-24035 (URN); STC (Local ID); 978-91-87557-29-3 (ISBN); STC (Archive number); STC (OAI)
Available from: 2015-01-08. Created: 2015-01-07. Last updated: 2017-03-06. Bibliographically approved.
Malik, A. W., Thörnberg, B. & Palaniappan, P. K. (2013). Comparison of Three Smart Camera Architectures for Real-time Machine Vision System. International Journal of Advanced Robotic Systems, 10, Art. no. 402
2013 (English) In: International Journal of Advanced Robotic Systems, ISSN 1729-8806, E-ISSN 1729-8814, Vol. 10, Art. no. 402. Article in journal (Refereed), Published
Abstract [en]

This paper presents a machine vision system for real-time computation of the distance and angle of a camera from a set of reference points located on a target board. Three different smart camera architectures were explored to compare performance parameters such as power consumption, frame speed and latency. Architecture 1 consists of hardware machine vision modules modeled at Register Transfer (RT) level and a soft-core processor on a single FPGA chip. Architecture 2 is a commercially available software-based smart camera, the Matrox Iris GT. Architecture 3 is a two-chip solution composed of hardware machine vision modules on an FPGA and an external microcontroller. Results from a performance comparison show that Architecture 2 has higher latency and consumes much more power than Architectures 1 and 3. However, Architecture 2 benefits from an easy programming model. The smart camera system with an FPGA and an external microcontroller has lower latency and consumes less power than the single FPGA chip with hardware modules and a soft-core processor.
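The distance-and-angle computation itself can be summarized with a pinhole-camera sketch. In the Python below, distance follows from the known real separation of two reference points and their apparent pixel separation, and the bearing angle from their offset from the principal point. The focal length and marker geometry are assumed values for illustration; the compared architectures implement a fuller vision pipeline.

```python
import math

def distance_and_angle(pixel_span, real_span_m, focal_px, offset_px):
    """Pinhole-model estimate: Z = f * X / x gives distance from the
    apparent span of two points with known real separation; the bearing
    angle comes from their mean offset from the image center."""
    distance = focal_px * real_span_m / pixel_span
    angle = math.atan2(offset_px, focal_px)
    return distance, math.degrees(angle)

# Example with assumed numbers: markers 0.20 m apart appear 100 px
# apart, centered 50 px right of the principal point, f = 800 px.
d, a = distance_and_angle(100, 0.20, 800, 50)
print(f"distance ~ {d:.2f} m, bearing ~ {a:.1f} degrees")
```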

Keywords
Machine Vision, Component Labeling, Smart Camera
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:miun:diva-19953 (URN); 10.5772/57135 (DOI); 000328072100001 (); 2-s2.0-84890537511 (Scopus ID); STC (Local ID); STC (Archive number); STC (OAI)
Funder
Knowledge Foundation
Available from: 2013-09-30. Created: 2013-09-30. Last updated: 2017-12-06. Bibliographically approved.
Anwar, Q., Malik, A. W. & Thörnberg, B. (2013). Design of coded reference labels for indoor optical navigation using monocular camera. In: 2013 International Conference on Indoor Positioning and Indoor Navigation, IPIN 2013. Paper presented at 2013 International Conference on Indoor Positioning and Indoor Navigation, IPIN 2013; Montbeliard-Belfort; France; 28 October 2013 through 31 October 2013; Category number CFP1309J-ART; Code 105425 (pp. Art. no. 6817925). IEEE Computer Society
2013 (English) In: 2013 International Conference on Indoor Positioning and Indoor Navigation, IPIN 2013, IEEE Computer Society, 2013, Art. no. 6817925. Conference paper, Published paper (Refereed)
Abstract [en]

We present a machine vision based indoor navigation system. The paper describes pose estimation for a machine vision system by recognizing rotationally independent, optimized color reference labels, combined with a geometrical camera calibration model that determines a set of camera parameters. A reference label carries one byte of information, which can be uniquely designed for various values. More than four reference labels are used in the image to calculate the localization coordinates of the system. An algorithm has been developed in Matlab so that a machine vision system can recognize N labels at any given orientation. In addition, a one-channel color technique is applied in the segmentation process; owing to this technique, the number of segmented image components is reduced significantly, limiting the memory storage requirement and processing time. The algorithm for pose estimation is based on the direct linear transformation (DLT) method with a set of control reference labels in relation to the camera calibration model. From the experiments we conclude that the pose of the machine vision system can be calculated with relatively high precision in the calibrated environment of reference labels. © 2013 IEEE.
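The DLT step can be sketched compactly. Assuming the reference labels lie on a common plane, the standard planar DLT below recovers a homography from four or more label-to-image correspondences via SVD; the authors' full formulation, which folds in the camera calibration model, is not reproduced here.

```python
import numpy as np

def dlt_homography(world_xy, image_xy):
    """Standard planar DLT: estimate the 3x3 homography H mapping
    world-plane points to image points from >= 4 correspondences."""
    A = []
    for (X, Y), (u, v) in zip(world_xy, image_xy):
        A.append([-X, -Y, -1, 0, 0, 0, u * X, u * Y, u])
        A.append([0, 0, 0, -X, -Y, -1, v * X, v * Y, v])
    # H is the right singular vector for the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Usage sketch with assumed coordinates: four labels at known
# positions on a plane (meters) and their detected pixel positions.
world = [(0, 0), (1, 0), (1, 1), (0, 1)]
image = [(100, 120), (420, 118), (415, 430), (105, 425)]
H = dlt_homography(world, image)
```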

Place, publisher, year, edition, pages
IEEE Computer Society, 2013
Series
International Conference on Indoor Positioning and Indoor Navigation, ISSN 2162-7347 ; 2013
Keywords
DLT, label recognition, least square estimation, Machine vision, Matlab, Optical navigation, Pose, Reference labels
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:miun:diva-22611 (URN); 10.1109/IPIN.2013.6817925 (DOI); 000341663400086 (); 2-s2.0-84902155063 (Scopus ID); 978-1-4799-4043-1 (ISBN)
Conference
2013 International Conference on Indoor Positioning and Indoor Navigation, IPIN 2013; Montbeliard-Belfort; France; 28 October 2013 through 31 October 2013; Category number CFP1309J-ART; Code 105425
Funder
Knowledge Foundation
Available from: 2014-10-10. Created: 2014-08-20. Last updated: 2015-09-25. Bibliographically approved.
Imran, M., Ahmad, N., Khursheed, K., Malik, A. W., Lawal, N. & O’Nils, M. (2013). Implementation of Wireless Vision Sensor Node With a Lightweight Bi-Level Video Coding. IEEE Journal on Emerging and Selected Topics in Circuits and Systems, 3(2), 198-209, Article ID 6508941.
2013 (English) In: IEEE Journal on Emerging and Selected Topics in Circuits and Systems, ISSN 2156-3357, E-ISSN 2156-3365, Vol. 3, no. 2, p. 198-209, article id 6508941. Article in journal (Refereed), Published
Abstract [en]

Wireless vision sensor networks (WVSNs) consist of a number of wireless vision sensor nodes (VSNs) which have limited resources, i.e., energy, memory, processing, and wireless bandwidth. The processing and communication energy requirements of individual VSNs have been a challenge because of limited energy availability. To meet this challenge, we have proposed and implemented a programmable and energy-efficient VSN architecture which has lower energy requirements and a reduced design complexity. In the proposed system, vision tasks are partitioned between the hardware-implemented VSN and a server. The initial data-dominated tasks are implemented on the VSN while the control-dominated complex tasks are processed on a server. This strategy reduces both the processing energy consumption and the design complexity. The communication energy consumption is reduced by implementing a lightweight bi-level video coding on the VSN. The energy consumption is measured on real hardware for different applications and the proposed VSN is compared against published systems. The results show that, depending on the application, the energy consumption can be reduced by a factor of approximately 1.5 up to 376 as compared to a VSN without the bi-level video coding. The proposed VSN offers an energy-efficient, generic architecture with smaller design complexity on a hardware-reconfigurable platform and offers easy adaptation for a number of applications as compared to published systems.
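The flavor of a lightweight bi-level coder is easy to convey in a few lines: binarize the frame, form a change mask against the previous binary frame, and run-length encode it. The Python sketch below is an illustrative stand-in, not the coder implemented in the paper.

```python
import numpy as np

def bilevel_encode(frame, prev_bits, threshold=128):
    """Binarize a grayscale frame, XOR it with the previous binary
    frame, and run-length encode the flattened change mask."""
    bits = (frame > threshold).astype(np.uint8)
    diff = np.bitwise_xor(bits, prev_bits).ravel()
    runs, current, count = [], 0, 0
    for b in diff:
        if b == current:
            count += 1
        else:
            runs.append(count)
            current, count = b, 1
    runs.append(count)
    return bits, runs  # runs alternate: zeros-run, ones-run, ...

# Usage sketch: encode a frame against an all-zero reference frame.
frame = (np.random.rand(480, 640) * 255).astype(np.uint8)
bits, runs = bilevel_encode(frame, np.zeros((480, 640), dtype=np.uint8))
```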

Place, publisher, year, edition, pages
IEEE Press, 2013
Keywords
Architecture, smart camera, video coding, wireless vision sensor networks (WVSNs), wireless vision sensor node (VSN)
National Category
Engineering and Technology
Identifiers
urn:nbn:se:miun:diva-19193 (URN); 10.1109/JETCAS.2013.2256816 (DOI); 000337789200009 (); 2-s2.0-84879076204 (Scopus ID); STC (Local ID); STC (Archive number); STC (OAI)
Available from: 2013-06-12. Created: 2013-06-12. Last updated: 2024-01-05. Bibliographically approved.
Imran, M., Khursheed, K., Malik, A. W., Ahmad, N., O'Nils, M., Lawal, N. & Thörnberg, B. (2012). Architecture Exploration Based on Tasks Partitioning Between Hardware, Software and Locality for a Wireless Vision Sensor Node. International Journal of Distributed Systems and Technologies, 3(2), 58-71
2012 (English) In: International Journal of Distributed Systems and Technologies, ISSN 1947-3532, E-ISSN 1947-3540, Vol. 3, no. 2, p. 58-71. Article in journal (Refereed), Published
Abstract [en]

Wireless Vision Sensor Networks (WVSNs) are an emerging field consisting of a number of Visual Sensor Nodes (VSNs). Compared to traditional sensor networks, WVSNs operate on two-dimensional data, which requires high bandwidth and leads to high energy consumption. In order to minimize the energy consumption, the focus is on finding energy-efficient and programmable architectures for the VSN by partitioning the vision tasks among hardware (FPGA), software (micro-controller) and locality (sensor node or server). The energy consumption, cost and design time of different processing strategies are analyzed for the implementation of the VSN. Moreover, the processing energy and communication energy consumption of the VSN are investigated in order to maximize the lifetime. Results show that introducing a reconfigurable platform such as an FPGA with small static power consumption, and transmitting compressed images from the VSN after the pixel-based tasks, results in a longer battery lifetime for the VSN.
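The hardware/software/locality trade-off boils down to comparing, per strategy, on-node processing energy plus radio energy for whatever data is transmitted. The numbers in the sketch below are invented for illustration; only the structure of the comparison reflects the paper.

```python
# Compare per-sample node energy for three task-partitioning strategies:
# total = on-node processing energy + bytes transmitted * radio cost.

STRATEGIES = {
    # name: (assumed processing energy in J, assumed bytes sent)
    "send_raw_image":          (0.01, 640 * 480),
    "send_compressed_bilevel": (0.05, 4000),
    "send_features_only":      (0.12, 200),
}

ENERGY_PER_BYTE = 2e-6  # assumed radio cost, joules per byte

def node_energy(name):
    e_proc, tx_bytes = STRATEGIES[name]
    return e_proc + tx_bytes * ENERGY_PER_BYTE

for name in STRATEGIES:
    print(f"{name}: {node_energy(name) * 1e3:.1f} mJ per sample")
```

With these made-up numbers, sending a compressed image after the pixel-based tasks beats both extremes, which is the shape of the result the abstract reports.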

Place, publisher, year, edition, pages
USA: IGI Global, 2012
Keywords
Wireless Vision Sensor Networks; Vision Sensor Node; Hardware/Software Partitioning; Reconfigurable Architecture; Image Processing.
National Category
Engineering and Technology
Identifiers
urn:nbn:se:miun:diva-14940 (URN); 10.4018/jdst.2012040104 (DOI); 2-s2.0-84880522514 (Scopus ID)
Projects
On particle detection
Available from: 2012-01-04. Created: 2011-11-27. Last updated: 2017-12-08. Bibliographically approved.
Imran, M., Khursheed, K., Ahmad, N., O'Nils, M., Lawal, N. & Waheed, M. A. (2012). Architecture of Wireless Visual Sensor Node with Region of Interest Coding. In: Proceedings - 2012 IEEE 3rd International Conference on Networked Embedded Systems for Every Application, NESEA 2012. Paper presented at 2012 IEEE 3rd International Conference on Networked Embedded Systems for Every Application, NESEA 2012; Liverpool; United Kingdom; 13 December 2012 through 14 December 2012; Category number CFP12NEE-ART; Code 96291 (pp. Art. no. 6474029). IEEE conference proceedings
2012 (English) In: Proceedings - 2012 IEEE 3rd International Conference on Networked Embedded Systems for Every Application, NESEA 2012, IEEE conference proceedings, 2012, Art. no. 6474029. Conference paper, Published paper (Refereed)
Abstract [en]

The challenges involved in designing a wireless Vision Sensor Node include the reduction in processing and communication energy consumption, in order to maximize its lifetime. This work presents an architecture for a wireless Vision Sensor Node, which consumes low processing and communication energy. The processing energy consumption is reduced by processing lightweight vision tasks on the VSN and by partitioning the vision tasks between the wireless Vision Sensor Node and the server. The communication energy consumption is reduced with Region Of Interest coding together with a suitable bi-level compression scheme. A number of different processing strategies are investigated to realize a wireless Vision Sensor Node with a low energy consumption. The investigation shows that the wireless Vision Sensor Node, using Region Of Interest coding and the CCITT Group 4 compression technique, consumes 43 percent lower processing and communication energy as compared to the wireless Vision Sensor Node implemented without Region Of Interest coding. The proposed wireless Vision Sensor Node can achieve a lifetime of 5.4 years, with a sample period of 5 minutes, by using 4 AA batteries.
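The reported lifetime can be sanity-checked with simple arithmetic: divide the battery energy budget by the energy drawn per sample. In the sketch below, only the 5-minute sample period and the four AA cells come from the abstract; the cell capacity and per-sample energy are assumptions chosen to show the calculation.

```python
# Back-of-envelope lifetime check for a duty-cycled sensor node.

AA_CAPACITY_WH = 1.5 * 2.5            # assumed ~1.5 V x 2500 mAh per cell
budget_j = 4 * AA_CAPACITY_WH * 3600  # four cells, in joules

sample_period_s = 5 * 60
samples_per_year = 365 * 24 * 3600 / sample_period_s  # 105,120

energy_per_sample_j = 0.095  # assumed processing + transmission cost
lifetime_years = budget_j / (energy_per_sample_j * samples_per_year)
print(f"{lifetime_years:.1f} years")  # ~5.4 with these assumptions
```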

Place, publisher, year, edition, pages
IEEE conference proceedings, 2012
Keywords
architecture, wireless vision sensor node, Region of interest coding, Smart camera, wireless visual sensor networks, wireless multimedia sensor networks.
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:miun:diva-18021 (URN); 10.1109/NESEA.2012.6474029 (DOI); 000319471300019 (); 2-s2.0-84875603760 (Scopus ID); STC (Local ID); 978-146734723-5 (ISBN); STC (Archive number); STC (OAI)
Conference
2012 IEEE 3rd International Conference on Networked Embedded Systems for Every Application, NESEA 2012; Liverpool; United Kingdom; 13 December 2012 through 14 December 2012; Category number CFP12NEE-ART; Code 96291
Available from: 2012-12-19. Created: 2012-12-19. Last updated: 2016-10-20. Bibliographically approved.
Imran, M., Khursheed, K., Ahmad, N., Malik, A. W., O'Nils, M. & Lawal, N. (2012). Complexity Analysis of Vision Functions for implementation of Wireless Smart Cameras using System Taxonomy. In: Proceedings of SPIE - The International Society for Optical Engineering. Paper presented at Real-Time Image and Video Processing 2012; Brussels; 19 April 2012 through 19 April 2012; Code 90041 (pp. Art. no. 84370C). Belgium: SPIE - International Society for Optical Engineering
2012 (English) In: Proceedings of SPIE - The International Society for Optical Engineering, Belgium: SPIE - International Society for Optical Engineering, 2012, Art. no. 84370C. Conference paper, Published paper (Refereed)
Abstract [en]

There are a number of challenges caused by the large amount of data and the limited resources, such as memory, processing capability, energy consumption and bandwidth, when implementing vision systems on wireless smart cameras using embedded platforms. It is usual for research in this field to focus on the development of a specific solution for a particular problem. There is a requirement for a tool which can predict the resource requirements for the development and comparison of vision solutions in wireless smart cameras. To accelerate the development of such a tool, we have used a system taxonomy, which shows that the majority of wireless smart cameras have common functions. In this paper, we have investigated the arithmetic complexity and memory requirements of vision functions by using the system taxonomy and have proposed an abstract complexity model. To demonstrate the use of this model, we have analysed a number of implemented systems and showed that the complexity model, together with the system taxonomy, can be used for comparison and generalization of vision solutions. Moreover, it will assist researchers and designers in predicting the resource requirements for different classes of vision systems in reduced time and with little effort.

Place, publisher, year, edition, pages
Belgium: SPIE - International Society for Optical Engineering, 2012
Series
Proceedings of SPIE, ISSN 0277-786X ; 8437
Keywords
wireless smart camera, complexity analysis, system taxonomy, comparison, resource requirements
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:miun:diva-16036 (URN); 10.1117/12.923797 (DOI); 000305693900010 (); 2-s2.0-84861946720 (Scopus ID); STC (Local ID); 978-0-8194-9129-9 (ISBN); STC (Archive number); STC (OAI)
Conference
Real-Time Image and Video Processing 2012; Brussels; 19 April 2012 through 19 April 2012; Code 90041
Available from: 2012-03-30. Created: 2012-03-30. Last updated: 2016-10-20. Bibliographically approved.