Real-Time Optical Position Sensing on FPGA
Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
2014 (English). Doctoral thesis, comprehensive summary (Other academic).
Place, publisher, year, edition, pages
Sundsvall: Mid Sweden University, 2014. p. 95.
Series
Mid Sweden University doctoral thesis, ISSN 1652-893X ; 176
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
URN: urn:nbn:se:miun:diva-24035
Local ID: STC
ISBN: 978-91-87557-29-3 (print)
OAI: oai:DiVA.org:miun-24035
DiVA, id: diva2:776214
Available from: 2015-01-08. Created: 2015-01-07. Last updated: 2017-03-06. Bibliographically approved.
List of papers
1. Real-time Component Labelling with Centre of Gravity Calculation on FPGA
2011 (English). In: 2011 Proceedings of Sixth International Conference on Systems, 2011. Conference paper, Published paper (Refereed).
Abstract [en]

In this paper we present a hardware unit for real-time component labelling with Centre of Gravity (COG) calculation. The main targeted application area is light spots used as references for robotic navigation. COG calculation can be done in parallel with a single-pass component labelling unit without first having to resolve merged labels. We present a hardware architecture suitable for implementing this COG unit on Field Programmable Gate Arrays (FPGAs). As a result, we get high frame speed, low power and low latency. The device utilization and estimated power dissipation are reported for a Xilinx Virtex-II Pro device simulated at 86 VGA-sized frames per second. The maximum speed is 410 frames per second at a 126 MHz clock.

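The single-pass idea in this abstract can be sketched in software: maintain per-label sums of x, y and a pixel count while raster-scanning the image, and combine accumulators when two partial labels meet, so each centre of gravity is available without a second pass over the frame. The following is a minimal Python illustration of that idea, not the authors' RTL design; all names are hypothetical.

```python
import numpy as np

def label_with_cog(binary):
    """One raster scan over a binary image: 4-connected labelling with
    running sums of x, y and pixel count per label, so each component's
    centre of gravity is (sum_x/count, sum_y/count) without a second pass."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    parent = {}                        # union-find table for merged labels
    acc = {}                           # root label -> [sum_x, sum_y, count]
    nxt = 1

    def find(a):                       # root lookup with path compression
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    for y in range(h):
        for x in range(w):
            if not binary[y, x]:
                continue
            left = labels[y, x - 1] if x > 0 else 0
            up = labels[y - 1, x] if y > 0 else 0
            if not (left or up):       # new component starts here
                lab = nxt; nxt += 1
                parent[lab] = lab
                acc[lab] = [0, 0, 0]
            else:
                lab = find(left or up)
                if left and up:        # two partial labels meet: merge them
                    other = find(up)
                    if other != lab:
                        parent[other] = lab
                        merged = acc.pop(other)
                        acc[lab] = [s + t for s, t in zip(acc[lab], merged)]
            labels[y, x] = lab
            acc[lab][0] += x
            acc[lab][1] += y
            acc[lab][2] += 1
    return {lab: (sx / n, sy / n, n) for lab, (sx, sy, n) in acc.items()}
```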
Identifiers
urn:nbn:se:miun:diva-12170 (URN); STC (Local ID); STC (Archive number); STC (OAI)
Conference
Sixth International Conference on Systems (ICONS 2011)
Projects
OptiPos
Available from: 2010-10-29. Created: 2010-10-29. Last updated: 2016-10-19. Bibliographically approved.
2. Hardware Architecture for Real-time Computation of Image Component Feature Descriptors on a FPGA
2014 (English). In: International Journal of Distributed Sensor Networks, ISSN 1550-1329, E-ISSN 1550-1477, Art. no. 815378. Article in journal (Refereed). Published.
Abstract [en]

This paper describes a hardware architecture for real-time image component labeling and the computation of image component feature descriptors. These descriptors are object-related properties used to describe each image component. Embedded machine vision systems demand robust performance, power efficiency as well as minimum area utilization, depending on the deployed application. In the proposed architecture, the hardware modules for component labeling and feature calculation run in parallel. A CMOS image sensor (MT9V032), operating at a maximum clock frequency of 27 MHz, was used to capture the images. The architecture was synthesized and implemented on a Xilinx Spartan-6 FPGA. The developed architecture is capable of processing 390 video frames per second at a size of 640x480 pixels. Dynamic power consumption is 13 mW at 86 frames per second.

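As a software analogue of the parallel feature-calculation modules, per-component descriptors can be kept as running accumulators that are updated once per labelled pixel. Bounding box, area and centre of gravity are assumed examples here; the record does not enumerate the exact descriptor set used on the FPGA.

```python
# Hypothetical per-component descriptor record, updated once per labelled
# pixel in the same streaming pass as the labeller. The paper's exact
# descriptor set may differ from this assumed example.
class ComponentFeatures:
    def __init__(self, x, y):
        self.min_x = self.max_x = x          # bounding-box corners
        self.min_y = self.max_y = y
        self.sum_x = self.sum_y = 0          # centre-of-gravity accumulators
        self.area = 0

    def update(self, x, y):
        self.min_x = min(self.min_x, x); self.max_x = max(self.max_x, x)
        self.min_y = min(self.min_y, y); self.max_y = max(self.max_y, y)
        self.sum_x += x; self.sum_y += y
        self.area += 1

    def cog(self):
        return self.sum_x / self.area, self.sum_y / self.area
```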
National Category
Electrical Engineering, Electronic Engineering, Information Engineering; Embedded Systems
Identifiers
urn:nbn:se:miun:diva-20382 (URN); 10.1155/2014/815378 (DOI); 000330042300001 (); 2-s2.0-84893832573 (Scopus ID); STC (Local ID); STC (Archive number); STC (OAI)
Funder
Knowledge Foundation
Available from: 2013-11-29. Created: 2013-11-29. Last updated: 2017-12-06. Bibliographically approved.
3. Real-Time machine vision system using FPGA and soft-core processor
2012 (English). In: Proceedings of SPIE - The International Society for Optical Engineering, SPIE - International Society for Optical Engineering, 2012, Art. no. 84370Z. Conference paper, Published paper (Refereed).
Abstract [en]

This paper presents a machine vision system for real-time computation of the distance and angle of a camera from reference points in the environment. Image pre-processing, component labeling and feature extraction modules were modeled at Register Transfer (RT) level and synthesized for implementation on field programmable gate arrays (FPGA). The extracted image component features were sent from the hardware modules to a soft-core processor, MicroBlaze, for computation of distance and angle. A CMOS imaging sensor operating at a clock frequency of 27 MHz was used in our experiments to produce a video stream at the rate of 75 frames per second. The image component labeling and feature extraction modules ran in parallel with a total latency of 13 ms. The MicroBlaze was interfaced with the component labeling and feature extraction modules through a Fast Simplex Link (FSL). The latency for computing the distance and angle of the camera from the reference points was measured to be 2 ms on the MicroBlaze, running at a 100 MHz clock frequency. In this paper, we present the performance analysis, device utilization and power consumption for the designed system. The FPGA-based machine vision system that we propose has high frame speed, low latency and a power consumption that is much lower than that of commercially available smart camera solutions. © 2012 SPIE.

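To make the soft-core stage concrete, the following is a simplified pinhole-camera sketch of how distance and bearing could be derived from the centroids of two reference points with known physical spacing. The focal length, marker spacing and principal point are hypothetical values, not parameters reported in the paper.

```python
import math

# Simplified pinhole-model sketch of the distance/angle computation that
# runs on the soft-core processor. FOCAL_PX, SPACING_M and CX are
# hypothetical values, not parameters from the paper.
FOCAL_PX = 800.0    # assumed focal length expressed in pixels
SPACING_M = 0.20    # assumed physical distance between two reference points
CX = 320.0          # assumed principal point x (VGA image centre)

def distance_and_angle(cog_a, cog_b):
    """cog_a, cog_b: (x, y) pixel centroids of two reference points."""
    # Similar triangles: a known physical gap D imaged as d pixels at
    # focal length f lies at distance Z = f * D / d.
    pixel_gap = math.hypot(cog_b[0] - cog_a[0], cog_b[1] - cog_a[1])
    distance = FOCAL_PX * SPACING_M / pixel_gap
    # Bearing of the marker pair's midpoint off the optical axis.
    mid_x = (cog_a[0] + cog_b[0]) / 2.0
    angle = math.degrees(math.atan2(mid_x - CX, FOCAL_PX))
    return distance, angle
```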
Place, publisher, year, edition, pages
SPIE - International Society for Optical Engineering, 2012
Keywords
Component labeling; Machine vision; Smart camera
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:miun:diva-16697 (URN); 10.1117/12.927854 (DOI); 000305693900028 (); 2-s2.0-84861951577 (Scopus ID); STC (Local ID); 978-081949129-9 (ISBN); STC (Archive number); STC (OAI)
Conference
Real-Time Image and Video Processing 2012; Brussels; 19 April 2012 through 19 April 2012; Code 90041
Available from: 2012-08-10. Created: 2012-08-10. Last updated: 2016-10-20. Bibliographically approved.
4. Comparison of Three Smart Camera Architectures for Real-time Machine Vision System
2013 (English). In: International Journal of Advanced Robotic Systems, ISSN 1729-8806, E-ISSN 1729-8814, Vol. 10, Art. no. 402. Article in journal (Refereed). Published.
Abstract [en]

This paper presents a machine vision system for real-time computation of the distance and angle of a camera from a set of reference points located on a target board. Three different smart camera architectures were explored to compare performance parameters such as power consumption, frame speed and latency. Architecture 1 consists of hardware machine vision modules modeled at Register Transfer (RT) level and a soft-core processor on a single FPGA chip. Architecture 2 is a commercially available software-based smart camera, the Matrox Iris GT. Architecture 3 is a two-chip solution composed of hardware machine vision modules on an FPGA and an external micro-controller. Results from a performance comparison show that Architecture 2 has higher latency and consumes much more power than Architectures 1 and 3. However, Architecture 2 benefits from an easy programming model. The smart camera system with an FPGA and an external microcontroller has lower latency and consumes less power than the single FPGA chip containing hardware modules and a soft-core processor.

Keywords
Machine Vision, Component Labeling, Smart Camera
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:miun:diva-19953 (URN); 10.5772/57135 (DOI); 000328072100001 (); 2-s2.0-84890537511 (Scopus ID); STC (Local ID); STC (Archive number); STC (OAI)
Funder
Knowledge Foundation
Available from: 2013-09-30. Created: 2013-09-30. Last updated: 2017-12-06. Bibliographically approved.
5. Optimized Color Pair Selection for Label Design
2011 (English). In: Proceedings Elmar - International Symposium Electronics in Marine, Zadar, Croatia: IEEE conference proceedings, 2011, p. 115-118. Conference paper, Published paper (Refereed).
Abstract [en]

We present in this paper a technique for designing reference labels that can be used for optical navigation. We optimize the selection of the foreground and background colors used for the printed reference labels. This optimization calibrates for individual color responses among printers and cameras such that the Signal to Noise Ratio (SNR) is maximized. Experiments show that we get a slightly lower SNR for the color labels compared to using a monochrome technique. However, the number of segmented image components is reduced significantly, by as much as 78 percent. This reduction in the number of image components will in turn reduce the memory storage requirement for the embedded computing system.

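One way to read the optimization is as an exhaustive search over printable colour pairs, scoring each pair by the separation of the measured camera responses against their combined spread. The sketch below assumes a squared-mean-separation-over-variance score; the paper's exact SNR definition is not reproduced in this record.

```python
import numpy as np

# Hedged sketch of SNR-driven colour-pair selection: given measured camera
# responses for each printed colour patch, score every foreground/background
# pair and keep the best. The scoring formula here is an assumption.
def best_color_pair(samples):
    """samples: {colour_name: (N, 3) array of observed RGB values}."""
    names = list(samples)
    stats = {c: (samples[c].mean(axis=0), samples[c].var(axis=0).sum())
             for c in names}
    best, best_snr = None, -np.inf
    for i, fg in enumerate(names):
        for bg in names[i + 1:]:
            mu_f, var_f = stats[fg]
            mu_b, var_b = stats[bg]
            signal = float(np.sum((mu_f - mu_b) ** 2))  # mean separation^2
            snr = signal / (var_f + var_b + 1e-12)      # combined spread
            if snr > best_snr:
                best, best_snr = (fg, bg), snr
    return best, best_snr
```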
Place, publisher, year, edition, pages
Zadar, Croatia: IEEE conference proceedings, 2011
Keywords
Label, Recognition, Position Measurement, COG, Subpixel Precision, RGB, HSI, YCbCr
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:miun:diva-14531 (URN); 2-s2.0-80055085889 (Scopus ID); STC (Local ID); 978-953-7044-12-1 (ISBN); STC (Archive number); STC (OAI)
Conference
53rd International Symposium ELMAR-2011; Zadar; 14 September 2011 through 16 September 2011
Projects
OptiPos
Available from: 2011-09-26. Created: 2011-09-26. Last updated: 2016-10-19. Bibliographically approved.
6. Design of coded reference labels for indoor optical navigation using monocular camera
2013 (English). In: 2013 International Conference on Indoor Positioning and Indoor Navigation, IPIN 2013, IEEE Computer Society, 2013, Art. no. 6817925. Conference paper, Published paper (Refereed).
Abstract [en]

We present a machine vision based indoor navigation system. The paper describes pose estimation for a machine vision system by recognizing rotationally independent optimized color reference labels, combined with a geometrical camera calibration model that determines a set of camera parameters. Each reference label carries one byte of information and can be uniquely designed for various values. More than four reference labels are used in the image to calculate the localization coordinates of the system. An algorithm has been developed in Matlab so that a machine vision system can recognize N labels at any given orientation. In addition, a one-channel color technique is applied in the segmentation process; this technique significantly reduces the number of segmented image components, limiting the memory storage requirement and the processing time. The algorithm for pose estimation is based on the direct linear transformation (DLT) method with a set of control reference labels in relation to the camera calibration model. From the experiments we concluded that the pose of the machine vision system can be calculated with relatively high precision in the calibrated environment of reference labels. © 2013 IEEE.

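The DLT step mentioned in the abstract is the textbook homogeneous formulation: each 3D reference-label position and its 2D image detection contribute two linear equations in the twelve entries of the projection matrix, which is recovered as the null vector of the stacked system. A minimal numpy sketch of that standard method follows; it is not the authors' Matlab code.

```python
import numpy as np

# Textbook homogeneous DLT: each 3D->2D correspondence yields two linear
# equations in the 12 entries of the projection matrix P, recovered as the
# right singular vector of the smallest singular value.
def dlt_projection(points_3d, points_2d):
    """points_3d: (N, 3) label positions; points_2d: (N, 2) detections, N >= 6."""
    rows = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 4)   # P, defined up to scale
```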
Place, publisher, year, edition, pages
IEEE Computer Society, 2013
Series
International Conference on Indoor Positioning and Indoor Navigation, ISSN 2162-7347 ; 2013
Keywords
DLT, label recognition, least square estimation, Machine vision, Matlab, Optical navigation, Pose, Reference labels
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:miun:diva-22611 (URN); 10.1109/IPIN.2013.6817925 (DOI); 000341663400086 (); 2-s2.0-84902155063 (Scopus ID); 978-1-4799-4043-1 (ISBN)
Conference
2013 International Conference on Indoor Positioning and Indoor Navigation, IPIN 2013; Montbeliard-Belfort; France; 28 October 2013 through 31 October 2013; Category number CFP1309J-ART; Code 105425
Funder
Knowledge Foundation
Available from: 2014-10-10. Created: 2014-08-20. Last updated: 2015-09-25. Bibliographically approved.
7. Real Time Decoding of Color Symbol for Optical Positioning System
2015 (English). In: International Journal of Advanced Robotic Systems, ISSN 1729-8806, E-ISSN 1729-8814, Vol. 12, no 5. Article in journal (Refereed). Published.
Abstract [en]

This paper presents the design and real-time decoding of a color symbol that can be used as a reference marker for optical navigation. The designed symbol has a circular shape and is printed on paper using two distinct colors. This pair of colors is selected based on the highest achievable signal-to-noise ratio. The symbol is designed to carry eight bits of information. Real-time decoding of this symbol is performed using a heterogeneous combination of a Field Programmable Gate Array (FPGA) and a microcontroller. An image sensor with a resolution of 1600 by 1200 pixels is used to capture images of symbols in complex backgrounds. Dynamic image segmentation, component labeling and feature extraction were performed on the FPGA. The region of interest was further computed from the extracted features. Feature data belonging to the symbol was sent from the FPGA to the microcontroller. Image processing tasks are partitioned between the FPGA and the microcontroller based on data intensity. Experiments were performed to verify the rotational independence of the symbols. The maximum distance between camera and symbol allowing for correct detection and decoding was analyzed. Experiments were also performed to analyze the number of generated image components and the sub-pixel precision versus different light sources and intensities. The proposed hardware architecture can process up to 55 frames per second for accurate detection and decoding of symbols at two-megapixel resolution. The power consumption of the complete system is 342 mW.

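The record does not specify the symbol's bit layout, so the following is a speculative sketch of how one byte might be read back: sample the image at eight angular sectors around the component's centroid and map each sector's colour class to a bit. Every parameter below (radius, sector order, start reference) is an assumption.

```python
import math

# Speculative sketch of decoding one byte from a circular two-colour symbol.
# The actual symbol layout (sector order, start/orientation marker, radii)
# is not given in this record, so all parameters are assumptions.
def decode_symbol(image, cog, radius, is_foreground, sectors=8):
    """image: indexable pixel array; cog: (x, y) centroid in pixels;
    is_foreground: pixel -> bool classifier for the symbol's signal colour."""
    byte = 0
    for k in range(sectors):
        theta = 2.0 * math.pi * (k + 0.5) / sectors      # mid-sector angle
        px = int(round(cog[0] + radius * math.cos(theta)))
        py = int(round(cog[1] + radius * math.sin(theta)))
        bit = 1 if is_foreground(image[py, px]) else 0
        byte = (byte << 1) | bit
    return byte
```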
Keywords
Indoor navigation, Reference symbol, Robotic vision
National Category
Robotics
Identifiers
urn:nbn:se:miun:diva-23168 (URN); 10.5772/59680 (DOI); 000350647600001 (); 2-s2.0-84923346270 (Scopus ID); STC (Local ID); STC (Archive number); STC (OAI)
Funder
Knowledge Foundation
Available from: 2014-10-08. Created: 2014-10-08. Last updated: 2017-10-27. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Authority records

Malik, Abdul Waheed
