Khursheed, Khursheed (ORCID iD: orcid.org/0000-0002-6484-9260)
Publications (10 of 28)
Imran, M., Khursheed, K., Ahmad, N., O'Nils, M., Lawal, N. & Waheed, M. A. (2014). Complexity Analysis of Vision Functions for Comparison of Wireless Smart Cameras. International Journal of Distributed Sensor Networks, Art. no. 710685
2014 (English). In: International Journal of Distributed Sensor Networks, ISSN 1550-1329, E-ISSN 1550-1477, Art. no. 710685. Article in journal (Refereed), Published.
Abstract [en]

Implementing vision systems on wireless smart cameras using embedded platforms poses a number of challenges caused by the large amount of data and by limited resources such as memory, processing capability, energy, and bandwidth. Research in this field usually focuses on developing a specific solution for a particular problem. A tool that facilitates the complexity estimation and comparison of wireless smart camera systems is needed in order to develop efficient generic solutions. To develop such a tool, we present in this paper a complexity model based on a system taxonomy. Using this model, we investigate the arithmetic complexity and memory requirements of vision functions. To demonstrate the use of the proposed model, a number of actual systems are analyzed in a case study. The complexity model, together with the system taxonomy, is used to estimate the complexity of vision functions and to compare vision systems. After comparison, the systems are evaluated for implementation on a single generic architecture. The proposed approach will assist researchers in benchmarking and in proposing efficient generic solutions for the same class of problems, with reduced design and development costs.

National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:miun:diva-22002 (URN); 10.1155/2014/710685 (DOI); 000330458300001 (ISI); 2-s2.0-84893189660 (Scopus ID); STC (Local ID); STC (Archive number); STC (OAI)
Available from: 2014-06-04. Created: 2014-05-28. Last updated: 2017-12-05. Bibliographically approved.
Imran, M., Benkrid, K., Khursheed, K., Ahmad, N., O'Nils, M. & Lawal, N. (2013). Analysis and Characterization of Embedded Vision Systems for Taxonomy Formulation. In: Nasser Kehtarnavaz & Matthias F. Carlsohn (Eds.), Proceedings of SPIE - The International Society for Optical Engineering. Paper presented at Real-Time Image and Video Processing 2013; Burlingame, CA; United States; 6 February 2013 through 7 February 2013; Code 96385 (pp. Art. no. 86560J). USA: SPIE - International Society for Optical Engineering
2013 (English). In: Proceedings of SPIE - The International Society for Optical Engineering / [ed] Nasser Kehtarnavaz, Matthias F. Carlsohn. USA: SPIE - International Society for Optical Engineering, 2013, Art. no. 86560J. Conference paper, Published paper (Refereed).
Abstract [en]

The current trend in embedded vision systems is to propose bespoke solutions for specific problems, as each application has different requirements and constraints. There is no widely used model or benchmark that aims to facilitate generic solutions in embedded vision systems. Providing such a model is challenging due to the wide range of use cases, environmental factors, and available technologies. However, common characteristics can be identified to propose an abstract model. Indeed, the majority of vision applications focus on the detection, analysis and recognition of objects. These tasks can be reduced to vision functions which can be used to characterize vision systems. In this paper, we present the results of a thorough analysis of a large number of different types of vision systems. This analysis led us to the development of a system taxonomy, in which a number of vision functions, as well as their combinations, characterize embedded vision systems. To illustrate the use of this taxonomy, we have tested it against a real vision system that detects magnetic particles in a flowing liquid to predict and avoid critical machinery failure. The proposed taxonomy is evaluated using a quantitative parameter, which shows that it covers 95 percent of the investigated vision systems and that its flow is ordered for 60 percent of them. This taxonomy will serve as a tool for the classification and comparison of systems and will enable researchers to propose generic and efficient solutions for the same class of systems.

Place, publisher, year, edition, pages
USA: SPIE - International Society for Optical Engineering, 2013
Series
Proceedings of SPIE, ISSN 0277-786X ; 8656
Keywords
System taxonomy, Smart cameras, Embedded vision systems, Wireless vision sensor networks
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:miun:diva-16035 (URN); 10.1117/12.2000584 (DOI); 000333051900018 (ISI); 2-s2.0-84875855354 (Scopus ID); STC (Local ID); 978-0-8194-9429-0 (ISBN); STC (Archive number); STC (OAI)
Conference
Real-Time Image and Video Processing 2013; Burlingame, CA; United States; 6 February 2013 through 7 February 2013; Code 96385
Available from: 2013-02-05. Created: 2012-03-30. Last updated: 2016-10-20. Bibliographically approved.
Khursheed, K., Imran, M., Ahmad, N. & O'Nils, M. (2013). Bi-Level Video Codec for Machine Vision Embedded Applications. Elektronika Ir Elektrotechnika, 19(8), 93-96
2013 (English). In: Elektronika Ir Elektrotechnika, ISSN 1392-1215, Vol. 19, no. 8, p. 93-96. Article in journal (Refereed), Published.
Abstract [en]

Wireless Visual Sensor Networks (WVSNs) are feasible today due to advances in many fields of electronics, such as Complementary Metal Oxide Semiconductor (CMOS) cameras, low-power electronics, distributed computing and radio transceivers. The energy budget in a WVSN is limited due to the small form factor of the Visual Sensor Nodes (VSNs) and the wireless nature of the application. The images captured by a VSN contain a huge amount of data, which leads to high communication energy consumption. Hence there is a need to design efficient algorithms that are computationally less complex and provide a high compression ratio. Change coding and Region of Interest (ROI) coding are options for data reduction at the VSN. However, for a higher number of objects in the images, the compression efficiency of both change coding and ROI coding becomes worse than that of image coding. This paper explores the compression efficiency of the Bi-Level Video Codec (BVC) for several representative machine vision applications. We propose implementing image coding, change coding and ROI coding at the VSN and selecting the smallest bit stream among the three. Results show that the compression performance of the BVC for such applications is always better than that of change coding and ROI coding.
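The selection step the abstract describes (encode with all three methods, transmit the smallest bit stream) can be sketched as below. The encoder callables are hypothetical stand-ins, not the paper's actual bi-level coders:

```python
# Sketch of the BVC selection step: encode a frame with image coding, change
# coding and ROI coding, then keep the shortest bit stream. The encoders here
# are placeholders supplied by the caller.

def bvc_select(frame, prev_frame, encoders):
    """Return (mode, bitstream) for the smallest of the candidate encodings.

    `encoders` maps a mode name to a function (frame, prev_frame) -> bytes.
    Because plain image coding is always one of the candidates, the selected
    stream is never larger than the image-coded one.
    """
    streams = {name: enc(frame, prev_frame) for name, enc in encoders.items()}
    best = min(streams, key=lambda name: len(streams[name]))
    return best, streams[best]
```

In practice a small mode tag would be transmitted alongside the stream so the server knows which decoder to apply.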

Keywords
Wireless sensor networks, low power electronics, embedded computing, image communication
National Category
Embedded Systems
Identifiers
urn:nbn:se:miun:diva-20650 (URN); 10.5755/j01.eee.19.8.5401 (DOI); 000325684100020 (ISI); 2-s2.0-84885620407 (Scopus ID); STC (Local ID); STC (Archive number); STC (OAI)
Available from: 2013-12-11. Created: 2013-12-11. Last updated: 2016-10-20. Bibliographically approved.
Khursheed, K., Ahmad, N., Imran, M. & O'Nils, M. (2013). Binary video codec for data reduction in wireless visual sensor networks. In: Kehtarnavaz, N. & Carlsohn, M. F. (Eds.), Proceedings of SPIE - The International Society for Optical Engineering. Paper presented at Real-Time Image and Video Processing 2013; Burlingame, CA; United States; 6 February 2013 through 7 February 2013; Code 96385 (pp. Art. no. 86560L). SPIE - International Society for Optical Engineering
2013 (English). In: Proceedings of SPIE - The International Society for Optical Engineering / [ed] Kehtarnavaz, N. & Carlsohn, M. F. SPIE - International Society for Optical Engineering, 2013, Art. no. 86560L. Conference paper, Published paper (Refereed).
Abstract [en]

A Wireless Visual Sensor Network (WVSN) is formed by deploying many Visual Sensor Nodes (VSNs) in the field. Typical applications of WVSNs include environmental monitoring, health care, industrial process monitoring, and stadium/airport monitoring for security reasons, among many others. The energy budget in outdoor applications of WVSNs is limited by the batteries, and frequent battery replacement is usually not desirable. So the processing as well as the communication energy consumption of the VSN needs to be optimized in such a way that the network remains functional for a longer duration. The images captured by a VSN contain a huge amount of data and require efficient computational resources for processing the images and wide communication bandwidth for transmitting the results. Image processing algorithms must be designed and developed so that they are computationally less complex and provide a high compression rate. For some applications of WVSNs, the captured images can be segmented into bi-level images, and hence bi-level image coding methods will efficiently reduce the amount of information in these segmented images. But the compression rate of bi-level image coding methods is limited by the underlying compression algorithm. Hence there is a need to design other intelligent and efficient algorithms which are computationally less complex and provide a better compression rate than bi-level image coding methods. Change coding is one such algorithm: it is computationally less complex (requiring only exclusive-OR operations) and provides better compression efficiency than image coding, but it is effective only for applications with slight changes between adjacent frames of the video. Detecting and coding the Regions of Interest (ROIs) in the change frame efficiently reduces the amount of information in the change frame.
However, if the number of objects in the change frames rises above a certain level, the compression efficiency of both change coding and ROI coding becomes worse than that of image coding. This paper explores the compression efficiency of the Binary Video Codec (BVC) for data reduction in WVSNs. We propose implementing all three compression techniques, i.e., image coding, change coding and ROI coding, at the VSN and then selecting the smallest bit stream among the results of the three. In this way the compression performance of the BVC never becomes worse than that of image coding. We conclude that the compression efficiency of the BVC is always better than that of change coding and always better than or equal to that of ROI coding and image coding.
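Since the abstract notes that change coding requires only exclusive-OR operations, its core can be illustrated in a few lines. This is a minimal sketch on bi-level frames represented as lists of 0/1 rows; the frame contents are made up for illustration:

```python
# Change coding core: the change frame is the pixel-wise XOR of the current
# and previous bi-level frames, so only pixels that changed are set. XOR is
# its own inverse, so the receiver recovers the frame the same way.

def change_frame(curr, prev):
    """Pixel-wise XOR of two bi-level frames (lists of 0/1 rows)."""
    return [[c ^ p for c, p in zip(cr, pr)] for cr, pr in zip(curr, prev)]

prev = [[0, 0, 1, 1],
        [0, 1, 1, 0]]
curr = [[0, 0, 1, 1],
        [0, 1, 0, 1]]
diff = change_frame(curr, prev)   # [[0, 0, 0, 0], [0, 0, 1, 1]]
restored = change_frame(diff, prev)
assert restored == curr
```

The mostly-zero change frame is what makes the subsequent bi-level compression effective when adjacent frames differ only slightly.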

Place, publisher, year, edition, pages
SPIE - International Society for Optical Engineering, 2013
Series
Proceedings of SPIE, ISSN 0277-786X ; 8656
Keywords
Change Coding, Energy Consumption, Image Coding, ROI Coding, Video Coding, Wireless Visual Sensor Network
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:miun:diva-18976 (URN); 10.1117/12.2003110 (DOI); 000333051900020 (ISI); 2-s2.0-84875844212 (Scopus ID); STC (Local ID); 978-0-8194-9429-0 (ISBN); STC (Archive number); STC (OAI)
Conference
Real-Time Image and Video Processing 2013; Burlingame, CA; United States; 6 February 2013 through 7 February 2013; Code 96385
Available from: 2013-05-22. Created: 2013-05-22. Last updated: 2016-10-20. Bibliographically approved.
Khursheed, K., Imran, M., Ahmad, N. & O'Nils, M. (2013). Efficient Data Reduction Techniques for Remote Applications of a Wireless Visual Sensor Network. International Journal of Advanced Robotic Systems, 10, Art. no. 240
2013 (English). In: International Journal of Advanced Robotic Systems, ISSN 1729-8806, E-ISSN 1729-8814, Vol. 10, Art. no. 240. Article in journal (Refereed), Published.
Abstract [en]

A Wireless Visual Sensor Network (WVSN) is formed by deploying many Visual Sensor Nodes (VSNs) in the field. After acquiring an image of the area of interest, the VSN performs local processing on it and transmits the result using an embedded wireless transceiver. Wireless data transmission consumes a great deal of energy, and the energy consumption depends mainly on the amount of information being transmitted. The image captured by the VSN contains a huge amount of data. For certain applications, segmentation can be performed on the captured images, and the amount of information in the segmented images can be reduced by applying efficient bi-level image compression methods. In this way, the communication energy consumption of each VSN can be reduced. However, the data reduction capability of bi-level image compression standards is fixed and limited by the compression algorithm used. For applications with few changes between adjacent frames, change coding can be applied for further data reduction. Detecting and compressing only the Regions of Interest (ROIs) in the change frame is another possibility for further data reduction. In a communication system where both the sender and the receiver know the employed compression standard, further data reduction is possible by not including the header information in the sender's compressed bit stream. This paper summarizes different information reduction techniques, such as image coding, change coding and ROI coding. The main contribution is an investigation of the combined effect of all these coding methods and their application to a few representative real-life applications. This paper is intended to be a resource for researchers interested in techniques for information reduction in energy-constrained embedded applications.
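The header-omission idea mentioned above can be sketched as follows. The 20-byte fixed header used here is purely illustrative, not the header of any particular bi-level compression standard:

```python
# Header stripping: when sender and receiver agree on the compression
# standard, the fixed header carries no information and can be omitted on
# the wireless link. HEADER is an illustrative placeholder.

HEADER = bytes(20)

def strip_header(bitstream: bytes) -> bytes:
    """Sender side: drop the fixed header before transmission."""
    assert bitstream.startswith(HEADER)
    return bitstream[len(HEADER):]

def restore_header(payload: bytes) -> bytes:
    """Receiver side: prepend the agreed header before standard decoding."""
    return HEADER + payload
```

The saving is small per frame but, multiplied over every transmitted frame, it contributes to the combined reduction the paper investigates.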

Keywords
Image Coding; Change Coding; ROI Coding; Energy Consumption; Visual Sensor Node; Image Header; Wireless Visual Sensor Network
National Category
Electrical Engineering, Electronic Engineering, Information Engineering; Embedded Systems
Identifiers
urn:nbn:se:miun:diva-18596 (URN); 10.5772/55996 (DOI); 000318670000003 (ISI); 2-s2.0-84879235380 (Scopus ID); STC (Local ID); STC (Archive number); STC (OAI)
Available from: 2013-03-15. Created: 2013-03-15. Last updated: 2017-12-06. Bibliographically approved.
Imran, M., Ahmad, N., Khursheed, K., Malik, A. W., Lawal, N. & O’Nils, M. (2013). Implementation of Wireless Vision Sensor Node With a Lightweight Bi-Level Video Coding. IEEE Journal on Emerging and Selected Topics in Circuits and Systems, 3(2), 198-209, Article ID 6508941.
2013 (English). In: IEEE Journal on Emerging and Selected Topics in Circuits and Systems, ISSN 2156-3357, Vol. 3, no. 2, p. 198-209, article id 6508941. Article in journal (Refereed), Published.
Abstract [en]

Wireless vision sensor networks (WVSNs) consist of a number of wireless vision sensor nodes (VSNs) which have limited resources, i.e., energy, memory, processing, and wireless bandwidth. The processing and communication energy requirements of an individual VSN have been a challenge because of the limited energy available. To meet this challenge, we have proposed and implemented a programmable and energy-efficient VSN architecture which has lower energy requirements and reduced design complexity. In the proposed system, vision tasks are partitioned between the hardware-implemented VSN and a server. The initial data-dominated tasks are implemented on the VSN, while the control-dominated complex tasks are processed on a server. This strategy reduces both the processing energy consumption and the design complexity. The communication energy consumption is reduced by implementing a lightweight bi-level video coding on the VSN. The energy consumption is measured on real hardware for different applications, and the proposed VSN is compared against published systems. The results show that, depending on the application, the energy consumption can be reduced by a factor of approximately 1.5 up to 376 compared to a VSN without the bi-level video coding. The proposed VSN offers an energy-efficient, generic architecture with smaller design complexity on a hardware-reconfigurable platform and offers easy adaptation to a number of applications compared to published systems.

Place, publisher, year, edition, pages
IEEE Press, 2013
Keywords
Architecture, smart camera, video coding, wireless vision sensor networks (WVSNs), wireless vision sensor node (VSN)
National Category
Engineering and Technology
Identifiers
urn:nbn:se:miun:diva-19193 (URN); 10.1109/JETCAS.2013.2256816 (DOI); 000337789200009 (ISI); 2-s2.0-84879076204 (Scopus ID); STC (Local ID); STC (Archive number); STC (OAI)
Available from: 2013-06-12. Created: 2013-06-12. Last updated: 2016-10-20. Bibliographically approved.
Khursheed, K. (2013). Investigation of intelligence partitioning and data reduction in wireless visual sensor network. (Doctoral dissertation). Sundsvall: Mid Sweden University
2013 (English). Doctoral thesis, comprehensive summary (Other academic).
Place, publisher, year, edition, pages
Sundsvall: Mid Sweden University, 2013. p. 208
Series
Mid Sweden University doctoral thesis, ISSN 1652-893X ; 150
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:miun:diva-20976 (URN); STC (Local ID); 978-91-87103-75-9 (ISBN); STC (Archive number); STC (OAI)
Available from: 2014-01-08. Created: 2014-01-08. Last updated: 2016-10-20. Bibliographically approved.
Imran, M., Ahmad, N., Khursheed, K., O'Nils, M. & Lawal, N. (2013). Low Complexity Background Subtraction for Wireless Vision Sensor Node. In: Proceedings - 16th Euromicro Conference on Digital System Design, DSD 2013. Paper presented at 16th Euromicro Conference on Digital System Design; 4-6 Sep 2013; Santander, Spain (pp. 681-688).
2013 (English). In: Proceedings - 16th Euromicro Conference on Digital System Design, DSD 2013, 2013, p. 681-688. Conference paper, Published paper (Refereed).
Abstract [en]

Wireless vision sensor nodes have limited resources such as energy, memory, wireless bandwidth and processing capability. It therefore becomes necessary to investigate lightweight vision tasks. To highlight foreground objects, many machine vision applications depend on background subtraction. Traditional background subtraction approaches employ recursive and non-recursive techniques and store the whole image in memory. This raises issues such as complexity on the hardware platform, energy requirements and latency. This work presents a low-complexity background subtraction technique for a hardware-implemented VSN. The proposed technique utilizes existing image scaling techniques to scale down the image. The downscaled image is stored in the memory of a microcontroller that is already present for transmission. For the subtraction operation, the background pixels are generated in real time through upscaling. The performance and memory requirements of the system are compared for four image scaling techniques: nearest neighbor, averaging, bilinear, and bicubic. The results show that a system with the lightweight scaling techniques, i.e., nearest neighbor and averaging, up to a scaling factor of 8, missed on average less than one object compared to a system which uses the full original background image. The proposed approach reduces the cost, design/implementation complexity and memory requirement by a factor of up to 64.
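A minimal sketch of the memory-saving idea described above, using nearest-neighbour scaling on grayscale frames represented as row lists. The function names and the threshold value are illustrative assumptions, not the paper's implementation:

```python
# Low-memory background subtraction sketch: store only a nearest-neighbour-
# downscaled background and regenerate each background pixel on the fly
# (by index mapping, i.e. upscaling) during subtraction.

def downscale_nn(img, s):
    """Nearest-neighbour downscaling of a 2D list by integer factor s."""
    return [row[::s] for row in img[::s]]

def subtract(frame, bg_small, s, thresh=10):
    """Foreground mask: compare each pixel with its upscaled background value.

    bg_small[y // s][x // s] is the nearest-neighbour-upscaled background
    pixel for position (x, y); `thresh` is an illustrative threshold.
    """
    return [[1 if abs(px - bg_small[y // s][x // s]) > thresh else 0
             for x, px in enumerate(row)]
            for y, row in enumerate(frame)]
```

Storing only the downscaled background shrinks the memory footprint by a factor of s squared, i.e. up to 64 for the scaling factor of 8 reported above.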

Keywords
Wireless vision sensor node, background subtraction, smart camera, low complexity
National Category
Other Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:miun:diva-19204 (URN); 10.1109/DSD.2013.77 (DOI); 2-s2.0-84890108886 (Scopus ID); STC (Local ID); 978-076955074-9 (ISBN); STC (Archive number); STC (OAI)
Conference
16th Euromicro Conference On Digital System Design; 4-6 Sep 2013; Santander, Spain
Available from: 2013-06-12. Created: 2013-06-12. Last updated: 2016-10-20. Bibliographically approved.
Ahmad, N., Imran, M., Khursheed, K., Lawal, N. & O'Nils, M. (2013). Model, placement optimization and verification of a sky surveillance visual sensor network. International Journal of Space-Based and Situated Computing (IJSSC), 3(3), 125-135
2013 (English). In: International Journal of Space-Based and Situated Computing (IJSSC), ISSN 2044-4893, E-ISSN 2044-4907, Vol. 3, no. 3, p. 125-135. Article in journal (Refereed), Published.
Abstract [en]

A visual sensor network (VSN) is a distributed system of a large number of camera nodes which generates two-dimensional data. This paper presents a model of a VSN to track large birds, such as the golden eagle, in the sky. The model optimises the placement of camera nodes in the VSN. A camera node is modelled as a function of lens focal length and camera sensor. The VSN provides full coverage between two altitude limits. The model can be used to minimise the number of sensor nodes for any given camera sensor by exploring the focal lengths that fulfil both the full-coverage and the minimum-object-size requirements. For the case of large-bird surveillance, 100% coverage is achieved for the relevant altitudes using 20 camera nodes per km² for the investigated camera sensors. A real VSN is designed, and measurements of the VSN parameters are performed. The results obtained verify the VSN model.
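The trade-off the model explores can be illustrated with a simple pinhole-camera sketch: a longer focal length yields a larger object image at the upper altitude limit but narrower ground coverage, so more nodes are needed. All parameter values below are hypothetical, chosen only to show the relationship:

```python
# Pinhole-camera sketch of the coverage/resolution trade-off. Not the paper's
# model; just the two standard projection relations it builds on.

def object_pixels(obj_size_m, altitude_m, focal_mm, pixel_pitch_um):
    """Approximate object image size in pixels at a given altitude."""
    return obj_size_m * (focal_mm * 1e-3) / (altitude_m * pixel_pitch_um * 1e-6)

def coverage_width(altitude_m, sensor_width_mm, focal_mm):
    """Width in metres of the area covered at a given altitude."""
    return altitude_m * sensor_width_mm / focal_mm

# Example: a 2 m wingspan bird at 1000 m, seen through a 50 mm lens onto a
# sensor with 5 um pixels, spans 2 * 0.05 / (1000 * 5e-6) = 20 pixels.
```

Sweeping the focal length and checking both functions against the two altitude limits is, in spirit, how a placement model can trade node count against minimum object size.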

National Category
Embedded Systems
Identifiers
urn:nbn:se:miun:diva-17118 (URN); 10.1504/IJSSC.2013.056380 (DOI)
Available from: 2012-10-02. Created: 2012-10-02. Last updated: 2017-05-04. Bibliographically approved.
Ahmad, N., Khursheed, K., Imran, M., Lawal, N. & O'Nils, M. (2013). Modeling and Verification of a Heterogeneous Sky Surveillance Visual Sensor Network. International Journal of Distributed Sensor Networks, Art. id. 490489
2013 (English). In: International Journal of Distributed Sensor Networks, ISSN 1550-1329, E-ISSN 1550-1477, Art. id. 490489. Article in journal (Refereed), Published.
Abstract [en]

A visual sensor network (VSN) is a distributed system of a large number of camera nodes and has useful applications in many areas. The primary difference between a VSN and an ordinary scalar sensor network is the nature and volume of the information: in contrast to scalar sensor networks, a VSN generates two-dimensional data in the form of images. In this paper, we design a heterogeneous VSN to reduce the implementation cost required for the surveillance of a given area between two altitude limits. The VSN is designed by combining three sub-VSNs, which results in a heterogeneous VSN. Measurements are performed to verify full coverage and the minimum achieved object image resolution at the lower and higher altitudes, respectively, for each sub-VSN. Verification of the sub-VSNs also verifies the full coverage of the heterogeneous VSN between the given altitude limits. Results show that the heterogeneous VSN is very effective in decreasing the implementation cost required for the coverage of a given area: more than a 70% decrease in cost is achieved by using a heterogeneous VSN instead of a homogeneous VSN to cover the same area.

National Category
Embedded Systems
Identifiers
urn:nbn:se:miun:diva-17121 (URN); 10.1155/2013/490489 (DOI); 000324191600001 (ISI); 2-s2.0-84884237155 (Scopus ID); STC (Local ID); STC (Archive number); STC (OAI)
Available from: 2012-10-02. Created: 2012-10-02. Last updated: 2017-12-07. Bibliographically approved.