Publications (10 of 44)
Brunnström, K., Sjöström, M., Imran, M., Pettersson, M. & Johanson, M. (2018). Quality Of Experience For A Virtual Reality Simulator. In: IS and T International Symposium on Electronic Imaging Science and Technology 2018. Paper presented at Human Vision and Electronic Imaging (HVEI), Burlingame, California USA, 28 January - 2 February, 2018.
2018 (English). In: IS and T International Symposium on Electronic Imaging Science and Technology 2018, 2018. Conference paper, Published paper (Refereed)
Abstract [en]

In this study, we investigate a VR simulator of a forestry crane used for loading logs onto a truck, mainly looking at Quality of Experience (QoE) aspects that may be relevant for task completion, but also whether any discomfort-related symptoms are experienced during task execution. The QoE test has been designed to capture both the general subjective experience of using the simulator and the task completion rate. Moreover, a specific focus has been to study the effects of latency on the subjective experience, with regard both to delays in the crane control interface and to lag in the visual scene rendering in the head-mounted display (HMD). Two larger formal subjective studies have been performed: one with the VR system as it is and one where we added controlled delay to the display update and to the joystick signals. The baseline study shows that most people are more or less happy with the VR system and that it does not have strong effects on any of the symptoms listed in the SSQ. In the delay study we found significant effects on Comfort Quality and Immersion Quality for the higher Display delay (30 ms), but very small impact of joystick delay. Furthermore, the Display delay had a strong influence on the symptoms in the SSQ and caused test subjects to decide not to continue with the complete experiments; this was also found to be connected to the longer Display delays (≥ 20 ms).
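As an illustration of how controlled latency of this kind can be injected into a control signal, the sketch below delays a joystick sample stream with a fixed-length FIFO. This is a hypothetical helper, not the simulator's actual implementation; the 100 Hz sample rate and the neutral pre-fill value are assumptions, and only the 30 ms figure is taken from the study.

```python
from collections import deque

class DelayLine:
    """Hold each sample back by a fixed number of ticks (added latency)."""

    def __init__(self, delay_ms, sample_period_ms):
        # Added delay expressed in whole sample periods.
        self.depth = max(1, round(delay_ms / sample_period_ms))
        # Pre-filled so the first outputs are neutral (0.0) until real data arrives.
        self.buffer = deque([0.0] * self.depth, maxlen=self.depth)

    def push(self, sample):
        """Insert the newest sample and return the sample from `depth` ticks ago."""
        delayed = self.buffer[0]
        self.buffer.append(sample)
        return delayed

# Example: 30 ms of added delay on a 100 Hz joystick stream (10 ms sample period).
joystick_delay = DelayLine(delay_ms=30, sample_period_ms=10)
for tick, value in enumerate([0.0, 0.2, 0.5, 0.9, 1.0]):
    print(tick, joystick_delay.push(value))
```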

Keywords
Quality of Experience, Virtual Reality, simulator, Remote operation
National Category
Media and Communication Technology
Identifiers
urn:nbn:se:miun:diva-33073 (URN), 2-s2.0-85064043234 (Scopus ID)
Conference
Human Vision and Electronic Imaging (HVEI), Burlingame, California USA, 28 January - 2 February, 2018
Funder
Knowledge Foundation, 20160194
Available from: 2018-02-26. Created: 2018-02-26. Last updated: 2019-10-11. Bibliographically approved.
Shallari, I., Anwar, Q., Imran, M. & O'Nils, M. (2017). Background Modelling, Analysis and Implementation for Thermographic Images. In: PROCEEDINGS OF THE 2017 SEVENTH INTERNATIONAL CONFERENCE ON IMAGE PROCESSING THEORY, TOOLS AND APPLICATIONS (IPTA 2017). Paper presented at Seventh International Conference on Image Processing Theory, Tools and Applications (IPTA 2017), Montreal, Canada; November 28 - December 1, 2017. IEEE
2017 (English). In: PROCEEDINGS OF THE 2017 SEVENTH INTERNATIONAL CONFERENCE ON IMAGE PROCESSING THEORY, TOOLS AND APPLICATIONS (IPTA 2017), IEEE, 2017. Conference paper, Published paper (Refereed)
Abstract [en]

Background subtraction is one of the fundamental steps in the image-processing pipeline for distinguishing foreground from background. Most methods have been investigated with respect to visual images, where the challenges differ from those of thermal images. Thermal sensors are invariant to light changes and raise fewer privacy concerns. We propose the use of a low-pass IIR filter for background modelling in thermographic imagery because of its better performance compared to algorithms such as Mixture of Gaussians and K-nearest neighbour, while reducing the memory requirements for implementation in embedded architectures. Based on the analysis of four different image datasets, both indoor and outdoor and with and without people present, the learning rate for the filter is set to 3×10⁻³ Hz and the proposed model is implemented on an Artix-7 FPGA.
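For readers who want to see the idea in code, the following is a minimal sketch of a first-order low-pass IIR background model of the kind the abstract describes. The learning-rate value comes from the abstract; treating it directly as a per-frame update weight, as well as the foreground threshold and frame size, are assumptions made only for illustration.

```python
import numpy as np

ALPHA = 3e-3  # learning rate from the abstract, used here as a per-frame weight (assumption)

def update_background(background, frame, alpha=ALPHA):
    """First-order IIR low-pass update: B <- (1 - alpha) * B + alpha * I.
    Only the previous background frame has to be stored, which keeps the
    memory footprint small for an embedded (e.g. FPGA) implementation."""
    return (1.0 - alpha) * background + alpha * frame

def foreground_mask(background, frame, threshold=8.0):
    """Flag pixels that deviate strongly from the modelled background.
    The threshold is an illustrative value, not taken from the paper."""
    return np.abs(frame.astype(np.float32) - background) > threshold

# Usage sketch on a synthetic stream of 8-bit thermal frames.
rng = np.random.default_rng(0)
background = np.zeros((240, 320), dtype=np.float32)
for _ in range(10):
    frame = rng.integers(0, 256, (240, 320), dtype=np.uint8)
    mask = foreground_mask(background, frame)
    background = update_background(background, frame.astype(np.float32))
```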

Place, publisher, year, edition, pages
IEEE, 2017
Keywords
Infrared, visual, pedestrian detection, smart camera, architecture, surveillance
National Category
Embedded Systems
Identifiers
urn:nbn:se:miun:diva-32445 (URN), 10.1109/IPTA.2017.8310078 (DOI), 000428743900002 (), 2-s2.0-85050756650 (Scopus ID), 978-1-5386-1842-4 (ISBN)
Conference
Seventh International Conference on Image Processing Theory, Tools and Applications (IPTA 2017), Montreal, Canada; November 28 - December 1, 2017
Projects
City Movements; SMART (Smarta system och tjänster för ett effektivt och innovativt samhälle)
Available from: 2017-12-13. Created: 2017-12-13. Last updated: 2019-09-10. Bibliographically approved.
Shallari, I., Imran, M., Lawal, N. & O'Nils, M. (2017). Evaluating Pre-Processing Pipelines for Thermal-Visual Smart Camera. In: Proceedings of the 11th International Conference on Distributed Smart Cameras. Paper presented at 11th International Conference on Distributed Smart Cameras, Stanford University, Stanford; United States; 5 September 2017 through 7 September 2017 (pp. 95-100). ACM Digital Library, F132201
2017 (English). In: Proceedings of the 11th International Conference on Distributed Smart Cameras, ACM Digital Library, 2017, Vol. F132201, p. 95-100. Conference paper, Published paper (Refereed)
Abstract [en]

Smart camera systems integrating multi-modal image sensors provide better spectral sensitivity and hence better pass-fail decisions. In a given vision system, pre-processing tasks have a ripple effect on the output data and on the pass-fail decisions of high-level tasks such as feature extraction, classification and recognition. In this work, we investigated four pre-processing pipelines and evaluated their effect on classification accuracy and output transmission data. The pre-processing pipelines processed four types of images: thermal grayscale, thermal binary, visual and visual binary. The results show that the pre-processing pipeline that transmits visual compressed Region of Interest (ROI) images offers 13 to 64 percent better classification accuracy compared to thermal grayscale, thermal binary and visual binary. The results also show that visual raw and visual compressed ROI with a suitable quantization matrix offer similar classification accuracy, but visual compressed ROI reduces the communication data by up to 99 percent compared to visual ROI.
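The data-reduction side of such a comparison can be illustrated with a short sketch: cut a region of interest out of a visual frame and compare its JPEG-compressed size with the raw ROI and the full frame. The frame size, ROI coordinates and JPEG quality are placeholder values, and Pillow stands in for whatever codec the actual pipeline used.

```python
import io
import numpy as np
from PIL import Image  # Pillow, used here only to obtain a JPEG-compressed size

rng = np.random.default_rng(0)
visual = rng.integers(0, 256, (480, 640, 3), dtype=np.uint8)  # placeholder visual frame

# Hypothetical ROI handed over by the thermal stage: (x, y, width, height).
x, y, w, h = 200, 120, 160, 240
roi = visual[y:y + h, x:x + w]

raw_roi_bytes = roi.nbytes
buffer = io.BytesIO()
Image.fromarray(roi).save(buffer, format="JPEG", quality=75)  # quality is an assumption
compressed_roi_bytes = buffer.getbuffer().nbytes

print(f"full frame: {visual.nbytes} B, raw ROI: {raw_roi_bytes} B, "
      f"compressed ROI: {compressed_roi_bytes} B")
```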

Place, publisher, year, edition, pages
ACM Digital Library, 2017
Keywords
Thermal imaging, FPGA, intelligence partitioning
National Category
Embedded Systems
Identifiers
urn:nbn:se:miun:diva-32437 (URN), 10.1145/3131885.3131908 (DOI), 2-s2.0-85038877488 (Scopus ID), 978-1-4503-5487-5 (ISBN)
Conference
11th International Conference on Distributed Smart Cameras, Stanford University, Stanford; United States; 5 September 2017 through 7 September 2017
Projects
SMART (Smarta system och tjänster för ett effektivt och innovativt samhälle)
Funder
Knowledge Foundation
Available from: 2017-12-13. Created: 2017-12-13. Last updated: 2019-09-09. Bibliographically approved.
Lawal, N., O'Nils, M. & Imran, M. (2016). Design exploration of a multi-camera dome for sky monitoring. In: ACM International Conference Proceeding Series. Paper presented at 10th International Conference on Distributed Smart Cameras, ICDSC 2016, 12 September 2016 through 15 September 2016 (pp. 14-18). Association for Computing Machinery (ACM), 12-15-September-2016, Article ID 2967419.
2016 (English). In: ACM International Conference Proceeding Series, Association for Computing Machinery (ACM), 2016, Vol. 12-15-September-2016, p. 14-18, article id 2967419. Conference paper, Published paper (Refereed)
Abstract [en]

Sky monitoring has many applications, but also many challenges that must be addressed before it can be realized, among them cost, energy consumption and complex deployment. One way to address these challenges is to compose a camera dome by grouping cameras that together monitor a half sphere of the sky. In this paper, we present a model for design exploration that investigates how the characteristics of camera chips and objective lenses affect the overall cost of a node of a camera dome. The investigation showed that accepting more cameras in a single node can reduce the total cost of the system. We conclude that, with a suitable design and camera placement technique, a cost-effective solution can be achieved for massive open-area monitoring such as sky monitoring.
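The kind of trade-off such a model captures can be sketched as follows: given the lens field of view and the unit costs of the sensor and lens, estimate how many cameras are needed to cover a hemisphere and what one dome node would cost. The solid-angle tiling formula, the packing-efficiency factor and all prices are assumptions made for illustration, not the paper's model.

```python
import math

def cameras_needed(fov_deg, packing_efficiency=0.7):
    """Estimate cameras required to cover a hemisphere (2*pi steradians).
    Assumes each camera covers a circular cone with full field of view
    `fov_deg`; `packing_efficiency` accounts for overlap when tiling cones
    and is an assumed value."""
    half_angle = math.radians(fov_deg) / 2.0
    cone_solid_angle = 2.0 * math.pi * (1.0 - math.cos(half_angle))
    return math.ceil((2.0 * math.pi) / (cone_solid_angle * packing_efficiency))

def node_cost(fov_deg, sensor_cost, lens_cost, housing_cost=200.0):
    """Total cost of one dome node; all prices are placeholder figures."""
    n = cameras_needed(fov_deg)
    return n, n * (sensor_cost + lens_cost) + housing_cost

for fov in (60, 90, 120):
    n, cost = node_cost(fov, sensor_cost=35.0, lens_cost=15.0)
    print(f"FOV {fov:3d} deg -> {n:2d} cameras, node cost ~{cost:.0f} (arbitrary units)")
```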

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2016
Keywords
Design exploration, Distributed smart cameras, Sky monitoring, Volumetric surveillance
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:miun:diva-29141 (URN), 10.1145/2967413.2967419 (DOI), 2-s2.0-84989322238 (Scopus ID), STC (Local ID), 9781450347860 (ISBN), STC (Archive number), STC (OAI)
Conference
10th International Conference on Distributed Smart Cameras, ICDSC 2016, 12 September 2016 through 15 September 2016
Projects
SMART (Smarta system och tjänster för ett effektivt och innovativt samhälle)
Note

Conference Paper

Available from: 2016-10-27. Created: 2016-10-27. Last updated: 2019-09-09. Bibliographically approved.
Imran, M., Rinner, B., Zand, S. Z. & O'Nils, M. (2016). Exploration of preprocessing architectures for field-programmable gate array-based thermal-visual smart camera. Journal of Electronic Imaging (JEI), 25(4), Article ID 041006.
2016 (English). In: Journal of Electronic Imaging (JEI), ISSN 1017-9909, E-ISSN 1560-229X, Vol. 25, no 4, article id 041006. Article in journal (Refereed), Published
Abstract [en]

Embedded smart cameras are gaining in popularity for a number of real-time outdoor surveillance applications. However, there are still challenges, such as computational latency, variation in illumination, and occlusion. To address these challenges, multimodal systems integrating multiple imagers can be utilized; the trade-off is more stringent processing and communication requirements for embedded platforms. We therefore investigated two low-complexity, high-performance preprocessing architectures for a multi-imager node on a field-programmable gate array (FPGA). In the proposed architectures, the majority of the tasks are performed on the thermal images because of their lower spatial resolution. Analysis with different sets of images shows that the system with the proposed architectures offers better detection performance and can reduce output data by a factor of 1.7 to 99 compared with full-size images. The proposed architectures achieve a frame rate of 53 fps, logic utilization of 2.1% to 4.1%, memory consumption of 987 to 148 KB, and power consumption in the range of 141 to 163 mW on an Artix-7 FPGA. We conclude that the proposed architectures offer reduced design complexity and lower processing and communication requirements while retaining the configurability of the system.
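A key point in the abstract is that most processing runs on the low-resolution thermal image and the result is then applied to the visual image. The sketch below shows the simplest version of that hand-over: scaling a bounding box from thermal to visual coordinates. It assumes the two imagers are co-registered so a pure scaling suffices; the resolutions are placeholder values, not the paper's.

```python
def thermal_roi_to_visual(roi, thermal_size=(80, 60), visual_size=(640, 480)):
    """Scale a bounding box detected in the thermal image to visual coordinates.

    Assumes the two imagers are co-registered so that a pure scaling maps
    between them; real systems need calibration, which is out of scope here.
    roi is (x, y, width, height) in thermal pixels.
    """
    sx = visual_size[0] / thermal_size[0]
    sy = visual_size[1] / thermal_size[1]
    x, y, w, h = roi
    return (round(x * sx), round(y * sy), round(w * sx), round(h * sy))

# A detection at thermal pixels (10, 12) with size 20x28 maps to the visual frame:
print(thermal_roi_to_visual((10, 12, 20, 28)))  # -> (80, 96, 160, 224)
```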

Keywords
architecture, field-programmable gate array, preprocessing, smart camera, thermal
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:miun:diva-28491 (URN), 10.1117/1.JEI.25.4.041006 (DOI), 000387787000006 (), 2-s2.0-84973466871 (Scopus ID), STC (Local ID), STC (Archive number), STC (OAI)
Note

CODEN: JEIME

Available from: 2016-07-22. Created: 2016-07-21. Last updated: 2017-06-30. Bibliographically approved.
Anwar, Q., Imran, M. & O'Nils, M. (2016). Intelligence Partitioning as a Method for Architectural Exploration of Wireless Sensor Node. In: Proceedings of the International Conference on Computational Science and Computational Intelligence (CSCI), 2016. Paper presented at 2016 International Conference on Computational Science and Computational Intelligence, 15-17 Dec. 2016, Las Vegas, NV, USA (pp. 935-940). IEEE Press, Article ID 7881473.
2016 (English). In: Proceedings of the International Conference on Computational Science and Computational Intelligence (CSCI), 2016, IEEE Press, 2016, p. 935-940, article id 7881473. Conference paper, Published paper (Refereed)
Abstract [en]

Embedded systems with integrated sensing, processing and wireless communication are driving future connectivity concepts such as Wireless Sensor Networks (WSNs) and the Internet of Things (IoT). Because of resource limitations, a number of challenges, such as achieving low latency and low energy consumption, remain before these concepts can be realized to their full potential. To address and understand these challenges, we have developed and employed an intelligence partitioning method that generates different implementation alternatives by distributing the processing load across multiple nodes. The task-to-node mapping has exponential complexity, which is hard to compute for a large-scale system, so our method also provides recommendations for handling and minimizing this complexity. Experiments on a use case show that the proposed method is able to identify unfavourable architecture solutions in which forward and backward communication paths exist in the task-to-node mapping. These solutions can be excluded from further architectural exploration, thus limiting the design space for the architecture exploration of a sensor node.
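The pruning rule the abstract describes, discarding mappings in which data would have to travel back towards the sensor, can be sketched by brute-force enumeration over a small pipeline. The task names, node names and pipeline length below are hypothetical; the exponential growth of the candidate set and the effect of the pruning rule are the point of the example, not the paper's exact method.

```python
from itertools import product

TASKS = ["capture", "segment", "extract", "classify"]   # hypothetical processing pipeline
NODES = ["sensor", "gateway", "server"]                  # ordered by distance from the sensor

def has_backward_path(mapping, node_order=NODES):
    """True if data would have to flow back towards the sensor at any stage."""
    ranks = [node_order.index(node) for node in mapping]
    return any(later < earlier for earlier, later in zip(ranks, ranks[1:]))

all_mappings = list(product(NODES, repeat=len(TASKS)))        # 3^4 = 81 candidates
feasible = [m for m in all_mappings if not has_backward_path(m)]

print(f"{len(all_mappings)} candidate mappings, {len(feasible)} remain after pruning")
for mapping in feasible[:5]:
    print(dict(zip(TASKS, mapping)))
```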

Place, publisher, year, edition, pages
IEEE Press, 2016
Keywords
Edge computing, intelligence partitioning, embedded computing
National Category
Computer Systems
Identifiers
urn:nbn:se:miun:diva-30736 (URN), 10.1109/CSCI.2016.0180 (DOI), 000405582400172 (), 2-s2.0-85017325247 (Scopus ID), STC (Local ID), 978-1-5090-5510-4 (ISBN), STC (Archive number), STC (OAI)
Conference
2016 International Conference on Computational Science and Computational Intelligence, 15-17 Dec. 2016, Las Vegas, NV, USA
Projects
ASIS; SMART (Smarta system och tjänster för ett effektivt och innovativt samhälle)
Funder
Knowledge Foundation
Available from: 2017-05-16. Created: 2017-05-16. Last updated: 2019-09-09. Bibliographically approved.
Imran, M., Wang, X., Lawal, N. & O'Nils, M. (2016). Pre-processing Architecture for IR-Visual Smart Camera Based on Post-Processing Constraints. Paper presented at 15th International Workshop on Cellular Nanoscale Networks and their Applications, Dresden, Germany, August 23-25, 2016. IEEE
2016 (English). Conference paper, Published paper (Refereed)
Abstract [en]

In embedded vision systems, the efficiency of pre-processing architectures has a ripple effect on post-processing functions such as feature extraction, classification and recognition. In this work, we investigated a pre-processing architecture for a smart camera system integrating thermal and visual sensors, taking the constraints of post-processing into account. Exploiting the locality of the system, we performed pre-processing on the camera node using an FPGA and post-processing on the client device using a microprocessor platform, the NVIDIA Tegra. The study shows that for outdoor people-surveillance applications with complex backgrounds and varying lighting conditions, the pre-processing architecture that transmits thermal binary Region-of-Interest (ROI) images offers better classification accuracy and lower complexity than the alternative approaches.

Place, publisher, year, edition, pages
IEEE, 2016
Keywords
Wireless smart camera, Infrared, Thermal, Pre-processing, Architecture, Post-processing
National Category
Engineering and Technology
Identifiers
urn:nbn:se:miun:diva-27371 (URN), STC (Local ID), STC (Archive number), STC (OAI)
Conference
15th International Workshop on Cellular Nanoscale Networks and their Applications, Dresden, Germany, August 23-25, 2016
Funder
Knowledge Foundation
Available from: 2016-04-11. Created: 2016-04-11. Last updated: 2017-06-30. Bibliographically approved.
Imran, M., O'Nils, M., Munir, H. & Thörnberg, B. (2015). Low complexity FPGA based background subtraction technique for thermal imagery. In: ACM International Conference Proceeding Series. Paper presented at 9th International Conference on Distributed Smart Cameras, ICDSC 2015; Seville; Spain; 8 September 2015 through 11 September 2015; Code 117454 (pp. 1-6). Association for Computing Machinery (ACM)
2015 (English). In: ACM International Conference Proceeding Series, Association for Computing Machinery (ACM), 2015, p. 1-6. Conference paper, Published paper (Refereed)
Abstract [en]

Embedded smart camera systems are gaining popularity for a number of real-world surveillance applications. However, challenges such as variation in illumination, shadows, occlusion and changing weather conditions remain when employing vision algorithms in outdoor environments. For safety-critical surveillance applications, the visual sensors can be complemented with beyond-visual-range sensors, which in turn requires analysis, development and modification of existing imaging techniques. In this work, a low-complexity background modelling and subtraction technique is proposed for thermal imagery. The proposed technique has been implemented on Field Programmable Gate Arrays (FPGAs) after in-depth analysis of different sets of images characterizing poor signal-to-noise-ratio challenges, e.g. motion of high-frequency background objects, temperature variation and camera jitter. The proposed technique dynamically updates the background at pixel level and requires storage of only a single frame, as opposed to existing techniques. A comparison of this approach with two other approaches shows that it performs better in different environmental conditions. The proposed technique has been modelled in Register Transfer Logic (RTL), and implementation on the latest FPGAs shows that the design requires less than 1 percent of the logic resources and 47 percent of the block RAMs, and consumes 91 mW on an Artix-7 100T FPGA.
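A pixel-level, single-frame-storage update of the kind described here can be sketched as a selective running update: background pixels drift towards the current frame by a small step while detected foreground pixels leave the model untouched. The update rule, step size and threshold below are assumptions for illustration only; the abstract does not give the exact rule, and this differs from the IIR sketch shown for the 2017 IPTA paper above.

```python
import numpy as np

def detect(background, frame, threshold=10):
    """Foreground mask from the absolute difference against the background model."""
    return np.abs(frame.astype(np.int16) - background.astype(np.int16)) > threshold

def selective_update(background, frame, mask, step=1):
    """Update only pixels classified as background, storing a single frame.
    Background pixels move towards the current frame by at most `step` grey
    levels per frame; this rule is an assumption made for illustration."""
    bg = background.astype(np.int16)
    diff = np.clip(frame.astype(np.int16) - bg, -step, step)
    bg[~mask] += diff[~mask]          # foreground pixels do not pollute the model
    return bg.astype(np.uint8)

# Synthetic thermal frame with a warm object entering a flat background.
rng = np.random.default_rng(1)
background = np.full((60, 80), 30, dtype=np.uint8)
frame = background + rng.integers(0, 3, background.shape, dtype=np.uint8)
frame[20:40, 30:50] = 120
mask = detect(background, frame)
background = selective_update(background, frame, mask)
print(mask.sum(), "foreground pixels; model unchanged inside the object:",
      bool((background[20:40, 30:50] == 30).all()))
```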

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2015
Keywords
Background modelling, subtraction, FPGA, architecture, smart camera, thermal imaging
National Category
Embedded Systems
Identifiers
urn:nbn:se:miun:diva-25997 (URN), 10.1145/2789116.2789121 (DOI), 2-s2.0-84958251961 (Scopus ID), STC (Local ID), 978-145033681-9 (ISBN), STC (Archive number), STC (OAI)
Conference
9th International Conference on Distributed Smart Cameras, ICDSC 2015; Seville; Spain; 8 September 2015 through 11 September 2015; Code 117454
Available from: 2015-09-28. Created: 2015-09-28. Last updated: 2016-12-23. Bibliographically approved.
Imran, M., O'Nils, M., Kardeby, V. & Munir, H. (2015). STC-CAM1, IR-visual based smart camera system. In: ACM International Conference Proceeding Series. Paper presented at 9th International Conference on Distributed Smart Cameras, ICDSC 2015; Seville; Spain; 8 September 2015 through 11 September 2015; Code 117454 (pp. 195-196). Association for Computing Machinery (ACM)
2015 (English). In: ACM International Conference Proceeding Series, Association for Computing Machinery (ACM), 2015, p. 195-196. Conference paper, Published paper (Refereed)
Abstract [en]

Safety-critical applications require robust, real-time surveillance. For such applications, a vision sensor alone can give false positive results because of poor lighting, occlusion or adverse weather conditions. In this work, the visual sensor is complemented by an infrared thermal sensor, which makes the system more resilient in unfavorable situations. In the proposed camera architecture, the initial data-intensive tasks are performed locally on the sensor node and compressed data is then transmitted to a client device where the remaining vision tasks are performed. The proposed camera architecture is demonstrated as a proof-of-concept; it offers a generic architecture with better surveillance while performing only low-complexity computations on the resource-constrained devices.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2015
Keywords
Wireless smart camera, Infrared, Thermal, Architecture, Wireless vision sensor node, Internet-of-Things
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:miun:diva-25999 (URN), 10.1145/2789116.2802649 (DOI), 2-s2.0-84958242971 (Scopus ID), STC (Local ID), 978-145033681-9 (ISBN), STC (Archive number), STC (OAI)
Conference
9th International Conference on Distributed Smart Cameras, ICDSC 2015; Seville; Spain; 8 September 2015 through 11 September 2015; Code 117454
Available from: 2015-09-28. Created: 2015-09-28. Last updated: 2016-12-23. Bibliographically approved.
Imran, M., Khursheed, K., Ahmad, N., O'Nils, M., Lawal, N. & Waheed, M. A. (2014). Complexity Analysis of Vision Functions for Comparison of Wireless Smart Cameras. International Journal of Distributed Sensor Networks, Art. no. 710685
2014 (English). In: International Journal of Distributed Sensor Networks, ISSN 1550-1329, E-ISSN 1550-1477, Art. no. 710685. Article in journal (Refereed), Published
Abstract [en]

There are a number of challenges caused by the large amount of data and the limited resources, such as memory, processing capability, energy and bandwidth, when implementing vision systems on wireless smart cameras using embedded platforms. Research in this field usually focuses on developing a specific solution for a particular problem, so a tool is needed that facilitates the complexity estimation and comparison of wireless smart camera systems in order to develop efficient generic solutions. Towards such a tool, this paper presents a complexity model based on a system taxonomy. Using this model, we investigate the arithmetic complexity and memory requirements of vision functions. To demonstrate the use of the proposed model, a number of actual systems are analyzed in a case study. The complexity model, together with the system taxonomy, is used for the complexity estimation of vision functions and for a comparison of vision systems; after the comparison, the systems are evaluated for implementation on a single generic architecture. The proposed approach will assist researchers in benchmarking and in proposing efficient generic solutions for the same class of problems with reduced design and development costs.
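A toy version of the kind of estimate such a model produces is shown below: arithmetic throughput and on-chip line-buffer memory for a streaming vision function, derived from resolution, frame rate and per-pixel operation count. The formulas and example numbers are illustrative assumptions, not the taxonomy-based model of the paper.

```python
def vision_function_complexity(width, height, fps, ops_per_pixel,
                               kernel_rows=1, bytes_per_pixel=1):
    """Rough arithmetic and memory estimate for a streaming vision function.
    Illustrative only: ops = pixels * fps * ops_per_pixel, memory = line
    buffers for the kernel neighbourhood. Both formulas are assumptions."""
    mops = width * height * fps * ops_per_pixel / 1e6     # million operations per second
    memory_bytes = kernel_rows * width * bytes_per_pixel  # row buffers kept on chip
    return mops, memory_bytes

# Example: a 3x3 filter on a VGA stream at 25 fps, ~9 operations per pixel.
mops, mem = vision_function_complexity(640, 480, 25, ops_per_pixel=9, kernel_rows=3)
print(f"~{mops:.0f} MOPS, ~{mem / 1024:.1f} KB of line-buffer memory")
```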

National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:miun:diva-22002 (URN), 10.1155/2014/710685 (DOI), 000330458300001 (), 2-s2.0-84893189660 (Scopus ID), STC (Local ID), STC (Archive number), STC (OAI)
Available from: 2014-06-04. Created: 2014-05-28. Last updated: 2017-12-05. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0003-1923-3843
