Publications (7 of 7)
Brunnström, K., Dima, E., Andersson, M., Sjöström, M., Qureshi, T. & Johanson, M. (2019). Quality of Experience of hand controller latency in a Virtual Reality simulator. In: Damon Chandler, Mark McCourt and Jeffrey Mulligan (Eds.), Human Vision and Electronic Imaging 2019. Paper presented at Human Vision and Electronic Imaging 2019. Springfield, VA, United States, Article ID 3068450.
Quality of Experience of hand controller latency in a Virtual Reality simulator
2019 (English). In: Human Vision and Electronic Imaging 2019 / [ed] Damon Chandler, Mark McCourt and Jeffrey Mulligan, Springfield, VA, United States, 2019, article id 3068450. Conference paper, Published paper (Refereed).
Abstract [en]

In this study, we investigate a VR simulator of a forestry crane used for loading logs onto a truck, looking mainly at Quality of Experience (QoE) aspects that may be relevant for task completion, but also at whether any discomfort-related symptoms are experienced during task execution. A QoE test has been designed to capture both the general subjective experience of using the simulator and task performance. Moreover, a specific focus has been to study the effects of latency on the subjective experience, with regard to delays in the crane control interface. A formal subjective study has been performed in which we added controlled delays to the hand controller (joystick) signals. The added delays ranged from 0 ms to 800 ms. We found no significant effects on task performance or on any of the scales for delays up to 200 ms. A significant negative effect was found for 800 ms of added delay. The symptoms reported in the Simulator Sickness Questionnaire (SSQ) were significantly higher for all symptom groups, but a majority of the participants reported only slight symptoms. Two out of thirty test persons stopped the test before finishing due to their symptoms.
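
The abstract does not detail how the controlled delays were injected into the hand controller signals. As a rough illustration only, a fixed-latency buffer on sampled joystick values could look like the following minimal Python sketch; the class and parameter names are hypothetical and not taken from the paper.

```python
from collections import deque

class DelayedController:
    """Holds back joystick samples by a fixed added delay (hypothetical sketch)."""

    def __init__(self, added_delay_ms, sample_rate_hz=100):
        # e.g. 200 ms at a 100 Hz control loop -> hold back 20 samples
        self.hold_samples = int(round(added_delay_ms / 1000.0 * sample_rate_hz))
        self.buffer = deque()

    def push(self, sample):
        """Store the newest raw sample; return the sample from added_delay_ms ago."""
        self.buffer.append(sample)
        if len(self.buffer) <= self.hold_samples:
            return None  # not enough history yet; controller output stays at rest
        return self.buffer.popleft()

# Example: a 200 ms added delay, one of the conditions between 0 ms and 800 ms.
ctrl = DelayedController(added_delay_ms=200)
delayed_sample = ctrl.push({"axis_x": 0.4, "axis_y": -0.1})
```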

Place, publisher, year, edition, pages
Springfield, VA, United States, 2019
Series
Electronic Imaging, ISSN 2470-1173
Keywords
Quality of Experience, Virtual Reality, Simulator, QoE, Delay
National Category
Communication Systems; Telecommunications; Media Engineering
Identifiers
urn:nbn:se:miun:diva-35609 (URN)
Conference
Human Vision and Electronic Imaging 2019
Funder
Knowledge Foundation, 20160194
Available from: 2019-02-08 Created: 2019-02-08 Last updated: 2019-05-21. Bibliographically approved.
Dima, E., Brunnström, K., Sjöström, M., Andersson, M., Edlund, J., Johanson, M. & Qureshi, T. (2019). View Position Impact on QoE in an Immersive Telepresence System for Remote Operation. In: 2019 Eleventh International Conference on Quality of Multimedia Experience (QoMEX). Paper presented at Quality of Multimedia Experience (QoMEX), Berlin, Germany, 5-7 June 2019 (pp. 1-3). IEEE.
View Position Impact on QoE in an Immersive Telepresence System for Remote Operation
2019 (English). In: 2019 Eleventh International Conference on Quality of Multimedia Experience (QoMEX), IEEE, 2019, pp. 1-3. Conference paper, Published paper (Refereed).
Abstract [en]

In this paper, we investigate how different viewing positions affect a user's Quality of Experience (QoE) and performance in an immersive telepresence system. A QoE experiment has been conducted with 27 participants to assess the general subjective experience and the performance of remotely operating a toy excavator. Two view positions have been tested, an overhead view and a ground-level view, which encourage reliance on stereoscopic depth cues to different extents for accurate operation. Results demonstrate a significant difference between the ground and overhead views: the ground view increased the perceived difficulty of the task, whereas the overhead view increased the perceived accomplishment as well as the objective performance of the task. Participants also rated the overhead view as significantly more helpful.
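
The abstract reports significance without naming the statistical test used. Purely as an illustration of how such a per-scale comparison between the two view positions could be computed on ordinal rating data, here is a hedged Python sketch with placeholder numbers; the study's actual scores and analysis method are in the full paper.

```python
from scipy import stats

# Placeholder ratings on a 5-point difficulty scale; not the study's data.
difficulty_ground_view   = [4, 5, 4, 3, 5, 4, 4, 5]
difficulty_overhead_view = [2, 3, 2, 3, 2, 3, 2, 2]

# A non-parametric test is a common choice for ordinal QoE ratings;
# whether the paper used this particular test is an assumption of this sketch.
u_stat, p_value = stats.mannwhitneyu(
    difficulty_ground_view, difficulty_overhead_view, alternative="two-sided"
)
print(f"U = {u_stat}, p = {p_value:.4f}")
```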

Place, publisher, year, edition, pages
IEEE, 2019
Keywords
quality of experience, augmented telepresence, head mounted display, viewpoint, remote operation, camera view
National Category
Telecommunications; Media Engineering; Media and Communication Technology
Identifiers
urn:nbn:se:miun:diva-36256 (URN)
10.1109/QoMEX.2019.8743147 (DOI)
000482562000001 (ISI)
978-1-5386-8212-8 (ISBN)
Conference
Quality of Multimedia Experience (QoMEX), Berlin, Germany, 5-7 June 2019
Funder
Knowledge Foundation, 20160194
Available from: 2019-06-10 Created: 2019-06-10 Last updated: 2019-09-23. Bibliographically approved.
Dima, E., Sjöström, M., Olsson, R., Kjellqvist, M., Litwic, L., Zhang, Z., . . . Flodén, L. (2018). LIFE: A Flexible Testbed For Light Field Evaluation. Paper presented at the 2018 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON), Stockholm – Helsinki – Stockholm, 3-5 June 2018, Article ID 8478550.
LIFE: A Flexible Testbed For Light Field Evaluation
2018 (English). Conference paper, Published paper (Refereed).
Abstract [en]

Recording and imaging the 3D world has led to the use of light fields. Capturing, distributing and presenting light field data is challenging, and requires an evaluation platform. We define a framework for real-time processing, and present the design and implementation of a light field evaluation system. In order to serve as a testbed, the system is designed to be flexible, scalable, and able to model various end-to-end light field systems. This flexibility is achieved by encapsulating processes and devices in discrete framework systems. The modular capture system supports multiple camera types, general-purpose data processing, and streaming to network interfaces. The cloud system allows for parallel transcoding and distribution of streams. The presentation system encapsulates rendering and display specifics. The real-time ability was tested in a latency measurement; the capture and presentation systems process and stream frames within a 40 ms limit.
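
The latency measurement procedure is not described in the abstract; one simple way to check a per-frame budget like the 40 ms figure is to timestamp frames at capture and compare against the presentation clock. The sketch below is a hypothetical illustration (the camera and display objects and their methods are assumed, not part of the testbed's actual API), and it assumes capture and presentation share one clock.

```python
import time

LATENCY_BUDGET_MS = 40.0  # per-frame limit cited in the abstract

def capture_frame(camera):
    """Grab a frame and tag it with a capture timestamp (camera API is assumed)."""
    return {"pixels": camera.grab(), "t_capture": time.monotonic()}

def present_frame(display, tagged_frame):
    """Render the frame and return capture-to-presentation latency in milliseconds."""
    display.show(tagged_frame["pixels"])
    latency_ms = (time.monotonic() - tagged_frame["t_capture"]) * 1000.0
    if latency_ms > LATENCY_BUDGET_MS:
        print(f"frame late: {latency_ms:.1f} ms exceeds the {LATENCY_BUDGET_MS} ms budget")
    return latency_ms
```

In a distributed capture-cloud-presentation pipeline the two timestamps come from different machines, so the clocks would need to be synchronized (e.g. via NTP or PTP) for such a measurement to be meaningful.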

Keywords
Multiview, 3DTV, Light field, Distributed surveillance, 360 video
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:miun:diva-33620 (URN)
000454903900016 (ISI)
2-s2.0-85056147245 (Scopus ID)
978-1-5386-6125-3 (ISBN)
Conference
2018 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON), Stockholm – Helsinki – Stockholm, 3-5 June 2018
Projects
LIFE Project
Funder
Knowledge Foundation, 20140200
Available from: 2018-05-15 Created: 2018-05-15 Last updated: 2019-02-15. Bibliographically approved.
Dima, E. (2018). Multi-Camera Light Field Capture: Synchronization, Calibration, Depth Uncertainty, and System Design. (Licentiate dissertation). Sundsvall, Sweden: Mid Sweden University
Multi-Camera Light Field Capture: Synchronization, Calibration, Depth Uncertainty, and System Design
2018 (English). Licentiate thesis, comprehensive summary (Other academic).
Abstract [en]

The digital camera is the technological counterpart to the human eye, enabling the observation and recording of events in the natural world. Since modern life increasingly depends on digital systems, cameras and especially multiple-camera systems are being widely used in applications that affect our society, ranging from multimedia production and surveillance to self-driving robot localization. The rising interest in multi-camera systems is mirrored by the rising activity in Light Field research, where multi-camera systems are used to capture Light Fields - the angular and spatial information about light rays within a 3D space. 

The purpose of this work is to gain a more comprehensive understanding of how cameras collaborate and produce consistent data as a multi-camera system, and to build a multi-camera Light Field evaluation system. This work addresses three problems related to the process of multi-camera capture: first, whether multi-camera calibration methods can reliably estimate the true camera parameters; second, what are the consequences of synchronization errors in a multi-camera system; and third, how to ensure data consistency in a multi-camera system that records data with synchronization errors. Furthermore, this work addresses the problem of designing a flexible multi-camera system that can serve as a Light Field capture testbed.

The first problem is solved by conducting a comparative assessment of widely available multi-camera calibration methods. A special dataset is recorded, giving known constraints on camera ground-truth parameters to use as reference for calibration estimates. The second problem is addressed by introducing a depth uncertainty model that links the pinhole camera model and synchronization error to the geometric error in the 3D projections of recorded data. The third problem is solved for the color-and-depth multi-camera scenario, by using a proposed estimation of the depth camera synchronization error and correction of the recorded depth maps via tensor-based interpolation. The problem of designing a Light Field capture testbed is addressed empirically, by constructing and presenting a multi-camera system based on off-the-shelf hardware and a modular software framework.

The calibration assessment reveals that target-based and certain target-less calibration methods are relatively similar at estimating the true camera parameters. The results imply that for general-purpose multi-camera systems, target-less calibration is an acceptable choice. For high-accuracy scenarios, even commonly used target-based calibration approaches are insufficiently accurate. The proposed depth uncertainty model is used to show that converged multi-camera arrays are less sensitive to synchronization errors. The mean depth uncertainty of a camera system correlates to the rendered result in depth-based reprojection, as long as the camera calibration matrices are accurate. The proposed depthmap synchronization method is used to produce a consistent, synchronized color-and-depth dataset for unsynchronized recordings without altering the depthmap properties. Therefore, the method serves as a compatibility layer between unsynchronized multi-camera systems and applications that require synchronized color-and-depth data. Finally, the presented multi-camera system demonstrates a flexible, de-centralized framework where data processing is possible in the camera, in the cloud, and on the data consumer's side. The multi-camera system is able to act as a Light Field capture testbed and as a component in Light Field communication systems, because of the general-purpose computing and network connectivity support for each sensor, small sensor size, flexible mounts, hardware and software synchronization, and a segmented software framework. 
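
The thesis describes depth-map correction via tensor-based interpolation; as a much simpler stand-in that only illustrates the resampling idea (not the thesis method itself), a depth map at a colour frame's timestamp can be approximated by per-pixel linear interpolation between the two nearest recorded depth frames.

```python
import numpy as np

def resample_depth(depth_prev, depth_next, t_prev, t_next, t_color):
    """Approximate the depth map at the colour frame's timestamp t_color by
    per-pixel linear interpolation between the two bracketing depth frames.
    Simplified stand-in for the tensor-based interpolation used in the thesis."""
    assert t_prev <= t_color <= t_next
    w = (t_color - t_prev) / (t_next - t_prev)
    return (1.0 - w) * depth_prev + w * depth_next

# Synthetic example: depth frames 33 ms apart, colour frame lagging 12 ms behind.
d0 = np.full((4, 4), 1.00)   # depth (m) at t = 0 ms
d1 = np.full((4, 4), 1.10)   # depth (m) at t = 33 ms
d_at_color = resample_depth(d0, d1, t_prev=0.0, t_next=33.0, t_color=12.0)
```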

Place, publisher, year, edition, pages
Sundsvall, Sweden: Mid Sweden University, 2018. p. 64
Series
Mid Sweden University licentiate thesis, ISSN 1652-8948 ; 139
Keywords
Light field, Camera systems, Multiview, Synchronization, Camera calibration
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:miun:diva-33622 (URN)
978-91-88527-56-1 (ISBN)
Presentation
2018-06-15, L111, Holmgatan 10, Sundsvall, 13:00 (English)
Funder
Knowledge Foundation, 20140200
Note

At the time of the defence the following paper was unpublished: paper 3 manuscript.

Available from: 2018-05-16 Created: 2018-05-15 Last updated: 2018-05-16. Bibliographically approved.
Dima, E., Sjöström, M. & Olsson, R. (2017). Modeling Depth Uncertainty of Desynchronized Multi-Camera Systems. In: 2017 International Conference on 3D Immersion (IC3D). Paper presented at the 2017 International Conference on 3D Immersion (IC3D 2017), Brussels, Belgium, 11-12 December 2017. IEEE.
Modeling Depth Uncertainty of Desynchronized Multi-Camera Systems
2017 (English). In: 2017 International Conference on 3D Immersion (IC3D), IEEE, 2017. Conference paper, Published paper (Refereed).
Abstract [en]

Accurately recording motion from multiple perspectives is relevant for capturing and processing immersive multimedia and virtual reality content. However, synchronization errors between multiple cameras limit the precision of scene depth reconstruction and rendering. In order to quantify this limit, a relation between camera desynchronization, camera parameters, and scene element motion has to be identified. In this paper, a parametric ray model describing depth uncertainty is derived and adapted for the pinhole camera model. A two-camera scenario is simulated to investigate the model behavior and how camera synchronization delay, scene element speed, and camera positions affect the system's depth uncertainty. Results reveal a linear relation between synchronization error, element speed, and depth uncertainty. View convergence is shown to affect mean depth uncertainty by up to a factor of 10. Results also show that depth uncertainty must be assessed on the full set of camera rays instead of a central subset.
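
The reported linear relation can be illustrated with a reduced two-camera case: two parallel pinhole cameras, a point moving laterally, and the second camera exposing slightly late. The sketch below is a simplification for intuition only; the paper's parametric ray model is more general.

```python
def depth_error(z, baseline, speed, sync_delay):
    """Triangulated depth error (m) for a point at true depth z (m) moving
    laterally at `speed` (m/s), seen by two parallel cameras `baseline` (m)
    apart, when the second camera captures `sync_delay` (s) too late.
    Simplified two-camera illustration, not the paper's full model."""
    shift = speed * sync_delay                # lateral motion between the exposures
    disparity_meas = (baseline - shift) / z   # normalised disparity corrupted by motion
    return baseline / disparity_meas - z      # estimated depth minus true depth

# Depth error grows roughly linearly with sync delay (and with object speed).
for delay_ms in (1, 5, 10, 20):
    err = depth_error(z=5.0, baseline=0.5, speed=2.0, sync_delay=delay_ms / 1000.0)
    print(f"{delay_ms:3d} ms delay -> {err * 100:5.1f} cm depth error")
```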

Place, publisher, year, edition, pages
IEEE, 2017
Keywords
Camera synchronization, Synchronization error, Depth estimation error, Multi-camera system
National Category
Signal Processing; Media Engineering
Identifiers
urn:nbn:se:miun:diva-31841 (URN)
10.1109/IC3D.2017.8251891 (DOI)
000427148600001 (ISI)
2-s2.0-85049401578 (Scopus ID)
978-1-5386-4655-7 (ISBN)
Conference
2017 International Conference on 3D Immersion (IC3D 2017), Brussels, Belgium, 11th-12th December 2017
Projects
LIFE project
Funder
Knowledge Foundation, 20140200
Available from: 2017-10-13 Created: 2017-10-13 Last updated: 2019-03-22
Dima, E., Sjöström, M. & Olsson, R. (2016). Assessment of Multi-Camera Calibration Algorithms for Two-Dimensional Camera Arrays Relative to Ground Truth Position and Direction. In: 3DTV-Conference. Paper presented at the 2016 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video, 3DTV-CON 2016, Hamburg, Germany, 4-6 July 2016. IEEE Computer Society, Article ID 7548887.
Assessment of Multi-Camera Calibration Algorithms for Two-Dimensional Camera Arrays Relative to Ground Truth Position and Direction
2016 (English). In: 3DTV-Conference, IEEE Computer Society, 2016, article id 7548887. Conference paper, Published paper (Refereed).
Abstract [en]

Camera calibration methods are commonly evaluated on cumulative reprojection error metrics, on disparate one-dimensional datasets. To evaluate the calibration of cameras in two-dimensional arrays, assessments need to be made on two-dimensional datasets with constraints on camera parameters. In this study, the accuracy of several multi-camera calibration methods has been evaluated on the camera parameters that affect view projection the most. As input data, we used a 15-viewpoint two-dimensional dataset with intrinsic and extrinsic parameter constraints and extrinsic ground truth. The assessment showed that self-calibration methods using structure-from-motion reach intrinsic and extrinsic parameter estimation accuracy equal to that of the standard checkerboard calibration algorithm, and surpass a well-known self-calibration toolbox, BlueCCal. These results show that self-calibration is a viable approach to calibrating two-dimensional camera arrays, but improvements to state-of-the-art multi-camera feature matching are necessary to make BlueCCal as accurate as other self-calibration methods for two-dimensional camera arrays.
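
The abstract evaluates calibration against extrinsic ground truth for position and direction. A common way to express such errors, sketched here for illustration only (the paper's exact evaluation protocol is described in the full text), is the Euclidean distance between camera centres and the angle of the relative rotation.

```python
import numpy as np

def extrinsic_errors(R_est, t_est, R_gt, t_gt):
    """Position error (same unit as t) and orientation error (degrees) between
    an estimated camera pose and its ground truth. Illustrative metrics only."""
    position_error = np.linalg.norm(t_est - t_gt)
    R_rel = R_est @ R_gt.T                                   # relative rotation
    cos_angle = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return position_error, np.degrees(np.arccos(cos_angle))

# Example: an estimate 5 mm off and rotated about 1 degree around the y axis.
a = np.radians(1.0)
R_est = np.array([[np.cos(a), 0.0, np.sin(a)],
                  [0.0, 1.0, 0.0],
                  [-np.sin(a), 0.0, np.cos(a)]])
print(extrinsic_errors(R_est, np.array([0.005, 0.0, 0.0]), np.eye(3), np.zeros(3)))
```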

Place, publisher, year, edition, pages
IEEE Computer Society, 2016
Keywords
Camera calibration, multi-view image dataset, 2D camera array, self-calibration, calibration assessment
National Category
Signal Processing; Media and Communication Technology
Identifiers
urn:nbn:se:miun:diva-27960 (URN)
10.1109/3DTV.2016.7548887 (DOI)
000390840500006 (ISI)
2-s2.0-84987849952 (Scopus ID)
STC (Local ID)
978-1-5090-3313-3 (ISBN)
STC (Archive number)
STC (OAI)
Conference
2016 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video, 3DTV-CON 2016; Hamburg; Germany; 4 July 2016 through 6 July 2016; Category number CFP1655B-ART; Code 123582
Funder
Knowledge Foundation, 20140200
Available from: 2016-06-17 Created: 2016-06-16 Last updated: 2018-05-15. Bibliographically approved.
Dima, E., Gao, Y., Sjöström, M., Olsson, R., Koch, R. & Esquivel, S. Estimation and Post-Capture Compensation of Synchronization Error in Unsynchronized Multi-Camera Systems.
Estimation and Post-Capture Compensation of Synchronization Error in Unsynchronized Multi-Camera Systems
(English). Manuscript (preprint) (Other academic).
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:miun:diva-33621 (URN)
Available from: 2018-05-15 Created: 2018-05-15 Last updated: 2018-05-16. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0002-4967-3033
