Multi-Camera Light Field Capture: Synchronization, Calibration, Depth Uncertainty, and System Design
Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology (Realistic 3D). ORCID iD: 0000-0002-4967-3033
2018 (English). Licentiate thesis, comprehensive summary (Other academic)
Abstract [en]

The digital camera is the technological counterpart to the human eye, enabling the observation and recording of events in the natural world. Since modern life increasingly depends on digital systems, cameras, and especially multiple-camera systems, are widely used in applications that affect our society, ranging from multimedia production and surveillance to self-driving robot localization. The rising interest in multi-camera systems is mirrored by growing activity in Light Field research, where multi-camera systems are used to capture Light Fields - the angular and spatial information about light rays within a 3D space.

The purpose of this work is to gain a more comprehensive understanding of how cameras collaborate and produce consistent data as a multi-camera system, and to build a multi-camera Light Field evaluation system. This work addresses three problems related to the process of multi-camera capture: first, whether multi-camera calibration methods can reliably estimate the true camera parameters; second, what the consequences of synchronization errors in a multi-camera system are; and third, how to ensure data consistency in a multi-camera system that records data with synchronization errors. Furthermore, this work addresses the problem of designing a flexible multi-camera system that can serve as a Light Field capture testbed.

The first problem is solved by conducting a comparative assessment of widely available multi-camera calibration methods. A special dataset is recorded that imposes known constraints on the cameras' ground-truth parameters, to serve as a reference for the calibration estimates. The second problem is addressed by introducing a depth uncertainty model that links the pinhole camera model and the synchronization error to the geometric error in the 3D projections of recorded data. The third problem is solved for the color-and-depth multi-camera scenario by estimating the depth cameras' synchronization error and correcting the recorded depth maps via tensor-based interpolation. The problem of designing a Light Field capture testbed is addressed empirically, by constructing and presenting a multi-camera system based on off-the-shelf hardware and a modular software framework.

The calibration assessment reveals that target-based and certain target-less calibration methods estimate the true camera parameters with similar accuracy. The results imply that for general-purpose multi-camera systems, target-less calibration is an acceptable choice, whereas for high-accuracy scenarios, even commonly used target-based calibration approaches are insufficiently accurate. The proposed depth uncertainty model is used to show that converged multi-camera arrays are less sensitive to synchronization errors. The mean depth uncertainty of a camera system correlates with the rendered result in depth-based reprojection, as long as the camera calibration matrices are accurate. The proposed depth map synchronization method is used to produce a consistent, synchronized color-and-depth dataset from unsynchronized recordings without altering the depth map properties. The method therefore serves as a compatibility layer between unsynchronized multi-camera systems and applications that require synchronized color-and-depth data. Finally, the presented multi-camera system demonstrates a flexible, decentralized framework where data processing is possible in the camera, in the cloud, and on the data consumer's side. The multi-camera system can act as a Light Field capture testbed and as a component in Light Field communication systems, owing to the general-purpose computing and network connectivity support for each sensor, small sensor size, flexible mounts, hardware and software synchronization, and a segmented software framework.
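
As a back-of-the-envelope illustration of how a synchronization error couples with scene motion to produce depth error, consider a simplified case (an illustrative assumption, not the ray-based model used in the thesis): a rectified parallel stereo pair with focal length f and baseline B, observing a point at depth Z that moves laterally at speed v while one camera samples Δt seconds late.

```latex
% Depth from disparity for a rectified pair:
Z = \frac{fB}{d}
% Lateral motion v during the capture offset \Delta t shifts one projection by:
\delta d = \frac{f\,v\,\Delta t}{Z}
% First-order depth error:
\Delta Z \approx \left|\frac{\partial Z}{\partial d}\right|\,\delta d
         = \frac{Z^{2}}{fB}\cdot\frac{f\,v\,\Delta t}{Z}
         = \frac{Z\,v\,\Delta t}{B}
```

Even this toy case shows the error growing linearly in both v and Δt and depending on the camera geometry, consistent with the trends summarized above.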

Place, publisher, year, edition, pages
Sundsvall, Sweden: Mid Sweden University, 2018. p. 64
Series
Mid Sweden University licentiate thesis, ISSN 1652-8948 ; 139
Keywords [en]
Light field, Camera systems, Multiview, Synchronization, Camera calibration
National Category
Computer and Information Sciences
Identifiers
URN: urn:nbn:se:miun:diva-33622
ISBN: 978-91-88527-56-1 (print)
OAI: oai:DiVA.org:miun-33622
DiVA, id: diva2:1205723
Presentation
2018-06-15, L111, Holmgatan 10, Sundsvall, 13:00 (English)
Funder
Knowledge Foundation, 20140200
Note

At the time of the defence, the following paper was unpublished: paper 3 (manuscript).

Available from: 2018-05-16. Created: 2018-05-15. Last updated: 2018-05-16. Bibliographically approved.
List of papers
1. Modeling Depth Uncertainty of Desynchronized Multi-Camera Systems
2017 (English). In: 2017 International Conference on 3D Immersion (IC3D), IEEE, 2017. Conference paper, Published paper (Refereed)
Abstract [en]

Accurately recording motion from multiple perspectives is relevant for recording and processing immersive multimedia and virtual reality content. However, synchronization errors between multiple cameras limit the precision of scene depth reconstruction and rendering. In order to quantify this limit, a relation between camera desynchronization, camera parameters, and scene element motion has to be identified. In this paper, a parametric ray model describing depth uncertainty is derived and adapted for the pinhole camera model. A two-camera scenario is simulated to investigate the model behavior and how camera synchronization delay, scene element speed, and camera positions affect the system's depth uncertainty. Results reveal a linear relation between synchronization error, element speed, and depth uncertainty. View convergence is shown to affect mean depth uncertainty by up to a factor of 10. Results also show that depth uncertainty must be assessed on the full set of camera rays instead of a central subset.
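
To make the reported linear relation concrete, here is a minimal simulation sketch in the spirit of the two-camera scenario (all parameter values are illustrative assumptions, and a rectified parallel pair stands in for the paper's more general parametric ray model):

```python
# Toy two-camera depth-error simulation; all values are assumptions for
# illustration, not parameters from the paper.
f = 1000.0   # focal length, pixels
B = 0.2      # camera baseline, metres
Z = 5.0      # true depth of the observed point, metres
X = 0.0      # lateral position of the point at t = 0, metres

def depth_error(v, dt):
    """Depth error when camera 2 samples dt seconds later than camera 1,
    while the point moves laterally at v metres per second."""
    x1 = f * X / Z                   # projection in camera 1 at t = 0
    x2 = f * (X + v * dt - B) / Z    # projection in the delayed camera 2
    disparity = x1 - x2
    z_est = f * B / disparity        # triangulated depth estimate
    return abs(z_est - Z)

for v in (0.5, 1.0, 2.0):            # scene element speed, m/s
    for dt in (0.001, 0.01, 0.02):   # synchronization delay, s
        print(f"v={v:.1f} m/s, dt={dt * 1e3:4.0f} ms -> "
              f"depth error {depth_error(v, dt):.3f} m")
```

For small offsets the printed error grows near-linearly in both v and dt, mirroring the paper's finding; the paper's model additionally covers converged views and the complete set of camera rays.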

Place, publisher, year, edition, pages
IEEE, 2017
Keywords
Camera synchronization, Synchronization error, Depth estimation error, Multi-camera system
National Category
Signal Processing Other Engineering and Technologies
Identifiers
urn:nbn:se:miun:diva-31841 (URN)
10.1109/IC3D.2017.8251891 (DOI)
000427148600001 (ISI)
2-s2.0-85049401578 (Scopus ID)
978-1-5386-4655-7 (ISBN)
Conference
2017 International Conference on 3D Immersion (IC3D 2017), Brussels, Belgium, 11th-12th December 2017
Projects
LIFE project
Funder
Knowledge Foundation, 20140200
Available from: 2017-10-13. Created: 2017-10-13. Last updated: 2025-02-18.
2. Assessment of Multi-Camera Calibration Algorithms for Two-Dimensional Camera Arrays Relative to Ground Truth Position and Direction
2016 (English). In: 3DTV-Conference, IEEE Computer Society, 2016, article id 7548887. Conference paper, Published paper (Refereed)
Abstract [en]

Camera calibration methods are commonly evaluated on cumulative reprojection error metrics, on disparate one-dimensional datasets. To evaluate the calibration of cameras in two-dimensional arrays, assessments need to be made on two-dimensional datasets with constraints on camera parameters. In this study, the accuracy of several multi-camera calibration methods has been evaluated on the camera parameters that affect view projection the most. As input data, we used a 15-viewpoint two-dimensional dataset with intrinsic and extrinsic parameter constraints and extrinsic ground truth. The assessment showed that self-calibration methods using structure-from-motion reach intrinsic and extrinsic parameter estimation accuracy equal to that of a standard checkerboard calibration algorithm, and surpass a well-known self-calibration toolbox, BlueCCal. These results show that self-calibration is a viable approach to calibrating two-dimensional camera arrays, but improvements to state-of-the-art multi-camera feature matching are necessary to make BlueCCal as accurate as other self-calibration methods for two-dimensional camera arrays.
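
As a sketch of what comparing calibration estimates against extrinsic ground truth can look like (the metric definitions below are common conventions and an assumption on our part, not necessarily the exact metrics used in the study):

```python
import numpy as np

# Extrinsics convention assumed here: X_cam = R @ X_world + t.

def camera_center(R, t):
    """World-space camera position, C = -R^T t."""
    return -R.T @ t

def position_error(R_est, t_est, R_gt, t_gt):
    """Euclidean distance between estimated and ground-truth camera centres."""
    return np.linalg.norm(camera_center(R_est, t_est) - camera_center(R_gt, t_gt))

def direction_error_deg(R_est, R_gt):
    """Angle between estimated and ground-truth viewing directions;
    the third row of R is the camera's optical axis in world coordinates."""
    cos_angle = np.clip(np.dot(R_est[2], R_gt[2]), -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle))
```

Aggregating such per-camera position and direction errors over all 15 viewpoints gives a comparison that, unlike cumulative reprojection error, is anchored directly to the ground-truth camera poses.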

Place, publisher, year, edition, pages
IEEE Computer Society, 2016
Keywords
Camera calibration, multi-view image dataset, 2D camera array, self-calibration, calibration assessment
National Category
Signal Processing Computer and Information Sciences
Identifiers
urn:nbn:se:miun:diva-27960 (URN)
10.1109/3DTV.2016.7548887 (DOI)
000390840500006 (ISI)
2-s2.0-84987849952 (Scopus ID)
978-1-5090-3313-3 (ISBN)
STC (Local ID)
STC (Archive number)
STC (OAI)
Conference
2016 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON 2016), Hamburg, Germany, 4-6 July 2016
Funder
Knowledge Foundation, 20140200
Available from: 2016-06-17. Created: 2016-06-16. Last updated: 2025-02-18. Bibliographically approved.
3. Estimation and Post-Capture Compensation of Synchronization Error in Unsynchronized Multi-Camera Systems
2021 (English). Report (Other academic)
Abstract [en]

Multi-camera systems are used in entertainment production, computer vision, industry, and surveillance. The benefit of using multi-camera systems is the ability to recover the 3D structure, or depth, of the recorded scene. However, various types of cameras, including depth cameras, cannot be reliably synchronized during recording, which leads to errors in depth estimation and scene rendering. The aim of this work is to propose a method for compensating synchronization errors in already recorded sequences, without changing the format of the recorded sequences. We describe a depth uncertainty model for parametrizing the impact of synchronization errors in a multi-camera system, and propose a method for synchronization error estimation and compensation. The proposed method is based on interpolating an image at a desired time instant from adjacent non-synchronized images in a single camera's sequence, using an array of per-pixel distortion vectors. This array is generated by using the difference between adjacent images to locate and segment the recorded moving objects, and does not require any object texture or distinguishing features beyond the observed difference between adjacent images. The proposed compensation method is compared with optical-flow-based interpolation and sparse-correspondence-based morphing, and the proposed synchronization error estimation is compared with a state-of-the-art video alignment method. The proposed method shows better synchronization error estimation accuracy and compensation ability, especially in cases of low-texture, low-feature images. The effect of using data with synchronization errors is also demonstrated, as is the improvement gained by using compensated data. The compensation of synchronization errors is useful in scenarios where the recorded data is expected to be used by other processes that expect sub-frame synchronization accuracy, such as depth-image-based rendering.
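
The following toy sketch conveys the core idea of difference-based segmentation and per-pixel temporal interpolation (a deliberate simplification: the actual method estimates per-pixel distortion vectors, whereas this stand-in only blends the changed pixels linearly):

```python
import numpy as np

def interpolate_frame(prev_frame, next_frame, alpha, diff_threshold=10):
    """Estimate the frame at fractional time alpha in [0, 1] between two
    adjacent frames of one camera, touching only pixels that changed."""
    prev_f = prev_frame.astype(np.float32)
    next_f = next_frame.astype(np.float32)
    moving = np.abs(next_f - prev_f) > diff_threshold  # changed-pixel mask
    out = prev_f.copy()
    out[moving] = (1.0 - alpha) * prev_f[moving] + alpha * next_f[moving]
    return out.astype(prev_frame.dtype)
```

With alpha set to the estimated per-camera synchronization error (as a fraction of the frame interval), applying such a correction to each depth sequence yields frames aligned to a common timeline without changing the sequence format.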

Publisher
p. 24
Keywords
Multi-camera systems, Synchronization, Multiview, 3D Acquisition, Video alignment, Depth uncertainty
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:miun:diva-33621 (URN)
Available from: 2018-05-15. Created: 2018-05-15. Last updated: 2021-09-09. Bibliographically approved.
4. LIFE: A Flexible Testbed For Light Field Evaluation
2018 (English). Conference paper, Published paper (Refereed)
Abstract [en]

Recording and imaging the 3D world has led to the use of light fields. Capturing, distributing and presenting light field data is challenging, and requires an evaluation platform. We define a framework for real-time processing, and present the design and implementation of a light field evaluation system. In order to serve as a testbed, the system is designed to be flexible, scalable, and able to model various end-to-end light field systems. This flexibility is achieved by encapsulating processes and devices in discrete framework systems. The modular capture system supports multiple camera types, general-purpose data processing, and streaming to network interfaces. The cloud system allows for parallel transcoding and distribution of streams. The presentation system encapsulates rendering and display specifics. The real-time ability was tested in a latency measurement; the capture and presentation systems process and stream frames within a 40 ms limit.
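
A minimal sketch of the per-frame latency check implied by the measurement above (illustrative only; the actual testbed measures across its capture, cloud, and presentation systems rather than a single in-process queue):

```python
import queue
import threading
import time

N, BUDGET_MS = 100, 40.0
frames: queue.Queue = queue.Queue()

def capture():
    for i in range(N):
        frames.put((i, time.monotonic()))  # timestamp each frame at capture
        time.sleep(1 / 25)                 # emulate a 25 fps source

threading.Thread(target=capture, daemon=True).start()

worst = 0.0
for _ in range(N):                         # presentation side
    i, t_captured = frames.get()
    latency_ms = (time.monotonic() - t_captured) * 1000.0
    worst = max(worst, latency_ms)
print(f"worst per-frame latency: {worst:.2f} ms (budget: {BUDGET_MS} ms)")
```

Timestamping at capture and comparing at presentation is the standard way to verify an end-to-end budget such as the 40 ms limit stated above.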

Keywords
Multiview, 3DTV, Light field, Distributed surveillance, 360 video
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:miun:diva-33620 (URN)
000454903900016 (ISI)
2-s2.0-85056147245 (Scopus ID)
978-1-5386-6125-3 (ISBN)
Conference
2018 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON), Stockholm – Helsinki – Stockholm, 3-5 June 2018
Projects
LIFE Project
Funder
Knowledge Foundation, 20140200
Available from: 2018-05-15. Created: 2018-05-15. Last updated: 2021-04-15. Bibliographically approved.

Open Access in DiVA

MultiCameraLightFieldCapture (5752 kB), 2939 downloads
File information
File name: FULLTEXT01.pdf
File size: 5752 kB
Checksum (SHA-512): fb55ccd2bb17ac5e74daa48707c80c98e588a1c51d42c9f978630b513f3ed32dfc9eb0532d4fcd24de58129ed62e97e556a0a9d61553bbd58c60222135029445
Type: fulltext
Mimetype: application/pdf

Authority records

Dima, Elijs

Total: 2946 downloads. The number of downloads is the sum of all downloads of full texts; it may include, e.g., previous versions that are no longer available.
