Assessment of Multi-Camera Calibration Algorithms for Two-Dimensional Camera Arrays Relative to Ground Truth Position and Direction
Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication Systems. (Realistic3D) ORCID iD: 0000-0002-4967-3033
Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication Systems. (Realistic3D) ORCID iD: 0000-0003-3751-6089
Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication Systems. (Realistic3D)
2016 (English). In: 3DTV-Conference, IEEE Computer Society, 2016, article id 7548887. Conference paper, published paper (refereed).
Abstract [en]

Camera calibration methods are commonly evaluated using cumulative reprojection error metrics on disparate one-dimensional datasets. To evaluate the calibration of cameras in two-dimensional arrays, assessments need to be made on two-dimensional datasets with constraints on camera parameters. In this study, the accuracy of several multi-camera calibration methods has been evaluated on the camera parameters that affect view projection the most. As input data, we used a 15-viewpoint two-dimensional dataset with intrinsic and extrinsic parameter constraints and extrinsic ground truth. The assessment showed that self-calibration methods using structure-from-motion reach intrinsic and extrinsic parameter estimation accuracy equal to that of a standard checkerboard calibration algorithm, and surpass a well-known self-calibration toolbox, BlueCCal. These results show that self-calibration is a viable approach to calibrating two-dimensional camera arrays, but improvements to state-of-the-art multi-camera feature matching are necessary to make BlueCCal as accurate as other self-calibration methods for two-dimensional camera arrays.
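The cumulative reprojection error metric referred to above can be illustrated with a minimal pinhole-camera sketch in NumPy. All parameter values below are hypothetical, chosen only to show how the metric is computed; they are not taken from the paper's dataset:

```python
import numpy as np

def project(K, R, t, X):
    """Project 3D world points X (N,3) through a pinhole camera with
    intrinsics K (3,3), rotation R (3,3) and translation t (3,)."""
    Xc = X @ R.T + t                 # world -> camera coordinates
    x = Xc @ K.T                     # camera -> homogeneous image coords
    return x[:, :2] / x[:, 2:3]      # perspective divide -> pixels

def reprojection_rmse(K, R, t, X, observed):
    """Root-mean-square reprojection error over all points: the kind of
    cumulative metric calibration methods are usually scored on."""
    err = project(K, R, t, X) - observed
    return float(np.sqrt(np.mean(np.sum(err ** 2, axis=1))))

# Hypothetical camera at the origin looking down +Z.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
X = np.array([[0.1, 0.2, 2.0], [-0.3, 0.1, 3.0]])
obs = project(K, R, t, X)                  # perfect (noise-free) observations
print(reprojection_rmse(K, R, t, X, obs))  # → 0.0
```

A low score on such a metric shows only internal consistency with the detected image points; it does not by itself prove that the estimated intrinsics and extrinsics match the physical ground truth, which is why the assessment above compares against ground-truth position and direction instead.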

Place, publisher, year, edition, pages
IEEE Computer Society, 2016. Article id 7548887
Keywords [en]
Camera calibration, multi-view image dataset, 2D camera array, self-calibration, calibration assessment
National subject category
Signal Processing; Media Engineering
Identifiers
URN: urn:nbn:se:miun:diva-27960; DOI: 10.1109/3DTV.2016.7548887; ISI: 000390840500006; Scopus ID: 2-s2.0-84987849952; Local ID: STC; ISBN: 978-1-5090-3313-3 (print); OAI: oai:DiVA.org:miun-27960; DiVA id: diva2:938875
Conference
2016 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video, 3DTV-CON 2016; Hamburg; Germany; 4 July 2016 through 6 July 2016; Category number CFP1655B-ART; Code 123582
Research funder
KK-stiftelsen, 20140200. Available from: 2016-06-17; Created: 2016-06-16; Last updated: 2018-05-15. Bibliographically reviewed
Part of thesis
1. Multi-Camera Light Field Capture: Synchronization, Calibration, Depth Uncertainty, and System Design
2018 (English). Licentiate thesis, compilation (Other academic)
Abstract [en]

The digital camera is the technological counterpart to the human eye, enabling the observation and recording of events in the natural world. Since modern life increasingly depends on digital systems, cameras and especially multiple-camera systems are being widely used in applications that affect our society, ranging from multimedia production and surveillance to self-driving robot localization. The rising interest in multi-camera systems is mirrored by the rising activity in Light Field research, where multi-camera systems are used to capture Light Fields - the angular and spatial information about light rays within a 3D space. 

The purpose of this work is to gain a more comprehensive understanding of how cameras collaborate and produce consistent data as a multi-camera system, and to build a multi-camera Light Field evaluation system. This work addresses three problems related to the process of multi-camera capture: first, whether multi-camera calibration methods can reliably estimate the true camera parameters; second, what the consequences of synchronization errors in a multi-camera system are; and third, how to ensure data consistency in a multi-camera system that records data with synchronization errors. Furthermore, this work addresses the problem of designing a flexible multi-camera system that can serve as a Light Field capture testbed.

The first problem is solved by conducting a comparative assessment of widely available multi-camera calibration methods. A special dataset is recorded, giving known constraints on camera ground-truth parameters to use as reference for calibration estimates. The second problem is addressed by introducing a depth uncertainty model that links the pinhole camera model and synchronization error to the geometric error in the 3D projections of recorded data. The third problem is solved for the color-and-depth multi-camera scenario, by using a proposed estimation of the depth camera synchronization error and correction of the recorded depth maps via tensor-based interpolation. The problem of designing a Light Field capture testbed is addressed empirically, by constructing and presenting a multi-camera system based on off-the-shelf hardware and a modular software framework.
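The link between synchronization error and geometric error in the second contribution can be sketched for the simplest case: a rectified stereo pair observing a point that moves laterally between the two cameras' exposure instants. This is a simplified illustration with hypothetical numbers, not the thesis's actual uncertainty model:

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Rectified-stereo depth from the pinhole model: z = f * b / d."""
    return f_px * baseline_m / disparity_px

def sync_depth_error(f_px, baseline_m, disparity_px,
                     speed_mps, sync_err_s, depth_m):
    """Depth error when one camera samples a laterally moving point
    sync_err_s seconds late: the motion v*dt projects to roughly
    f*v*dt/z pixels of disparity perturbation."""
    d_shift = f_px * speed_mps * sync_err_s / depth_m
    z0 = depth_from_disparity(f_px, baseline_m, disparity_px)
    z1 = depth_from_disparity(f_px, baseline_m, disparity_px + d_shift)
    return abs(z1 - z0)

# Hypothetical setup: 800 px focal length, 10 cm baseline, point at 2 m
# moving at 1 m/s, cameras out of sync by 10 ms.
f, b, z = 800.0, 0.10, 2.0
d = f * b / z                          # 40 px true disparity
print(round(sync_depth_error(f, b, d, 1.0, 0.01, z), 4))  # → 0.1818
```

Even a 10 ms offset yields roughly 18 cm of depth error at 2 m in this toy setup, which conveys why synchronization is treated here as a first-class source of geometric uncertainty.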

The calibration assessment reveals that target-based and certain target-less calibration methods are relatively similar at estimating the true camera parameters. The results imply that for general-purpose multi-camera systems, target-less calibration is an acceptable choice. For high-accuracy scenarios, even commonly used target-based calibration approaches are insufficiently accurate. The proposed depth uncertainty model is used to show that converged multi-camera arrays are less sensitive to synchronization errors. The mean depth uncertainty of a camera system correlates with the rendered result in depth-based reprojection, as long as the camera calibration matrices are accurate. The proposed depth map synchronization method is used to produce a consistent, synchronized color-and-depth dataset for unsynchronized recordings without altering the depth map properties. Therefore, the method serves as a compatibility layer between unsynchronized multi-camera systems and applications that require synchronized color-and-depth data. Finally, the presented multi-camera system demonstrates a flexible, decentralized framework where data processing is possible in the camera, in the cloud, and on the data consumer's side. The multi-camera system is able to act as a Light Field capture testbed and as a component in Light Field communication systems, because of the general-purpose computing and network connectivity support for each sensor, small sensor size, flexible mounts, hardware and software synchronization, and a segmented software framework.

Place, publisher, year, edition, pages
Sundsvall, Sweden: Mid Sweden University, 2018. 64 p.
Series
Mid Sweden University licentiate thesis, ISSN 1652-8948 ; 139
Keywords
Light field, Camera systems, Multiview, Synchronization, Camera calibration
National subject category
Computer and Information Sciences
Identifiers
urn:nbn:se:miun:diva-33622 (URN); 978-91-88527-56-1 (ISBN)
Presentation
2018-06-15, L111, Holmgatan 10, Sundsvall, 13:00 (English)
Opponent
Supervisors
Research funder
KK-stiftelsen, 20140200
Note


At the time of the defence the following paper was unpublished: paper 3 manuscript.

Available from: 2018-05-16; Created: 2018-05-15; Last updated: 2018-05-16. Bibliographically reviewed

Open Access in DiVA

AssessmentOfMultiCameraCalibrationAlgorithms (496 kB), 592 downloads
File information
Filename: FULLTEXT01.pdf; File size: 496 kB; Checksum: SHA-512
dd6f332d28354fad49222ff2bb76d99af69cd01ad741de07aa572d039788aa31474afb15dcc8af80dd6ca56faf416f9980a7855ec052813ac6f505ae962b7b98
Type: fulltext; MIME type: application/pdf

Other links

Publisher's full text; Scopus

Author records

Dima, Elijs; Sjöström, Mårten; Olsson, Roger
