A neural network with adversarial loss for light field synthesis from a single image
2021 (English) In: VISIGRAPP 2021 - Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, SciTePress, 2021, p. 175-184. Conference paper, Published paper (Refereed)
Abstract [en]
This paper describes a lightweight neural network architecture with an adversarial loss for generating a full light field from a single image. The method estimates disparity maps and automatically identifies occluded regions from a single image thanks to a disparity confidence map based on forward-backward consistency checks. The disparity confidence map also controls the use of an adversarial loss for occlusion handling. The approach outperforms reference methods when trained and tested on light field data. Moreover, the method is designed to efficiently generate a full light field from a single image even when trained only on stereo data, which allows our view synthesis approach to generalize to more diverse data and semantics. Copyright © 2021 by SCITEPRESS – Science and Technology Publications, Lda. All rights reserved.
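The forward-backward consistency check mentioned in the abstract can be illustrated with a minimal sketch: given the disparity map of the left view and that of the right view, each left-view disparity is compared against the right-view disparity sampled at the corresponding (warped) location, and the mismatch is turned into a soft confidence value. The function name, the sign convention for disparity, and the exponential weighting are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def fb_consistency_confidence(disp_lr, disp_rl, alpha=1.0):
    """Soft confidence map from a forward-backward disparity
    consistency check (illustrative sketch; conventions assumed).

    disp_lr: (H, W) disparity of the left view, in pixels
    disp_rl: (H, W) disparity of the right view, in pixels
    Assumes a pixel at column x in the left view corresponds to
    column x - disp_lr[y, x] in the right view.
    """
    h, w = disp_lr.shape
    ys = np.arange(h)[:, None].repeat(w, axis=1)
    xs = np.arange(w)[None, :].repeat(h, axis=0)
    # Warp: sample the right-view disparity at the matched location.
    x_warp = np.clip(np.round(xs - disp_lr).astype(int), 0, w - 1)
    disp_rl_warped = disp_rl[ys, x_warp]
    # Consistent pixels have matching forward and backward disparities;
    # large mismatch typically indicates occlusion.
    err = np.abs(disp_lr - disp_rl_warped)
    return np.exp(-alpha * err)  # 1 = consistent, near 0 = occluded
```

A confidence map of this kind can then gate losses per pixel, e.g. down-weighting a reconstruction loss and enabling an adversarial loss where confidence is low.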
Place, publisher, year, edition, pages SciTePress, 2021. p. 175-184
Keywords [en]
Deep learning, Depth estimation, Light field, Monocular, View synthesis, Computer graphics, Computer vision, Network architecture, Semantics, Stereo image processing, Confidence maps, Consistency checks, Disparity map, Light fields, Occlusion handling, Reference method, Single images, Neural networks
Identifiers URN: urn:nbn:se:miun:diva-43452 ISI: 000661288200016 Scopus ID: 2-s2.0-85102974199 ISBN: 9789897584886 (print) OAI: oai:DiVA.org:miun-43452 DiVA, id: diva2:1603729
Conference 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, VISIGRAPP 2021, 8 February 2021 through 10 February 2021
2021-10-18 Bibliographically approved