Area-Based Depth Estimation for Monochromatic Feature-Sparse Orthographic Capture
Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology. (Realistic3D)
Department of Optics, University of Valencia, Burjassot, Spain.
Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
Department of Optics, University of Valencia, Burjassot, Spain.
2018 (English). In: 2018 26th European Signal Processing Conference (EUSIPCO), IEEE conference proceedings, 2018, p. 206-210, article id 8553336. Conference paper, Published paper (Refereed).
Abstract [en]

With the rapid development of light field technology, depth estimation has been highlighted as one of the critical problems in the field, and a number of approaches have been proposed to extract the depth of the scene. However, depth estimation by stereo matching becomes difficult and unreliable when the captured images lack both color and feature information. In this paper, we propose a scheme that extracts robust depth from monochromatic, feature-sparse scenes recorded in orthographic sub-aperture images. Unlike approaches that rely on the rich color and texture information across the sub-aperture views, our approach is based on depth-from-focus techniques. First, we superimpose shifted sub-aperture images on top of an arbitrarily chosen central image. To focus on different depths, the shift amount is varied based on the micro-lens array properties. Next, an area-based depth estimation approach is applied to find the best match among the focal stack and generate the dense depth map. This process is repeated for each sub-aperture image. Finally, occlusions are handled by merging the depth maps generated from different central images, followed by a voting process. Results show that the proposed scheme is more suitable than conventional depth estimation approaches in the context of orthographic captures that have insufficient color and feature information, such as microscopic fluorescence imaging.
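
The abstract outlines a three-stage pipeline: shift-and-sum refocusing over the orthographic sub-aperture views to build a focal stack, area-based (windowed) matching against a chosen central view to pick a depth plane per pixel, and a per-pixel vote across the depth maps obtained from different central views to handle occlusions. The Python sketch below illustrates that general idea only; the linear shift model, the windowed sum-of-squared-differences cost, and all function and parameter names are illustrative assumptions, not the authors' implementation.

    import numpy as np
    from scipy.ndimage import shift as subpixel_shift, uniform_filter

    def focal_stack(views, grid_offsets, disparities):
        """Shift-and-sum refocusing: superimpose shifted sub-aperture views.

        views        : list of 2-D monochromatic sub-aperture images
        grid_offsets : (du, dv) lens-grid position of each view relative
                       to the chosen central view (from the micro-lens layout)
        disparities  : candidate per-unit-baseline pixel shifts, one per
                       hypothesised depth plane (assumed linear model)
        """
        stack = []
        for d in disparities:
            acc = np.zeros(views[0].shape, dtype=np.float64)
            for img, (du, dv) in zip(views, grid_offsets):
                # Shift grows with the view's baseline from the central view.
                acc += subpixel_shift(img.astype(np.float64),
                                      (du * d, dv * d), order=1)
            stack.append(acc / len(views))
        return np.stack(stack)  # shape: (num_planes, H, W)

    def depth_map(stack, central, window=9):
        """Area-based matching: per pixel, choose the refocused plane whose
        local window best matches the central view (windowed SSD cost)."""
        ssd = uniform_filter((stack - central.astype(np.float64)) ** 2,
                             size=(1, window, window))
        return np.argmin(ssd, axis=0)  # plane index per pixel

    def merge_with_voting(depth_maps):
        """Occlusion handling: per-pixel majority vote over the depth maps
        computed with different central views."""
        votes = np.stack(depth_maps)  # (num_central_views, H, W)
        return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

Repeating depth_map for each choice of central view and passing the results to merge_with_voting mirrors the final step described in the abstract: resolving occluded pixels by consensus across the merged depth maps.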

Place, publisher, year, edition, pages
IEEE conference proceedings, 2018. p. 206-210, article id 8553336
Keywords [en]
Depth estimation, integral imaging, orthographic views, depth from focus
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:miun:diva-34418
ISI: 000455614900042
Scopus ID: 2-s2.0-85059811493
OAI: oai:DiVA.org:miun-34418
DiVA, id: diva2:1248223
Conference
EUSIPCO 2018, 26th European Signal Processing Conference, Rome, Italy, September 3-7, 2018
Available from: 2018-09-14. Created: 2018-09-14. Last updated: 2019-03-19. Bibliographically approved.

Open Access in DiVA

fulltext (3390 kB)
File information
File name: FULLTEXT01.pdf
File size: 3390 kB
Checksum (SHA-512): d38aae8e078ec91948063a7800593ea7c85d583efa7a111aed130f7e748addca7cdaa97760d09279f649d4184c6385b8606b15108511d469e8c5548ddddc7bce
Type: fulltext
Mimetype: application/pdf

Authority records
Li, Yongwei; Sjöström, Mårten

