miun.se Publications
Li, Yongwei
Publications (4 of 4)
Li, Y. & Sjöström, M. (2019). Depth-Assisted Demosaicing for Light Field Data in Layered Object Space. In: 2019 IEEE International Conference on Image Processing (ICIP). Paper presented at 2019 IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan, 22-25 September 2019 (pp. 3746-3750). IEEE, Article ID 8803441.
Depth-Assisted Demosaicing for Light Field Data in Layered Object Space
2019 (English). In: 2019 IEEE International Conference on Image Processing (ICIP), IEEE, 2019, p. 3746-3750, article id 8803441. Conference paper, Published paper (Refereed).
Abstract [en]

Light field technology, which emerged as a solution to the increasing demand for visually immersive experiences, has shown extraordinary potential for scene content representation and reconstruction. Unlike conventional photography, which maps the 3D scene onto a 2D plane by a projective transformation, the light field preserves both spatial and angular information, enabling further processing steps such as computational refocusing and image-based rendering. However, some steps in the pipeline, such as light field demosaicing, have barely been studied. In this paper, we propose a depth-assisted demosaicing method for light field data. First, we exploit the sampling geometry of the light field data with respect to the scene content using a ray-tracing technique and develop a sampling model of light field capture. Then we carry out the demosaicing process in a layered object space, using object-space sampling adjacencies rather than pixel placement. Finally, we compare our results with state-of-the-art approaches and discuss potential research directions for the proposed sampling model to show the significance of our approach.
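The sampling model above is built by tracing rays from the sensor into a layered object space. As a minimal, hypothetical illustration of that idea (not the paper's actual model), the sketch below propagates a 2D paraxial ray through a single thin lens to a chosen depth layer; the single-lens geometry and all parameter names are simplifying assumptions.

```python
def thin_lens(h, u, f):
    """Paraxial thin-lens refraction: a ray hitting the lens at height h
    with slope u leaves at the same height with slope u - h / f."""
    return h, u - h / f

def trace_to_layer(h0, u0, d_to_lens, f, z_layer):
    """Propagate a ray (height h0, slope u0) a distance d_to_lens to a
    thin lens of focal length f, refract it, and intersect it with an
    object-space layer a distance z_layer beyond the lens."""
    h = h0 + u0 * d_to_lens        # free-space propagation to the lens
    h, u = thin_lens(h, u0, f)     # paraxial refraction
    return h + u * z_layer         # ray height at the depth layer
```

As a sanity check, rays leaving an axial point at distance a in front of a lens of focal length f reconverge on the axis at the conjugate distance b = a·f / (a − f), consistent with the thin-lens equation 1/f = 1/a + 1/b.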

Place, publisher, year, edition, pages
IEEE, 2019
Keywords
Lenses, Cameras, Image color analysis, Three-dimensional displays, Microoptics, Interpolation, Two dimensional displays, Light field, demosaicing, object space, ray-tracing technique
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:miun:diva-37690 (URN); 10.1109/ICIP.2019.8803441 (DOI); 2-s2.0-85076819023 (Scopus ID); 978-1-5386-6249-6 (ISBN)
Conference
2019 IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan, 22-25 September, 2019
Available from: 2019-11-15. Created: 2019-11-15. Last updated: 2020-01-15. Bibliographically approved.
Li, Y., Olsson, R. & Sjöström, M. (2018). An analysis of demosaicing for plenoptic capture based on ray optics. In: Proceedings of 3DTV Conference 2018. Paper presented at 3D at any scale and any perspective, 3-5 June 2018, Stockholm – Helsinki – Stockholm. Article ID 8478476.
An analysis of demosaicing for plenoptic capture based on ray optics
2018 (English). In: Proceedings of 3DTV Conference 2018, 2018, article id 8478476. Conference paper, Published paper (Refereed).
Abstract [en]

The plenoptic camera is gaining more and more attention as it captures the 4D light field of a scene in a single shot and enables a wide range of post-processing applications. However, the pre-processing steps for the captured raw data, such as demosaicing, have been overlooked. Most existing decoding pipelines for plenoptic cameras still apply demosaicing schemes developed for conventional cameras. In this paper, we analyze the sampling pattern of microlens-based plenoptic cameras by ray-tracing techniques and ray phase space analysis. The goal of this work is to establish guidelines and principles for demosaicing plenoptic captures by taking the unique microlens array design into account. We show that the sampling of the plenoptic camera behaves differently from that of a conventional camera and that the desired demosaicing scheme is depth-dependent.
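For context on the conventional schemes the abstract refers to, here is a minimal sketch of bilinear demosaicing of an assumed RGGB Bayer mosaic: each color plane is interpolated purely from pixel adjacency on the sensor, precisely the assumption the analysis argues breaks down for microlens-based captures. The layout and helper names are illustrative, not taken from the paper.

```python
import numpy as np

def bayer_masks(h, w):
    """Boolean sampling masks for a hypothetical RGGB Bayer layout:
    R at even rows/even columns, B at odd rows/odd columns, G elsewhere."""
    y, x = np.mgrid[0:h, 0:w]
    r = (y % 2 == 0) & (x % 2 == 0)
    b = (y % 2 == 1) & (x % 2 == 1)
    return r, ~(r | b), b

def _filter_same(img, kernel):
    # zero-padded correlation, output the same size as the input
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    p = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def bilinear_demosaic(raw):
    """Reconstruct each color plane by normalized neighborhood averaging
    of its sampled positions -- interpolation driven only by where pixels
    sit on the sensor, not by where their rays sample the scene."""
    h, w = raw.shape
    kernel = np.array([[1., 2., 1.], [2., 4., 2.], [1., 2., 1.]])
    rgb = np.zeros((h, w, 3))
    for c, mask in enumerate(bayer_masks(h, w)):
        num = _filter_same(np.where(mask, raw, 0.0), kernel)
        den = _filter_same(mask.astype(float), kernel)
        rgb[..., c] = num / np.maximum(den, 1e-12)
    return rgb
```

The point of the sketch is the contrast: for a plenoptic sensor, two adjacent pixels under different microlenses may sample widely separated scene points, so this pixel-adjacency averaging mixes unrelated samples in a depth-dependent way.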

Keywords
Light field, plenoptic camera, depth, image demosaicing
National Category
Other Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:miun:diva-33618 (URN); 10.1109/3DTV.2018.8478476 (DOI); 000454903900008 (ISI); 2-s2.0-85056161198 (Scopus ID); 978-1-5386-6125-3 (ISBN)
Conference
3D at any scale and any perspective, 3-5 June 2018, Stockholm – Helsinki – Stockholm
Available from: 2018-05-15. Created: 2018-05-15. Last updated: 2019-02-15. Bibliographically approved.
Li, Y., Scrofani, G., Sjöström, M. & Martinez-Corral, M. (2018). Area-Based Depth Estimation for Monochromatic Feature-Sparse Orthographic Capture. In: 2018 26th European Signal Processing Conference (EUSIPCO). Paper presented at EUSIPCO 2018, 26th European Signal Processing Conference, Rome, Italy, September 3-7, 2018 (pp. 206-210). IEEE conference proceedings, Article ID 8553336.
Area-Based Depth Estimation for Monochromatic Feature-Sparse Orthographic Capture
2018 (English). In: 2018 26th European Signal Processing Conference (EUSIPCO), IEEE conference proceedings, 2018, p. 206-210, article id 8553336. Conference paper, Published paper (Refereed).
Abstract [en]

With the rapid development of light field technology, depth estimation has been highlighted as one of the critical problems in the field, and a number of approaches have been proposed to extract the depth of a scene. However, depth estimation by stereo matching becomes difficult and unreliable when the captured images lack both color and feature information. In this paper, we propose a scheme that extracts robust depth from monochromatic, feature-sparse scenes recorded in orthographic sub-aperture images. Unlike approaches that rely on rich color and texture information across the sub-aperture views, our approach is based on depth-from-focus techniques. First, we superimpose shifted sub-aperture images on top of an arbitrarily chosen central image. To focus on different depths, the shift amount is varied based on the micro-lens array properties. Next, an area-based depth estimation approach is applied to find the best match among the focal stack and generate a dense depth map. This process is repeated for each sub-aperture image. Finally, occlusions are handled by merging the depth maps generated from different central images, followed by a voting process. Results show that the proposed scheme is more suitable than conventional depth estimation approaches in the context of orthographic captures with insufficient color and feature information, such as microscopic fluorescence imaging.
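The shift-superimpose-match steps described above can be sketched as follows. This is an illustrative reduction under strong assumptions: integer shifts via wrap-around rolling, a plain squared-difference area cost, and abstract "slopes" standing in for depth layers; in the paper the shift amounts derive from the micro-lens array properties, and the occlusion-handling merge/vote stage is omitted here.

```python
import numpy as np

def refocus(views, offsets, slope):
    """Shift every sub-aperture view by slope * its (dy, dx) offset and
    average, synthetically focusing the stack at one depth layer."""
    acc = np.zeros(views[0].shape, dtype=float)
    for v, (dy, dx) in zip(views, offsets):
        acc += np.roll(v, (round(slope * dy), round(slope * dx)), axis=(0, 1))
    return acc / len(views)

def box_mean(img, r=2):
    # area aggregation over a (2r+1) x (2r+1) window, edge-replicated
    p = np.pad(img, r, mode='edge')
    out = np.zeros_like(img, dtype=float)
    n = 2 * r + 1
    for i in range(n):
        for j in range(n):
            out += p[i:i + img.shape[0], j:j + img.shape[1]]
    return out / (n * n)

def depth_from_focus(views, offsets, central, slopes):
    """Per pixel, pick the slope (depth layer) whose refocused image
    best matches the central view over a local area."""
    costs = [box_mean((refocus(views, offsets, s) - central) ** 2)
             for s in slopes]
    return np.argmin(np.stack(costs), axis=0)
```

At the correct layer the shifts cancel the per-view parallax, so the refocused image locally agrees with the central view even when the scene is monochromatic and feature-sparse, which is why the area cost can succeed where stereo matching fails.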

Place, publisher, year, edition, pages
IEEE conference proceedings, 2018
Keywords
Depth estimation, integral imaging, orthographic views, depth from focus
National Category
Computer Sciences
Identifiers
urn:nbn:se:miun:diva-34418 (URN); 000455614900042 (ISI); 2-s2.0-85059811493 (Scopus ID)
Conference
EUSIPCO 2018, 26th European Signal Processing Conference, Rome, Italy, September 3-7, 2018
Available from: 2018-09-14. Created: 2018-09-14. Last updated: 2019-03-19. Bibliographically approved.
Wang, C., Wang, X., Li, Y., Xia, Z. & Zhang, C. (2018). Quaternion polar harmonic Fourier moments for color images. Information Sciences, 450, 141-156
Quaternion polar harmonic Fourier moments for color images
2018 (English). In: Information Sciences, ISSN 0020-0255, E-ISSN 1872-6291, Vol. 450, p. 141-156. Article in journal (Refereed), Published.
Abstract [en]

This paper proposes quaternion polar harmonic Fourier moments (QPHFM) for color image processing and analyzes their properties. After extending Chebyshev-Fourier moments (CHFM) to quaternion Chebyshev-Fourier moments (QCHFM), comparison experiments, including image reconstruction and color image object recognition, are carried out on the performance of QPHFM against quaternion Zernike moments (QZM), quaternion pseudo-Zernike moments (QPZM), quaternion orthogonal Fourier-Mellin moments (QOFMM), QCHFM, and quaternion radial harmonic Fourier moments (QRHFM). Experimental results show that QPHFM achieves ideal performance in image reconstruction and invariant object recognition under both noise-free and noisy conditions. In addition, this paper discusses the importance of the phase information of quaternion orthogonal moments in image reconstruction.
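The representation shared by all of these quaternion moment families encodes a color pixel as a pure quaternion, so the three channels are transformed jointly rather than per channel. A minimal sketch of that representation using standard quaternion algebra (not the paper's moment computation):

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of two quaternions stored as (w, x, y, z)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw * qw - px * qx - py * qy - pz * qz,
        pw * qx + px * qw + py * qz - pz * qy,
        pw * qy - px * qz + py * qw + pz * qx,
        pw * qz + px * qy - py * qx + pz * qw,
    ])

def rgb_to_quaternion(rgb):
    """Encode an RGB pixel f(x, y) as the pure quaternion
    R*i + G*j + B*k (zero real part)."""
    r, g, b = rgb
    return np.array([0.0, r, g, b])
```

Moment computation then replaces complex multiplication in the basis projection with the Hamilton product; since that product is non-commutative, this literature distinguishes left-sided and right-sided moment definitions.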

Keywords
Image reconstruction, Moment invariant, Object recognition, Orthogonal moment, Phase, Quaternion polar harmonic Fourier moments
National Category
Information Systems, Social aspects
Identifiers
urn:nbn:se:miun:diva-33500 (URN); 10.1016/j.ins.2018.03.040 (DOI); 000432646100008 (ISI); 2-s2.0-85044451202 (Scopus ID)
Available from: 2018-04-16. Created: 2018-04-16. Last updated: 2018-06-10. Bibliographically approved.