miun.se Publications
1 - 7 of 7
  • 1.
    Li, Yongwei
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
Computational Light Field Photography: Depth Estimation, Demosaicing, and Super-Resolution (2020). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The transition of camera technology from film-based to digital cameras has been witnessed over the past twenty years, along with impressive technological advances in processing massive amounts of digitized media content. Today, a new evolution has emerged: the migration from 2D content to immersive perception. This rising trend has a profound and long-term impact on our society, fostering technologies such as teleconferencing and remote surgery. The trend is also reflected in the scientific research community, where more attention has been drawn to the light field and its applications.

     

    The purpose of this dissertation is to develop a better understanding of the light field structure by analyzing its sampling behavior, and to address three problems concerning the light field processing pipeline: 1) how to address the depth estimation problem when there is limited color and texture information; 2) how to improve the rendered image quality by using the inherent depth information; 3) how to solve the interdependence conflict between demosaicing and depth estimation.

     

    The first problem is solved by a hybrid depth estimation approach that combines the advantages of correspondence matching and depth-from-focus, where occlusion is handled by involving multiple depth maps in a voting scheme. The second problem is divided into two specific tasks, demosaicing and super-resolution, where depth-assisted light field analysis is employed to surpass the competence of traditional image processing. The third problem is tackled with an inferential graph model that explicitly encodes the connections between demosaicing and depth estimation and jointly performs a global optimization for both tasks.

     

    The proposed depth estimation approach shows a noticeable improvement in point clouds and depth maps compared with reference methods. Furthermore, the objective metrics and visual quality are compared with classical sensor-based demosaicing and multi-image super-resolution to show the effectiveness of the proposed depth-assisted light field processing methods. Finally, a multi-task graph model is proposed to challenge the performance of the sequential light field image processing pipeline. The proposed method is validated with various kinds of light fields and outperforms the state of the art in both the demosaicing and depth estimation tasks.

     

    The work presented in this dissertation raises a novel view of the light field data structure in general and provides tools for solving specific image processing problems. The impact of the outcome can be manifold: supporting scientific research with light field microscopes, stabilizing the performance of range cameras for industrial applications, and providing individuals with a high-quality immersive experience.
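    The occlusion handling by voting over multiple depth maps, mentioned above, can be illustrated with a minimal sketch. This is only an illustration of the voting idea, not the thesis implementation: it assumes the candidate depth maps (one per chosen central view) are already estimated and registered to a common reference view, and it simply quantizes depths into bins and keeps the per-pixel majority.

    ```python
    import numpy as np

    def vote_merge_depth_maps(depth_maps, num_bins=64):
        """Merge a stack of aligned depth maps by per-pixel majority voting.

        depth_maps : float array (K, H, W), K candidate depth maps assumed to be
                     registered to a common reference view.
        Returns an (H, W) depth map holding the most-voted depth bin center.
        """
        depth_maps = np.asarray(depth_maps, dtype=np.float64)
        d_min, d_max = depth_maps.min(), depth_maps.max()
        # Quantize each depth value into one of `num_bins` bins.
        bins = np.clip(
            ((depth_maps - d_min) / (d_max - d_min + 1e-12) * num_bins).astype(int),
            0, num_bins - 1,
        )
        K, H, W = bins.shape
        # Per-pixel histogram over the K hypotheses, then take the mode.
        votes = np.zeros((num_bins, H, W), dtype=np.int32)
        for k in range(K):
            np.add.at(votes, (bins[k], np.arange(H)[:, None], np.arange(W)[None, :]), 1)
        winning_bin = votes.argmax(axis=0)
        # Map the winning bin index back to a depth value (bin center).
        return d_min + (winning_bin + 0.5) * (d_max - d_min) / num_bins
    ```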

  • 2.
    Li, Yongwei
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Olsson, Roger
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
An analysis of demosaicing for plenoptic capture based on ray optics (2018). In: Proceedings of 3DTV Conference 2018, 2018, article id 8478476. Conference paper (Refereed)
    Abstract [en]

    The plenoptic camera is gaining more and more attention as it captures the 4D light field of a scene with a single shot and enables a wide range of post-processing applications. However, the pre-processing steps for captured raw data, such as demosaicing, have been overlooked. Most existing decoding pipelines for plenoptic cameras still apply demosaicing schemes developed for conventional cameras. In this paper, we analyze the sampling pattern of microlens-based plenoptic cameras using ray-tracing techniques and ray phase space analysis. The goal of this work is to establish guidelines and principles for demosaicing plenoptic captures by taking the unique microlens array design into account. We show that the sampling of the plenoptic camera behaves differently from that of a conventional camera and that the desired demosaicing scheme is depth-dependent.
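    The depth-dependence claim can be made concrete with a toy paraxial ray-tracing sketch. This is not the paper's analysis code: it assumes an idealized 1D plenoptic layout with hypothetical parameter values (main-lens focal length, microlens pitch, and so on are made up for illustration) and simply records which sensor pixels, and hence which color-filter samples, receive rays from a single scene point as its depth changes.

    ```python
    import numpy as np

    # Hypothetical 1D (paraxial) plenoptic-camera parameters -- illustration only.
    F = 50e-3        # main-lens focal length [m]
    D = 20e-3        # main-lens aperture diameter [m]
    d_mla = 55e-3    # main lens -> microlens array distance [m]
    f_ml = 0.5e-3    # microlens focal length [m]
    p_ml = 100e-6    # microlens pitch [m]
    p_px = 10e-6     # sensor pixel pitch [m]
    bayer_1d = "RGRG"  # toy 1D color-filter pattern

    def pixels_seen_by_point(z):
        """Trace a fan of rays from an on-axis point at depth z to the sensor
        and return the set of (pixel index, color) samples it produces."""
        samples = set()
        for x_aperture in np.linspace(-D / 2, D / 2, 201):
            # Ray state (x, u): lateral position and paraxial angle.
            u = x_aperture / z                  # from point (0, -z) to the aperture
            x = x_aperture
            u = u - x / F                       # thin main lens
            x = x + d_mla * u                   # propagate to the microlens array
            k = int(np.round(x / p_ml))         # index of the microlens that is hit
            x_loc = x - k * p_ml
            u = u - x_loc / f_ml                # thin microlens (local coordinates)
            x_loc = x_loc + f_ml * u            # propagate to the sensor plane
            pix = int(np.floor((k * p_ml + x_loc) / p_px))
            samples.add((pix, bayer_1d[pix % len(bayer_1d)]))
        return samples

    for z in (0.5, 1.0, 2.0):                   # scene point at different depths [m]
        print(f"z = {z} m ->", sorted(pixels_seen_by_point(z)))
    ```

    Running the loop shows that the set of hit pixels, and therefore the mixture of color-filter samples available for one scene point, changes with the point's depth, which is the sense in which demosaicing becomes depth-dependent.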

  • 3.
    Li, Yongwei
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Pla, Filiberto
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
A Collaborative Graph Model for Light Field Demosaicing and Depth Estimation. Manuscript (preprint) (Other academic)
  • 4.
    Li, Yongwei
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Scrofani, Gabriele
    Department of Optics, University of Valencia, Burjassot, Spain.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
Martinez-Corral, M.
    Department of Optics, University of Valencia, Burjassot, Spain.
Area-Based Depth Estimation for Monochromatic Feature-Sparse Orthographic Capture (2018). In: 2018 26th European Signal Processing Conference (EUSIPCO), IEEE conference proceedings, 2018, p. 206-210, article id 8553336. Conference paper (Refereed)
    Abstract [en]

    With the rapid development of light field technology, depth estimation has been highlighted as one of the critical problems in the field, and a number of approaches have been proposed to extract the depth of the scene. However, depth estimation by stereo matching becomes difficult and unreliable when the captured images lack both color and feature information. In this paper, we propose a scheme that extracts robust depth from monochromatic, feature-sparse scenes recorded in orthographic sub-aperture images. Unlike approaches that rely on rich color and texture information across the sub-aperture views, our approach is based on depth-from-focus techniques. First, we superimpose shifted sub-aperture images on top of an arbitrarily chosen central image. To focus on different depths, the shift amount is varied based on the micro-lens array properties. Next, an area-based depth estimation approach is applied to find the best match among the focal stack and generate the dense depth map. This process is repeated for each sub-aperture image. Finally, occlusions are handled by merging depth maps generated from different central images, followed by a voting process. Results show that the proposed scheme is more suitable than conventional depth estimation approaches in the context of orthographic captures that have insufficient color and feature information, such as microscopic fluorescence imaging.
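    The processing chain described in the abstract (shift and superimpose sub-aperture views to refocus, then area-based matching across the focal stack) can be sketched roughly as below. This is a simplified illustration under assumed inputs, not the authors' implementation: the views are given as a 2D grid of grayscale images, a plain windowed sum of absolute differences serves as the match measure, and the output is a per-pixel index of the best candidate shift rather than metric depth.

    ```python
    import numpy as np
    from scipy.ndimage import shift as nd_shift, uniform_filter

    def area_based_depth(views, shifts, win=9):
        """Depth-from-focus over a stack of shift-and-sum refocused images.

        views  : array (S, T, H, W) of grayscale orthographic sub-aperture views.
        shifts : candidate per-view shift amounts, one per depth layer; mapping a
                 shift to metric depth depends on the optics and is left out here.
        Returns an (H, W) array of best-matching shift indices (a depth-layer map).
        """
        S, T, H, W = views.shape
        s0, t0 = S // 2, T // 2                  # arbitrarily chosen central view
        central = views[s0, t0]
        best_cost = np.full((H, W), np.inf)
        best_layer = np.zeros((H, W), dtype=np.int32)
        for layer, delta in enumerate(shifts):
            # Shift every view proportionally to its offset from the central view
            # and average: this is the refocused image for this candidate depth.
            refocused = np.zeros((H, W))
            for s in range(S):
                for t in range(T):
                    refocused += nd_shift(views[s, t],
                                          (delta * (s - s0), delta * (t - t0)),
                                          order=1, mode="nearest")
            refocused /= S * T
            # Area-based matching: windowed SAD between central view and refocus.
            cost = uniform_filter(np.abs(refocused - central), size=win)
            better = cost < best_cost
            best_cost[better] = cost[better]
            best_layer[better] = layer
        return best_layer
    ```

    In the paper this per-central-view result would then be recomputed for different central views and merged with a voting step to handle occlusions; the sketch stops at a single central view.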

  • 5.
    Li, Yongwei
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
Depth-Assisted Demosaicing for Light Field Data in Layered Object Space (2019). In: 2019 IEEE International Conference on Image Processing (ICIP), IEEE, 2019, p. 3746-3750, article id 8803441. Conference paper (Refereed)
    Abstract [en]

    Light field technology, which emerged as a solution to the increasing demand for visually immersive experiences, has shown extraordinary potential for scene content representation and reconstruction. Unlike conventional photography, which maps the 3D scene onto a 2D plane by a projective transformation, the light field preserves both spatial and angular information, enabling further processing steps such as computational refocusing and image-based rendering. However, there are still gaps that have been barely studied, such as the light field demosaicing process. In this paper, we propose a depth-assisted demosaicing method for light field data. First, we exploit the sampling geometry of the light field data with respect to the scene content using ray tracing and develop a sampling model of light field capture. Then we carry out the demosaicing process in a layered object space, using object-space sampling adjacencies rather than pixel placement. Finally, we compare our results with state-of-the-art approaches and discuss potential research directions for the proposed sampling model to show the significance of our approach.
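    A rough sketch of what "object-space sampling adjacency" can mean in practice is given below. It is only an illustration of the principle under assumed inputs (a raw Bayer mosaic and a per-pixel depth-layer label map), not the paper's sampling-model-based method: each missing color is interpolated only from nearby raw samples of the same color that lie on the same depth layer, with a fallback when the layer provides none.

    ```python
    import numpy as np

    def layer_aware_demosaic(raw, bayer_mask, layers, radius=2):
        """Naive layer-aware demosaicing sketch.

        raw        : (H, W) raw mosaic intensities.
        bayer_mask : (H, W) array of 0/1/2 indicating which channel (R/G/B) each
                     raw pixel carries.
        layers     : (H, W) integer depth-layer labels (assumed to be given).
        Returns an (H, W, 3) demosaiced image.
        """
        H, W = raw.shape
        out = np.zeros((H, W, 3))
        for y in range(H):
            for x in range(W):
                y0, y1 = max(0, y - radius), min(H, y + radius + 1)
                x0, x1 = max(0, x - radius), min(W, x + radius + 1)
                win_raw = raw[y0:y1, x0:x1]
                win_ch = bayer_mask[y0:y1, x0:x1]
                win_lay = layers[y0:y1, x0:x1]
                for c in range(3):
                    if bayer_mask[y, x] == c:
                        out[y, x, c] = raw[y, x]        # measured sample: keep it
                        continue
                    same_layer = (win_ch == c) & (win_lay == layers[y, x])
                    any_ch = win_ch == c
                    # Prefer neighbours on the same depth layer; otherwise fall back.
                    sel = same_layer if same_layer.any() else any_ch
                    out[y, x, c] = win_raw[sel].mean() if sel.any() else 0.0
        return out
    ```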

  • 6.
    Li, Yongwei
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
Depth-Assisted Light Field Super-Resolution in Layered Object Space. Manuscript (preprint) (Other academic)
  • 7.
    Wang, Chunpeng
    et al.
    Qilu University of Technology (Shandong Academy of Sciences), Jinan, China; Dalian University of Technology, Dalian, China.
    Wang, Xingyuan
    Dalian Maritime University, Dalian, China; Dalian University of Technology, Dalian, China.
    Li, Yongwei
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Xia, Zhiqiu
    Dalian University of Technology, Dalian, China.
    Zhang, Chuan
    Dalian University of Technology, Dalian, China.
Quaternion polar harmonic Fourier moments for color images (2018). In: Information Sciences, ISSN 0020-0255, E-ISSN 1872-6291, Vol. 450, p. 141-156. Article in journal (Refereed)
    Abstract [en]

    This paper proposes quaternion polar harmonic Fourier moments (QPHFM) for color image processing and analyzes the properties of QPHFM. After extending Chebyshev-Fourier moments (CHFM) to quaternion Chebyshev-Fourier moments (QCHFM), comparison experiments on the performance of QPHFM against quaternion Zernike moments (QZM), quaternion pseudo-Zernike moments (QPZM), quaternion orthogonal Fourier-Mellin moments (QOFMM), QCHFM, and quaternion radial harmonic Fourier moments (QRHFM) are carried out, including image reconstruction and color image object recognition. Experimental results show that QPHFM can achieve an ideal performance in image reconstruction and invariant object recognition under both noise-free and noisy conditions. In addition, this paper discusses the importance of the phase information of quaternion orthogonal moments in image reconstruction.
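    As background for the quaternion formulation, the sketch below shows how a color image is treated as a pure quaternion signal and how a right-side quaternion radial-angular moment is accumulated. The quaternion algebra is standard, but the radial kernel used here is a placeholder and is not the PHFM radial basis defined in the paper; normalization constants are also omitted.

    ```python
    import numpy as np

    def qmul(a, b):
        """Hamilton product of quaternion arrays stored as (..., 4) = (w, x, y, z)."""
        aw, ax, ay, az = np.moveaxis(a, -1, 0)
        bw, bx, by, bz = np.moveaxis(b, -1, 0)
        return np.stack([aw*bw - ax*bx - ay*by - az*bz,
                         aw*bx + ax*bw + ay*bz - az*by,
                         aw*by - ax*bz + ay*bw + az*bx,
                         aw*bz + ax*by - ay*bx + az*bw], axis=-1)

    def qexp_pure(mu, phi):
        """exp(mu * phi) = cos(phi) + mu * sin(phi) for a unit pure quaternion mu."""
        return np.concatenate([np.cos(phi)[..., None],
                               np.sin(phi)[..., None] * mu], axis=-1)

    def radial_kernel(n, r):
        """Placeholder radial basis R_n(r); NOT the PHFM radial functions defined
        in the paper -- those would be substituted here."""
        return np.cos(2 * np.pi * n * r)

    def quaternion_moment(rgb, n, m):
        """Right-side quaternion moment of an (H, W, 3) float RGB image over the unit disc."""
        H, W, _ = rgb.shape
        ys, xs = np.mgrid[0:H, 0:W]
        x = 2 * xs / (W - 1) - 1                      # map pixel grid to [-1, 1]
        y = 2 * ys / (H - 1) - 1
        r, theta = np.hypot(x, y), np.arctan2(y, x)
        inside = r <= 1.0
        f = np.zeros((H, W, 4))                       # color image as pure quaternion
        f[..., 1:] = rgb                              # f = i*R + j*G + k*B
        mu = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)   # commonly used gray-line axis
        basis = radial_kernel(n, r)[..., None] * qexp_pure(mu, -m * theta)
        return qmul(f, basis)[inside].sum(axis=0)     # one quaternion (w, x, y, z)
    ```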
