Muddala, Suryanarayana M.
Publications (9 of 9)
Muddala, S., Olsson, R. & Sjöström, M. (2016). Spatio-Temporal Consistent Depth-Image Based Rendering Using Layered Depth Image and Inpainting. EURASIP Journal on Image and Video Processing, 9(1), 1-19
2016 (English). In: EURASIP Journal on Image and Video Processing, ISSN 1687-5176, E-ISSN 1687-5281, Vol. 9, no. 1, p. 1-19. Article in journal (Refereed). Published.
Abstract [en]

Depth-image-based rendering (DIBR) is a commonly used method for synthesizing additional views from the video-plus-depth (V+D) format. A critical issue with DIBR-based view synthesis is the lack of information behind foreground objects. This missing information manifests as disocclusions, i.e., holes next to the foreground objects in rendered virtual views, a consequence of the virtual camera “seeing” behind the foreground object. The disocclusions are larger in the extrapolation case, i.e., the single-camera case. Texture synthesis (inpainting) methods aim to fill these disocclusions by producing plausible texture content. However, virtual views inevitably exhibit both spatial and temporal inconsistencies at the filled disocclusion areas, depending on the scene content. In this paper, we propose a layered depth image (LDI) approach that improves spatio-temporal consistency. In the process of LDI generation, depth information is used to classify the foreground and background in order to form a static scene sprite from a set of neighboring frames. Occlusions in the LDI are then identified and filled using inpainting, such that no disocclusions appear when the LDI data is rendered to a virtual view. In addition to the depth information, optical flow is computed to extract the stationary parts of the scene and to classify the occlusions in the inpainting process. Experimental results demonstrate that spatio-temporal inconsistencies are significantly reduced using the proposed method. Furthermore, subjective and objective qualities are improved compared to state-of-the-art reference methods.
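The depth-based foreground/background classification and static background-sprite accumulation described above can be sketched roughly as follows. This is an illustrative sketch only, not the authors' implementation: the function name is hypothetical, and the depth convention (larger values mean nearer objects) is an assumption.

```python
import numpy as np

def update_background_sprite(sprite, sprite_depth, frame, depth, fg_threshold):
    """Accumulate background pixels from one frame into a static scene sprite.

    Assumed convention: larger depth values = nearer (foreground), so pixels
    with depth below `fg_threshold` are classified as background. A sample
    overwrites the sprite only where the sprite is still empty or where the
    new sample is farther away (more likely true background).
    """
    background = depth < fg_threshold        # background classification by depth
    empty = np.isnan(sprite_depth)           # sprite slots not yet filled
    farther = depth < sprite_depth           # farther than the stored sample
    take = background & (empty | farther)
    sprite[take] = frame[take]
    sprite_depth[take] = depth[take]
    return sprite, sprite_depth
```

Over a set of neighboring frames, repeated calls gradually fill the sprite with background texture that foreground objects only temporarily occlude.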

Place, publisher, year, edition, pages
Springer, 2016
Keywords
view synthesis; depth-image based rendering; image inpainting; texture synthesis; hole filling; disocclusions; layered depth image; temporal consistency
National Category
Signal Processing; Media and Communication Technology
Identifiers
URN: urn:nbn:se:miun:diva-26904; DOI: 10.1186/s13640-016-0109-6; ISI: 000391585200001; Scopus ID: 2-s2.0-84959266613
Available from: 2016-01-22. Created: 2016-01-22. Last updated: 2018-01-10. Bibliographically approved.
Muddala, S. M., Sjöström, M. & Olsson, R. (2016). Virtual View Synthesis Using Layered Depth Image Generation and Depth-Based Inpainting for Filling Disocclusions and Translucent Disocclusions. Journal of Visual Communication and Image Representation, 38, 351-366
2016 (English). In: Journal of Visual Communication and Image Representation, ISSN 1047-3203, E-ISSN 1095-9076, Vol. 38, p. 351-366. Article in journal (Refereed). Published.
Abstract [en]

View synthesis is an efficient solution for producing content for 3DTV and FTV. However, proper handling of disocclusions is a major challenge in view synthesis. Inpainting methods offer solutions for handling disocclusions, though limitations in foreground-background classification cause the holes to be filled with inconsistent textures. Moreover, state-of-the-art methods fail to identify and fill disocclusions at intermediate distances between foreground and background, through which the background may be visible in the virtual view (translucent disocclusions). Aiming at improved rendering quality, we introduce a layered depth image (LDI) in the original camera view, in which we identify and fill the occluded background, so that when the LDI data is rendered to a virtual view, no disocclusions appear and views with consistent data are produced, also handling translucent disocclusions. Moreover, the proposed foreground-background classification and inpainting fill the disocclusions consistently with neighboring background texture. Based on objective and subjective evaluations, the proposed method outperforms the state-of-the-art methods at the disocclusions.
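As a rough illustration of the layered depth image idea used above (several color/depth samples per pixel, with occluded background stored behind the foreground so the virtual view exposes no empty holes), one might sketch the data structure like this. The class and its method names are hypothetical, not the paper's implementation; larger depth values are assumed to mean nearer samples.

```python
class LayeredDepthImage:
    """Minimal layered depth image: each pixel holds a list of
    (color, depth) samples, ordered nearest-first."""

    def __init__(self, height, width):
        self.layers = [[[] for _ in range(width)] for _ in range(height)]

    def add_sample(self, y, x, color, depth):
        samples = self.layers[y][x]
        samples.append((color, depth))
        # keep samples ordered nearest-first (assumed: larger depth = nearer)
        samples.sort(key=lambda s: -s[1])

    def front(self, y, x):
        """Color of the nearest (foreground) layer, or None if empty."""
        samples = self.layers[y][x]
        return samples[0][0] if samples else None

    def behind(self, y, x):
        """Samples stored behind the front layer; these are what a shifted
        virtual camera would expose as disocclusions if left unfilled."""
        return self.layers[y][x][1:]
```

In the paper's pipeline, the back layers that are missing after warping are the ones filled by inpainting before rendering.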

Keywords
View synthesis; depth-image based rendering; image inpainting; texture synthesis; hole filling; disocclusions; translucent disocclusions; layered depth image
National Category
Information Systems; Media and Communication Technology
Identifiers
URN: urn:nbn:se:miun:diva-27124; DOI: 10.1016/j.jvcir.2016.02.017; ISI: 000377149100030; Scopus ID: 2-s2.0-84977648945
Available from: 2016-02-24. Created: 2016-02-24. Last updated: 2018-01-10. Bibliographically approved.
Muddala, S. M. (2015). Free View Rendering for 3D Video: Edge-Aided Rendering and Depth-Based Image Inpainting. (Doctoral dissertation). Sundsvall: Mid Sweden University.
2015 (English). Doctoral thesis, monograph (Other academic).
Abstract [en]

Three-Dimensional Video (3DV) has become increasingly popular with the success of 3D cinema. Moreover, emerging display technology offers an immersive experience to the viewer without the need for any visual aids such as 3D glasses. 3DV applications such as Three-Dimensional Television (3DTV) and Free Viewpoint Television (FTV) are promising technologies for living-room environments, providing an immersive experience and look-around capability. In order to provide such an experience, these technologies require a number of camera views captured from different viewpoints. However, capturing and transmitting the required number of views is not feasible, so view rendering is employed as an efficient solution to produce the necessary number of views. Depth-image-based rendering (DIBR) is a commonly used rendering method. Although DIBR is a simple approach that can produce the desired number of views, inherent artifacts are major issues in view rendering. Despite much effort to tackle the rendering artifacts over the years, rendered views still contain visible artifacts.

This dissertation addresses three problems in order to improve 3DV quality: 1) How to improve the rendered view quality using a direct approach, without treating each artifact specifically. 2) How to handle disocclusions (a.k.a. holes) in the rendered views in a visually plausible manner using inpainting. 3) How to reduce spatial inconsistencies in the rendered view. The first problem is tackled by an edge-aided rendering method that uses a direct approach with one-dimensional interpolation, which is applicable when the virtual camera distance is small. The second problem is addressed by a depth-based inpainting method in the virtual view, which reconstructs the missing texture with background data at the disocclusions. The third problem is addressed by a rendering method that first inpaints occlusions as a layered depth image (LDI) in the original view and then renders a spatially consistent virtual view.

Objective assessments of the proposed methods show improvements over state-of-the-art rendering methods. Visual inspection shows slight improvements for intermediate views rendered from multiview video-plus-depth, and the proposed methods outperform other view rendering methods when rendering from single-view video-plus-depth. The results confirm that the proposed methods are capable of reducing rendering artifacts and producing spatially consistent virtual views.

In conclusion, the view rendering methods proposed in this dissertation can support the production of high-quality virtual views based on a limited number of input views. When used to create a multiscopic presentation, the outcome of this dissertation can help 3DV technologies improve the immersive experience.

Place, publisher, year, edition, pages
Sundsvall: Mid Sweden University, 2015. p. 125.
Series
Mid Sweden University doctoral thesis, ISSN 1652-893X ; 226
Keywords
3DV, 3DTV, FTV, view rendering, depth-image-based rendering, hole-filling, disocclusion filling, inpainting, texture synthesis, view synthesis, layered depth image
National Category
Computer Systems Signal Processing Other Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
URN: urn:nbn:se:miun:diva-25097; ISBN: 978-91-88025-30-2
Public defence
2015-06-18, L111, Holmgatan 10, Sundsvall, 10:00 (English)
Available from: 2015-06-15. Created: 2015-06-08. Last updated: 2016-12-23. Bibliographically approved.
Muddala, S. M., Sjöström, M. & Olsson, R. (2014). Depth-Based Inpainting For Disocclusion Filling. In: 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON 2014), Budapest, Hungary, 2-4 July 2014 (Art. no. 6874752). IEEE Computer Society.
2014 (English). In: 3DTV-Conference, IEEE Computer Society, 2014, Art. no. 6874752. Conference paper, Published paper (Refereed).
Abstract [en]

Depth-based inpainting methods can solve disocclusion problems occurring in depth-image-based rendering. However, inpainting in this context suffers from artifacts along foreground objects due to foreground pixels in the patch matching. In this paper, we address the disocclusion problem with a refined depth-based inpainting method. The novelty lies in classifying the foreground and background using available local depth information. Thereby, foreground information is excluded from both the source region and the target patch. In the proposed inpainting method, the local depth constraints ensure that only background data is inpainted and that foreground object boundaries are preserved. The results from the proposed method are compared with those from state-of-the-art inpainting methods. The experimental results demonstrate improved objective quality and better visual quality along the object boundaries.
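The core idea above, excluding foreground pixels from the patch comparison so foreground texture cannot bleed into the hole, can be sketched as a masked patch distance. This is a minimal sketch under assumptions: the function names are hypothetical, the distance is a plain normalized SSD, and the background masks are assumed to come from the paper's depth-based classification.

```python
import numpy as np

def masked_patch_distance(target, target_valid, candidate, bg_mask):
    """SSD between a target patch (with missing pixels) and a source
    candidate, counting only pixels that are known in the target AND
    classified as background, so foreground texture is excluded from
    the match."""
    use = target_valid & bg_mask
    if not use.any():
        return np.inf                       # nothing comparable: reject candidate
    diff = (target[use].astype(float) - candidate[use].astype(float)) ** 2
    return diff.sum() / use.sum()           # normalize by pixels actually compared

def best_background_patch(target, target_valid, candidates, bg_masks):
    """Index of the candidate patch with the lowest depth-constrained distance."""
    dists = [masked_patch_distance(target, target_valid, c, m)
             for c, m in zip(candidates, bg_masks)]
    return int(np.argmin(dists))
```

Normalizing by the number of compared pixels keeps patches with many masked-out foreground pixels from being unfairly favored.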

Place, publisher, year, edition, pages
IEEE Computer Society, 2014
Keywords
View synthesis, depth-image based rendering, image inpainting, disocclusions
National Category
Media Engineering; Signal Processing
Identifiers
URN: urn:nbn:se:miun:diva-22511; DOI: 10.1109/3DTV.2014.6874752; ISI: 000345738600042; Scopus ID: 2-s2.0-84906568727; ISBN: 978-1-4799-4758-4
Conference
3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video, 3DTV-CON 2014; Budapest, Hungary; 2-4 July 2014
Available from: 2014-07-16. Created: 2014-07-16. Last updated: 2017-08-22. Bibliographically approved.
Muddala, S. M., Olsson, R. & Sjöström, M. (2013). Depth-Included Curvature Inpainting for Disocclusion Filling in View Synthesis. International Journal On Advances in Telecommunications, 6(3&4), 132-142
2013 (English). In: International Journal On Advances in Telecommunications, ISSN 1942-2601, E-ISSN 1942-2601, Vol. 6, no. 3&4, p. 132-142. Article in journal (Refereed). Published.
Abstract [en]

Depth-image-based rendering (DIBR) is commonly used for generating additional views for 3DTV and FTV from 3D video formats such as video-plus-depth (V+D) and multiview-video-plus-depth (MVD). When DIBR is used, the synthesized views suffer from artifacts, mainly disocclusions. Depth-based inpainting methods can plausibly solve these problems. In this paper, we analyze the influence of the depth information at the various steps of the depth-included curvature inpainting method. The depth-based inpainting method relies on depth information at every step of the inpainting process: boundary extraction for the missing areas, data-term computation for structure propagation, and patch matching to find the best-matching data. The importance of depth at each step is evaluated using objective metrics and visual comparison. Our evaluation demonstrates that the depth information plays a key role in each step. Moreover, the degree to which depth can be used in each step of the inpainting process depends on the depth distribution.

Place, publisher, year, edition, pages
International Academy, Research and Industry Association (IARIA), 2013
Keywords
3D; video plus depth; multiview video plus depth; 3D warping; depth-image-based rendering; image inpainting; disocclusion filling.
National Category
Signal Processing; Other Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
URN: urn:nbn:se:miun:diva-20403
Available from: 2013-12-05. Created: 2013-12-02. Last updated: 2017-12-06. Bibliographically approved.
Muddala, S. M., Olsson, R. & Sjöström, M. (2013). Disocclusion Handling Using Depth-Based Inpainting. In: Proceedings of MMEDIA 2013, The Fifth International Conference on Advances in Multimedia, Venice, Italy, 21-26 April 2013 (pp. 136-141). International Academy, Research and Industry Association (IARIA).
2013 (English). In: Proceedings of MMEDIA 2013, The Fifth International Conference on Advances in Multimedia, Venice, Italy. International Academy, Research and Industry Association (IARIA), 2013, p. 136-141. Conference paper, Oral presentation with published abstract (Refereed).
Abstract [en]

Depth-image-based rendering (DIBR) plays an important role in producing virtual views from 3D video formats such as video-plus-depth (V+D) and multiview-video-plus-depth (MVD). Pixel regions with undefined values (due to disoccluded areas) are exposed when DIBR is used. In this paper, we propose a depth-based inpainting method aimed at handling disocclusions in DIBR from V+D and MVD. Our proposed method adopts the curvature-driven diffusion (CDD) model as a data term, to which we add a depth constraint. In addition, we add depth to further guide a directional priority term in the exemplar-based texture synthesis. Finally, we add depth to the patch-matching step to prioritize background texture when inpainting. The proposed method is evaluated by comparing inpainted virtual views with the corresponding views produced by three state-of-the-art inpainting methods as references. The evaluation shows that the proposed method yields increased objective quality compared to the reference methods, and visual inspection further indicates improved visual quality.
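In exemplar-based inpainting, each boundary patch gets a filling priority (typically a confidence term times a structure/data term), and the abstract above adds depth guidance to that priority. The sketch below shows one hypothetical way such a depth term could enter the priority; it is not the authors' exact CDD-based formulation, and the function name, the scalar `isophote_strength` argument, and the variance-based depth term are all assumptions for illustration.

```python
import numpy as np

def patch_priority(confidence, isophote_strength, depth_patch, patch_mask):
    """Filling priority of a boundary patch in the spirit of exemplar-based
    inpainting: confidence term times a data (structure) term, additionally
    damped by local depth variance so that patches lying on the flat far
    (background) side of a disocclusion are filled first."""
    c = confidence[patch_mask].mean()        # confidence: fraction of known, trusted pixels
    d = isophote_strength                    # data term: strength of structure to propagate
    depth_var = depth_patch[patch_mask].var()
    depth_term = 1.0 / (1.0 + depth_var)     # prefer depth-homogeneous (background) patches
    return c * d * depth_term
```

Patches straddling a foreground/background depth edge get a high depth variance and therefore a low priority, which pushes the fill front to grow from the background side, the behavior the depth constraint is meant to encourage.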

Place, publisher, year, edition, pages
International Academy, Research and Industry Association (IARIA), 2013
Keywords
video plus depth, warping, depth-image-based rendering, inpainting
National Category
Signal Processing; Other Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
URN: urn:nbn:se:miun:diva-18890; Scopus ID: 2-s2.0-84905842575
Conference
Fifth International Conference on Advances in Multimedia (MMEDIA 2013); 21-26 April 2013; Venice, Italy
Available from: 2013-05-04. Created: 2013-05-04. Last updated: 2017-08-22. Bibliographically approved.
Muddala, S. M., Sjöström, M., Olsson, R. & Tourancheau, S. (2013). Edge-aided virtual view rendering for multiview video plus depth. In: Proceedings of SPIE Volume 8650, 3D Image Processing (3DIP) and Applications 2013, Burlingame, CA, USA, 3-7 February 2013 (Art. no. 86500E). SPIE - International Society for Optical Engineering.
2013 (English). In: Proceedings of SPIE Volume 8650, 3D Image Processing (3DIP) and Applications 2013, Burlingame, CA, USA. SPIE - International Society for Optical Engineering, 2013, Art. no. 86500E. Conference paper, Published paper (Other academic).
Abstract [en]

Depth-Image-Based Rendering (DIBR) of virtual views is a fundamental method in three-dimensional (3D) video applications to produce different perspectives from texture and depth information, in particular from the multiview-plus-depth (MVD) format. Artifacts are still present in virtual views as a consequence of imperfect rendering using existing DIBR methods. In this paper, we propose an alternative DIBR method for MVD. In the proposed method, we introduce an edge pixel and interpolate pixel values in the virtual view using the actual projected coordinates from two adjacent views, by which cracks and disocclusions are automatically filled. In particular, we propose a method to merge pixel information from two adjacent views in the virtual view before the interpolation; we apply a weighted averaging of projected pixels within the range of one pixel in the virtual view. We compared virtual view images rendered by the proposed method to the corresponding view images rendered by state-of-the-art methods. Objective metrics demonstrated an advantage of the proposed method for most investigated media contents. Subjective test results showed preference for different methods depending on media content, and the test could not demonstrate a significant difference between the proposed method and state-of-the-art methods.
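The weighted averaging of projected pixels "within the range of one pixel" can be sketched as a splatting step on one scanline: each projected sample contributes to the nearest integer pixel with a weight that falls off with its sub-pixel distance. This is an illustrative sketch only (function name and the linear weight are assumptions; the paper merges samples from two adjacent views, whereas the sketch just takes a flat list of projections).

```python
import numpy as np

def splat_weighted(width, projections):
    """Merge sub-pixel projections into an output scanline.

    `projections` is a list of (x_float, color) samples already warped into
    virtual-view coordinates. Each sample contributes to the nearest integer
    pixel with weight 1 - |x - round(x)|, i.e. only within one pixel of the
    target; pixels receiving no contribution stay NaN (holes)."""
    acc = np.zeros(width)
    wsum = np.zeros(width)
    for x, color in projections:
        xi = int(round(x))
        if 0 <= xi < width:
            w = max(0.0, 1.0 - abs(x - xi))   # sub-pixel distance weighting
            acc[xi] += w * color
            wsum[xi] += w
    out = np.full(width, np.nan)              # NaN marks unfilled pixels
    nz = wsum > 0
    out[nz] = acc[nz] / wsum[nz]
    return out
```

Samples that land almost exactly on a pixel center dominate the average, while samples landing near a pixel boundary contribute less, which is the intuition behind distance-weighted merging.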

Place, publisher, year, edition, pages
SPIE - International Society for Optical Engineering, 2013
Keywords
View rendering, 3DTV, multiview plus depth (MVD), depth-image-based-rendering (DIBR), warping
National Category
Signal Processing
Identifiers
URN: urn:nbn:se:miun:diva-18474; DOI: 10.1117/12.2004116; ISI: 000322110500012; Scopus ID: 2-s2.0-84878267120; ISBN: 978-081949423-8
Conference
3D Image Processing (3DIP) and Applications 2013; 3-7 February 2013; Burlingame, CA, USA; Conference 8650
Available from: 2013-02-12. Created: 2013-02-12. Last updated: 2017-08-22.
Muddala, S. M. (2013). View Rendering for 3DTV. (Licentiate dissertation). Sundsvall: Mid Sweden University
2013 (English). Licentiate thesis, comprehensive summary (Other academic).
Abstract [en]

Advancements in three-dimensional (3D) technologies are rapidly increasing. Three-Dimensional Television (3DTV) aims at creating a 3D experience for the home user. Moreover, multiview autostereoscopic displays provide a depth impression without requiring any special glasses and can be viewed from multiple locations. One of the key issues in the 3DTV processing chain is content generation from the available input data formats, video-plus-depth and multiview-video-plus-depth. This data makes it possible to produce virtual views using depth-image-based rendering. Although depth-image-based rendering is an efficient method, it is known for the appearance of artifacts such as cracks, corona, and empty regions in rendered images. While several approaches have tackled the problem, reducing the artifacts in rendered images is still an active field of research.

Two problems are addressed in this thesis in order to achieve better 3D video quality in the context of view rendering: firstly, how to improve the quality of rendered views using a direct approach (i.e., without applying specific processing steps for each artifact), and secondly, how to fill large missing areas in a visually plausible manner using neighbouring details from around the missing regions. This thesis introduces a new depth-image-based rendering method and a depth-based texture inpainting method in order to address these two problems. The first problem is solved by an edge-aided rendering method that relies on the principles of forward warping and one-dimensional interpolation. The other problem is addressed by the depth-included curvature inpainting method, which uses texture details from the appropriate depth level around disocclusions.

 

The proposed edge-aided rendering and depth-included curvature inpainting methods are evaluated and compared with state-of-the-art methods. The results show an increase in objective quality and a visual gain over the reference methods. The quality gain is encouraging, as the edge-aided rendering method omits the specific processing steps for removing rendering artifacts. Moreover, the results show that large disocclusions can be effectively filled using the depth-included curvature inpainting approach. Overall, the proposed approaches improve content generation for 3DTV and, additionally, for free viewpoint television.

Place, publisher, year, edition, pages
Sundsvall: Mid Sweden University, 2013. p. 49
Series
Mid Sweden University licentiate thesis, ISSN 1652-8948 ; 101
Keywords
3DTV, view rendering, depth-image-based rendering, disocclusion filling, inpainting.
National Category
Other Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
URN: urn:nbn:se:miun:diva-19194; ISBN: 9789187103773
Presentation
2013-06-11, L111, Holmgatan 10, Sundsvall, 10:15 (English)
Available from: 2013-06-13. Created: 2013-06-12. Last updated: 2016-10-20. Bibliographically approved.
Muddala, S. M., Sjöström, M. & Olsson, R. (2012). Edge-preserving depth-image-based rendering method. In: 2012 International Conference on 3D Imaging, IC3D 2012 - Proceedings. Paper presented at the 2nd International Conference on 3D Imaging, IC3D 2012; Liege, Belgium; 3-5 December 2012 (Art. no. 6615113).
2012 (English). In: 2012 International Conference on 3D Imaging, IC3D 2012 - Proceedings, 2012, Art. no. 6615113. Conference paper, Published paper (Refereed).
Abstract [en]

Distribution of future 3DTV is likely to use supplementary depth information alongside a video sequence. New virtual views may then be rendered in order to adjust to different 3D displays. All depth-image-based rendering (DIBR) methods suffer from artifacts in the resulting images, which are corrected by different post-processing steps. The proposed method is based on fundamental principles of 3D warping. The novelty lies in how the virtual view sample values are obtained from one-dimensional interpolation, where edges are preserved by introducing specific edge-pixels carrying information about both foreground and background data. This fully avoids the post-processing of filling cracks and holes. We compared rendered virtual views of our method and of the View Synthesis Reference Software (VSRS) and analyzed the results based on typical artifacts. The proposed method obtained better quality for photographic images and similar quality for synthetic images.
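The warp-then-interpolate idea above (obtain virtual-view samples by one-dimensional interpolation between projected positions, so cracks between neighboring projections never open up) can be sketched for a single scanline as follows. This is a minimal sketch under assumptions: the function name is hypothetical, and it ignores occlusion ordering and the special edge-pixels the paper introduces to separate foreground from background at depth discontinuities.

```python
import numpy as np

def render_scanline(colors, disparity, width):
    """Forward-warp one scanline by per-pixel disparity, then sample the
    virtual view by one-dimensional linear interpolation between the
    projected (non-integer) positions, filling cracks implicitly instead
    of by post-processing."""
    xs = np.arange(len(colors)) + disparity          # projected positions
    order = np.argsort(xs)                           # interp needs increasing x
    xs = xs[order]
    cs = np.asarray(colors, float)[order]
    targets = np.arange(width, dtype=float)          # integer virtual-view pixels
    return np.interp(targets, xs, cs)                # 1-D linear interpolation
```

Because every target pixel is interpolated between the two nearest projections, sub-pixel gaps (cracks) cannot appear; large disocclusions, however, would be smeared over by this naive version, which is exactly where the edge-pixel handling of the paper comes in.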

Keywords
3D, video plus depth, warping, depth-image-based view rendering.
National Category
Signal Processing
Identifiers
URN: urn:nbn:se:miun:diva-18051; DOI: 10.1109/IC3D.2012.6615113; Scopus ID: 2-s2.0-84887863563; ISBN: 978-147991580-4
Conference
2nd International Conference on 3D Imaging, IC3D 2012; Liege, Belgium; 3-5 December 2012
Available from: 2012-12-20. Created: 2012-12-20. Last updated: 2017-08-22.