miun.se Publications: search result 1-50 of 121
  • 1.
    Ahmad, Waqas
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Ghafoor, Mubeen
    COMSATS University Islamabad, Pakistan.
    Tariq, Syed Ali
    COMSATS University Islamabad, Pakistan.
    Hassan, Ali
    COMSATS University Islamabad, Pakistan.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Olsson, Roger
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Computationally Efficient Light Field Image Compression Using a Multiview HEVC Framework. 2019. In: IEEE Access, E-ISSN 2169-3536, Vol. 7, p. 143002-143014. Article in journal (Refereed)
    Abstract [en]

    The acquisition of the spatial and angular information of a scene using light field (LF) technologies supplements a wide range of post-processing applications, such as scene reconstruction, refocusing, and virtual view synthesis. The additional angular information possessed by LF data increases the size of the overall captured data while offering the same spatial resolution. The main contributor to the size of the captured data (i.e., the angular information) contains a high correlation that state-of-the-art video encoders exploit by treating the LF as a pseudo video sequence (PVS). Interpreting the LF as a single PVS restricts the encoding scheme to a single dimension of the angular correlation present in the LF data. In this paper, we present an LF compression framework that efficiently exploits the spatial and angular correlation using a multiview extension of high-efficiency video coding (MV-HEVC). The input LF views are converted into multiple PVSs and organized hierarchically. The rate-allocation scheme takes the assigned organization of frames into account and distributes quality/bits among them accordingly. Subsequently, the reference picture selection scheme prioritizes the reference frames based on the assigned quality. The proposed compression scheme is evaluated following the common test conditions set by JPEG Pleno. It performs 0.75 dB better than state-of-the-art compression schemes and 2.5 dB better than the x265-based JPEG Pleno anchor scheme. Moreover, an optimized motion-search scheme is proposed within the framework that reduces the computational complexity of motion estimation (in terms of the number of sum-of-absolute-differences [SAD] computations) by up to 87% with a negligible loss in visual quality (approximately 0.05 dB).
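
    The hierarchical PVS organization and quality-driven rate allocation described above can be sketched roughly as follows. The grid size, the centre-out level assignment, and the QP offsets are illustrative assumptions, not the paper's actual configuration.

    ```python
    # Sketch (with assumed names and values): LF views -> multiple pseudo
    # video sequences (PVSs), plus a hierarchy level per view that drives
    # rate allocation (lower level => higher quality, i.e. lower QP).

    def views_to_pvs(num_rows, num_cols):
        """Interpret an R x C grid of LF views as R pseudo video
        sequences, one per row, each with C frames."""
        return [[(r, c) for c in range(num_cols)] for r in range(num_rows)]

    def hierarchy_level(r, c, num_rows, num_cols):
        """Assign a hierarchy level: the central view is the base
        (level 0); the level grows with distance from the centre."""
        center_r, center_c = num_rows // 2, num_cols // 2
        return max(abs(r - center_r), abs(c - center_c))

    def qp_for_level(base_qp, level, step=2):
        """Illustrative rate allocation: deeper levels get a larger QP,
        i.e. fewer bits, since fewer frames reference them."""
        return base_qp + step * level

    pvs = views_to_pvs(5, 5)
    assert len(pvs) == 5 and len(pvs[0]) == 5
    assert hierarchy_level(2, 2, 5, 5) == 0   # central base view
    assert qp_for_level(30, hierarchy_level(0, 0, 5, 5)) == 34
    ```

    A real encoder would map these levels onto MV-HEVC reference-picture lists; this sketch only shows the bookkeeping.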

  • 2.
    Ahmad, Waqas
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Olsson, Roger
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Interpreting Plenoptic Images as Multi-View Sequences for Improved Compression. 2017. Data set
    Abstract [en]

    This paper was written in response to the ICIP 2017 Grand Challenge on plenoptic image compression. The input image format and compression rates set out by the competition were followed when estimating the results.

  • 3.
    Ahmad, Waqas
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Olsson, Roger
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Interpreting Plenoptic Images as Multi-View Sequences for Improved Compression. 2017. In: ICIP 2017, IEEE, 2017, p. 4557-4561. Conference paper (Refereed)
    Abstract [en]

    Over the last decade, advancements in optical devices have made it possible for novel image-acquisition technologies to appear. Angular information is acquired for each spatial point in addition to the spatial information of the scene, which enables 3D scene reconstruction and various post-processing effects. The current generation of plenoptic cameras spatially multiplexes the angular information, which implies an increase in image resolution to retain the level of spatial information gathered by conventional cameras. In this work, the resulting plenoptic image is interpreted as a multi-view sequence that is efficiently compressed using the multi-view extension of high efficiency video coding (MV-HEVC). A novel two-dimensional weighted prediction and rate-allocation scheme is proposed to adapt the HEVC compression structure to the properties of plenoptic images. The proposed coding approach is a response to the ICIP 2017 Grand Challenge: Light Field Image Coding. The proposed scheme outperforms all ICME contestants, improving on the JPEG anchor of ICME with an average PSNR gain of 7.5 dB and on the HEVC anchor of the ICIP 2017 Grand Challenge with an average PSNR gain of 2.4 dB.

  • 4.
    Ahmad, Waqas
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Olsson, Roger
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Towards a generic compression solution for densely and sparsely sampled light field data. 2018. In: Proceedings of the 25th IEEE International Conference on Image Processing, 2018, p. 654-658, article id 8451051. Conference paper (Refereed)
    Abstract [en]

    Light field (LF) acquisition technologies capture the spatial and angular information present in scenes. The angular information paves the way for various post-processing applications such as scene reconstruction, refocusing, and synthetic aperture. The light field is usually captured either by a single plenoptic camera or by multiple traditional cameras; the former captures a dense LF, while the latter captures a sparse LF. This paper presents a generic compression scheme that efficiently compresses both densely and sparsely sampled LFs. A plenoptic image is converted into sub-aperture images, and each sub-aperture image is interpreted as a frame of a multi-view sequence. Similarly, each view of a multi-camera system is treated as a frame of a multi-view sequence. The multi-view extension of high efficiency video coding (MV-HEVC) is used to encode the pseudo multi-view sequence. This paper proposes an adaptive prediction and rate-allocation scheme that efficiently compresses LF data irrespective of the acquisition technology used.
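
    The conversion of a plenoptic (lenslet) image into sub-aperture images, each of which then serves as one frame of the pseudo multi-view sequence, can be sketched as below. The pixel layout (each microlens covering a rectangular U x V block) is an assumption for illustration; real lenslet images require devignetting and hexagonal-grid handling that this sketch omits.

    ```python
    import numpy as np

    def to_subaperture(lenslet, U, V):
        """Gather pixel (u, v) under every microlens into sub-aperture
        image (u, v). Input: H x W lenslet image where each microlens
        covers a U x V pixel block. Output shape: (U, V, H//U, W//V)."""
        H, W = lenslet.shape[:2]
        S, T = H // U, W // V              # spatial size of each view
        # split rows into (s, u) and columns into (t, v), then bring
        # the angular indices (u, v) to the front
        return lenslet[:S * U, :T * V].reshape(S, U, T, V).transpose(1, 3, 0, 2)

    # tiny synthetic check: a 6x6 lenslet image with 2x2 pixels per
    # microlens yields a 2x2 grid of 3x3-pixel sub-aperture views
    demo = np.arange(36.0).reshape(6, 6)
    assert to_subaperture(demo, 2, 2).shape == (2, 2, 3, 3)
    ```

    Each of the resulting U x V views would then be fed to the encoder as a frame of the pseudo multi-view sequence.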

  • 5.
    Ahmad, Waqas
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Palmieri, Luca
    Christian-Albrechts-Universität, Kiel, Germany.
    Koch, Reinhard
    Christian-Albrechts-Universität, Kiel, Germany.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Matching Light Field Datasets From Plenoptic Cameras 1.0 And 2.0. 2018. In: Proceedings of the 2018 3DTV Conference, 2018, article id 8478611. Conference paper (Refereed)
    Abstract [en]

    Capturing the angular and spatial information of a scene with a single camera is made possible by an emerging technology referred to as the plenoptic camera. Together, the angular and spatial information enable various post-processing applications, e.g. refocusing, synthetic aperture, super-resolution, and 3D scene reconstruction. In the past, multiple traditional cameras were used to capture the angular and spatial information of a scene; more recently, with advances in optical technology, plenoptic cameras have been introduced for this purpose. In a plenoptic camera, a lenslet array placed between the main lens and the image sensor multiplexes the spatial and angular information onto a single image, also referred to as a plenoptic image. The placement of the lenslet array relative to the main lens and the image sensor results in two different optical designs, also referred to as plenoptic 1.0 and plenoptic 2.0. In this work, we present a novel dataset captured with a plenoptic 1.0 camera (Lytro Illum) and a plenoptic 2.0 camera (Raytrix R29) for the same scenes under the same conditions. The dataset provides benchmark content for various research and development activities on plenoptic images.

  • 6.
    Ahmad, Waqas
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Palmieri, Luca
    University of Padova, Italy.
    Koch, Reinhard
    Christian-Albrechts-University of Kiel, Germany.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    The Plenoptic Dataset. 2018. Data set
    Abstract [en]

    The dataset was captured using two different plenoptic cameras: the Illum from Lytro (based on the plenoptic 1.0 model) and the R29 from Raytrix (based on the plenoptic 2.0 model). The scenes selected for the dataset were captured under controlled conditions. The cameras were mounted onto a multi-camera rig that was mechanically controlled to move the cameras with millimeter precision. In this way, both cameras captured the scene from the same viewpoint.

  • 7.
    Ahmad, Waqas
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Olsson, Roger
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Compression scheme for sparsely sampled light field data based on pseudo multi-view sequences. 2018. In: Optics, Photonics, and Digital Technologies for Imaging Applications V, Proceedings of SPIE - The International Society for Optical Engineering, SPIE, 2018, Vol. 10679, article id 106790M. Conference paper (Refereed)
    Abstract [en]

    With the advent of light field acquisition technologies, the captured information of a scene is enriched by containing both angular and spatial information. This additional information provides new capabilities in the post-processing stage, e.g. refocusing, 3D scene reconstruction, and synthetic aperture. Light field capturing devices fall into two categories: in the first, a single plenoptic camera captures a densely sampled light field; in the second, multiple traditional cameras capture a sparsely sampled light field. In both cases, the size of the captured data increases with the additional angular information. The recent call for proposals by JPEG on the compression of light field data, also called "JPEG Pleno", reflects the need for a new and efficient light field compression solution. In this paper, we propose a compression solution for sparsely sampled light field data. In a multi-camera system, each view depicts the scene from a single perspective. We propose to interpret each view as a frame of a pseudo video sequence. In this way, the complete MxN views of a multi-camera system are treated as M pseudo video sequences, where each pseudo video sequence contains N frames. The central pseudo video sequence is taken as the base view, and the first frame in every pseudo video sequence is taken as the base picture order count (POC). The frame contained in the base view at the base POC is labeled the base frame. The remaining frames are divided into three predictor levels. Frames in each successive level can take prediction from previously encoded frames; however, frames assigned the last prediction level are not used for the prediction of other frames. Moreover, the rate allocation for each frame takes into account its predictor level, its frame distance, and its view-wise decoding distance relative to the base frame.
    The multi-view extension of high efficiency video coding (MV-HEVC) is used to compress the pseudo multi-view sequences. MV-HEVC enables frames to take prediction in both directions (horizontal and vertical), and MV-HEVC parameters are used to implement the proposed 2D prediction and rate-allocation scheme. A subset of four light field images from the Stanford dataset is compressed using the proposed scheme at four bitrates, covering the low- to high-bitrate scenarios. The comparison is made with the state-of-the-art reference encoder HEVC and its real-time implementation x265. The 17x17 grid is converted into a single pseudo sequence of 289 frames, following the order described in the JPEG Pleno call for proposals, and given as input to both reference schemes. The rate-distortion analysis shows that the proposed compression scheme outperforms both reference schemes in all tested bitrate scenarios for all test images. The average BD-PSNR gain is 1.36 dB over HEVC and 2.15 dB over x265.
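
    The frame organization described in this abstract, an MxN view grid mapped to M pseudo video sequences with a base frame and three predictor levels, can be sketched as follows. The distance measure and the level thresholds are illustrative assumptions, not the paper's exact rule.

    ```python
    # Sketch: assign each (view, poc) frame of an M-sequence grid a
    # predictor level. Level 0 is the single base frame; levels 1-2 may
    # be referenced by later frames; level 3 frames reference others
    # but are never themselves used for prediction.

    def predictor_level(view, poc, M):
        """Hypothetical level assignment by Manhattan distance from the
        base frame (central view, POC 0); thresholds are illustrative."""
        base_view, base_poc = M // 2, 0
        if view == base_view and poc == base_poc:
            return 0                      # the base frame
        d = abs(view - base_view) + abs(poc - base_poc)
        if d == 1:
            return 1
        if d == 2:
            return 2
        return 3                          # last level: not referenced

    assert predictor_level(2, 0, 5) == 0  # central view, base POC
    assert predictor_level(2, 1, 5) == 1  # one step away
    ```

    The rate allocation would then weight each frame's QP by this level together with its frame and view distances, as the abstract describes.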

  • 8.
    Ahmad, Waqas
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Vagharshakyan, Suren
    Tampere University of Technology, Finland.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Gotchev, Atanas
    Tampere University of Technology, Finland.
    Bregovic, Robert
    Tampere University of Technology, Finland.
    Olsson, Roger
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Shearlet Transform Based Prediction Scheme for Light Field Compression. 2018. Conference paper (Refereed)
    Abstract [en]

    Light field acquisition technologies capture angular and spatial information of a scene. The spatial and angular information enables various post-processing applications, e.g. 3D scene reconstruction, refocusing, and synthetic aperture, at the expense of an increased data size. In this paper, we present a novel prediction tool for the compression of light field data acquired with a multiple-camera system. The captured light field (LF) can be described using the two-plane parametrization L(u, v, s, t), where (u, v) are the image-plane coordinates of each view and (s, t) are the coordinates of the capturing plane. In the proposed scheme, the captured LF is uniformly decimated by a factor d in both directions (in the s and t coordinates), resulting in a sparse set of views, also referred to as key views. The key views are converted into a pseudo video sequence and compressed using high efficiency video coding (HEVC). The shearlet-transform-based reconstruction approach presented in [1] is used at the decoder side to predict the decimated views with the help of the key views. Four LF images (Truck and Bunny from the Stanford dataset, Set2 and Set9 from the High Density Camera Array dataset) are used in the experiments. As an anchor, the input LF views are converted into a pseudo video sequence and compressed with HEVC. Rate-distortion analysis shows an average PSNR gain of 0.98 dB over the anchor scheme. Moreover, at low bitrates the compression efficiency of the proposed scheme is higher than that of the anchor, while at high bitrates the anchor performs better. The different compression responses of the proposed and anchor schemes are a consequence of how they utilize the input information. In the high-bitrate scenario, high-quality residual information enables the anchor to achieve efficient compression. In contrast, the shearlet transform relies on the key views to predict the decimated views without incorporating residual information; hence, it has an inherent reconstruction error. In the low-bitrate scenario, the bit budget of the proposed compression scheme allows the encoder to achieve high quality for the key views, whereas the HEVC anchor scheme distributes the same bit budget among all the input LF views, which degrades the overall visual quality. The sensitivity of the human visual system to compression artifacts in low-bitrate cases favours the proposed compression scheme over the anchor scheme.
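
    The decimation step that produces the key views can be sketched as follows. The 17x17 grid and the factor d = 4 are illustrative assumptions (the abstract only states that the LF is decimated by a factor d in both s and t).

    ```python
    # Sketch: uniformly decimate an S x T view grid by a factor d in
    # both directions; the kept (s, t) positions are the key views that
    # HEVC encodes, and the dropped views are later predicted from them
    # by the shearlet-based reconstruction (not shown here).

    def key_view_grid(S, T, d):
        """Return the (s, t) indices of the key views for an S x T grid
        uniformly decimated by a factor d in both directions."""
        return [(s, t) for s in range(0, S, d) for t in range(0, T, d)]

    keys = key_view_grid(17, 17, 4)
    assert (0, 0) in keys and (16, 16) in keys
    assert len(keys) == 25            # 5 x 5 key views from 17 x 17
    ```

    Only these 25 of the 289 views would enter the pseudo video sequence; the remaining 264 are reconstructed at the decoder.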

  • 9. Blas, A.
    et al.
    Hancock, S.
    Koscielniak, S.
    Lindroos, M.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Evaluation of vector signal analyzer for beam transfer function measurements in PS Booster. 1999. Report (Other scientific)
  • 10.
    Boström, Lena
    et al.
    Mid Sweden University, Faculty of Human Sciences, Department of Education.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Karlsson, Håkan
    Mid Sweden University, Faculty of Human Sciences, Department of Education.
    Sundgren, Marcus
    Mid Sweden University, Faculty of Human Sciences, Department of Education.
    Andersson, Mattias
    Olsson, Roger
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Åhlander, Jimmy
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Digital visualisering i skolan: Mittuniversitetets slutrapport från förstudien [Digital visualisation in school: Mid Sweden University's final report from the pilot study]. 2018. Report (Other academic)
    Abstract [sv]

    The aim of this study has been twofold: to test alternative learning methods via a digital teaching aid in mathematics in a quasi-experimental study, and to apply user-experience methods to interactive visualisations, thereby increasing knowledge of how perceived quality depends on the technology used. The pilot study also focuses on several pressing areas within school development, both regionally and nationally, as well as on important aspects of the link between technology, pedagogy, and evaluation methods within "the technical part". The former concerns declining mathematics results in schools, practice-based school research, strengthened digital competence, visualisation and learning, and research on visualisation and evaluation. The latter answers questions about which technical solutions have previously been used and for what purpose they were created, and how visualisations have been evaluated in textbooks and in the research literature.

    Regarding the pupils' results, one of the major research questions of the study, we found no significant differences between traditional teaching and teaching with the visualisation teaching aid (3D). Concerning pupils' attitudes to the mathematics unit, the attitude in the year 6 control group improved significantly, but not in year 8. Regarding girls' and boys' results and attitudes, the girls in both classes had better prior knowledge than the boys, and in year 6 the girls in the control group were more positive towards the mathematics unit than the boys. Beyond that, we can discern no significant differences. Other important findings were that the test design was not optimal and that the time of day at which the test was taken mattered greatly. The results of the qualitative analysis point to positive attitudes and behaviours among the pupils when working with the visual teaching aid. The pupils' collaboration and communication improved during the lessons. Furthermore, the teachers noted that the 3D teaching aid offered greater opportunities to stimulate several senses during the learning process. A clear conclusion is that the 3D teaching aid is an important complement to teaching, but cannot be used entirely on its own.

    We can neither join the researchers who consider 3D visualisation superior as a teaching aid for pupils' results, nor those who warn of its effects on pupils' cognitive overload. Our results are more in line with the conclusions drawn by Skolforskningsinstitutet (2017), namely that teaching with digital teaching aids in mathematics can have positive effects, but that equally effective teaching could possibly be designed in other ways. However, the results of our study point to a number of disturbances that may have affected the possible results, and to the need for good technology and well-developed software.

    In the study we analysed the results using two overarching frameworks for integrating technology support in learning, SAMR and TPACK. The former framework contributed a taxonomy for discussing how well the possibilities of the technology were exploited by the teaching aids and in the learning activities; the latter supported a discussion of the didactic questions with a focus on the role of technology. Both aspects are highly topical given the increasing digitalisation of schools.

    Based on previous research and this pilot study, we understand that it is important to design the research methods carefully. Randomisation of groups would be desirable. Performance measures can also be difficult to choose. Tests in which people evaluate usability and user experience (UX), based on both qualitative and quantitative methods, are important for the actual use of the technology, but further evaluations are needed to link the technology and the visualisation to the quality of learning and teaching. Several methods are thus needed, and collaboration between different subjects and disciplines becomes important.

  • 11.
    Brunnström, Kjell
    et al.
    RISE Research Institute of Sweden AB.
    Dima, Elijs
    Andersson, Mattias
    Sjöström, Mårten
    Quresh, Tahir
    HIAB.
    Johanson, Mathias
    Alkit Communications AB.
    Quality of Experience of hand controller latency in a Virtual Reality simulator. 2019. In: Human Vision and Electronic Imaging 2019 / [ed] Damon Chandler, Mark McCourt and Jeffrey Mulligan, Springfield, VA, United States, 2019, article id 3068450. Conference paper (Refereed)
    Abstract [en]

    In this study, we investigate a VR simulator of a forestry crane used for loading logs onto a truck, mainly looking at Quality of Experience (QoE) aspects that may be relevant for task completion, but also at whether any discomfort-related symptoms are experienced during task execution. A QoE test was designed to capture both the general subjective experience of using the simulator and task performance. Moreover, a specific focus was to study the effects of latency on the subjective experience with regard to delays in the crane control interface. A formal subjective study was performed in which we added controlled delays, ranging from 0 ms to 800 ms, to the hand controller (joystick) signals. We found no significant effects on task performance on any scale for delays up to 200 ms; a significant negative effect was found for an added delay of 800 ms. The symptoms reported in the Simulator Sickness Questionnaire (SSQ) were significantly higher for all symptom groups, but a majority of the participants reported only slight symptoms. Two of the thirty test persons stopped the test before finishing due to their symptoms.

  • 12.
    Brunnström, Kjell
    et al.
    Acreo AB, Kista, Sweden.
    Sedano, Iñigo
    Tecnalia Research & Innovation, Bilbao, Spain.
    Wang, Kun
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Barkowsky, Markus
    IRCCyN, Nantes; France.
    Kihl, Maria
    Lund University.
    Andrén, Börje
    Acreo AB, Kista, Sweden.
    Le Callet, Patrick
    IRCCyN, Nantes; France.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Aurelius, Andreas
    Acreo AB, Kista, Sweden.
    2D no-reference video quality model development and 3D video transmission quality. 2012. In: Proceedings of the Sixth International Workshop on Video Processing and Quality Metrics for Consumer Electronics VPQM-2012, 2012. Conference paper (Other academic)
    Abstract [en]

    This presentation targets two topics in video quality assessment: 2D no-reference video quality model development, and finding suitable quality for 3D video transmission. No-reference metrics are the only practical option for monitoring 2D video quality in live networks. To decrease development time, it might be possible to use full-reference metrics for this purpose. In this work, we evaluated six full-reference objective metrics on three different databases and show statistically that VQM performs best. We then use these results to develop a lightweight no-reference model. We also investigated users' experience of stereoscopic 3D video quality by rating two subjective assessment datasets: one targeting efficient transmission in the error-free case, the other targeting error concealment. Among other results, it was shown that, for the same level of quality of experience, spatial down-sampling may lead to better bitrate efficiency while temporal down-sampling performs worse. When network impairments occur, traditional 2D error concealment methods need to be reinvestigated, as they were outperformed by switching to 2D presentation.

  • 13.
    Brunnström, Kjell
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology. RISE Acreo AB.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Imran, Muhammad
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design. HIAB AB.
    Pettersson, Magnus
    HIAB AB.
    Johanson, Mathias
    Alkit Communications AB, Mölndal.
    Quality of Experience for a Virtual Reality Simulator. 2018. In: IS&T International Symposium on Electronic Imaging Science and Technology 2018, 2018. Conference paper (Refereed)
    Abstract [en]

    In this study, we investigate a VR simulator of a forestry crane used for loading logs onto a truck, mainly looking at Quality of Experience (QoE) aspects that may be relevant for task completion, but also at whether any discomfort-related symptoms are experienced during task execution. The QoE test was designed to capture both the general subjective experience of using the simulator and the task completion rate. Moreover, a specific focus was to study the effects of latency on the subjective experience, with regard both to delays in the crane control interface and to lag in the visual scene rendering in the head-mounted display (HMD). Two larger formal subjective studies were performed: one with the VR system as it is, and one where we added controlled delay to the display update and to the joystick signals. The baseline study shows that most people are more or less happy with the VR system and that it does not have strong effects on any of the symptoms listed in the SSQ. In the delay study we found significant effects on Comfort Quality and Immersion Quality for the higher display delay (30 ms), but very small impact of joystick delay. Furthermore, the display delay had a strong influence on the symptoms in the SSQ, and caused test subjects to decide not to continue with the complete experiments; this was also found to be connected to the longer display delays (≥ 20 ms).

  • 14.
    Conti, Caroline
    et al.
    University of Lisbon, Portugal.
    Soares, Luis Ducla
    University of Lisbon, Portugal.
    Nunes, Paulo
    University of Lisbon, Portugal.
    Perra, Cristian
    University of Cagliari, Italy.
    Assunção, Pedro Amado
    Institute de Telecomunicacoes and Politecenico de Leiria, Portugal.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Li, Yun
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Olsson, Roger
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Jennehag, Ulf
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Light Field Image Compression. 2018. In: 3D Visual Content Creation, Coding and Delivery / [ed] Assunção, Pedro Amado, Gotchev, Atanas, Cham: Springer, 2018, p. 143-176. Chapter in book (Refereed)
  • 15.
    Damghanian, Mitra
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Olsson, Roger
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Depth and Angular Resolution in Plenoptic Cameras. 2015. In: 2015 IEEE International Conference on Image Processing (ICIP), September 2015, IEEE, 2015, p. 3044-3048, article id 7351362. Conference paper (Refereed)
    Abstract [en]

    We present a model-based approach to extract the depth and angular resolution of a plenoptic camera. The obtained results for depth and angular resolution are validated against Zemax ray-tracing results. The model-based approach gives the location and number of resolvable depth planes in a plenoptic camera, as well as the angular resolution in terms of disparity in pixels. The approach is straightforward compared to practical measurements and, in contrast with the principal-ray-model approach, can account for plenoptic camera parameters such as the microlens f-number. Easy and accurate quantification of the different resolution terms forms the basis for designing the capturing setup and choosing a reasonable system configuration for plenoptic cameras. Results from this work will accelerate the customization of plenoptic cameras for particular applications without the need for expensive measurements.

  • 16.
    Damghanian, Mitra
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Olsson, Roger
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Extraction of the lateral resolution in a plenoptic camera using the SPC model2012In: 2012 International Conference on 3D Imaging, IC3D 2012 - Proceedings, IEEE conference proceedings, 2012, article id 6615137Conference paper (Refereed)
    Abstract [en]

    Established capturing properties like image resolution need to be described thoroughly in complex multidimensional capturing setups such as plenoptic cameras (PC), as these introduce a trade-off between resolution and features such as field of view, depth of field, and signal to noise ratio. Models, methods and metrics that assist exploring and formulating this trade-off are highly beneficial for study as well as design of complex capturing systems. This work presents how the important high-level property lateral resolution is extracted from our previously proposed Sampling Pattern Cube (SPC) model. The SPC carries ray information as well as focal properties of the capturing system it models. The proposed operator extracts the lateral resolution from the SPC model throughout an arbitrary number of depth planes, resulting in a depth-resolution profile. We have validated the resolution operator by comparing the achieved lateral resolution with previous results from simpler models and from wave optics based Monte Carlo simulations. The lateral resolution predicted by the SPC model agrees with the results from wave optics based numerical simulations and strengthens the conclusion that the SPC fills the gap between ray-based models and wave optics based models, by including the focal information of the system as a model parameter. The SPC is proven a simple yet efficient model for extracting the depth-based lateral resolution as a high-level property of complex plenoptic capturing systems.

  • 17.
    Damghanian, Mitra
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Olsson, Roger
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Performance analysis in Lytro camera: Empirical and model based approaches to assess refocusing quality2014In: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings, IEEE conference proceedings, 2014, p. 559-563Conference paper (Refereed)
    Abstract [en]

    In this paper we investigate the performance of the Lytro camera in terms of its refocusing quality. The refocusing quality of the camera is related to the spatial resolution and the depth of field as the contributing parameters. We quantify the spatial resolution profile as a function of depth using empirical and model based approaches. The depth of field is then determined by thresholding the spatial resolution profile. In the model based approach, the previously proposed sampling pattern cube (SPC) model for representation and evaluation of plenoptic capturing systems is utilized. For the experimental resolution measurements, camera evaluation results are extracted from images rendered by the Lytro full reconstruction rendering method. Results from both the empirical and model based approaches assess the refocusing quality of the Lytro camera consistently, highlighting the usability of model based approaches for performance analysis of complex capturing systems.

  • 18.
    Damghanian, Mitra
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Olsson, Roger
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    The Sampling Pattern Cube: A Representation and Evaluation Tool for Optical Capturing Systems2012In: Advanced Concepts for Intelligent Vision Systems / [ed] Blanc-Talon, Jacques, Philips, Wilfried, Popescu, Dan, Scheunders, Paul, Zemcík, Pavel, Berlin/Heidelberg: Springer, 2012, p. 120-131Conference paper (Refereed)
    Abstract [en]

    Knowledge about how the light field is sampled through a camera system gives the required information to investigate interesting camera parameters. We introduce a simple and handy model to look into the sampling behavior of a camera system. We have applied this model to a single lens system as well as to plenoptic cameras. We have investigated how camera parameters of interest are interpreted in our proposed model-based representation. This model also enables us to make comparisons between capturing systems or to investigate how variations in an optical capturing system affect its sampling behavior.

  • 19.
    Damghanian, Mitra
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Olsson, Roger
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Erdmann, Arne
    Raytrix Gmbh.
    Perwass, Christian
    Raytrix Gmbh.
    Spatial resolution in a multi-focus plenoptic camera2014In: IEEE International Conference on Image Processing, ICIP 2014, IEEE conference proceedings, 2014, p. 1932-1936, article id 7025387Conference paper (Refereed)
    Abstract [en]

    Evaluation of state-of-the-art plenoptic cameras is necessary for design and application purposes. In this work, spatial resolution is investigated in a multi-focus plenoptic camera using two approaches: empirical and model-based. The Raytrix R29 plenoptic camera is studied, which utilizes three types of micro lenses with different focal lengths in a hexagonal array structure to increase the depth of field. The model-based approach utilizes the previously proposed sampling pattern cube (SPC) model for representation and evaluation of plenoptic capturing systems. For the experimental resolution measurements, spatial resolution values are extracted from images reconstructed by the provided Raytrix reconstruction method. Both the measurement and the SPC model based approaches demonstrate a gradual variation of the resolution values in a wide depth range for the multi-focus R29 camera. Moreover, the good agreement between the results from the model-based approach and those from the empirical approach confirms the suitability of the SPC model in evaluating high-level camera parameters such as the spatial resolution in a complex capturing system such as the R29 multi-focus plenoptic camera.

  • 20.
    Damghanian, Mitra
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Olsson, Roger
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Navarro Fructuoso, Hector
    Department of Optics, University of Valencia, Spain.
    Martinez Corral, Manuel
    Department of Optics, University of Valencia, Spain.
    Investigating the lateral resolution in a plenoptic capturing system using the SPC model2013In: Proceedings of SPIE - The International Society for Optical Engineering: Digital photography IX, SPIE - International Society for Optical Engineering, 2013, article id 86600TConference paper (Refereed)
    Abstract [en]

    Complex multidimensional capturing setups such as plenoptic cameras (PC) introduce a trade-off between various system properties. Consequently, established capturing properties, like image resolution, need to be described thoroughly for these systems. Therefore models and metrics that assist exploring and formulating this trade-off are highly beneficial for studying as well as designing complex capturing systems. This work demonstrates the capability of our previously proposed sampling pattern cube (SPC) model to extract the lateral resolution for plenoptic capturing systems. The SPC carries both ray information as well as focal properties of the capturing system it models. The proposed operator extracts the lateral resolution from the SPC model throughout an arbitrary number of depth planes giving a depth-resolution profile. This operator utilizes focal properties of the capturing system as well as the geometrical distribution of the light containers which are the elements in the SPC model. We have validated the lateral resolution operator for different capturing setups by comparing the results with those from Monte Carlo numerical simulations based on the wave optics model. The lateral resolution predicted by the SPC model agrees with the results from the more complex wave optics model better than both the ray-based model and our previously proposed lateral resolution operator. This agreement strengthens the conclusion that the SPC fills the gap between ray-based models and the real system performance, by including the focal information of the system as a model parameter. The SPC is proven a simple yet efficient model for extracting the lateral resolution as a high-level property of complex plenoptic capturing systems.

  • 21.
    Dima, Elijs
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Brunnström, Kjell
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology. RISE Research Institutes of Sweden, Division ICT - Acreo.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Andersson, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Edlund, Joakim
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Johanson, Mathias
    Alkit Communications AB.
    Qureshi, Tahir
    HIAB AB.
    View Position Impact on QoE in an Immersive Telepresence System for Remote Operation2019In: 2019 Eleventh International Conference on Quality of Multimedia Experience (QoMEX), IEEE, 2019, p. 1-3Conference paper (Refereed)
    Abstract [en]

    In this paper, we investigate how different viewing positions affect a user's Quality of Experience (QoE) and performance in an immersive telepresence system. A QoE experiment has been conducted with 27 participants to assess the general subjective experience and the performance of remotely operating a toy excavator. Two view positions have been tested, an overhead and a ground-level view, which encourage reliance on stereoscopic depth cues to different extents for accurate operation. Results demonstrate a significant difference between ground and overhead views: the ground view increased the perceived difficulty of the task, whereas the overhead view increased the perceived accomplishment as well as the objective performance of the task. The perceived helpfulness of the overhead view was also significant according to the participants.

  • 22.
    Dima, Elijs
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Gao, Yuan
    Institute of Computer Science, Christian-Albrechts University of Kiel, Germany.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Olsson, Roger
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Koch, Reinhard
    Institute of Computer Science, Christian-Albrechts University of Kiel, Germany.
    Esquivel, Sandro
    Institute of Computer Science, Christian-Albrechts University of Kiel, Germany.
    Estimation and Post-Capture Compensation of Synchronization Error in Unsynchronized Multi-Camera SystemsManuscript (preprint) (Other academic)
  • 23.
    Dima, Elijs
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Olsson, Roger
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Assessment of Multi-Camera Calibration Algorithms for Two-Dimensional Camera Arrays Relative to Ground Truth Position and Direction2016In: 3DTV-Conference, IEEE Computer Society, 2016, article id 7548887Conference paper (Refereed)
    Abstract [en]

    Camera calibration methods are commonly evaluated on cumulative reprojection error metrics, on disparate one-dimensional datasets. To evaluate calibration of cameras in two-dimensional arrays, assessments need to be made on two-dimensional datasets with constraints on camera parameters. In this study, accuracy of several multi-camera calibration methods has been evaluated on camera parameters that are affecting view projection the most. As input data, we used a 15-viewpoint two-dimensional dataset with intrinsic and extrinsic parameter constraints and extrinsic ground truth. The assessment showed that self-calibration methods using structure-from-motion reach equal intrinsic and extrinsic parameter estimation accuracy with the standard checkerboard calibration algorithm, and surpass a well-known self-calibration toolbox, BlueCCal. These results show that self-calibration is a viable approach to calibrating two-dimensional camera arrays, but improvements to state-of-the-art multi-camera feature matching are necessary to make BlueCCal as accurate as other self-calibration methods for two-dimensional camera arrays.
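    Comparing estimated extrinsics against ground truth, as in the assessment above, is commonly reduced to a rotation-angle error and a camera-position error. The sketch below shows these two standard metrics; they are a plausible choice, not necessarily the exact metrics used in the paper.

    ```python
    import numpy as np

    def extrinsic_errors(R_est, t_est, R_gt, t_gt):
        """Compare estimated camera extrinsics against ground truth.

        Rotation error: angle of the residual rotation R_est @ R_gt.T,
        recovered from its trace. Translation error: Euclidean distance
        between the estimated and ground-truth positions.
        """
        R_delta = R_est @ R_gt.T
        # trace(R) = 1 + 2*cos(angle) for a rotation matrix; clip for safety.
        cos_angle = np.clip((np.trace(R_delta) - 1.0) / 2.0, -1.0, 1.0)
        return np.degrees(np.arccos(cos_angle)), np.linalg.norm(t_est - t_gt)
    ```

    A 90-degree rotation about the z-axis against an identity ground truth, for instance, yields a 90-degree rotation error.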

  • 24.
    Dima, Elijs
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Olsson, Roger
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Modeling Depth Uncertainty of Desynchronized Multi-Camera Systems2017In: 2017 International Conference on 3D Immersion (IC3D), IEEE, 2017Conference paper (Refereed)
    Abstract [en]

    Accurately recording motion from multiple perspectives is relevant for recording and processing immersive multi-media and virtual reality content. However, synchronization errors between multiple cameras limit the precision of scene depth reconstruction and rendering. In order to quantify this limit, a relation between camera de-synchronization, camera parameters, and scene element motion has to be identified. In this paper, a parametric ray model describing depth uncertainty is derived and adapted for the pinhole camera model. A two-camera scenario is simulated to investigate the model behavior and how camera synchronization delay, scene element speed, and camera positions affect the system's depth uncertainty. Results reveal a linear relation between synchronization error, element speed, and depth uncertainty. View convergence is shown to affect mean depth uncertainty up to a factor of 10. Results also show that depth uncertainty must be assessed on the full set of camera rays instead of a central subset.
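    The linear relation reported above can be reproduced with a minimal pinhole-stereo sketch (an illustrative simplification, not the paper's full parametric ray model): a lateral motion of v·dt during the synchronization delay perturbs the disparity seen by the delayed camera, and the resulting depth error grows linearly with both v and dt. All symbol names are assumptions for illustration.

    ```python
    def depth_uncertainty(z, f, b, v, dt):
        """Depth uncertainty in a two-camera rig when one camera fires dt
        seconds late and the scene point moves laterally at speed v.

        z: true depth (m), f: focal length (px), b: baseline (m).
        """
        dx = v * dt                # lateral motion during the sync error
        d_err = f * (b + dx) / z   # disparity seen with the delayed exposure
        z_est = f * b / d_err      # depth recovered from corrupted disparity
        return abs(z - z_est)
    ```

    Doubling either the speed or the delay roughly doubles the uncertainty, matching the linear relation observed in the paper's simulation.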

  • 25.
    Dima, Elijs
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Olsson, Roger
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Kjellqvist, Martin
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Litwic, Lukasz
    Ericsson AB.
    Zhang, Zhi
    Ericsson AB.
    Rasmusson, Lennart
    Observit AB.
    Flodén, Lars
    Observit AB.
    LIFE: A Flexible Testbed For Light Field Evaluation2018Conference paper (Refereed)
    Abstract [en]

    Recording and imaging the 3D world has led to the use of light fields. Capturing, distributing and presenting light field data is challenging, and requires an evaluation platform. We define a framework for real-time processing, and present the design and implementation of a light field evaluation system. In order to serve as a testbed, the system is designed to be flexible, scalable, and able to model various end-to-end light field systems. This flexibility is achieved by encapsulating processes and devices in discrete framework systems. The modular capture system supports multiple camera types, general-purpose data processing, and streaming to network interfaces. The cloud system allows for parallel transcoding and distribution of streams. The presentation system encapsulates rendering and display specifics. The real-time ability was tested in a latency measurement; the capture and presentation systems process and stream frames within a 40 ms limit.

  • 26. Djukic, D.
    et al.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Dutoit, B.
    Preisach-type hysteresis modelling in Bi-2223 tapes1997In: Applied Superconductivity 1997.: Proceedings of EUCAS 1997 Third European Conference on Applied Superconductivity, Vol. 2, 1997, p. 1409-1412Conference paper (Other scientific)
  • 27.
    Domanski, Marek
    et al.
    Poznan University, Poland.
    Grajek, Tomasz
    Poznan University, Poland.
    Conti, Caroline
    University of Lisbon, Portugal.
    Debono, Carl James
    University of Malta, Malta.
    de Faria, Sérgio M. M.
    Institute de Telecomunicacôes and Politecico de Leiria, Portugal.
    Kovacs, Peter
    Holografika, Budapest, Hungary.
    Lucas, Luis F.R.
    Institute de Telecomunicacôes and Politecico de Leiria, Portugal.
    Nunes, Paulo
    University of Lisbon, Portugal.
    Perra, Cristian
    University of Cagliari, Italy.
    Rodrigues, Nuno M.M.
    Institute de Telecomunicacôes and Politecico de Leiria, Portugal.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Soares, Luis Ducla
    University of Lisbon, Portugal.
    Stankiewicz, Olgierd
    Poznan university, Poland.
    Emerging Imaging Technologies: Trends and Challenges2018In: 3D Visual Content Creation, Coding and Delivery / [ed] Assunção, Pedro Amado, Gotchev, Atanas, Cham: Springer, 2018, p. 5-39Chapter in book (Refereed)
  • 28. Dutoit, B.
    et al.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Stavrev, S.
    Bi(2223) Ag Sheathed Tape Ic and Exponent n Characterization and Modelling under DC Applied Magnetic Field1999In: IEEE Transactions on Applied Superconductivity, ISSN 1051-8223, Vol. 9, no 2, p. 809-812Article in journal (Refereed)
  • 29. Dutoit, B.
    et al.
    Stavrev, S.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Bi(2223) Ag sheathed tape characterisation under DC applied magnetic field1998In: Proceedings of the Seventeenth International Cryogenic Engineering Conference: ICEC 17, Bristol, UK: IOP Publishing , 1998, p. 419-422Conference paper (Other scientific)
  • 30.
    Eriksson, Magnus
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Rahman, S. M. Hasibur
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Fraile, Francisco
    Universitat Politècnica de València, Spain .
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Efficient Interactive Multicast over DVB-T2: Utilizing Dynamic SFNs and PARPS2013In: IEEE International Symposium on Broadband Multimedia Systems and Broadcasting, BMSB, IEEE conference proceedings, 2013, article id 6621700Conference paper (Refereed)
    Abstract [en]

    In the terrestrial digital TV systems DVB-T/H/T2, broadcasting is employed, meaning that all TV programs are sent over all transmitters, also where there are no viewers. This is an inefficient utilization of spectrum and transmitter equipment. Applying interactive multicasting over DVB-T2 is a novel approach that would substantially reduce the spectrum required to deliver a certain amount of TV programs. Further gain would be achieved by dynamic single-frequency network (DSFN) formations, which can be implemented using the concept of PARPS (Packet and Resource Plan Scheduling). A Zipf-law heterogeneous program selection model is suggested. For a system of four coordinated transmitters, and under certain assumptions, IP multicasting over non-continuous transmission DSFN gives a 1740% increase in multiuser system spectral efficiency (MSSE) in (users∙bit/s)/Hz/site as compared to broadcasting over SFN.
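    The Zipf-law program selection model mentioned above can be sketched as follows (the exponent `s` is an illustrative assumption): the k-th most popular TV program is requested with probability proportional to 1/k^s.

    ```python
    import numpy as np

    def zipf_program_probabilities(n_programs, s=1.0):
        """Zipf-law popularity: probability of the k-th ranked program
        is (1/k^s) normalized over all n_programs ranks."""
        ranks = np.arange(1, n_programs + 1)
        weights = 1.0 / ranks ** s
        return weights / weights.sum()
    ```

    With s = 1, the most popular program is requested exactly twice as often as the second most popular, which is the classic Zipf behaviour such selection models assume.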

  • 31. Grilli, F.
    et al.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Prediction of resistive and hysteretic losses in a multi-layer high-Tc superconducting cable2004In: Superconductors Science and Technology, ISSN 0953-2048, E-ISSN 1361-6668, Vol. 17, no 3, p. 409-416Article in journal (Refereed)
    Abstract [en]

    In this work, a model of a multi-layer high-Tc superconducting (HTS) cable that computes the current distribution across layers as well as the AC loss is presented. The case of a four-layer cable is analyzed, but the developed method can be applied to a cable with an arbitrary number of layers. The cable is modelled by an equivalent circuit consisting of the following elements: nonlinear resistances, linear self and mutual inductances, as well as nonlinear, hysteretic inductances. The first take into account the typical current-voltage relation for superconductors, the second introduce coupling among the layers and depend on the geometrical parameters of the cable, and the third describe the hysteretic behaviour of superconductors. In the presented analysis, the geometrical dimensions of the cable are fixed, except for the pitch length and the winding orientation of the layers. These free parameters are varied in order to partition the current across the layers such that the AC loss in the superconductor is minimized. The presented model allows rapid evaluation of the current distribution across the different layers and computation of the corresponding AC loss. The rapidity of the computation allows calculating the losses for many different configurations within a reasonable time. The model has thus first been used for finding the pitch lengths giving an optimal current distribution across the layers and for computing the corresponding AC loss. Secondly, the model has been refined by taking into account the effects of the magnetic self-field, which, especially at high currents, can appreciably reduce the transport capacity of the cable, in particular in the outer layers.

  • 32.
    Jaldemark, Jimmy
    et al.
    Mid Sweden University, Faculty of Human Sciences, Institution of education.
    Anderson, Karen
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Lindberg, J. Ola
    Mid Sweden University, Faculty of Human Sciences, Institution of education.
    Persson Slumpi, Thomas
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Sefyrin, Johanna
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Snyder, Kristen
    Mid Sweden University, Faculty of Human Sciences, Institution of education.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Slutrapport delprojekt 3.5.1 Forskning och forskarskolan i e-lärande [Final report, subproject 3.5.1: Research and the research school in e-learning]2011Other (Other (popular science, discussion, etc.))
  • 33.
    Karlsson, Linda
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Olsson, Roger
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Temporal filter with bilinear interpolation for ROI video coding2006Report (Other academic)
    Abstract [en]

    In videoconferencing and video over the mobile phone, the main visual information is found within limited regions of the video. This enables improved perceived quality by region-of-interest coding. In this paper we introduce a temporal preprocessing filter that reuses values of the previous frame, by which changes in the background are only allowed for every second frame. This reduces the bit rate by 10-25% or gives an increase in average PSNR of 0.29-0.98 dB. Further processing of the video sequence is necessary for an improved re-allocation of the resources. Motion of the ROI causes absence of necessary background data at the ROI border. We conceal this by using a bilinear interpolation between the current and previous frame at the transition from background to ROI. This results in an improvement in average PSNR of 0.44-1.05 dB in the transition area with a minor decrease in average PSNR within the ROI.
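    A minimal sketch of the idea described above, assuming a binary ROI mask and a normalized border weight map as hypothetical inputs (the authors' actual filter operates in a codec-independent preprocessing chain): background pixels are refreshed only every second frame, and the border band is blended linearly between the previous and current frames.

    ```python
    import numpy as np

    def temporal_roi_filter(prev, curr, roi_mask, frame_idx, band):
        """Temporal ROI pre-filter sketch.

        prev, curr: previous and current frames (2D arrays).
        roi_mask: boolean mask, True inside the ROI.
        band: per-pixel blend weight in [0, 1]; 1 near/inside the ROI,
              falling to 0 in the background.
        """
        out = curr.astype(np.float64)
        if frame_idx % 2 == 1:               # background updates every 2nd frame only
            w = np.clip(band, 0.0, 1.0)
            out = w * curr + (1.0 - w) * prev  # blend across the ROI border band
            out[roi_mask] = curr[roi_mask]     # ROI always takes the new frame
        return out
    ```

    On odd frames the background is held from the previous frame, which is what removes bits from the encoded background; the blend conceals the missing background data at the moving ROI border.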

  • 34.
    Karlsson, Linda
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    A preprocessing approach to ROI Video Coding using Variable Gaussian Filters and Variance in Intensity2005In: Proceedings Elmar - International Symposium Electronics in Marine, Zagreb, Croatia: IEEE conference proceedings, 2005, p. 65-68, article id 1505643Conference paper (Refereed)
    Abstract [en]

    In applications involving video over mobile phones or Internet, the limited quality depending on the transmission rate can be further improved by region-of-interest (ROI) coding. In this paper we present a preprocessing method using variable Gaussian filters controlled by a quality map indicating the distance to the ROI border that seeks to smooth the border effects between ROI and non-ROI. According to subjective tests the reduction of border effects increases the perceived quality, compared to using only one low-pass filter. It also introduces a small improvement of the PSNR of the intensity component within the ROI after compression. With the compressed original sequence as a reference, the average PSNR was increased by 1.25 dB and 2.3 dB for 100 kbit/s and 150 kbit/s, respectively. Furthermore, in order to reduce computational complexity, a modified quality map is introduced using variance in intensity to exclude pixels that are not visibly affected by the Gaussian filters. No change in quality is noticed when using less than 76% of the pixels.
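    The distance-controlled pre-filtering can be sketched as below (a simplification with an assumed normalized distance map `dist_map` in [0, 1], where 0 is inside the ROI): pixels farther from the ROI get a stronger Gaussian blur drawn from a small filter bank, so the ROI/background transition is smoothed instead of abrupt.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def variable_gaussian_prefilter(img, dist_map, sigmas=(0.5, 1.5, 3.0)):
        """Variable Gaussian pre-filter sketch: per-pixel blur strength
        selected from a bank of blurred copies by the distance map."""
        img = img.astype(np.float64)
        # Bank: unfiltered frame plus progressively stronger blurs.
        bank = [img] + [gaussian_filter(img, s) for s in sigmas]
        # Quantize the normalized distance into a bank index per pixel.
        idx = np.clip((dist_map * len(sigmas)).astype(int), 0, len(sigmas))
        out = np.empty_like(img)
        for k in range(len(bank)):
            out[idx == k] = bank[k][idx == k]
        return out
    ```

    A distance map of all zeros leaves the frame untouched (pure ROI), while a map of all ones applies the strongest filter everywhere; real quality maps fall in between, grading the blur across the border.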

  • 35.
    Karlsson, Linda
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Improved ROI Video Coding using Variable Gaussian Pre-Filters and Variance in Intensity2005In: IEEE International Conference on Image Processing 2005, ICIP 2005: Vol. 2, 2005, p. 1817-1820, article id 1530054Conference paper (Refereed)
    Abstract [en]

    In applications involving video over mobile phones or Internet, the limited quality depending on the transmission rate can be further improved by region-of-interest (ROI) coding. In this paper we present a preprocessing method using variable Gaussian filters controlled by a quality map indicating the distance to the ROI border. The border effects are reduced, introducing a small improvement of the PSNR of the intensity component within the ROI after compression, compared to using only one low-pass filter. With the compressed original sequence as a reference, the average PSNR was increased by 1.25 dB and 2.3 dB for 100 kbit/s and 150 kbit/s, respectively. A modified quality map is introduced using variance to exclude pixels that are not visibly affected by the Gaussian filters, reducing computational complexity. Using less than 76% of the pixels gives no noticeable change in quality.

  • 36.
    Karlsson, Linda
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Multiview plus depth scalable coding in the depth domain2009In: 3DTV-CON 2009 - 3rd 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video, Proceedings, IEEE conference proceedings, 2009, article id 5069631Conference paper (Refereed)
    Abstract [en]

    Three dimensional (3D) TV is a growing area that provides an extra dimension at the cost of spatial resolution. The multi-view plus depth representation provides a lower encoded bit rate than multi-view and a higher resolution than a 2D-plus-depth sequence. Scalable video coding provides adaptation to the conditions at the receiver. In this paper we propose a scheme that combines scalability in both the view and depth domains. The center view data is preserved, whereas the data of the side views are extracted in layers depending on distance to the camera. This allows a decrease in bit rate of 16-39% for the colour part of a 3-view MV sequence, depending on the number of pixels in the first enhancement layer, if one layer is extracted. Each additional layer increases the visual quality and PSNR compared to using only center view data.

  • 37.
    Karlsson, Linda
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Region-of-interest 3D video coding based on depth images2008In: 2008 3DTV Conference - True Vision - Capture, Transmission and Display of 3D Video, IEEE conference proceedings, 2008, p. 121-124Conference paper (Refereed)
    Abstract [en]

    Three dimensional (3D) TV is becoming a mature technology due to the progress within areas such as display and network technology. However, 3D video demands a higher bandwidth in order to transmit the information needed to render or directly display several different views at the receiver. The 2D plus depth representation requires less bit rate than most 3D video representations, although the necessary views have to be rendered at the receiver. In this paper we propose to combine the 2D plus depth representation with region-of-interest (ROI) video coding to ensure a higher quality at parts of the sequence that are of interest to the viewer. These include objects close to the viewer as well as faces. This allows either the bit rate to be reduced by 12-28% or the quality within the ROI to be increased by 0.57-1.5 dB, when a fixed bit rate is applied.
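    The depth-based part of the ROI selection can be sketched as follows (the threshold rule and `near_fraction` parameter are assumptions for illustration; the paper's ROI additionally includes detected faces): pixels in the nearest fraction of the depth range are marked as region-of-interest.

    ```python
    import numpy as np

    def depth_roi_mask(depth_map, near_fraction=0.3):
        """Boolean ROI mask from a depth image: True for the pixels
        within the nearest `near_fraction` of the scene's depth range."""
        d_min, d_max = depth_map.min(), depth_map.max()
        threshold = d_min + near_fraction * (d_max - d_min)
        return depth_map <= threshold
    ```

    The resulting mask can then steer the encoder's bit allocation toward the near objects, as the abstract describes.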

  • 38.
    Karlsson, Linda
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Olsson, Roger
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Spatio-Temporal Filter for ROI Video Coding2006In: Proceedings of the 14th European Signal Processing Conference (EUSIPCO 2006) Florence, Italy 4-8.Sept. 2006, 2006Conference paper (Other academic)
    Abstract [en]

    Reallocating resources within a video sequence to the regions of interest increases the perceived quality at limited bandwidths. In this paper we combine a spatial filter with a temporal filter, both of which are codec and standard independent. This spatio-temporal filter removes resources from both the motion vectors and the prediction error, with a computational complexity lower than that of the spatial filter by itself. This decreases the bit rate by 30-50 % compared to coding the original sequence using H.264. The released bits can be used by the codec to increase the PSNR of the ROI by 1.58-4.61 dB, which is larger than for the spatial and temporal filters by themselves.

  • 39.
    Karlsson, Linda Sofia
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    A Spatio-Temporal Filter for Region-of-Interest Video CodingManuscript (preprint) (Other academic)
    Abstract [en]

    Region of interest (ROI) video coding increases the quality in regions interesting to the viewer at the expense of quality in the background. This enables a high perceived quality at low bit rates. A successfully detected ROI can be used to control the bit allocation in the encoding. In this paper we present a filter that is independent of codec and standard. It is applied in both the spatial and the temporal domains. We analyze theoretically the filter's ability to reduce the number of bits necessary to encode the background, and where these bits are re-allocated. The computational complexity of the algorithms is also determined. The quality is evaluated using the PSNR of the ROI and subjective tests. Tests showed that the spatio-temporal filter has a better coding efficiency than using only spatial or only temporal filtering. The filter successfully re-allocates bits from the background to the foreground.
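    As an illustration of the general idea (not the paper's actual filter), the sketch below low-pass filters the background spatially and pulls it towards the previous output frame temporally, leaving the ROI untouched, so that a standard encoder spends fewer bits outside the ROI. The function name, the box-blur kernel, and the 0.5/0.5 temporal blend are all assumptions of this sketch.

```python
import numpy as np

def roi_prefilter(frame, prev_out, roi_mask, k=5):
    """Codec- and standard-independent ROI pre-filter (sketch).

    Background pixels are spatially low-pass filtered and blended with
    the previous output frame, which makes background blocks cheap to
    encode (small prediction error, near-zero motion). ROI pixels pass
    through unchanged."""
    # Separable k x k box blur built from np.roll (wraps at borders).
    blurred = frame.astype(float)
    for axis in (0, 1):
        acc = np.zeros_like(blurred)
        for s in range(-(k // 2), k // 2 + 1):
            acc += np.roll(blurred, s, axis=axis)
        blurred = acc / k
    # Temporal part: pull the background towards the previous output.
    bg = blurred if prev_out is None else 0.5 * blurred + 0.5 * prev_out
    return np.where(roi_mask, frame, bg)
```

    The filter is applied before encoding, so any encoder (H.264 or otherwise) benefits without modification, which is what makes such pre-filtering codec independent.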

  • 40.
    Karlsson, Linda Sofia
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Layer assignment based on depth data distribution for multiview-plus-depth scalable video coding2011In: IEEE transactions on circuits and systems for video technology (Print), ISSN 1051-8215, E-ISSN 1558-2205, Vol. 21, no 6, p. 742-754Article in journal (Refereed)
    Abstract [en]

    Three dimensional (3D) video is experiencing rapid growth in a number of areas, including 3D cinema, 3DTV and mobile phones. Several problems must be addressed to display captured 3D video at another location. One problem is how to represent the data. The multiview plus depth representation of a scene requires a lower bit rate than transmitting all views required by an application and provides more information than a 2D-plus-depth sequence. Another problem is how to handle transmission in a heterogeneous network. Scalable video coding enables adaptation of a 3D video sequence to the conditions at the receiver. In this paper we present a scheme that combines scalability based on the position in depth of the data and the distance to the center view. The general scheme preserves the center view data, whereas the data of the remaining views are extracted in enhancement layers depending on distance to the viewer and the center camera. The data is assigned into enhancement layers within a view based on the depth data distribution. Strategies concerning the layer assignment between adjacent views are proposed. In general, each extracted enhancement layer increases the visual quality and PSNR compared to only using center view data. The bit rate per layer can be further decreased if depth data is distributed over the enhancement layers. The choice of strategy to assign layers between adjacent views depends on whether the quality of the foremost objects in the scene or the quality of the views close to the center is more important.
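    A minimal sketch of assignment into enhancement layers driven by the depth data distribution, assuming quantile thresholds so that each layer holds roughly the same number of pixels (the paper's actual assignment strategies are more elaborate); all names are hypothetical:

```python
import numpy as np

def assign_layers(depth, n_layers=3):
    """Assign each pixel of a side-view depth map to an enhancement
    layer, nearer objects first. Thresholds follow the depth
    distribution via quantiles; layer 0 is closest to the viewer."""
    # Interior quantiles split the depth values into n_layers groups.
    qs = np.linspace(0, 1, n_layers + 1)[1:-1]
    thresholds = np.quantile(depth, qs)
    # np.digitize maps each depth value to its layer index.
    return np.digitize(depth, thresholds)

# Example: a toy 4x4 depth map (smaller value = closer to the camera).
depth = np.array([[1, 2, 8, 9],
                  [1, 3, 8, 9],
                  [2, 3, 7, 9],
                  [2, 4, 7, 8]], dtype=float)
layers = assign_layers(depth, n_layers=3)
```

    Extracting layer 0 first matches the scheme's priority on data close to the viewer; dropping the higher layers degrades only the background of the side views.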

  • 41.
    Li, Yongwei
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Olsson, Roger
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    An analysis of demosaicing for plenoptic capture based on ray optics2018In: Proceedings of 3DTV Conference 2018, 2018, article id 8478476Conference paper (Refereed)
    Abstract [en]

    The plenoptic camera is gaining more and more attention as it captures the 4D light field of a scene with a single shot and enables a wide range of post-processing applications. However, the pre-processing steps for captured raw data, such as demosaicing, have been overlooked. Most existing decoding pipelines for plenoptic cameras still apply demosaicing schemes which were developed for conventional cameras. In this paper, we analyze the sampling pattern of microlens-based plenoptic cameras by ray-tracing techniques and ray phase space analysis. The goal of this work is to demonstrate guidelines and principles for demosaicing plenoptic captures by taking the unique microlens array design into account. We show that the sampling of the plenoptic camera behaves differently from that of a conventional camera and that the desired demosaicing scheme is depth-dependent.

  • 42.
    Li, Yongwei
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Scrofani, Gabriele
    Department of Optics, University of Valencia, Burjassot, Spain.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Martinez-Corraly, M.
    Department of Optics, University of Valencia, Burjassot, Spain.
    Area-Based Depth Estimation for Monochromatic Feature-Sparse Orthographic Capture2018In: 2018 26th European Signal Processing Conference (EUSIPCO), IEEE conference proceedings, 2018, p. 206-210, article id 8553336Conference paper (Refereed)
    Abstract [en]

    With the rapid development of light field technology, depth estimation has been highlighted as one of the critical problems in the field, and a number of approaches have been proposed to extract the depth of the scene. However, depth estimation by stereo matching becomes difficult and unreliable when the captured images lack both color and feature information. In this paper, we propose a scheme that extracts robust depth from monochromatic, feature-sparse scenes recorded in orthographic sub-aperture images. Unlike approaches which rely on the rich color and texture information across the sub-aperture views, our approach is based on depth-from-focus techniques. First, we superimpose shifted sub-aperture images on top of an arbitrarily chosen central image. To focus on different depths, the shift amount is varied based on the micro-lens array properties. Next, an area-based depth estimation approach is applied to find the best match among the focal stack and generate the dense depth map. This process is repeated for each sub-aperture image. Finally, occlusions are handled by merging depth maps generated from different central images, followed by a voting process. Results show that the proposed scheme is more suitable than conventional depth estimation approaches in the context of orthographic captures that have insufficient color and feature information, such as microscopic fluorescence imaging.
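    The focal-stack construction described above can be sketched as follows. The variance-across-views focus measure and all names are assumptions of this toy example, not the paper's exact area-based matcher: views are superimposed with shifts proportional to their offsets, and the depth hypothesis at which they agree best wins.

```python
import numpy as np

def depth_from_focus(views, offsets, shifts):
    """Toy depth-from-focus over orthographic sub-aperture views.

    views   : list of 2-D arrays (sub-aperture images)
    offsets : (dy, dx) position of each view relative to the central one
    shifts  : candidate shift-per-unit-offset values (depth hypotheses)
    Returns, per pixel, the index of the best-focused depth hypothesis.
    """
    h, w = views[0].shape
    cost = np.empty((len(shifts), h, w))
    for i, s in enumerate(shifts):
        # Superimpose each view shifted proportionally to its offset.
        stack = [np.roll(v, (int(s * dy), int(s * dx)), axis=(0, 1))
                 for v, (dy, dx) in zip(views, offsets)]
        # When the shift matches the true depth, the views agree, so
        # the variance across the stack is low: use it as focus cost.
        cost[i] = np.var(np.stack(stack), axis=0)
    return cost.argmin(axis=0)
```

    In the paper, the cost is additionally aggregated over local areas and the process is repeated with different central images to handle occlusions; this sketch shows only the per-pixel core.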

  • 43.
    Li, Yun
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Olsson, Roger
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Compression of Unfocused Plenoptic Images using a Displacement Intra prediction2016In: 2016 IEEE International Conference on Multimedia and Expo Workshop, ICMEW 2016, IEEE Signal Processing Society, 2016, article id 7574673Conference paper (Refereed)
    Abstract [en]

    Plenoptic images are one type of light field content produced by using a combination of a conventional camera and an additional optical component in the form of a microlens array, which is positioned in front of the image sensor surface. This camera setup captures a sub-sampling of the light field with high spatial fidelity over a small range, and with a more coarsely sampled angular range. The earliest applications that leverage plenoptic image content are image refocusing, non-linear distribution of out-of-focus areas, SNR vs. resolution trade-offs, and 3D-image creation. All functionalities are provided by using post-processing methods. In this work, we evaluate a compression method that we previously proposed for a different type of plenoptic image (focused, or plenoptic camera 2.0, contents) than the unfocused, or plenoptic camera 1.0, contents used in this Grand Challenge. The method is an extension of the state-of-the-art video compression standard HEVC, where we have brought the capability of bi-directional inter-frame prediction into the spatial prediction. The method is evaluated according to the scheme set out by the Grand Challenge, and the results show a high compression efficiency compared with JPEG, i.e., up to 6 dB improvement for the tested images.

  • 44.
    Li, Yun
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Jennehag, Ulf
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Olsson, Roger
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    A Scalable Coding Approach for High Quality Depth Image Compression2012In: 3DTV-Conference, IEEE conference proceedings, 2012, article id 6365469Conference paper (Refereed)
    Abstract [en]

    The distortion introduced by traditional video encoders (e.g. H.264) at depth discontinuities can cause disturbing effects in the synthesized view. The proposed scheme aims at preserving the most significant depth transitions for a better view synthesis. Furthermore, it has a scalable structure. The scheme extracts edge contours from a depth image and represents them by chain code. The chain code and the sampled depth values on each side of the edge contour are encoded by differential and arithmetic coding. The depth image is reconstructed by diffusion of edge samples and uniform sub-samples from the low quality depth image. At low bit rates, the proposed scheme outperforms HEVC intra at the edges in the synthesized views, which correspond to the significant discontinuities in the depth image. The overall quality is also better with the proposed scheme at low bit rates for contents with distinct depth transitions. © 2012 IEEE.

  • 45.
    Li, Yun
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Jennehag, Ulf
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Olsson, Roger
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Depth Image Post-processing Method by Diffusion2013In: Proceedings of SPIE-The International Society for Optical Engineering: 3D Image Processing (3DIP) and Applications, SPIE - International Society for Optical Engineering, 2013, article id 865003Conference paper (Refereed)
    Abstract [en]

    Multi-view three-dimensional television relies on view synthesis to reduce the number of views being transmitted. Arbitrary views can be synthesized by utilizing corresponding depth images with textures. The depth images obtained from stereo pairs or range cameras may contain erroneous values, which entail artifacts in a rendered view. Post-processing of the data may then be utilized to enhance the depth image with the purpose of reaching a better quality of synthesized views. We propose a Partial Differential Equation (PDE)-based interpolation method for reconstruction of the smooth areas in depth images, while preserving significant edges. We modeled the depth image by adjusting thresholds for edge detection and a uniform sparse sampling factor, followed by second order PDE interpolation. The objective results show that a depth image processed by the proposed method can achieve a better quality of synthesized views than the original depth image. Visual inspection confirmed the results.
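    A minimal sketch of second order PDE (Laplace) interpolation from sparse known samples, using plain Jacobi-style relaxation. Border handling by wrap-around, the iteration count, and all names are simplifying assumptions of this sketch, not the paper's implementation:

```python
import numpy as np

def diffuse(depth, known, n_iter=2000):
    """Fill unknown depth pixels by solving the Laplace equation:
    repeatedly replace each unknown pixel by the average of its four
    neighbours while keeping the known samples (edges plus uniform
    sub-samples) fixed. np.roll wraps at the image borders, which is
    acceptable for this sketch."""
    d = depth.astype(float).copy()
    for _ in range(n_iter):
        avg = 0.25 * (np.roll(d, 1, 0) + np.roll(d, -1, 0) +
                      np.roll(d, 1, 1) + np.roll(d, -1, 1))
        d = np.where(known, d, avg)  # only unknown pixels relax
    return d

# Example: recover a linear ramp from its two known end columns.
depth0 = np.zeros((8, 8))
depth0[:, -1] = 7.0
known = np.zeros((8, 8), dtype=bool)
known[:, 0] = known[:, -1] = True
filled = diffuse(depth0, known)
```

    The harmonic solution is smooth between the fixed samples, which is why this kind of diffusion suits the piecewise-smooth structure of depth images when the significant edges are kept as known samples.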

  • 46.
    Li, Yun
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Jennehag, Ulf
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Olsson, Roger
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Depth Map Compression with Diffusion Modes in 3D-HEVC2013In: MMEDIA 2013 - 5th International Conferences on Advances in Multimedia / [ed] Philip Davies, David Newell, International Academy, Research and Industry Association (IARIA), 2013, p. 125-129Conference paper (Refereed)
    Abstract [en]

    For three-dimensional television, multiple views can be generated by using the Multi-view Video plus Depth (MVD) format. The depth maps of this format can be compressed efficiently by the 3D extension of High Efficiency Video Coding (3D-HEVC), which exploits the correlations between its two components, texture and associated depth map. In this paper, we introduce two diffusion-based modes for depth map coding into HEVC. The framework for inter-component prediction of Depth Modeling Modes (DMM) is utilized for the proposed modes. They detect edges from textures and then diffuse an entire block from known adjacent blocks by using the Laplace equation constrained by the detected edges. The experimental results show that depth maps can be compressed more efficiently with the proposed diffusion modes, where the bit rate saving can reach 1.25 percent of the total depth bit rate at a constant quality of synthesized views.

  • 47.
    Li, Yun
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Jennehag, Ulf
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Olsson, Roger
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Tourancheau, Sylvain
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Subjective Evaluation of an Edge-based Depth Image Compression Scheme2013In: Proceedings of SPIE - The International Society for Optical Engineering: Stereoscopic Displays and Applications XXIV, SPIE - International Society for Optical Engineering, 2013, article id 86480DConference paper (Refereed)
    Abstract [en]

    Multi-view three-dimensional television requires many views, which may be synthesized from two-dimensional images with accompanying pixel-wise depth information. This depth image, which typically consists of smooth areas and sharp transitions at object borders, must be consistent with the acquired scene in order for synthesized views to be of good quality. We have previously proposed a depth image coding scheme that preserves significant edges and encodes smooth areas between these. An objective evaluation considering the structural similarity (SSIM) index for synthesized views demonstrated an advantage of the proposed scheme over the High Efficiency Video Coding (HEVC) intra mode in certain cases. However, there were some discrepancies between the outcomes of the objective evaluation and of our visual inspection, which motivated this study of subjective tests. The test was conducted according to the ITU-R BT.500-13 recommendation with stimulus-comparison methods. The results from the subjective test showed that the proposed scheme performs slightly better than HEVC, with statistical significance at the majority of the tested bit rates for the given contents.

  • 48.
    Li, Yun
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Olsson, Roger
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Coding of plenoptic images by using a sparse set and disparities2015In: Proceedings - IEEE International Conference on Multimedia and Expo, IEEE conference proceedings, 2015, article id 7177510Conference paper (Refereed)
    Abstract [en]

    A focused plenoptic camera captures not only the spatial information of a scene but also the angular information. The capture results in a plenoptic image of large resolution consisting of multiple microlens images. In addition, the microlens images are similar to their neighbors. Therefore, an efficient compression method that utilizes this pattern of similarity can reduce the coding bit rate and further facilitate the usage of the images. In this paper, we propose an approach for coding of focused plenoptic images by using a representation which consists of a sparse plenoptic image set and disparities. Based on this representation, a reconstruction method using interpolation and inpainting is devised to reconstruct the original plenoptic image. As a consequence, instead of coding the original image directly, we encode the sparse image set plus the disparity maps and use the reconstructed image as a prediction reference to encode the original image. The results show that the proposed scheme performs better than HEVC intra, with more than 5 dB PSNR gain or over 60 percent bit rate reduction.

  • 49.
    Li, Yun
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Olsson, Roger
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Jennehag, Ulf
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Coding of focused plenoptic contents by displacement intra prediction2016In: IEEE transactions on circuits and systems for video technology (Print), ISSN 1051-8215, E-ISSN 1558-2205, Vol. 26, no 7, p. 1308-1319, article id 7137669Article in journal (Refereed)
    Abstract [en]

    A light field is commonly described by a two-plane representation with four dimensions. Refocused three-dimensional contents can be rendered from light field images. One method for capturing these images is to use cameras with microlens arrays. A dense sampling of the light field results in large amounts of redundant data. Therefore, efficient compression is vital for practical use of these data. In this paper, we propose a displacement intra prediction scheme with a maximum of two hypotheses for the compression of plenoptic contents from focused plenoptic cameras. The proposed scheme is further implemented into HEVC. The work aims at coding plenoptic captured contents efficiently without knowledge of the underlying camera geometry. In addition, a theoretical analysis of displacement intra prediction for plenoptic images is given; the relationship between the compressed captured images and their rendered quality is also analyzed. Evaluation results show that plenoptic contents can be efficiently compressed by the proposed scheme. A bit rate reduction of up to 60 percent over HEVC is obtained for plenoptic images, and more than 30 percent is achieved for the tested video sequences.

  • 50.
    Li, Yun
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Olsson, Roger
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Jennehag, Ulf
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Efficient Intra Prediction Scheme For Light Field Image Compression2014In: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings, IEEE conference proceedings, 2014, article id 6853654Conference paper (Refereed)
    Abstract [en]

    Interactive photo-realistic graphics can be rendered by using light field datasets. One way of capturing the dataset is by using light field cameras with microlens arrays. The captured images contain repetitive patterns resulting from the adjacent microlenses, and do not resemble the appearance of a natural scene. This dissimilarity leads to problems in light field image compression with traditional image and video encoders, which are optimized for natural images and video sequences. In this paper, we introduce the full inter-prediction scheme of HEVC into intra-prediction for the compression of light field images. The proposed scheme is capable of performing both unidirectional and bi-directional prediction within an image. The evaluation results show that above 3 dB quality improvement, or above 50 percent bit rate saving, can be achieved in terms of BD-PSNR for the proposed scheme compared to the original HEVC intra-prediction for light field images.
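    The within-image prediction exploited here can be illustrated by a toy unidirectional block search over the already-coded area above the current block: because microlens images repeat, a displaced copy of an earlier block predicts the current block well. Block size, search range, the SAD criterion, and all names are assumptions of this sketch, not the HEVC integration the paper describes.

```python
import numpy as np

def best_displacement(image, y, x, bs, search):
    """Find the displacement vector to the already-coded block that
    best predicts (by SAD) the current bs x bs block at (y, x). In
    this sketch "already coded" means fully above the current block
    row, mimicking the causal area available to an intra predictor."""
    cur = image[y:y + bs, x:x + bs].astype(float)
    best_vec, best_sad = None, np.inf
    for dy in range(-search, 0):
        for dx in range(-search, search + 1):
            ry, rx = y + dy, x + dx
            if ry < 0 or rx < 0 or rx + bs > image.shape[1] or ry + bs > y:
                continue  # candidate not fully inside the decoded area
            cand = image[ry:ry + bs, rx:rx + bs].astype(float)
            sad = np.abs(cand - cur).sum()
            if sad < best_sad:
                best_sad, best_vec = sad, (dy, dx)
    return best_vec, best_sad
```

    On content with a repeating microlens pattern the search finds a near-perfect match one pattern period away, which is why such prediction saves far more bits on light field images than directional intra modes designed for natural images.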
