miun.se Publications
1 - 14 of 14
  • 1.
    Barkowsky, Marcus
    et al.
    LUNAM Université, Université de Nantes, IRCCyN UMR CNRS 6597, Rue Christian Pauc, 44306 Nantes, France.
    Sedano, Iñigo
    TECNALIA, ICT - European Software Institute, Parque Tecnológico de Bizkaia Edificio 202 E-48170 Zamudio, Spain.
    Brunnström, Kjell
Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems. Dept. of Netlab, Acreo Swedish ICT AB, Sweden.
    Leszczuk, Mikołaj
    AGH University of Science and Technology, al. Mickiewicza 30, PL-30059 Kraków, Poland.
    Staelens, Nicolas
    Ghent University - iMinds, Department of Information Technology, Ghent, Belgium.
Hybrid video quality prediction: Re-viewing video quality measurement for widening application scope. 2015. In: Multimedia tools and applications, ISSN 1380-7501, E-ISSN 1573-7721, Vol. 74, no 2, p. 323-343. Article in journal (Refereed)
    Abstract [en]

A tremendous number of objective video quality measurement algorithms have been developed during the last two decades. Most of them either measure a very limited aspect of the perceived video quality or they measure broad ranges of quality with limited prediction accuracy. This paper lists several perceptual artifacts that may be computationally measured in an isolated algorithm and some of the modeling approaches that have been proposed to predict the resulting quality from those algorithms. These algorithms usually have a very limited application scope but have been verified carefully. The paper continues with a review of some standardized and well-known video quality measurement algorithms that are meant for a wide range of applications and thus have a larger scope. Their accuracy in predicting individual artifacts is usually lower, but some of them were validated to perform sufficiently well for standardization. Several difficulties and shortcomings in developing a general-purpose model with high prediction performance are identified, such as the lack of a common objective quality scale and the behavior of individual indicators when confronted with stimuli that are out of their prediction scope. The paper concludes with a systematic framework approach to tackle the development of a hybrid video quality measurement in a joint research collaboration.

  • 2.
    Brunnström, Kjell
    et al.
    Acreo AB, Kista, Sweden.
    Sedano, Iñigo
    Tecnalia Research & Innovation, Bilbao, Spain.
    Wang, Kun
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Barkowsky, Markus
IRCCyN, Nantes, France.
    Kihl, Maria
    Lund University.
    Andrén, Börje
    Acreo AB, Kista, Sweden.
    Le Callet, Patrick
IRCCyN, Nantes, France.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Aurelius, Andreas
    Acreo AB, Kista, Sweden.
2D no-reference video quality model development and 3D video transmission quality. 2012. In: Proceedings of the Sixth International Workshop on Video Processing and Quality Metrics for Consumer Electronics VPQM-2012, 2012. Conference paper (Other academic)
    Abstract [en]

This presentation will target two different topics in video quality assessment. First, we discuss 2D no-reference video quality model development. Second, we discuss how to find suitable quality for 3D video transmission. No-reference metrics are the only practical option for monitoring 2D video quality in live networks. In order to decrease the development time, it might be possible to use full-reference metrics for this purpose. In this work, we have evaluated six full-reference objective metrics on three different databases. We show statistically that VQM performs the best. Further, we use these results to develop a lightweight no-reference model. We have also investigated users' experience of stereoscopic 3D video quality by rating two subjective assessment datasets, one targeting efficient transmission in the transmission-error-free case and the other targeting error concealment. Among other results, it was shown that, at the same level of quality of experience, spatial down-sampling may lead to better bitrate efficiency while temporal down-sampling will be worse. When network impairments occur, traditional 2D error concealment methods need to be reinvestigated, as they were outperformed by switching to 2D presentation.
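The metric benchmarking described in this abstract is commonly carried out by correlating objective metric outputs with subjective scores. The following is a minimal sketch of that procedure, assuming hypothetical metric names and score values (not the databases or results from the paper):

```python
# Hedged sketch: comparing full-reference metrics against subjective MOS using
# Pearson (linearity) and Spearman (monotonicity) correlation.
# All score values below are made-up placeholders, not the paper's data.
import numpy as np
from scipy import stats

mos = np.array([4.2, 3.8, 2.5, 1.9, 3.1, 4.5])  # hypothetical subjective ratings

metric_scores = {
    "PSNR": np.array([38.0, 35.5, 30.2, 27.8, 33.0, 40.1]),
    "SSIM": np.array([0.97, 0.95, 0.88, 0.82, 0.92, 0.98]),
    "VQM":  np.array([0.10, 0.18, 0.45, 0.60, 0.30, 0.05]),  # lower = better
}

for name, scores in metric_scores.items():
    plcc, _ = stats.pearsonr(scores, mos)
    srocc, _ = stats.spearmanr(scores, mos)
    print(f"{name}: PLCC={plcc:+.3f}, SROCC={srocc:+.3f}")
```

A negative correlation (as for VQM-style metrics, where lower values mean better quality) is expected and is usually reported by its magnitude.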

  • 3.
    Brunnström, Kjell
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology. RISE Acreo AB.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Imran, Muhammad
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design. HIAB AB.
    Pettersson, Magnus
    HIAB AB.
    Johanson, Mathias
    Alkit Communications AB, Mölndal.
Quality Of Experience For A Virtual Reality Simulator. 2018. In: IS and T International Symposium on Electronic Imaging Science and Technology 2018, 2018. Conference paper (Refereed)
    Abstract [en]

In this study, we investigate a VR simulator of a forestry crane used for loading logs onto a truck, mainly looking at Quality of Experience (QoE) aspects that may be relevant for task completion, but also whether there are any discomfort-related symptoms experienced during task execution. The QoE test has been designed to capture both the general subjective experience of using the simulator and to study task completion rate. Moreover, a specific focus has been to study the effects of latency on the subjective experience, with regards both to delays in the crane control interface as well as lag in the visual scene rendering in the head mounted display (HMD). Two larger formal subjective studies have been performed: one with the VR-system as it is and one where we have added controlled delay to the display update and to the joystick signals. The baseline study shows that most people are more or less happy with the VR-system and that it does not have strong effects on any symptoms as listed in the SSQ. In the delay study we found significant effects on Comfort Quality and Immersion Quality for higher Display delay (30 ms), but very small impact of joystick delay. Furthermore, the Display delay had strong influence on the symptoms in the SSQ, as well as causing test subjects to decide not to continue with the complete experiments, and this was also found to be connected to the longer Display delays (≥ 20 ms).

  • 4.
    Damghanian, Mitra
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Olsson, Roger
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
Depth and Angular Resolution in Plenoptic Cameras. 2015. In: 2015 IEEE International Conference On Image Processing (ICIP), September 2015, IEEE, 2015, p. 3044-3048, article id 7351362. Conference paper (Refereed)
    Abstract [en]

We present a model-based approach to extract the depth and angular resolution in a plenoptic camera. Obtained results for the depth and angular resolution are validated against Zemax ray tracing results. The model-based approach gives the location and number of the resolvable depth planes in a plenoptic camera, as well as the angular resolution in terms of disparity in pixels. The approach is straightforward compared to practical measurements and, in contrast with the principal-ray-model approach, can reflect plenoptic camera parameters such as the microlens f-number. Easy and accurate quantification of different resolution terms forms the basis for designing the capturing setup and choosing a reasonable system configuration for plenoptic cameras. Results from this work will accelerate customization of plenoptic cameras for particular applications without the need for expensive measurements.

  • 5.
    Dima, Elijs
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Brunnström, Kjell
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology. RISE Research Institutes of Sweden, Division ICT - Acreo.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Andersson, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Edlund, Joakim
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Johanson, Mathias
    Alkit Communications AB.
    Qureshi, Tahir
    HIAB AB.
View Position Impact on QoE in an Immersive Telepresence System for Remote Operation. 2019. In: 2019 Eleventh International Conference on Quality of Multimedia Experience (QoMEX), IEEE, 2019, p. 1-3. Conference paper (Refereed)
    Abstract [en]

In this paper, we investigate how different viewing positions affect a user's Quality of Experience (QoE) and performance in an immersive telepresence system. A QoE experiment has been conducted with 27 participants to assess the general subjective experience and the performance of remotely operating a toy excavator. Two view positions have been tested, an overhead and a ground-level view, which encourage reliance on stereoscopic depth cues to different extents for accurate operation. Results demonstrate a significant difference between ground and overhead views: the ground view increased the perceived difficulty of the task, whereas the overhead view increased the perceived accomplishment as well as the objective performance of the task. The perceived helpfulness of the overhead view was also significant according to the participants.

  • 6.
    Dima, Elijs
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Olsson, Roger
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
Assessment of Multi-Camera Calibration Algorithms for Two-Dimensional Camera Arrays Relative to Ground Truth Position and Direction. 2016. In: 3DTV-Conference, IEEE Computer Society, 2016, article id 7548887. Conference paper (Refereed)
    Abstract [en]

Camera calibration methods are commonly evaluated on cumulative reprojection error metrics, on disparate one-dimensional datasets. To evaluate calibration of cameras in two-dimensional arrays, assessments need to be made on two-dimensional datasets with constraints on camera parameters. In this study, the accuracy of several multi-camera calibration methods has been evaluated on the camera parameters that affect view projection the most. As input data, we used a 15-viewpoint two-dimensional dataset with intrinsic and extrinsic parameter constraints and extrinsic ground truth. The assessment showed that self-calibration methods using structure-from-motion reach intrinsic and extrinsic parameter estimation accuracy equal to that of a standard checkerboard calibration algorithm, and surpass a well-known self-calibration toolbox, BlueCCal. These results show that self-calibration is a viable approach to calibrating two-dimensional camera arrays, but improvements to state-of-the-art multi-camera feature matching are necessary to make BlueCCal as accurate as other self-calibration methods for two-dimensional camera arrays.
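Comparing estimated extrinsics against ground truth, as described above, typically reduces to measuring rotation and position errors per camera. The sketch below shows one plausible way to compute such errors; the pose values are hypothetical and not taken from the study's dataset:

```python
# Hedged sketch: comparing an estimated camera pose against ground truth.
# R_* are 3x3 rotation matrices, t_* are camera positions; values are hypothetical.
import numpy as np

def rotation_error_deg(R_est, R_gt):
    """Angle of the relative rotation R_est @ R_gt.T, in degrees."""
    R_rel = R_est @ R_gt.T
    cos_angle = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle))

def position_error(t_est, t_gt):
    """Euclidean distance between estimated and ground-truth camera positions."""
    return np.linalg.norm(np.asarray(t_est) - np.asarray(t_gt))

# Hypothetical example: the estimate is rotated about 1 degree around the y-axis.
theta = np.radians(1.0)
R_gt = np.eye(3)
R_est = np.array([[np.cos(theta), 0.0, np.sin(theta)],
                  [0.0, 1.0, 0.0],
                  [-np.sin(theta), 0.0, np.cos(theta)]])

print(rotation_error_deg(R_est, R_gt))                      # ~1.0 degree
print(position_error([0.10, 0.0, 0.0], [0.105, 0.0, 0.0]))  # 0.005 length units
```

Averaging these per-camera errors over a two-dimensional array gives a ground-truth-based accuracy figure that is independent of reprojection error.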

  • 7.
    Li, Yun
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Olsson, Roger
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
Compression of Unfocused Plenoptic Images using a Displacement Intra prediction. 2016. In: 2016 IEEE International Conference on Multimedia and Expo Workshop, ICMEW 2016, IEEE Signal Processing Society, 2016, article id 7574673. Conference paper (Refereed)
    Abstract [en]

Plenoptic images are one type of light field content, produced by combining a conventional camera with an additional optical component in the form of a microlens array positioned in front of the image sensor surface. This camera setup can capture a sub-sampling of the light field with high spatial fidelity over a small range, and with a more coarsely sampled angular range. The earliest applications that leverage plenoptic image content are image refocusing, non-linear distribution of out-of-focus areas, SNR vs. resolution trade-offs, and 3D-image creation. All of these functionalities are provided by post-processing methods. In this work, we evaluate a compression method that we previously proposed for a different type of plenoptic image (focused, or plenoptic camera 2.0, content) than the unfocused, or plenoptic camera 1.0, content used in this Grand Challenge. The method is an extension of the state-of-the-art video compression standard HEVC, where we have brought the capability of bi-directional inter-frame prediction into the spatial prediction. The method is evaluated according to the scheme set out by the Grand Challenge, and the results show a high compression efficiency compared with JPEG, i.e., up to 6 dB improvement for the tested images.

  • 8.
    Muddala, Suryanarayana M.
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Olsson, Roger
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
Virtual View Synthesis Using Layered Depth Image Generation and Depth-Based Inpainting for Filling Disocclusions and Translucent Disocclusions. 2016. In: Journal of Visual Communication and Image Representation, ISSN 1047-3203, E-ISSN 1095-9076, Vol. 38, p. 351-366. Article in journal (Refereed)
    Abstract [en]

View synthesis is an efficient solution to produce content for 3DTV and FTV. However, proper handling of disocclusions is a major challenge in view synthesis. Inpainting methods offer solutions for handling disocclusions, though limitations in foreground-background classification cause the holes to be filled with inconsistent textures. Moreover, state-of-the-art methods fail to identify and fill disocclusions at intermediate distances between foreground and background, through which background may be visible in the virtual view (translucent disocclusions). Aiming at improved rendering quality, we introduce a layered depth image (LDI) in the original camera view, in which we identify and fill occluded background so that, when the LDI data is rendered to a virtual view, no disocclusions appear and views with consistent data are produced, also handling translucent disocclusions. Moreover, the proposed foreground-background classification and inpainting fill the disocclusions consistently with neighboring background texture. Based on the objective and subjective evaluations, the proposed method outperforms the state-of-the-art methods at the disocclusions.
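The central principle here is that disoccluded holes should be filled from background texture only, with foreground and background separated by depth. The sketch below is a strongly simplified illustration of that principle using OpenCV's generic inpainting; it is not the paper's LDI-based method, and the file names, depth convention, and threshold are hypothetical:

```python
# Hedged sketch: depth-guided filling of disocclusion holes.
# Simplified illustration only; the paper's LDI-based inpainting is more involved.
import cv2
import numpy as np

# Hypothetical inputs (placeholder file names): a rendered virtual view with holes,
# its hole mask (255 where disoccluded), and the warped depth map.
view = cv2.imread("virtual_view.png")
holes = cv2.imread("hole_mask.png", cv2.IMREAD_GRAYSCALE)
depth = cv2.imread("warped_depth.png", cv2.IMREAD_GRAYSCALE)

# Classify background from depth (hypothetical convention: darker = farther).
background = depth < 128

# Mark foreground pixels as unknown too, so the inpainting borrows texture
# mainly from background regions rather than smearing foreground colors.
combined_mask = ((holes > 0) | ~background).astype(np.uint8) * 255
filled = cv2.inpaint(view, combined_mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)

# Keep original pixels everywhere and use the inpainted values only inside the holes.
result = np.where(holes[..., None] > 0, filled, view)
cv2.imwrite("filled_view.png", result)
```

The LDI approach in the paper goes further by filling occluded background already in the original camera view, so that the rendered virtual view contains no holes at all.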

  • 9.
    Muddala, Suryanarayana
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Olsson, Roger
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
Spatio-Temporal Consistent Depth-Image Based Rendering Using Layered Depth Image and Inpainting. 2016. In: EURASIP Journal on Image and Video Processing, ISSN 1687-5176, E-ISSN 1687-5281, Vol. 9, no 1, p. 1-19. Article in journal (Refereed)
    Abstract [en]

    Depth-image-based rendering (DIBR) is a commonly used method for synthesizing additional views using video-plus-depth (V+D) format. A critical issue with DIBR based view synthesis is the lack of information behind foreground objects. This lack is manifested as disocclusions, holes, next to the foreground objects in rendered virtual views as a consequence of the virtual camera “seeing” behind the foreground object. The disocclusions are larger in the extrapolation case, i.e. the single camera case. Texture synthesis methods (inpainting methods) aim to fill these disocclusions by producing plausible texture content. However, virtual views inevitably exhibit both spatial and temporal inconsistencies at the filled disocclusion areas, depending on the scene content. In this paper we propose a layered depth image (LDI) approach that improves the spatio-temporal consistency. In the process of LDI generation, depth information is used to classify the foreground and background in order to form a static scene sprite from a set of neighboring frames. Occlusions in the LDI are then identified and filled using inpainting, such that no disocclusions appear when the LDI data is rendered to a virtual view. In addition to the depth information, optical flow is computed to extract the stationary parts of the scene and to classify the occlusions in the inpainting process. Experimental results demonstrate that spatio-temporal inconsistencies are significantly reduced using the proposed method. Furthermore, subjective and objective qualities are improved compared to state-of-the-art reference methods.

  • 10.
    Paudyal, Pradip
    et al.
Department of Engineering, Università degli Studi Roma TRE, Italy.
    Battisti, Federica
Department of Engineering, Università degli Studi Roma TRE, Italy.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Olsson, Roger
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Carli, Marco
Department of Engineering, Università degli Studi Roma TRE, Italy.
Towards the Perceptual Quality Evaluation of Compressed Light Field Images. 2017. In: IEEE transactions on broadcasting, ISSN 0018-9316, E-ISSN 1557-9611, Vol. 63, no 3, p. 507-522, article id 7938323. Article in journal (Refereed)
    Abstract [en]

Evaluation of the perceived quality of light field images, as well as testing new processing tools or assessing the effectiveness of objective quality metrics, relies on the availability of test datasets and corresponding quality ratings. This article presents the SMART light field image quality dataset. The dataset consists of source images (raw data without optical corrections), compressed images, and annotated subjective quality scores. Furthermore, an analysis of the perceptual effects of compression on the SMART dataset is presented. Next, the impact of image content on the perceived quality is studied with the help of image quality attributes. Finally, the performance of 2D image quality metrics when applied to light field images is analyzed.

  • 11.
    Paudyal, Pradip
    et al.
Università degli Studi Roma TRE, Italy.
    Olsson, Roger
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Battisti, Federica
Università degli Studi Roma TRE, Italy.
    Carli, Marco
Università degli Studi Roma TRE, Italy.
SMART: a Light Field image quality dataset. 2016. In: Proceedings of the 7th International Conference on Multimedia Systems, MMSys 2016, Association for Computing Machinery (ACM), 2016, p. 374-379, article id 2910623. Conference paper (Refereed)
    Abstract [en]

In this article, the design of a Light Field image dataset is presented. The availability of an image dataset is useful for designing, testing, and benchmarking Light Field image processing algorithms. As a first step, the image content selection criteria have been defined based on selected image quality key-attributes, i.e. spatial information, colorfulness, texture key features, depth of field, etc. Next, image scenes have been selected and captured using the Lytro Illum Light Field camera. The performed analysis shows that the considered set of images is sufficient for addressing a wide range of attributes relevant to assessing Light Field image quality.

  • 12.
    Schwarz, Sebastian
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Olsson, Roger
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
Depth or disparity map upscaling. 2016. Patent (Other (popular science, discussion, etc.))
    Abstract [en]

    Method and arrangement for increasing the resolution of a depth or disparity map related to multi view video. The method comprises deriving a high resolution depth map based on a low resolution depth map and a masked texture image edge map. The masked texture image edge map comprises information on edges in a high resolution texture image, which edges have a correspondence in the low resolution depth map. The texture image and the depth map are associated with the same frame.
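A minimal sketch of the general idea, under simple assumptions: upscale the low-resolution depth map, take edges from the high-resolution texture, and keep only the texture edges that also have a corresponding discontinuity in the upscaled depth map. This is an illustrative approximation, not the patented method; the file names, thresholds, and Canny parameters are hypothetical:

```python
# Hedged sketch: building a masked texture edge map for depth upscaling.
# Illustrative only; not the patented algorithm. All parameters are hypothetical.
import cv2
import numpy as np

texture = cv2.imread("texture_highres.png", cv2.IMREAD_GRAYSCALE)
depth_lr = cv2.imread("depth_lowres.png", cv2.IMREAD_GRAYSCALE)

# Naive upscaling of the low-resolution depth map to the texture resolution.
h, w = texture.shape
depth_up = cv2.resize(depth_lr, (w, h), interpolation=cv2.INTER_NEAREST)

# Edges in the high-resolution texture image.
texture_edges = cv2.Canny(texture, 50, 150)

# Depth discontinuities in the upscaled depth map (hypothetical threshold).
grad = cv2.morphologyEx(depth_up, cv2.MORPH_GRADIENT, np.ones((3, 3), np.uint8))
depth_edges = (grad > 8).astype(np.uint8) * 255

# Masked texture edge map: texture edges with a correspondence in the depth map.
# A small dilation tolerates slight misalignment between the two edge maps.
depth_edges_dilated = cv2.dilate(depth_edges, np.ones((5, 5), np.uint8))
masked_edges = cv2.bitwise_and(texture_edges, depth_edges_dilated)

cv2.imwrite("masked_edge_map.png", masked_edges)
```

The masked edge map can then steer the depth upscaling so that depth discontinuities land on true object boundaries rather than on texture edges inside flat surfaces.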

  • 13.
    Schwarz, Sebastian
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Olsson, Roger
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
Multivariate Sensitivity Analysis of Time-of-Flight Sensor Fusion. 2014. In: 3D Research, ISSN 2092-6731, Vol. 5, no 3. Article in journal (Refereed)
    Abstract [en]

    Obtaining three-dimensional scenery data is an essential task in computer vision, with diverse applications in various areas such as manufacturing and quality control, security and surveillance, or user interaction and entertainment. Dedicated Time-of-Flight sensors can provide detailed scenery depth in real-time and overcome short-comings of traditional stereo analysis. Nonetheless, they do not provide texture information and have limited spatial resolution. Therefore such sensors are typically combined with high resolution video sensors. Time-of-Flight Sensor Fusion is a highly active field of research. Over the recent years, there have been multiple proposals addressing important topics such as texture-guided depth upsampling and depth data denoising. In this article we take a step back and look at the underlying principles of ToF sensor fusion. We derive the ToF sensor fusion error model and evaluate its sensitivity to inaccuracies in camera calibration and depth measurements. In accordance with our findings, we propose certain courses of action to ensure high quality fusion results. With this multivariate sensitivity analysis of the ToF sensor fusion model, we provide an important guideline for designing, calibrating and running a sophisticated Time-of-Flight sensor fusion capture systems.
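One way to build intuition for this kind of sensitivity analysis is to project a ToF depth sample into the color camera and observe how far the projected pixel moves when a calibration parameter is perturbed. The sketch below does this for a small baseline error; all intrinsics, poses, and perturbation magnitudes are hypothetical and do not reproduce the paper's error model:

```python
# Hedged sketch: how a calibration error shifts a fused depth sample in the
# color image. Values are hypothetical; this is not the paper's error model.
import numpy as np

def project(point_cam, K):
    """Pinhole projection of a 3D point (camera coordinates) to pixel coordinates."""
    x, y, z = point_cam
    return np.array([K[0, 0] * x / z + K[0, 2], K[1, 1] * y / z + K[1, 2]])

# Hypothetical color-camera intrinsics and ToF-to-color extrinsics.
K = np.array([[1000.0, 0.0, 960.0],
              [0.0, 1000.0, 540.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                      # assume aligned orientations
t = np.array([0.05, 0.0, 0.0])     # 5 cm baseline between ToF and color sensor

p_tof = np.array([0.2, 0.1, 1.5])  # a ToF sample 1.5 m in front of the sensor

# Nominal projection vs. projection with a 2 mm error in the baseline estimate.
p_nominal = project(R @ p_tof + t, K)
p_perturbed = project(R @ p_tof + t + np.array([0.002, 0.0, 0.0]), K)

print("pixel displacement:", np.linalg.norm(p_perturbed - p_nominal))
# ~1.3 px for this configuration: even millimetre-scale calibration errors
# can misplace fused texture/depth samples by more than a pixel.
```

Repeating such perturbations over all calibration and depth parameters, as the paper does systematically, reveals which inaccuracies dominate the fusion error.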

  • 14.
    Schwarz, Sebastian
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Olsson, Roger
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
Time-of-Flight Sensor Fusion with Depth Measurement Reliability Weighting. 2014. In: 3DTV-Conference, IEEE Computer Society, 2014, Art. no. 6874759. Conference paper (Refereed)
    Abstract [en]

Accurate scene depth capture is essential for the success of three-dimensional television (3DTV), e.g. for high quality view synthesis in autostereoscopic multiview displays. Unfortunately, scene depth is not easily obtained and often of limited quality. Dedicated Time-of-Flight (ToF) sensors can deliver reliable depth readings where traditional methods, such as stereovision analysis, fail. However, since ToF sensors provide only limited spatial resolution and suffer from sensor noise, sophisticated upsampling methods are sought after. A multitude of ToF solutions have been proposed over the recent years. Most of them achieve ToF super-resolution (TSR) by sensor fusion between ToF and additional sources, e.g. video. We recently proposed a weighted error energy minimization approach for ToF super-resolution, incorporating texture, sensor noise and temporal information. For this article, we take a closer look at the sensor noise weighting related to the Time-of-Flight active brightness signal. We determine a depth measurement reliability function based on optimizing free parameters to test data and verifying it with independent test cases. In the presented double-weighted TSR proposal, depth readings are weighted into the upsampling process with regard to their reliability, removing erroneous influences in the final result. Our evaluations prove the desired effect of depth measurement reliability weighting, decreasing the depth upsampling error by almost 40% in comparison to competing proposals.
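The reliability weighting described above can be illustrated, in a strongly simplified one-dimensional form, as a weighted least-squares upsampling problem: each low-resolution depth reading contributes a data term scaled by its reliability (derived from active brightness), plus a smoothness term between neighbouring samples. The toy sketch below solves such a problem directly; the weighting values and all numbers are hypothetical and much simpler than the published model:

```python
# Hedged sketch: 1D reliability-weighted depth upsampling as weighted least squares.
# Toy illustration only; the published method is a more elaborate 2D energy model.
import numpy as np

def upsample_weighted(depth_lr, reliability, factor, smoothness=1.0):
    """Solve  min_d  sum_i w_i (d[i*factor] - depth_lr[i])^2
                   + smoothness * sum_j (d[j+1] - d[j])^2   for a 1D signal."""
    n_hr = len(depth_lr) * factor
    rows, b = [], []
    # Data terms: each low-res reading constrains one high-res sample, weighted by w_i.
    for i, (d, w) in enumerate(zip(depth_lr, reliability)):
        row = np.zeros(n_hr)
        row[i * factor] = np.sqrt(w)
        rows.append(row)
        b.append(np.sqrt(w) * d)
    # Smoothness terms between neighbouring high-res samples.
    for j in range(n_hr - 1):
        row = np.zeros(n_hr)
        row[j], row[j + 1] = -np.sqrt(smoothness), np.sqrt(smoothness)
        rows.append(row)
        b.append(0.0)
    solution, *_ = np.linalg.lstsq(np.array(rows), np.array(b), rcond=None)
    return solution

depth_lr = np.array([1.00, 1.02, 2.50, 1.05])   # third reading is an outlier
reliability = np.array([1.0, 1.0, 0.05, 1.0])   # low active brightness -> low weight
print(np.round(upsample_weighted(depth_lr, reliability, factor=2), 3))
```

Because the unreliable reading carries a small weight, the smoothness term dominates around it and the outlier is suppressed in the upsampled result, which is the qualitative effect the reliability weighting aims for.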
