Sjöström, Mårten
Publications (10 of 119)
Brunnström, K., Dima, E., Andersson, M., Sjöström, M., Qureshi, T. & Johanson, M. (2019). Quality of Experience of hand controller latency in a Virtual Reality simulator. In: Damon Chandler, Mark McCourt and Jeffrey Mulligan (Eds.), Human Vision and Electronic Imaging 2019. Paper presented at Human Vision and Electronic Imaging 2019. Springfield, VA, United States, Article ID 3068450.
2019 (English). In: Human Vision and Electronic Imaging 2019 / [ed] Damon Chandler, Mark McCourt and Jeffrey Mulligan, Springfield, VA, United States, 2019, article id 3068450. Conference paper, Published paper (Refereed)
Abstract [en]

In this study, we investigate a VR simulator of a forestry crane used for loading logs onto a truck, looking mainly at Quality of Experience (QoE) aspects that may be relevant for task completion, but also at whether any discomfort-related symptoms are experienced during task execution. A QoE test was designed to capture both the general subjective experience of using the simulator and task performance. A specific focus was the effect of latency on the subjective experience, with regard to delays in the crane control interface. In a formal subjective study, we added controlled delays of 0 ms to 800 ms to the hand controller (joystick) signals. We found no significant effects of delays up to 200 ms on task performance on any of the scales. A significant negative effect was found for 800 ms of added delay. The symptoms reported in the Simulator Sickness Questionnaire (SSQ) were significantly higher for all symptom groups, but a majority of the participants reported only slight symptoms. Two out of thirty test participants stopped the test before finishing due to their symptoms.
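The controlled delays described above can be realized with a simple FIFO sample buffer. A minimal sketch, assuming a fixed 100 Hz controller sampling rate; the class and parameter names are illustrative, not from the paper:

```python
from collections import deque

class DelayedController:
    """FIFO buffer that releases controller samples after a fixed added delay.

    Hypothetical sketch of injecting controlled latency (as in the study's
    0-800 ms conditions) into a regularly sampled hand-controller signal.
    """

    def __init__(self, delay_ms, sample_rate_hz=100):
        # Number of samples the output must lag to realize the added delay.
        self.lag = round(delay_ms * sample_rate_hz / 1000)
        self.buffer = deque()

    def push(self, sample):
        """Feed one raw sample; return the delayed sample, or None while filling."""
        self.buffer.append(sample)
        if len(self.buffer) > self.lag:
            return self.buffer.popleft()
        return None
```

With `delay_ms=0` the buffer passes samples straight through, so the same code path serves the baseline condition.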

Place, publisher, year, edition, pages
Springfield, VA, United States, 2019
Series
Electronic Imaging, ISSN 2470-1173
Keywords
Quality of Experience, Virtual Reality, Simulator, QoE, Delay
National Category
Communication Systems; Telecommunications; Media Engineering
Identifiers
urn:nbn:se:miun:diva-35609 (URN)
Conference
Human Vision and Electronic Imaging 2019
Funder
Knowledge Foundation, 20160194
Available from: 2019-02-08 Created: 2019-02-08 Last updated: 2019-02-08
Li, Y., Olsson, R. & Sjöström, M. (2018). An analysis of demosaicing for plenoptic capture based on ray optics. In: Proceedings of 3DTV Conference 2018. Paper presented at 3D at any scale and any perspective, 3-5 June 2018, Stockholm – Helsinki – Stockholm. Article ID 8478476.
2018 (English). In: Proceedings of 3DTV Conference 2018, 2018, article id 8478476. Conference paper, Published paper (Refereed)
Abstract [en]

The plenoptic camera is gaining more and more attention as it captures the 4D light field of a scene with a single shot and enables a wide range of post-processing applications. However, the pre-processing steps for captured raw data, such as demosaicing, have been overlooked. Most existing decoding pipelines for plenoptic cameras still apply demosaicing schemes developed for conventional cameras. In this paper, we analyze the sampling pattern of microlens-based plenoptic cameras by ray-tracing techniques and ray phase space analysis. The goal of this work is to demonstrate guidelines and principles for demosaicing plenoptic captures by taking the unique microlens array design into account. We show that the sampling of the plenoptic camera behaves differently from that of a conventional camera and that the desired demosaicing scheme is depth-dependent.

Keywords
Light field, plenoptic camera, depth, image demosaicing
National Category
Other Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:miun:diva-33618 (URN)
10.1109/3DTV.2018.8478476 (DOI)
000454903900008 (ISI)
2-s2.0-85056161198 (Scopus ID)
978-1-5386-6125-3 (ISBN)
Conference
3D at any scale and any perspective, 3-5 June 2018, Stockholm – Helsinki – Stockholm
Available from: 2018-05-15 Created: 2018-05-15 Last updated: 2019-02-15. Bibliographically approved
Li, Y., Scrofani, G., Sjöström, M. & Martinez-Corral, M. (2018). Area-Based Depth Estimation for Monochromatic Feature-Sparse Orthographic Capture. In: 2018 26th European Signal Processing Conference (EUSIPCO). Paper presented at EUSIPCO 2018, 26th European Signal Processing Conference, Rome, Italy, September 3-7, 2018 (pp. 206-210). IEEE conference proceedings, Article ID 8553336.
2018 (English). In: 2018 26th European Signal Processing Conference (EUSIPCO), IEEE conference proceedings, 2018, p. 206-210, article id 8553336. Conference paper, Published paper (Refereed)
Abstract [en]

With the rapid development of light field technology, depth estimation has been highlighted as one of the critical problems in the field, and a number of approaches have been proposed to extract the depth of the scene. However, depth estimation by stereo matching becomes difficult and unreliable when the captured images lack both color and feature information. In this paper, we propose a scheme that extracts robust depth from monochromatic, feature-sparse scenes recorded in orthographic sub-aperture images. Unlike approaches that rely on rich color and texture information across the sub-aperture views, our approach is based on depth-from-focus techniques. First, we superimpose shifted sub-aperture images on top of an arbitrarily chosen central image. To focus on different depths, the shift amount is varied based on the micro-lens array properties. Next, an area-based depth estimation approach is applied to find the best match among the focal stack and generate the dense depth map. This process is repeated for each sub-aperture image. Finally, occlusions are handled by merging depth maps generated from different central images, followed by a voting process. Results show that the proposed scheme is more suitable than conventional depth estimation approaches in the context of orthographic captures that have insufficient color and feature information, such as microscopic fluorescence imaging.
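The shift-superimpose-and-score idea above can be sketched as follows. This is a minimal illustration under simplifying assumptions (integer per-view shifts, wrap-around alignment, per-pixel variance as the focus measure, with the area-based patch aggregation omitted); function and parameter names are hypothetical, not the authors' code:

```python
import numpy as np

def depth_from_focus(subviews, shifts_per_depth):
    """Pick, per pixel, the depth candidate whose shifted superposition of
    sub-aperture views is most consistent (lowest variance across views).

    shifts_per_depth maps each depth candidate to one (dy, dx) shift per
    view; in the paper these shifts derive from the micro-lens geometry.
    """
    scores = []
    for shifts in shifts_per_depth:
        # Refocus at this depth: align every view to the central image.
        aligned = np.stack([np.roll(view, shift, axis=(0, 1))
                            for view, shift in zip(subviews, shifts)])
        # Focus measure: variance across the aligned views (0 = perfect match).
        scores.append(np.var(aligned, axis=0))
    # Dense depth map: index of the best-focused candidate per pixel.
    return np.argmin(np.stack(scores), axis=0)
```

When the second view is a pure one-pixel translation of the first, the candidate whose shift undoes that translation yields zero variance and wins everywhere.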

Place, publisher, year, edition, pages
IEEE conference proceedings, 2018
Keywords
Depth estimation, integral imaging, orthographic views, depth from focus
National Category
Computer Sciences
Identifiers
urn:nbn:se:miun:diva-34418 (URN)
000455614900042 (ISI)
2-s2.0-85059811493 (Scopus ID)
Conference
EUSIPCO 2018, 26th European Signal Processing Conference, Rome, Italy, September 3-7, 2018
Available from: 2018-09-14 Created: 2018-09-14 Last updated: 2019-03-19. Bibliographically approved
Ahmad, W., Sjöström, M. & Olsson, R. (2018). Compression scheme for sparsely sampled light field data based on pseudo multi-view sequences. In: OPTICS, PHOTONICS, AND DIGITAL TECHNOLOGIES FOR IMAGING APPLICATIONS V, Proceedings of SPIE - The International Society for Optical Engineering. Paper presented at SPIE Photonics Europe 2018, Strasbourg, France, 22-26 April 2018. SPIE - International Society for Optical Engineering, 10679, Article ID 106790M.
2018 (English). In: OPTICS, PHOTONICS, AND DIGITAL TECHNOLOGIES FOR IMAGING APPLICATIONS V, Proceedings of SPIE - The International Society for Optical Engineering, SPIE - International Society for Optical Engineering, 2018, Vol. 10679, article id 106790M. Conference paper, Published paper (Refereed)
Abstract [en]

With the advent of light field acquisition technologies, the captured information of the scene is enriched by having both angular and spatial information. The captured information provides additional capabilities in the post-processing stage, e.g. refocusing, 3D scene reconstruction, and synthetic aperture. Light field capturing devices fall into two categories: in the first, a single plenoptic camera captures a densely sampled light field; in the second, multiple traditional cameras capture a sparsely sampled light field. In both cases, the size of the captured data increases with the additional angular information. The recent call for proposals related to compression of light field data issued by JPEG, also called "JPEG Pleno", reflects the need for a new and efficient light field compression solution. In this paper, we propose a compression solution for sparsely sampled light field data. In a multi-camera system, each view depicts the scene from a single perspective. We propose to interpret each single view as a frame of a pseudo video sequence. In this way, the complete MxN views of a multi-camera system are treated as M pseudo video sequences, where each pseudo video sequence contains N frames. The central pseudo video sequence is taken as the base view, and the first frame in all pseudo video sequences is taken as the base Picture Order Count (POC). The frame contained in the base view and base POC is labeled the base frame. The remaining frames are divided into three predictor levels. Frames placed in each successive level can take prediction from previously encoded frames. However, the frames assigned the last prediction level are not used for prediction of other frames. Moreover, the rate allocation for each frame takes into account its predictor level, its frame distance, and its view-wise decoding distance relative to the base frame.

The multi-view extension of High Efficiency Video Coding (MV-HEVC) is used to compress the pseudo multi-view sequences. MV-HEVC enables frames to take prediction in both directions (horizontal and vertical), and its parameters are used to implement the proposed 2D prediction and rate allocation scheme. A subset of four light field images from the Stanford dataset is compressed with the proposed scheme at four bitrates in order to cover low- to high-bitrate scenarios. The comparison is made with the state-of-the-art reference encoder HEVC and its real-time implementation x265. The 17x17 grid is converted into a single pseudo sequence of 289 frames, following the order explained in the JPEG Pleno call for proposals, and given as input to both reference schemes. The rate-distortion analysis shows that the proposed compression scheme outperforms both reference schemes in all tested bitrate scenarios for all test images. The average BD-PSNR gain is 1.36 dB over HEVC and 2.15 dB over x265.
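The grid-to-pseudo-sequence arrangement described above can be sketched as follows. The concrete level-assignment rule here is an illustrative assumption (the abstract specifies only a base frame plus three predictor levels keyed to view and POC distance), and all names are hypothetical:

```python
def arrange_pseudo_sequences(M, N):
    """Assign each view (m, n) of an MxN camera grid a predictor level.

    Sequence m is one pseudo video sequence; n is the frame index (POC).
    The central sequence is the base view and frame 0 is the base POC.
    """
    base_view, base_poc = M // 2, 0
    levels = {}
    for m in range(M):
        for n in range(N):
            if (m, n) == (base_view, base_poc):
                levels[(m, n)] = 0   # base frame (intra coded)
            elif m == base_view or n == base_poc:
                levels[(m, n)] = 1   # on a base axis: predicted from level 0/1
            else:
                levels[(m, n)] = 2   # interior: prediction in both directions
    return levels
```

Rate allocation would then weight each frame by its level together with its frame and view distance from the base frame.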

Place, publisher, year, edition, pages
SPIE - International Society for Optical Engineering, 2018
Series
Proceedings of SPIE, ISSN 0277-786X, E-ISSN 1996-756X
Keywords
Light field, MV-HEVC, Compression, Plenoptic, Multi-Camera
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:miun:diva-33352 (URN)
10.1117/12.2315597 (DOI)
000452663000017 (ISI)
2-s2.0-85052527607 (Scopus ID)
Conference
SPIE Photonics Europe 2018 Strasbourg, France, 22-26 April 2018
Available from: 2018-03-26 Created: 2018-03-26 Last updated: 2019-01-08. Bibliographically approved
Boström, L., Sjöström, M., Karlsson, H., Sundgren, M., Andersson, M., Olsson, R. & Åhlander, J. (2018). Digital visualisering i skolan: Mittuniversitetets slutrapport från förstudien. Sundsvall: Mittuniversitetet
2018 (Swedish). Report (Other academic)
Abstract [sv]

The purpose of this study was twofold: to test alternative teaching methods using a digital learning resource in mathematics in a quasi-experimental study, and to apply user-experience methods to interactive visualisations, thereby increasing knowledge of how perceived quality depends on the technology used. The pilot study also focuses on several pressing areas in school development, both regionally and nationally, as well as important aspects of the link between technology, pedagogy and evaluation methods within "the technical part". The former concerns declining mathematics results in schools, practice-based school research, strengthened digital competence, visualisation and learning, and research on visualisation and evaluation. The latter answers questions about which technical solutions have previously been used and for what purpose they were created, and how visualisations have been evaluated according to textbooks and in the research literature.

Regarding the pupils' results, one of the major research questions in the study, we found no significant differences between traditional teaching and teaching with the visualisation resource (3D). Concerning pupils' attitudes to the mathematics unit, the attitude in the control group improved significantly in year 6, but not in year 8. As for girls' and boys' results and attitudes, the girls in both classes had better prior knowledge than the boys, and in year 6 the girls in the control group were more positive towards the mathematics unit than the boys. Beyond this, we found no significant differences. Other important findings were that the test design was not optimal and that the time of day at which the test was taken mattered considerably. The results of the qualitative analysis point to positive attitudes and behaviours among the pupils when working with the visual learning resource. The pupils' collaboration and communication improved during the lessons. The teachers also noted that the 3D resource offered greater opportunities to stimulate several senses during the learning process. A clear conclusion is that the 3D resource is an important complement in teaching, but cannot be used entirely on its own.

We can align ourselves neither with the researchers who consider 3D visualisation superior as a learning resource for pupils' results, nor with those who warn of its effects on pupils' cognitive overload. Our results are more in line with the conclusions drawn by Skolforskningsinstitutet (2017), namely that teaching with digital learning resources in mathematics can have positive effects, but that equally effective teaching could possibly be designed in other ways. However, the results of our study point to a number of disturbances that may have affected the possible results, and to the need for good technology and well-developed software.

In the study we analysed the results using two overarching frameworks for integrating technology support into learning, SAMR and TPACK. The former contributed a taxonomy for discussing how well the technology's possibilities were exploited by the learning resource and in the learning activities; the latter supported a discussion of the didactic questions with a focus on the role of the technology. Both aspects are highly topical given the increasing digitalisation of schools.

From previous research and this pilot study we understand that it is important to design the research methods carefully. A randomisation of groups would be desirable. Performance measures can also be difficult to choose. Tests in which participants evaluate usability and user experience (UX), based on both qualitative and quantitative methods, are important for the use of the technology itself, but further evaluations are needed to link the technology and the visualisation to the quality of learning and teaching. Several methods are thus needed, and collaboration between different subjects and disciplines becomes important.

Place, publisher, year, edition, pages
Sundsvall: Mittuniversitetet, 2018. p. 60
National Category
Pedagogical Work
Identifiers
urn:nbn:se:miun:diva-35376 (URN)
Available from: 2018-12-31 Created: 2018-12-31 Last updated: 2019-01-07. Bibliographically approved
Domanski, M., Grajek, T., Conti, C., Debono, C. J., de Faria, S. M. M., Kovacs, P., . . . Stankiewicz, O. (2018). Emerging Imaging Technologies: Trends and Challenges. In: Assunção, Pedro Amado, Gotchev, Atanas (Eds.), 3D Visual Content Creation, Coding and Delivery (pp. 5-39). Cham: Springer
2018 (English). In: 3D Visual Content Creation, Coding and Delivery / [ed] Assunção, Pedro Amado, Gotchev, Atanas, Cham: Springer, 2018, p. 5-39. Chapter in book (Refereed)
Place, publisher, year, edition, pages
Cham: Springer, 2018
Series
Signals and Communication Technology, ISSN 1860-4862
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:miun:diva-34379 (URN)
978-3-319-77842-6 (ISBN)
Available from: 2018-09-13 Created: 2018-09-13 Last updated: 2018-10-04. Bibliographically approved
Dima, E., Sjöström, M., Olsson, R., Kjellqvist, M., Litwic, L., Zhang, Z., . . . Flodén, L. (2018). LIFE: A Flexible Testbed For Light Field Evaluation. Paper presented at 2018 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON), Stockholm – Helsinki – Stockholm, 3-5 June 2018. Article ID 8478550.
2018 (English). Conference paper, Published paper (Refereed)
Abstract [en]

Recording and imaging the 3D world has led to the use of light fields. Capturing, distributing and presenting light field data is challenging, and requires an evaluation platform. We define a framework for real-time processing, and present the design and implementation of a light field evaluation system. In order to serve as a testbed, the system is designed to be flexible, scalable, and able to model various end-to-end light field systems. This flexibility is achieved by encapsulating processes and devices in discrete framework systems. The modular capture system supports multiple camera types, general-purpose data processing, and streaming to network interfaces. The cloud system allows for parallel transcoding and distribution of streams. The presentation system encapsulates rendering and display specifics. The real-time ability was tested in a latency measurement; the capture and presentation systems process and stream frames within a 40 ms limit.
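The encapsulation idea above, discrete capture, cloud, and presentation systems chained under a real-time budget, can be sketched minimally. This is an assumption-laden single-frame model (the actual testbed streams continuously over network interfaces), and every name here is hypothetical:

```python
import time

class Stage:
    """One encapsulated framework system (e.g. capture, cloud, presentation)."""

    def __init__(self, name, fn):
        self.name, self.fn = name, fn

    def __call__(self, frame):
        return self.fn(frame)

def run_pipeline(stages, frame, budget_s=0.040):
    """Push one frame through the chained stages; report whether the
    end-to-end processing stayed within the 40 ms latency budget."""
    start = time.perf_counter()
    for stage in stages:
        frame = stage(frame)
    return frame, (time.perf_counter() - start) <= budget_s
```

Because each stage only exposes a call interface, camera types, transcoders, or renderers can be swapped without touching the rest of the chain, which is the flexibility the abstract describes.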

Keywords
Multiview, 3DTV, Light field, Distributed surveillance, 360 video
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:miun:diva-33620 (URN)
000454903900016 (ISI)
2-s2.0-85056147245 (Scopus ID)
978-1-5386-6125-3 (ISBN)
Conference
2018 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON), Stockholm – Helsinki – Stockholm, 3-5 June 2018
Projects
LIFE Project
Funder
Knowledge Foundation, 20140200
Available from: 2018-05-15 Created: 2018-05-15 Last updated: 2019-02-15. Bibliographically approved
Conti, C., Soares, L. D., Nunes, P., Perra, C., Assunção, P. A., Sjöström, M., . . . Jennehag, U. (2018). Light Field Image Compression. In: Assunção, Pedro Amado, Gotchev, Atanas (Eds.), 3D Visual Content Creation, Coding and Delivery (pp. 143-176). Cham: Springer
2018 (English). In: 3D Visual Content Creation, Coding and Delivery / [ed] Assunção, Pedro Amado, Gotchev, Atanas, Cham: Springer, 2018, p. 143-176. Chapter in book (Refereed)
Place, publisher, year, edition, pages
Cham: Springer, 2018
Series
Signals and Communication Technology, ISSN 1860-4862
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:miun:diva-34382 (URN)
978-3-319-77842-6 (ISBN)
Available from: 2018-09-13 Created: 2018-09-13 Last updated: 2018-10-04. Bibliographically approved
Ahmad, W., Palmieri, L., Koch, R. & Sjöström, M. (2018). Matching Light Field Datasets From Plenoptic Cameras 1.0 And 2.0. In: Proceedings of the 2018 3DTV Conference. Paper presented at 2018 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON), Stockholm – Helsinki – Stockholm, 3-5 June 2018. Article ID 8478611.
2018 (English). In: Proceedings of the 2018 3DTV Conference, 2018, article id 8478611. Conference paper, Published paper (Refereed)
Abstract [en]

The capturing of angular and spatial information of a scene using a single camera is made possible by an emerging technology referred to as the plenoptic camera. Together, angular and spatial information enable various post-processing applications, e.g. refocusing, synthetic aperture, super-resolution, and 3D scene reconstruction. In the past, multiple traditional cameras were used to capture the angular and spatial information of the scene. Recently, however, with the advancement in optical technology, plenoptic cameras have been introduced to capture the scene information. In a plenoptic camera, a lenslet array is placed between the main lens and the image sensor, which allows multiplexing of the spatial and angular information onto a single image, also referred to as a plenoptic image. The placement of the lenslet array relative to the main lens and the image sensor results in two different optical designs of a plenoptic camera, referred to as plenoptic 1.0 and plenoptic 2.0. In this work, we present a novel dataset captured with a plenoptic 1.0 camera (Lytro Illum) and a plenoptic 2.0 camera (Raytrix R29) for the same scenes under the same conditions. The dataset provides benchmark content for various research and development activities on plenoptic images.

Keywords
Plenoptic, Light-field, Dataset
Identifiers
urn:nbn:se:miun:diva-33764 (URN)
000454903900022 (ISI)
2-s2.0-85056150148 (Scopus ID)
978-1-5386-6125-3 (ISBN)
Conference
2018 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON), Stockholm – Helsinki – Stockholm, 3-5 June 2018
Available from: 2018-06-13 Created: 2018-06-13 Last updated: 2019-02-15. Bibliographically approved
Dima, E., Sjöström, M. & Olsson, R. (2018). Modeling Depth Uncertainty of Desynchronized Multi-Camera Systems. In: 2017 International Conference on 3D Immersion (IC3D): . Paper presented at 2017 International Conference on 3D Immersion (IC3D 2017), Brussels, Belgium, 11th-12th December 2017. IEEE
2018 (English). In: 2017 International Conference on 3D Immersion (IC3D), IEEE, 2018. Conference paper, Published paper (Refereed)
Abstract [en]

Accurately recording motion from multiple perspectives is relevant for recording and processing immersive multimedia and virtual reality content. However, synchronization errors between multiple cameras limit the precision of scene depth reconstruction and rendering. In order to quantify this limit, a relation between camera desynchronization, camera parameters, and scene element motion has to be identified. In this paper, a parametric ray model describing depth uncertainty is derived and adapted for the pinhole camera model. A two-camera scenario is simulated to investigate the model behavior and how camera synchronization delay, scene element speed, and camera positions affect the system's depth uncertainty. Results reveal a linear relation between synchronization error, element speed, and depth uncertainty. View convergence is shown to affect the mean depth uncertainty by up to a factor of 10. Results also show that depth uncertainty must be assessed on the full set of camera rays instead of a central subset.
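The dependence of depth uncertainty on synchronization error and element speed can be illustrated with a toy parallel two-camera model via the standard relation z = f*b/d. This is a deliberately simplified sketch with assumed names and geometry, not the paper's parametric ray model:

```python
def depth_uncertainty(f_px, baseline_m, depth_m, speed_mps, sync_err_s):
    """Depth error caused by a desynchronized second camera observing a
    point that moves laterally by speed * sync_err between exposures."""
    disparity = f_px * baseline_m / depth_m                # true disparity (px)
    # The lateral world displacement projects to a disparity error (px).
    disparity_err = f_px * (speed_mps * sync_err_s) / depth_m
    biased_depth = f_px * baseline_m / (disparity - disparity_err)
    return abs(biased_depth - depth_m)
```

For small errors the result grows approximately linearly with speed * sync_err, consistent with the linear relation the abstract reports.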

Place, publisher, year, edition, pages
IEEE, 2018
Keywords
Camera synchronization, Synchronization error, Depth estimation error, Multi-camera system
National Category
Signal Processing; Media Engineering
Identifiers
urn:nbn:se:miun:diva-31841 (URN)
10.1109/IC3D.2017.8251891 (DOI)
000427148600001 (ISI)
2-s2.0-85049401578 (Scopus ID)
978-1-5386-4655-7 (ISBN)
Conference
2017 International Conference on 3D Immersion (IC3D 2017), Brussels, Belgium, 11th-12th December 2017
Projects
LIFE project
Funder
Knowledge Foundation, 20140200
Available from: 2017-10-13 Created: 2017-10-13 Last updated: 2018-09-28