Sjöström, Mårten
Publications (10 of 122)
Ahmad, W., Ghafoor, M., Tariq, S. A., Hassan, A., Sjöström, M. & Olsson, R. (2019). Computationally Efficient Light Field Image Compression Using a Multiview HEVC Framework. IEEE Access, 7, 143002-143014
2019 (English). In: IEEE Access, E-ISSN 2169-3536, Vol. 7, pp. 143002-143014. Journal article (Peer reviewed). Published
Abstract [en]

The acquisition of the spatial and angular information of a scene using light field (LF) technologies supplements a wide range of post-processing applications, such as scene reconstruction, refocusing, virtual view synthesis, and so forth. The additional angular information possessed by LF data increases the size of the overall data captured while offering the same spatial resolution. The main contributor to the size of captured data (i.e., angular information) contains a high correlation that is exploited by state-of-the-art video encoders by treating the LF as a pseudo video sequence (PVS). The interpretation of LF as a single PVS restricts the encoding scheme to only utilize a single-dimensional angular correlation present in the LF data. In this paper, we present an LF compression framework that efficiently exploits the spatial and angular correlation using a multiview extension of high-efficiency video coding (MV-HEVC). The input LF views are converted into multiple PVSs and are organized hierarchically. The rate-allocation scheme takes into account the assigned organization of frames and distributes quality/bits among them accordingly. Subsequently, the reference picture selection scheme prioritizes the reference frames based on the assigned quality. The proposed compression scheme is evaluated by following the common test conditions set by JPEG Pleno. The proposed scheme performs 0.75 dB better compared to state-of-the-art compression schemes and 2.5 dB better compared to the x265-based JPEG Pleno anchor scheme. Moreover, an optimized motion-search scheme is proposed in the framework that reduces the computational complexity (in terms of the sum of absolute difference [SAD] computations) of motion estimation by up to 87% with a negligible loss in visual quality (approximately 0.05 dB).
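
To make the hierarchical organization and rate allocation described above concrete, the following Python sketch shows one way such a scheme could look. It is a minimal illustration under assumed names, hierarchy, and QP offsets, not the authors' implementation.

```python
# Illustrative sketch, not the authors' code: organize an N x N light field
# into one pseudo video sequence (PVS) per row and derive a per-frame QP from
# an assumed predictor level, so frames used as references receive more bits.

def views_to_pvs(n):
    """Return N pseudo video sequences, each a list of (row, col) view indices."""
    return [[(row, col) for col in range(n)] for row in range(n)]

def predictor_level(row, col, n):
    """Assumed hierarchy: the central view's central frame is the base
    (level 0); frames sharing its row or column are level 1; the rest level 2."""
    c = n // 2
    if row == c and col == c:
        return 0
    if row == c or col == c:
        return 1
    return 2

def frame_qp(base_qp, level, step=2):
    """Hierarchical rate allocation: lower-priority frames get a higher QP."""
    return base_qp + step * level

# Example: a 13 x 13 grid of views extracted from a lenslet capture
pvs = views_to_pvs(13)
qp_map = {view: frame_qp(30, predictor_level(*view, 13))
          for row in pvs for view in row}
```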

Keywords
Compression, light field, MV-HEVC, plenoptic
HSV category
Identifiers
urn:nbn:se:miun:diva-37489 (URN), 10.1109/ACCESS.2019.2944765 (DOI)
Available from: 2019-10-07. Created: 2019-10-07. Last updated: 2019-10-11. Bibliographically checked.
Li, Y. & Sjöström, M. (2019). Depth-Assisted Demosaicing for Light Field Data in Layered Object Space. In: 2019 IEEE International Conference on Image Processing (ICIP). Paper presented at 2019 IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan, 22-25 September, 2019 (pp. 3746-3750). IEEE
2019 (English). In: 2019 IEEE International Conference on Image Processing (ICIP), IEEE, 2019, pp. 3746-3750. Conference paper, Published paper (Peer reviewed)
Abstract [en]

Light field technology, which emerged as a solution to the increasing demands of visually immersive experience, has shown its extraordinary potential for scene content representation and reconstruction. Unlike conventional photography that maps the 3D scenery onto a 2D plane by a projective transformation, light field preserves both the spatial and angular information, enabling further processing steps such as computational refocusing and image-based rendering. However, there are still gaps that have barely been studied, such as the light field demosaicing process. In this paper, we propose a depth-assisted demosaicing method for light field data. First, we exploit the sampling geometry of the light field data with respect to the scene content using the ray-tracing technique and develop a sampling model of light field capture. Then we carry out the demosaicing process in a layered object space with object-space sampling adjacencies rather than pixel placement. Finally, we compare our results with state-of-the-art approaches and discuss the potential research directions of the proposed sampling model to show the significance of our approach.
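
As a rough illustration of demosaicing in a layered object space, the sketch below interpolates missing colour samples only from pixels assigned (via depth) to the same layer. The inputs, window size, and normalized-averaging strategy are assumptions for illustration, not the paper's method.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def layered_demosaic_channel(raw, known, layers, n_layers, size=5):
    """Toy per-channel demosaic: for each depth layer, average the known
    samples of that layer in a local window (normalized convolution) and
    fill the layer's missing pixels from that estimate.
    raw:    2D array of raw sensor values for one colour channel
    known:  boolean mask of pixels where this channel was actually sampled
    layers: integer depth-layer label per pixel (assumed given by depth)"""
    out = np.zeros_like(raw, dtype=float)
    for l in range(n_layers):
        w = ((layers == l) & known).astype(float)
        num = uniform_filter(raw * w, size=size)
        den = uniform_filter(w, size=size)
        est = np.divide(num, den, out=np.zeros_like(num), where=den > 1e-6)
        out[layers == l] = est[layers == l]
    return out
```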

Place, publisher, year, edition, pages
IEEE, 2019
Keywords
Lenses, Cameras, Image color analysis, Three-dimensional displays, Microoptics, Interpolation, Two dimensional displays, Light field, demosaicing, object space, ray-tracing technique
HSV category
Identifiers
urn:nbn:se:miun:diva-37690 (URN), 10.1109/ICIP.2019.8803441 (DOI), 978-1-5386-6249-6 (ISBN)
Conference
2019 IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan, 22-25 September, 2019
Available from: 2019-11-15. Created: 2019-11-15. Last updated: 2019-11-15. Bibliographically checked.
Brunnström, K., Dima, E., Andersson, M., Sjöström, M., Qureshi, T. & Johanson, M. (2019). Quality of Experience of hand controller latency in a Virtual Reality simulator. In: Damon Chandler, Mark McCourt and Jeffrey Mulligan (Eds.), Human Vision and Electronic Imaging 2019. Paper presented at Human Vision and Electronic Imaging 2019. Springfield, VA, United States, Article ID 3068450.
2019 (English). In: Human Vision and Electronic Imaging 2019 / [ed] Damon Chandler, Mark McCourt and Jeffrey Mulligan, Springfield, VA, United States, 2019, Article ID 3068450. Conference paper, Published paper (Peer reviewed)
Abstract [en]

In this study, we investigate a VR simulator of a forestry crane used for loading logs onto a truck, mainly looking at Quality of Experience (QoE) aspects that may be relevant for task completion, but also whether there are any discomfort-related symptoms experienced during task execution. A QoE test has been designed both to capture the general subjective experience of using the simulator and to study task performance. Moreover, a specific focus has been to study the effects of latency on the subjective experience, with regard to delays in the crane control interface. A formal subjective study has been performed in which we added controlled delays to the hand controller (joystick) signals. The added delays ranged from 0 ms to 800 ms. We found no significant effects of delays up to 200 ms on task performance or on any of the rating scales. A significant negative effect was found for 800 ms of added delay. The symptoms reported in the Simulator Sickness Questionnaire (SSQ) were significantly higher for all the symptom groups, but a majority of the participants reported only slight symptoms. Two out of thirty test persons stopped the test before finishing due to their symptoms.
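
The controlled delays described in the abstract can be modelled with a simple delay line on the controller samples. The sketch below is one plausible realization, assuming a fixed sampling rate; it is not taken from the study's software.

```python
from collections import deque

class DelayLine:
    """Delay a stream of hand-controller samples by a fixed number of
    milliseconds, assuming samples arrive at a constant rate."""
    def __init__(self, delay_ms, sample_rate_hz=100, neutral=0.0):
        n = round(delay_ms * sample_rate_hz / 1000)
        self.buf = deque([neutral] * n)  # pre-filled so output starts neutral

    def process(self, sample):
        if not self.buf:
            return sample  # the 0 ms condition: pass through unchanged
        self.buf.append(sample)
        return self.buf.popleft()

# e.g. the 200 ms condition at an assumed 100 Hz controller poll rate
delayed = DelayLine(delay_ms=200)
```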

Place, publisher, year, edition, pages
Springfield, VA, United States, 2019
Series
Electronic Imaging, ISSN 2470-1173
Keywords
Quality of Experience, Virtual Reality, Simulator, QoE, Delay
HSV category
Identifiers
urn:nbn:se:miun:diva-35609 (URN)
Conference
Human Vision and Electronic Imaging 2019
Research funder
Knowledge Foundation, 20160194
Available from: 2019-02-08. Created: 2019-02-08. Last updated: 2019-10-11. Bibliographically checked.
Dima, E., Brunnström, K., Sjöström, M., Andersson, M., Edlund, J., Johanson, M. & Qureshi, T. (2019). View Position Impact on QoE in an Immersive Telepresence System for Remote Operation. In: 2019 Eleventh International Conference on Quality of Multimedia Experience (QoMEX). Paper presented at Quality of Multimedia Experience (QoMEX), Berlin, Germany, 5-7 June 2019 (pp. 1-3). IEEE
2019 (English). In: 2019 Eleventh International Conference on Quality of Multimedia Experience (QoMEX), IEEE, 2019, pp. 1-3. Conference paper, Published paper (Peer reviewed)
Abstract [en]

In this paper, we investigate how different viewing positions affect a user's Quality of Experience (QoE) and performance in an immersive telepresence system. A QoE experiment has been conducted with 27 participants to assess the general subjective experience and the performance of remotely operating a toy excavator. Two view positions have been tested, an overhead and a ground-level view, which encourage reliance on stereoscopic depth cues to different extents for accurate operation. Results demonstrate a significant difference between ground and overhead views: the ground view increased the perceived difficulty of the task, whereas the overhead view increased the perceived accomplishment as well as the objective performance of the task. The participants also rated the overhead view as significantly more helpful.

Place, publisher, year, edition, pages
IEEE, 2019
Keywords
quality of experience, augmented telepresence, head mounted display, viewpoint, remote operation, camera view
HSV category
Identifiers
urn:nbn:se:miun:diva-36256 (URN), 10.1109/QoMEX.2019.8743147 (DOI), 000482562000001 (ISI), 2-s2.0-85068638935 (Scopus ID), 978-1-5386-8212-8 (ISBN)
Conference
Quality of Multimedia Experience (QoMEX), Berlin, Germany, 5-7 June 2019
Research funder
Knowledge Foundation, 20160194
Available from: 2019-06-10. Created: 2019-06-10. Last updated: 2019-10-16. Bibliographically checked.
Li, Y., Olsson, R. & Sjöström, M. (2018). An analysis of demosaicing for plenoptic capture based on ray optics. In: Proceedings of 3DTV Conference 2018. Paper presented at 3D at any scale and any perspective, 3-5 June 2018, Stockholm – Helsinki – Stockholm, Article ID 8478476.
2018 (English). In: Proceedings of 3DTV Conference 2018, 2018, Article ID 8478476. Conference paper, Published paper (Peer reviewed)
Abstract [en]

The plenoptic camera is gaining more and more attention as it captures the 4D light field of a scene with a single shot and enables a wide range of post-processing applications. However, the pre-processing steps for captured raw data, such as demosaicing, have been overlooked. Most existing decoding pipelines for plenoptic cameras still apply demosaicing schemes which were developed for conventional cameras. In this paper, we analyze the sampling pattern of microlens-based plenoptic cameras by ray-tracing techniques and ray phase space analysis. The goal of this work is to demonstrate guidelines and principles for demosaicing plenoptic captures by taking the unique microlens array design into account. We show that the sampling of the plenoptic camera behaves differently from that of a conventional camera and that the desired demosaicing scheme is depth-dependent.

Keywords
Light field, plenoptic camera, depth, image demosaicing
HSV category
Identifiers
urn:nbn:se:miun:diva-33618 (URN), 10.1109/3DTV.2018.8478476 (DOI), 000454903900008 (ISI), 2-s2.0-85056161198 (Scopus ID), 978-1-5386-6125-3 (ISBN)
Conference
3D at any scale and any perspective, 3-5 June 2018, Stockholm – Helsinki – Stockholm
Available from: 2018-05-15. Created: 2018-05-15. Last updated: 2019-02-15. Bibliographically checked.
Li, Y., Scrofani, G., Sjöström, M. & Martinez-Corral, M. (2018). Area-Based Depth Estimation for Monochromatic Feature-Sparse Orthographic Capture. In: 2018 26th European Signal Processing Conference (EUSIPCO). Paper presented at EUSIPCO 2018, 26th European Signal Processing Conference, Rome, Italy, September 3-7, 2018 (pp. 206-210). IEEE conference proceedings, Article ID 8553336.
2018 (English). In: 2018 26th European Signal Processing Conference (EUSIPCO), IEEE conference proceedings, 2018, pp. 206-210, Article ID 8553336. Conference paper, Published paper (Peer reviewed)
Abstract [en]

With the rapid development of light field technology, depth estimation has been highlighted as one of the critical problems in the field, and a number of approaches have been proposed to extract the depth of the scene. However, depth estimation by stereo matching becomes difficult and unreliable when the captured images lack both color and feature information. In this paper, we propose a scheme that extracts robust depth from monochromatic, feature-sparse scenes recorded in orthographic sub-aperture images. Unlike approaches which rely on the rich color and texture information across the sub-aperture views, our approach is based on depth-from-focus techniques. First, we superimpose shifted sub-aperture images on top of an arbitrarily chosen central image. To focus on different depths, the shift amount is varied based on the micro-lens array properties. Next, an area-based depth estimation approach is applied to find the best match among the focal stack and generate the dense depth map. This process is repeated for each sub-aperture image. Finally, occlusions are handled by merging depth maps generated from different central images, followed by a voting process. Results show that the proposed scheme is more suitable than conventional depth estimation approaches in the context of orthographic captures that have insufficient color and feature information, such as microscopic fluorescence imaging.
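
A compressed reading of that pipeline might look like the following sketch: shift each sub-aperture view according to a disparity hypothesis, score each hypothesis by local intensity variance across the stack (low variance indicates focus), and pick the best hypothesis per pixel. The focus measure, window size, and function names are assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift, uniform_filter

def depth_from_focus(views, offsets, disparities, window=7):
    """views: list of 2D sub-aperture images; offsets: (dy, dx) grid position
    of each view relative to the chosen central view; disparities: candidate
    per-view shift scalings, one per depth hypothesis."""
    costs = []
    for d in disparities:
        stack = np.stack([nd_shift(v, (dy * d, dx * d), order=1)
                          for v, (dy, dx) in zip(views, offsets)])
        var = stack.var(axis=0)                         # focus measure per pixel
        costs.append(uniform_filter(var, size=window))  # area-based aggregation
    return np.argmin(np.stack(costs), axis=0)           # best hypothesis index
```

Repeating this with different central views and merging the resulting maps by voting would mirror the occlusion handling the abstract mentions.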

Place, publisher, year, edition, pages
IEEE conference proceedings, 2018
Keywords
Depth estimation, integral imaging, orthographic views, depth from focus
HSV category
Identifiers
urn:nbn:se:miun:diva-34418 (URN), 000455614900042 (ISI), 2-s2.0-85059811493 (Scopus ID)
Conference
EUSIPCO 2018, 26th European Signal Processing Conference, Rome, Italy, September 3-7, 2018
Available from: 2018-09-14. Created: 2018-09-14. Last updated: 2019-03-19. Bibliographically checked.
Ahmad, W., Sjöström, M. & Olsson, R. (2018). Compression scheme for sparsely sampled light field data based on pseudo multi-view sequences. In: Optics, Photonics, and Digital Technologies for Imaging Applications V, Proceedings of SPIE - The International Society for Optical Engineering. Paper presented at SPIE Photonics Europe 2018, Strasbourg, France, 22-26 April 2018. SPIE - International Society for Optical Engineering, 10679, Article ID 106790M.
2018 (English). In: Optics, Photonics, and Digital Technologies for Imaging Applications V, Proceedings of SPIE - The International Society for Optical Engineering, SPIE - International Society for Optical Engineering, 2018, Vol. 10679, Article ID 106790M. Conference paper, Published paper (Peer reviewed)
Abstract [en]

With the advent of light field acquisition technologies, the captured information of the scene is enriched by having both angular and spatial information. The captured information provides additional capabilities in the post-processing stage, e.g. refocusing, 3D scene reconstruction, synthetic aperture, etc. Light field capturing devices are classified into two categories: in the first category, a single plenoptic camera is used to capture a densely sampled light field, and in the second category, multiple traditional cameras are used to capture a sparsely sampled light field. In both cases, the size of the captured data increases with the additional angular information. The recent call for proposals related to compression of light field data issued by JPEG, also called "JPEG Pleno", reflects the need for a new and efficient light field compression solution. In this paper, we propose a compression solution for sparsely sampled light field data. In a multi-camera system, each view depicts the scene from a single perspective. We propose to interpret each single view as a frame of a pseudo video sequence. In this way, the complete MxN views of a multi-camera system are treated as M pseudo video sequences, where each pseudo video sequence contains N frames. The central pseudo video sequence is taken as the base view, and the first frame in all the pseudo video sequences is taken as the base Picture Order Count (POC). The frame contained in the base view and base POC is labeled as the base frame. The remaining frames are divided into three predictor levels. Frames placed in each successive level can take prediction from previously encoded frames. However, the frames assigned the last prediction level are not used for prediction of other frames. Moreover, the rate allocation for each frame is performed by taking into account its predictor level, its frame distance, and its view-wise decoding distance relative to the base frame. The multi-view extension of high-efficiency video coding (MV-HEVC) is used to compress the pseudo multi-view sequences. The MV-HEVC compression standard enables the frames to take prediction in both directions (horizontal and vertical), and MV-HEVC parameters are used to implement the proposed 2D prediction and rate allocation scheme. A subset of four light field images from the Stanford dataset is compressed using the proposed compression scheme at four bitrates in order to cover the low- to high-bitrate scenarios. The comparison is made with the state-of-the-art reference encoder HEVC and its real-time implementation x265. The 17x17 grid is converted into a single pseudo sequence of 289 frames by following the order explained in the JPEG Pleno call for proposals and given as input to both reference schemes. The rate-distortion analysis shows that the proposed compression scheme outperforms both reference schemes in all tested bitrate scenarios for all test images. The average BD-PSNR gain is 1.36 dB over HEVC and 2.15 dB over x265.
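
The frame organization described above can be summarized in a short sketch: an MxN grid becomes M pseudo video sequences of N frames, with the central sequence as the base view and the first frame of each sequence at the base POC. The rule splitting the remaining frames between levels 2 and 3 below is one plausible reading of the text, not the exact scheme.

```python
def organize_views(m, n):
    """Label each (view, poc) frame of an m x n grid with a predictor level:
    0 = base frame, 1 = frames sharing the base view or base POC,
    2 = intermediate references, 3 = leaves never used for prediction.
    The level 2 vs 3 split here is an assumption for illustration."""
    base_view, base_poc = m // 2, 0
    levels = {}
    for view in range(m):
        for poc in range(n):
            if (view, poc) == (base_view, base_poc):
                levels[(view, poc)] = 0
            elif view == base_view or poc == base_poc:
                levels[(view, poc)] = 1
            elif view % 2 == 0 and poc % 2 == 0:
                levels[(view, poc)] = 2
            else:
                levels[(view, poc)] = 3
    return levels

# e.g. the 17 x 17 Stanford grid mentioned in the abstract
levels = organize_views(17, 17)
```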

Place, publisher, year, edition, pages
SPIE - International Society for Optical Engineering, 2018
Series
Proceedings of SPIE, ISSN 0277-786X, E-ISSN 1996-756X
Keywords
Light field, MV-HEVC, Compression, Plenoptic, Multi-Camera
HSV category
Identifiers
urn:nbn:se:miun:diva-33352 (URN), 10.1117/12.2315597 (DOI), 000452663000017 (ISI), 2-s2.0-85052527607 (Scopus ID)
Conference
SPIE Photonics Europe 2018, Strasbourg, France, 22-26 April 2018
Available from: 2018-03-26. Created: 2018-03-26. Last updated: 2019-01-08. Bibliographically checked.
Boström, L., Sjöström, M., Karlsson, H., Sundgren, M., Andersson, M., Olsson, R. & Åhlander, J. (2018). Digital visualisering i skolan: Mittuniversitetets slutrapport från förstudien. Sundsvall: Mittuniversitetet
2018 (Swedish). Report (Other academic)
Abstract [sv]

The purpose of this study has been twofold: to test alternative teaching methods using a digital teaching aid in mathematics in a quasi-experimental study, and to apply user-experience methods to interactive visualisations, thereby increasing knowledge of how perceived quality depends on the technology used. The pilot study also focuses on several pressing areas in school development, both regionally and nationally, as well as on important aspects of the link between technology, pedagogy, and evaluation methods within "the technical part". The former concerns declining mathematics results in schools, practice-based school research, strengthened digital competence, visualisation and learning, and research on visualisation and evaluation. The latter answers questions about which technical solutions have been used previously and for what purpose they were created, and about how visualisations have been evaluated according to textbooks and in the research literature.

Regarding the pupils' results, one of the main research questions of the study, we found no significant differences between traditional teaching and teaching with the visualisation teaching aid (3D). Concerning pupils' attitudes towards the mathematics unit, the attitude improved significantly in the control group in year 6, but not in year 8. Regarding girls' and boys' results and attitudes, we can note that the girls in both classes had better prior knowledge than the boys, and that in year 6 the girls in the control group were more positive towards the mathematics unit than the boys. Beyond this, we could not discern any significant differences. Other important findings were that the test design was not optimal and that the time of day at which the test was carried out matters considerably. The results of the qualitative analysis point to positive attitudes and behaviours among the pupils when working with the visual teaching aid. The pupils' collaboration and communication improved during the lessons. Furthermore, the teachers noted that the 3D teaching aid offered greater opportunities to stimulate several senses during the learning process. A clear conclusion is that the 3D teaching aid is an important complement in teaching, but cannot be used entirely on its own.

We can align ourselves neither with the researchers who consider 3D visualisation superior as a teaching aid for pupils' results, nor with those who warn of its effects on pupils' cognitive overload. Our results are more in line with the conclusions drawn by Skolforskningsinstitutet (2017), namely that teaching with digital aids in mathematics can have positive effects, but that equally effective teaching could possibly be designed in other ways. On the other hand, the results of our study point to a number of disruptive factors that may have affected the possible results, and to the need for good technology and well-developed software.

In the study, we analysed the results using two overarching frameworks for integrating technology support in learning, SAMR and TPACK. The former framework contributed a taxonomy for discussing how well the possibilities of the technology were exploited by the teaching aid and in the learning activities; the latter supported a discussion of the didactic questions with a focus on the role of the technology. Both aspects are highly topical given the increasing digitalisation of schools.

Based on previous research and this pilot study, we understand that it is important to design the research methods carefully. Randomisation of groups would be desirable. Performance measures can also be difficult to choose. Tests in which people evaluate usability and user experience (UX), based on both qualitative and quantitative methods, are important for the actual use of the technology, but further evaluations are needed to link the technology and the visualisation to the quality of learning and teaching. Several methods are thus needed, and collaboration between different subjects and disciplines becomes important.

Place, publisher, year, edition, pages
Sundsvall: Mittuniversitetet, 2018. p. 60
HSV category
Identifiers
urn:nbn:se:miun:diva-35376 (URN)
Available from: 2018-12-31. Created: 2018-12-31. Last updated: 2019-01-07. Bibliographically checked.
Domanski, M., Grajek, T., Conti, C., Debono, C. J., de Faria, S. M. M., Kovacs, P., . . . Stankiewicz, O. (2018). Emerging Imaging Technologies: Trends and Challenges. In: Assunção, Pedro Amado, Gotchev, Atanas (Eds.), 3D Visual Content Creation, Coding and Delivery (pp. 5-39). Cham: Springer
2018 (English). In: 3D Visual Content Creation, Coding and Delivery / [ed] Assunção, Pedro Amado, Gotchev, Atanas, Cham: Springer, 2018, pp. 5-39. Book chapter (Peer reviewed)
Place, publisher, year, edition, pages
Cham: Springer, 2018
Series
Signals and Communication Technology, ISSN 1860-4862
HSV category
Identifiers
urn:nbn:se:miun:diva-34379 (URN), 2-s2.0-85063158498 (Scopus ID), 978-3-319-77842-6 (ISBN)
Available from: 2018-09-13. Created: 2018-09-13. Last updated: 2019-05-22. Bibliographically checked.
Dima, E., Sjöström, M., Olsson, R., Kjellqvist, M., Litwic, L., Zhang, Z., . . . Flodén, L. (2018). LIFE: A Flexible Testbed For Light Field Evaluation. Paper presented at 2018 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON), Stockholm – Helsinki – Stockholm, 3-5 June 2018, Article ID 8478550.
2018 (English). Conference paper, Published paper (Peer reviewed)
Abstract [en]

Recording and imaging the 3D world has led to the use of light fields. Capturing, distributing and presenting light field data is challenging, and requires an evaluation platform. We define a framework for real-time processing, and present the design and implementation of a light field evaluation system. In order to serve as a testbed, the system is designed to be flexible, scalable, and able to model various end-to-end light field systems. This flexibility is achieved by encapsulating processes and devices in discrete framework systems. The modular capture system supports multiple camera types, general-purpose data processing, and streaming to network interfaces. The cloud system allows for parallel transcoding and distribution of streams. The presentation system encapsulates rendering and display specifics. The real-time ability was tested in a latency measurement; the capture and presentation systems process and stream frames within a 40 ms limit.
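
As an illustration of the kind of end-to-end check the 40 ms figure implies, the sketch below times frames through placeholder capture, processing, and presentation stages. The stage callables and the budget handling are assumptions, not the testbed's actual measurement code.

```python
import time

def latency_check(capture, process, present, n_frames=100, budget_s=0.040):
    """Run n_frames through a capture -> process -> present chain and
    return the fraction of frames that exceeded the latency budget."""
    over = 0
    for _ in range(n_frames):
        t0 = time.perf_counter()
        present(process(capture()))
        if time.perf_counter() - t0 > budget_s:
            over += 1
    return over / n_frames

# e.g. with stub stages standing in for real camera/render components
frac = latency_check(lambda: b"frame", lambda f: f, lambda f: None)
```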

Keywords
Multiview, 3DTV, Light field, Distributed surveillance, 360 video
HSV category
Identifiers
urn:nbn:se:miun:diva-33620 (URN), 000454903900016 (ISI), 2-s2.0-85056147245 (Scopus ID), 978-1-5386-6125-3 (ISBN)
Conference
2018 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON), Stockholm – Helsinki – Stockholm, 3-5 June 2018
Projects
LIFE Project
Research funder
Knowledge Foundation, 20140200
Available from: 2018-05-15. Created: 2018-05-15. Last updated: 2019-02-15. Bibliographically checked.