An Efficient Refocusing Scheme for Camera-Array Captured Light Field Video for Improved Visual Immersiveness

Date
2022-01-04
Authors
Mehajabin, Nusrat
Yan, Peizhi
Kaur, Supreet
Song, Jingxiang
Pourazad, Mahsa T.
Wang, Yixiao
Tohidypour, Hamid Reza
Nasiopoulos, Panos
Abstract
Light field video technology attempts to acquire human-like visual data, offering unprecedented immersiveness and a viable path for producing high-quality VR content. Refocusing, one of the key properties of light fields and a necessity for mixed reality applications, has been shown to work well for microlens-based cameras; however, light field videos acquired by camera arrays have low angular resolution, so their refocused quality suffers. In this paper, we present an approach to improve the visual quality of refocused content captured by a camera-array-based setup. Increasing the angular resolution with an existing deep learning-based view synthesis method and then refocusing the video with the shift-and-sum refocusing algorithm over-blurs the in-focus region. Our enhancement method targets these blurry pixels and improves their quality through similarity detection and blending. Experimental results show that the proposed approach achieves better refocusing quality than traditional methods.
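The shift-and-sum refocusing mentioned in the abstract works by shifting each sub-aperture view in proportion to its camera's position on the array and a disparity parameter that selects the synthetic focal plane, then averaging the shifted views. The following is a minimal NumPy sketch of that idea, not the authors' implementation; the function name, the integer-pixel shifting (real implementations interpolate sub-pixel shifts), and the grid-coordinate convention are assumptions for illustration.

```python
import numpy as np

def shift_and_sum_refocus(views, positions, disparity):
    """Refocus a light field by shift-and-sum (illustrative sketch).

    views:     (N, H, W) array of grayscale sub-aperture images
    positions: (N, 2) array of (u, v) camera-grid coordinates,
               relative to the central view
    disparity: scalar selecting the synthetic focal plane
    """
    refocused = np.zeros_like(views[0], dtype=np.float64)
    for view, (u, v) in zip(views, positions):
        # Integer-pixel shift with wrap-around for simplicity; a real
        # pipeline would use sub-pixel interpolation and edge handling.
        dy, dx = int(round(v * disparity)), int(round(u * disparity))
        refocused += np.roll(view, shift=(dy, dx), axis=(0, 1))
    return refocused / len(views)
```

Points lying on the chosen focal plane align across the shifted views and stay sharp, while points at other depths are averaged over spatially offset copies and blur; with a sparse camera array this averaging is exactly what produces the over-blurring the paper targets.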
Keywords
New Potentials of Mixed Reality and its Business Impact, deep learning, immersiveness, light field, mixed reality, refocusing
Extent
9 pages
Related To
Proceedings of the 55th Hawaii International Conference on System Sciences
Rights
Attribution-NonCommercial-NoDerivatives 4.0 International