360 Panorama Synthesis from a Sparse Set of Images with Unknown Field of View (1904.03326v4)
Abstract: 360 images represent scenes captured in all possible viewing directions and allow viewers to navigate freely around the scene, thereby providing an immersive experience. In contrast, conventional images represent a scene from a single viewing direction with a small or limited field of view (FOV). As a result, only certain parts of the scene are observed, and valuable information about the surroundings is lost. In this paper, a learning-based approach is proposed that reconstructs the scene in 360$\times$180 from a sparse set of conventional images (typically four). The proposed approach first estimates the FOV of the input images relative to the panorama. The estimated FOV is then used as a prior for synthesizing a high-resolution 360 panoramic output. The proposed method overcomes the difficulty that learning-based approaches face in synthesizing high-resolution images (up to 512$\times$1024). Experimental results demonstrate that the proposed method produces 360 panoramas of reasonable quality. The results also show that the proposed method outperforms the alternative method and generalizes to non-panoramic scenes and images captured by a smartphone camera.
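
The abstract describes a two-stage pipeline: estimate the FOV of each input image, then use that estimate as a prior when synthesizing the 512$\times$1024 equirectangular panorama. Below is a minimal PyTorch sketch of that idea, not the authors' architecture: the layer configurations, the assumed 30-120 degree FOV range, the fixed 0/90/180/270 degree view directions, and the naive canvas placement are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FOVEstimator(nn.Module):
    """Stage 1 (sketch): predict a scalar FOV in degrees for each input image."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(128, 1)

    def forward(self, x):  # x: (B, 3, H, W)
        f = self.features(x).flatten(1)
        # Map to an assumed plausible FOV range of 30-120 degrees.
        return 30.0 + 90.0 * torch.sigmoid(self.head(f)).squeeze(1)


class PanoramaSynthesizer(nn.Module):
    """Stage 2 (sketch): encoder-decoder that fills the unobserved canvas regions."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 64, 3, stride=2, padding=1), nn.ReLU(),   # RGB + visibility mask
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, canvas, mask):
        return self.net(torch.cat([canvas, mask], dim=1))


def place_on_canvas(images, fovs, height=512, width=1024):
    """Naively paste each view onto an equirectangular canvas.

    Each paste's extent is proportional to its estimated FOV; the four views
    are assumed (for illustration) to face 0/90/180/270 degrees.
    """
    b = images.shape[0]
    canvas = torch.zeros(1, 3, height, width)
    mask = torch.zeros(1, 1, height, width)
    for i in range(b):
        w = int(width * fovs[i].item() / 360.0)
        h = int(height * fovs[i].item() / 180.0)
        patch = F.interpolate(images[i:i + 1], size=(h, w),
                              mode="bilinear", align_corners=False)
        x0 = int(width * i / b)                 # view centre column for this camera
        y0 = (height - h) // 2
        x1 = min(x0 + w, width)
        canvas[:, :, y0:y0 + h, x0:x1] = patch[:, :, :, : x1 - x0]
        mask[:, :, y0:y0 + h, x0:x1] = 1.0
    return canvas, mask


if __name__ == "__main__":
    views = torch.rand(4, 3, 256, 256)              # sparse set of 4 input images
    fovs = FOVEstimator()(views)                    # stage 1: FOV estimation
    canvas, mask = place_on_canvas(views, fovs)     # FOV used as prior for placement
    panorama = PanoramaSynthesizer()(canvas, mask)  # stage 2: 512x1024 output
    print(fovs.shape, panorama.shape)               # torch.Size([4]) (1, 3, 512, 1024)
```

The key design point the sketch mirrors is that FOV estimation and panorama synthesis are decoupled: the estimated FOV determines how much of the equirectangular canvas each input covers, and the synthesis network only has to hallucinate the remaining unobserved regions.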