    Three-dimensional (3D) light-field displays have advanced considerably, but capturing dense viewpoints of a real 3D scene remains a bottleneck. Virtual views can be generated by unsupervised networks, but the quality of different views is inconsistent because the networks are trained separately on each posed view. Here, a virtual view synthesis method for 3D light-field display based on scene tower blending is presented. It synthesizes high-quality virtual views with correct occlusions by blending all tower results, so that dense viewpoints with smooth motion parallax can be provided on the 3D light-field display. Posed views are combinatorially fed into diverse unsupervised CNNs to predict their respective input-view towers, and towers of the same viewpoint are fused together. All posed-view towers are blended into a scene color tower and a scene selection tower, so that the 3D scene distribution at different depth planes can be accurately estimated. The blended scene towers are soft-projected to synthesize virtual views with correct occlusions, and a denoising network improves the image quality of the final synthetic views. Experimental results demonstrate the validity of the proposed method, which shows outstanding performance under various disparities: the PSNR of the virtual views is about 30 dB, and the SSIM is above 0.91. We believe this view synthesis method will be helpful for future applications of 3D light-field displays.
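
    The soft-projection step can be pictured as plane-wise compositing of the blended towers: the selection tower acts as a soft per-pixel choice of depth plane, and the color tower supplies the appearance at each plane. The sketch below is a simplified illustration only, not the authors' implementation; the tower shapes, the linear per-plane disparity model, and the function soft_project are assumptions made for the example.

    # Minimal sketch (assumed, not the paper's code): soft-projection of a
    # blended scene tower into one virtual view.
    import numpy as np

    def soft_project(color_tower, selection_tower, baseline):
        """Render a virtual view from a scene tower.

        color_tower     : (D, H, W, 3) per-plane RGB estimates
        selection_tower : (D, H, W)    per-plane selection weights (softmax over D)
        baseline        : horizontal viewpoint offset in pixels at the nearest plane
        """
        D, H, W, _ = color_tower.shape
        view = np.zeros((H, W, 3), dtype=np.float32)
        weight = np.zeros((H, W, 1), dtype=np.float32)

        for d in range(D):
            # Assumption: disparity falls off linearly with plane index, so the
            # nearest plane (d = 0) shifts the most and the farthest not at all.
            shift = int(round(baseline * (1.0 - d / max(D - 1, 1))))
            c = np.roll(color_tower[d], shift, axis=1)      # shift plane colors
            s = np.roll(selection_tower[d], shift, axis=1)  # shift plane weights

            # Selection weights softly pick the correct depth plane per pixel,
            # which is how occlusions are resolved in this sketch.
            view += s[..., None] * c
            weight += s[..., None]

        return view / np.clip(weight, 1e-6, None)

    # Toy usage: a 4-plane tower of a 64x64 scene rendered 3 pixels to the right.
    rng = np.random.default_rng(0)
    color = rng.random((4, 64, 64, 3), dtype=np.float32)
    select = rng.random((4, 64, 64)).astype(np.float32)
    select /= select.sum(axis=0, keepdims=True)             # normalize over depth
    virtual_view = soft_project(color, select, baseline=3)
    print(virtual_view.shape)                               # (64, 64, 3)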

    Citation

    Duo Chen, Xinzhu Sang, Peng Wang, Xunbo Yu, Xin Gao, Binbin Yan, Huachun Wang, Shuai Qi, Xiaoqian Ye. Virtual view synthesis for 3D light-field display based on scene tower blending. Optics Express. 2021 Mar 1;29(5):7866-7884.


    PMID: 33726280
