This project aims to improve the viewing experience of 360-degree stereoscopic VR content by exploiting the human visual system (HVS) and the constant field of view characteristic of VR. We rely on binocular vision to fuse the two images displayed by the VR headset into a single stereoscopic mental picture. However, many people have slightly unequal refractive power between their eyes, which can result in a sub-optimal stereo experience. We therefore propose an asymmetric approach that applies different degrees of image detail enhancement to the left-eye and right-eye images. In addition, a modern VR headset lets the user freely change the viewing angle while the field of view stays constant at about 120 degrees, so only a limited portion of the 360-degree image is visible at any moment. To further refine the viewing experience, we propose making the detail enhancement process view-dependent, so that it dynamically adapts to the visible image content.
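To make the two ideas concrete, the CUDA sketch below illustrates them with a generic unsharp mask rather than the project's actual enhancement filter: each eye's image is boosted by its own strength, and enhancement is gated to the currently visible viewport. The kernel names, per-eye strengths, 3x3 box blur, and rectangular-viewport assumption are all illustrative, not the project's implementation.

```cuda
// Minimal sketch of asymmetric, view-dependent detail enhancement.
// Assumptions: single-channel float images, a rectangular viewport in
// image coordinates (a real equirectangular viewport also wraps in
// longitude), and unsharp masking as a stand-in for the actual filter.
#include <cuda_runtime.h>

__global__ void enhanceDetailKernel(const float* src, float* dst,
                                    int width, int height, float strength,
                                    int vx0, int vy0, int vx1, int vy1)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    float pixel = src[y * width + x];

    // View-dependent gate: pixels outside the visible viewport are
    // copied through unchanged.
    if (x < vx0 || x >= vx1 || y < vy0 || y >= vy1) {
        dst[y * width + x] = pixel;
        return;
    }

    // 3x3 box blur as a cheap low-pass estimate of the local base layer.
    float sum = 0.0f;
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx) {
            int sx = min(max(x + dx, 0), width - 1);
            int sy = min(max(y + dy, 0), height - 1);
            sum += src[sy * width + sx];
        }
    float base = sum / 9.0f;

    // Unsharp mask: boost the detail layer (original minus base)
    // by this eye's strength.
    dst[y * width + x] = pixel + strength * (pixel - base);
}

// Host-side launch: the asymmetry is simply a different strength per eye.
void enhanceStereoPair(const float* leftSrc, float* leftDst,
                       const float* rightSrc, float* rightDst,
                       int width, int height,
                       int vx0, int vy0, int vx1, int vy1)
{
    dim3 block(16, 16);
    dim3 grid((width + 15) / 16, (height + 15) / 16);
    // Example per-eye strengths; in practice these would be tuned per user.
    enhanceDetailKernel<<<grid, block>>>(leftSrc, leftDst, width, height,
                                         0.8f, vx0, vy0, vx1, vy1);
    enhanceDetailKernel<<<grid, block>>>(rightSrc, rightDst, width, height,
                                         0.3f, vx0, vy0, vx1, vy1);
    cudaDeviceSynchronize();
}
```

In a running system the viewport rectangle would be recomputed from the headset's yaw and pitch each frame, so the enhanced region tracks the user's gaze across the 360-degree image.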

The GPU image detail enhancement algorithm used in this project was introduced by Mike Wong in 2017 (published in GPU Zen: Advanced Rendering Techniques).

An asymmetrically enhanced 360-degree stereoscopic image sample (original photo from the Insta360 image samples archive)