TY - GEN
T1 - Democratizing rendering for multiple viewers in surround VR systems
AU - Schulze, Jürgen P.
AU - Acevedo-Feliz, Daniel
AU - Mangan, John
AU - Prudhomme, Andrew
AU - Nguyen, Phi Khanh
AU - Weber, Philip P.
N1 - KAUST Repository Item: Exported on 2020-10-01
PY - 2012/3
Y1 - 2012/3
N2 - We present a new approach for rendering multiple users' views in a surround virtual environment without special multi-view hardware. It is based on the idea that different parts of the screen are often viewed by different users, so that each part can be rendered from its viewer's own viewpoint, or at least from a point closer to that viewpoint than traditionally expected. The vast majority of 3D virtual reality systems are designed for one head-tracked user and a number of passive viewers. Only the head-tracked user sees the correct view of the scene; everybody else sees a distorted image. We reduce this problem by algorithmically democratizing the rendering viewpoint among all tracked users. Researchers have proposed solutions for multiple tracked users, but most of them require major changes to the display hardware of the VR system, such as additional projectors or custom VR glasses. Our approach requires no additional hardware beyond the ability to track each participating user. We propose three versions of our multi-viewer algorithm. Each balances image distortion and frame rate differently, making them more or less suitable for certain application scenarios. Our most sophisticated algorithm renders each pixel from its own optimized camera perspective, which depends on all tracked users' head positions and orientations. © 2012 IEEE.
UR - http://hdl.handle.net/10754/564527
UR - http://ieeexplore.ieee.org/document/6184187/
UR - http://www.scopus.com/inward/record.url?scp=84860785254&partnerID=8YFLogxK
U2 - 10.1109/3DUI.2012.6184187
DO - 10.1109/3DUI.2012.6184187
M3 - Conference contribution
SN - 9781467312059
SP - 77
EP - 80
BT - 2012 IEEE Symposium on 3D User Interfaces (3DUI)
PB - Institute of Electrical and Electronics Engineers (IEEE)
ER -