Abstract
In this work, we present a method for acquiring, storing, and using scene data to enable realistic shading of virtual objects in an augmented reality application. Our method allows for sparse sampling of the environment’s lighting conditions while still delivering convincing shading of the rendered objects. We use common camera parameters, provided by a head-mounted camera, to extract lighting information from the scene and store it in a tree structure that preserves both the locality and the directionality of the data. This makes our approach suitable for augmented reality applications, where the sparse and unpredictable nature of the samples captured by a head-mounted device can be problematic. The construction of the data structure and the shading of virtual objects happen in real time, without requiring high-performance hardware. Our model is designed for augmented reality devices with optical see-through displays; in this work we used Microsoft’s HoloLens 2.
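The abstract does not spell out the exact tree layout, but as a rough illustration of a structure that keeps both the locality and the directionality of sparse lighting samples, the sketch below (Python; all names and the octree choice are assumptions, not the authors' implementation) stores samples in an octree over world space and bins them by direction at the leaves.

```python
# Minimal sketch, assuming an octree over world space whose leaves accumulate
# radiance samples into coarse directional bins, so that sparse, unpredictably
# placed samples can still be averaged per location and direction.
# All class and function names are hypothetical.
import math
from dataclasses import dataclass, field

@dataclass
class LightSample:
    position: tuple   # world-space capture position (x, y, z)
    direction: tuple  # unit direction associated with the sample (x, y, z)
    radiance: tuple   # RGB lighting estimate from the head-mounted camera

def direction_bin(d, n=4):
    """Quantize a unit direction into an n x n latitude/longitude cell."""
    theta = math.acos(max(-1.0, min(1.0, d[2])))
    phi = math.atan2(d[1], d[0]) % (2.0 * math.pi)
    return (min(n - 1, int(theta / math.pi * n)),
            min(n - 1, int(phi / (2.0 * math.pi) * n)))

@dataclass
class OctreeNode:
    center: tuple
    half_size: float
    children: list = field(default_factory=lambda: [None] * 8)
    bins: dict = field(default_factory=dict)  # direction bin -> [R, G, B, count]

    def _child_index(self, p):
        # One bit per axis: which side of the node center the point falls on.
        return (int(p[0] > self.center[0])
                | (int(p[1] > self.center[1]) << 1)
                | (int(p[2] > self.center[2]) << 2))

    def insert(self, s, max_depth=4, depth=0):
        if depth == max_depth:
            # Leaf: accumulate the sample in its directional bin.
            acc = self.bins.setdefault(direction_bin(s.direction), [0.0, 0.0, 0.0, 0])
            for i in range(3):
                acc[i] += s.radiance[i]
            acc[3] += 1
            return
        i = self._child_index(s.position)
        if self.children[i] is None:
            off = self.half_size / 2.0
            child_center = tuple(
                c + (off if (i >> axis) & 1 else -off)
                for axis, c in enumerate(self.center)
            )
            self.children[i] = OctreeNode(child_center, off)
        self.children[i].insert(s, max_depth, depth + 1)

    def query(self, position, direction, max_depth=4):
        """Average radiance stored in the nearest leaf's matching direction bin."""
        node, depth = self, 0
        while depth < max_depth and node.children[node._child_index(position)] is not None:
            node = node.children[node._child_index(position)]
            depth += 1
        acc = node.bins.get(direction_bin(direction))
        if not acc or acc[3] == 0:
            return (0.0, 0.0, 0.0)
        return tuple(c / acc[3] for c in acc[:3])
```

A shading query for a virtual object would then descend to the leaf containing the shading point and average the samples in the matching direction bin, which is one plausible way sparse captures could be turned into a local, direction-dependent lighting estimate; the actual method in the paper may organize and interpolate the data differently.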
Original language | English |
---|---|
Title of host publication | Proceedings of the 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 1: GRAPP, HUCAPP and IVAPP |
Publisher | SciTePress |
Pages | 293-299 |
Number of pages | 7 |
Volume | 1 |
ISBN (Electronic) | 978-989-758-679-8 |
DOIs | |
Publication status | Published - 2024 |
Event | 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications: 19th International Conference on Computer Graphics Theory and Applications - Rome, Italy. Duration: 2024 Feb 27 → 2024 Feb 29. Conference number: 19. https://grapp.scitevents.org/?y=2024 |
Conference
Conference | 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications |
---|---|
Abbreviated title | GRAPP2024 |
Country/Territory | Italy |
City | Rome |
Period | 2024/02/27 → 2024/02/29 |
Internet address | https://grapp.scitevents.org/?y=2024 |
Subject classification (UKÄ)
- Computer Science