We propose a novel, automatic generation process for detail maps that allows the reduction of tiling artifacts in real-time terrain rendering. This is achieved by training a generative adversarial network (GAN) with a single input texture and subsequently using it to synthesize a huge texture spanning the whole terrain. The low-frequency components of the GAN output are extracted, down-scaled and combined with the high-frequency components of the input texture during rendering. This results in a terrain texture that is both highly detailed and non-repetitive, which eliminates the tiling artifacts without decreasing overall image quality. The rendering is efficient regarding both memory consumption and computational costs. Furthermore, it is orthogonal to other techniques for terrain texture improvements such as texture splatting and can directly be combined with them.
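The frequency-separation step described in the abstract can be sketched as follows. This is a minimal grayscale illustration, not the paper's implementation: the choice of a Gaussian blur as the band split, the blur radius `sigma`, and the downscale factor are our assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def split_frequencies(tex, sigma=4.0):
    """Split a grayscale texture into low- and high-frequency bands.

    The cutoff is a Gaussian blur with the (assumed) radius `sigma`;
    the two bands sum exactly back to the input texture.
    """
    low = gaussian_filter(tex, sigma=sigma)
    return low, tex - low

def downscale_low_band(low, factor=4):
    """Store the slowly varying band at reduced resolution.

    This mirrors the abstract's down-scaling of the GAN output's
    low-frequency components to keep memory consumption small.
    """
    return zoom(low, 1.0 / factor, order=1)

def blend(gan_low_small, input_high, factor=4):
    """Combine bands as done at render time in the paper's scheme:
    upsample the GAN's low-frequency band and add the input texture's
    high-frequency detail, yielding a detailed, non-repetitive result.
    """
    gan_low = zoom(gan_low_small, factor, order=1)
    # Crop in case the upsampled size overshoots for non-divisible shapes.
    gan_low = gan_low[:input_high.shape[0], :input_high.shape[1]]
    return np.clip(gan_low + input_high, 0.0, 1.0)
```

In a real renderer the blend would run per-fragment in a shader, with the low-band texture sampled at a much coarser resolution than the tiled detail texture; the NumPy version above only demonstrates the arithmetic of the combination.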
Original language: English (US)
Title of host publication: 2017 International Conference on Image and Vision Computing New Zealand, IVCNZ 2017
Publisher: IEEE Computer Society
Number of pages: 6
State: Published - Jul 3 2018
Event: 2017 International Conference on Image and Vision Computing New Zealand, IVCNZ 2017 - Christchurch, New Zealand
Duration: Dec 4 2017 → Dec 6 2017
Publication series: International Conference Image and Vision Computing New Zealand
Bibliographical note
Funding Information: This work has been funded by ISL and KAUST.
© 2017 IEEE.
ASJC Scopus subject areas
- Computational Theory and Mathematics
- Computer Vision and Pattern Recognition
- Electrical and Electronic Engineering