Abstract
Coarse building mass models are now routinely generated at scales ranging from individual buildings to whole cities. Such models can be abstracted from raw measurements, generated procedurally, or created manually. However, they typically lack any meaningful geometric or texture details, making them unsuitable for direct display. We introduce the problem of automatically and realistically decorating such models by adding semantically consistent geometric details and textures. Building on the recent success of generative adversarial networks (GANs), we propose FrankenGAN, a cascade of GANs that creates plausible details across multiple scales over large neighborhoods. The various GANs are synchronized to produce consistent style distributions over buildings and neighborhoods. We provide the user with direct control over the variability of the output: the style can be specified interactively via images, and style-adapted sliders control how much the generated results vary around that style. We test our system on several large-scale examples. The generated outputs are qualitatively evaluated via a set of perceptual studies and are found to be realistic, semantically plausible, and consistent in style.
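To make the cascade idea above concrete, the following is a minimal illustrative sketch in PyTorch, not the authors' implementation: the `StageGenerator` module and `run_cascade` helper are hypothetical names, and the sketch only shows how a single per-building style code could condition every stage of a coarse-to-fine generator chain so that the stages stay synchronized in style.

```python
# Illustrative sketch only: a cascade of conditional generators that share a
# per-building style latent, so details generated at different scales remain
# consistent. Module and function names are hypothetical, not from the paper.
import torch
import torch.nn as nn

class StageGenerator(nn.Module):
    """Toy conditional generator: maps (coarse input, style code) -> refined map."""
    def __init__(self, in_ch, out_ch, style_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch + style_dim, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, out_ch, 3, padding=1),
        )

    def forward(self, x, style):
        # Broadcast the style code over the spatial grid and concatenate it
        # with the input map, so the generator is conditioned on the style.
        s = style[:, :, None, None].expand(-1, -1, x.shape[2], x.shape[3])
        return self.net(torch.cat([x, s], dim=1))

def run_cascade(coarse_map, style, stages):
    """Run coarse-to-fine stages, feeding each stage's output to the next
    while reusing the same style code at every stage."""
    x, outputs = coarse_map, []
    for stage in stages:
        x = stage(x, style)
        outputs.append(x)
    return outputs

if __name__ == "__main__":
    style = torch.randn(1, 64)            # one shared style vector per building
    coarse = torch.randn(1, 1, 128, 128)  # placeholder for a coarse mass-model rendering
    stages = [StageGenerator(1, 3), StageGenerator(3, 3), StageGenerator(3, 3)]
    print([o.shape for o in run_cascade(coarse, style, stages)])
```

In the actual system the successive stages operate on semantically distinct quantities (e.g., facade layouts, window details, and textures) rather than generic image tensors; the sketch collapses these into placeholder maps to keep the example self-contained.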
Original language | English (US) |
---|---|
Pages (from-to) | 1-14 |
Number of pages | 14 |
Journal | ACM Transactions on Graphics |
Volume | 37 |
Issue number | 6 |
DOIs | |
State | Published - Nov 28 2018 |
Bibliographical note
KAUST Repository Item: Exported on 2020-10-01
Acknowledged KAUST grant number(s): OSR-2015-CCF-2533, OSR-CRG2017-3426
Acknowledgements: This project was supported by an ERC Starting Grant (SmartGeometry StG-2013-335373), KAUST-UCL Grant (OSR-2015-CCF-2533), ERC PoC Grant (SemanticCity), the KAUST Office of Sponsored Research (OSR-CRG2017-3426), Open3D Project (EPSRC Grant EP/M013685/1), and a Google Faculty Award (UrbanPlan).