Abstract
We propose an unsupervised segmentation framework for StyleGAN-generated objects. We build on two main observations. First, the features generated by StyleGAN hold valuable information that can be utilized for training segmentation networks. Second, the foreground and background can often be treated as largely independent and swapped across images to produce plausible composite images. For our solution, we propose to augment the StyleGAN2 generator architecture with a segmentation branch and to split the generator into a foreground and a background network. This enables us to generate soft segmentation masks for the foreground object in an unsupervised fashion. On multiple object classes, we report results comparable to those of state-of-the-art supervised segmentation networks, while against the best unsupervised segmentation approach we demonstrate a clear improvement in both qualitative and quantitative metrics. Project page: https://rameenabdal.github.io/Labels4Free
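To make the compositing idea concrete, below is a minimal PyTorch sketch of the alpha blending the abstract describes: a soft mask combines a foreground image and a background image into one composite. The function name `composite` and the random tensors standing in for the generator and segmentation-branch outputs are illustrative assumptions, not the paper's released code.

```python
import torch

def composite(fg_rgb, bg_rgb, mask_logits):
    """Blend foreground and background images with a soft alpha mask.

    fg_rgb, bg_rgb: (N, 3, H, W) images from foreground/background networks.
    mask_logits:    (N, 1, H, W) raw output of a segmentation branch.
    """
    m = torch.sigmoid(mask_logits)               # soft mask in [0, 1]
    return m * fg_rgb + (1.0 - m) * bg_rgb, m    # composite image and mask

# Toy usage: random tensors stand in for network outputs.
fg = torch.rand(2, 3, 64, 64)        # hypothetical foreground generator output
bg = torch.rand(2, 3, 64, 64)        # hypothetical background generator output
logits = torch.randn(2, 1, 64, 64)   # hypothetical segmentation-branch output
image, mask = composite(fg, bg, logits)
```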
Original language | English (US) |
---|---|
Title of host publication | 2021 IEEE/CVF International Conference on Computer Vision (ICCV) |
Publisher | IEEE |
Pages | 13950-13959 |
Number of pages | 10 |
ISBN (Print) | 978-1-6654-2813-2 |
DOIs | |
State | Published - 2021 |
Bibliographical note
KAUST Repository Item: Exported on 2023-03-24
Acknowledged KAUST grant number(s): CRG2017-3426, OSR
Acknowledgements: This work was supported by Adobe and the KAUST Office of Sponsored Research (OSR) under Award No. OSR-CRG2017-3426.