Labels4Free: Unsupervised Segmentation using StyleGAN

Rameen Abdal, Peihao Zhu, Niloy J. Mitra, Peter Wonka

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution



We propose an unsupervised segmentation framework for StyleGAN-generated objects. We build on two main observations. First, the features generated by StyleGAN hold valuable information that can be utilized for training segmentation networks. Second, the foreground and background can often be treated as largely independent and swapped across images to produce plausible composited images. For our solution, we propose to augment the StyleGAN2 generator architecture with a segmentation branch and to split the generator into a foreground and a background network. This enables us to generate soft segmentation masks for the foreground object in an unsupervised fashion. On multiple object classes, we report results comparable to state-of-the-art supervised segmentation networks, while against the best unsupervised segmentation approach we demonstrate a clear improvement in both qualitative and quantitative metrics. Project Page: https:/
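The compositing idea in the abstract — a soft foreground mask blending an independently generated foreground and background — can be sketched as simple alpha compositing. This is an illustrative NumPy sketch, not the paper's implementation: the function name `composite`, the array shapes, and treating the mask as a plain input (rather than the output of a segmentation branch reading StyleGAN feature maps) are all assumptions.

```python
import numpy as np

def composite(fg, bg, alpha):
    """Alpha-composite a foreground over a background.

    fg, bg : float arrays of shape (H, W, 3) with values in [0, 1],
             standing in for the foreground/background network outputs.
    alpha  : soft mask of shape (H, W, 1) in [0, 1]; in the paper this
             would come from the unsupervised segmentation branch.
    """
    return alpha * fg + (1.0 - alpha) * bg
```

Because the foreground and background are treated as largely independent, the same `bg` can be swapped for a background drawn from a different image while keeping `fg` and `alpha` fixed, which is the mechanism the paper exploits to produce plausible composited images.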
Original language: English (US)
Title of host publication: 2021 IEEE/CVF International Conference on Computer Vision (ICCV)
Number of pages: 10
ISBN (Print): 978-1-6654-2813-2
State: Published - 2021

Bibliographical note

KAUST Repository Item: Exported on 2023-03-24
Acknowledged KAUST grant number(s): CRG2017-3426, OSR
Acknowledgements: This work was supported by Adobe and the KAUST Office of Sponsored Research (OSR) under Award No. OSRCRG2017-3426.


