TY - JOUR
T1 - Investigating object compositionality in Generative Adversarial Networks
AU - van Steenkiste, Sjoerd
AU - Kurach, Karol
AU - Schmidhuber, Jürgen
AU - Gelly, Sylvain
N1 - Generated from Scopus record by KAUST IRTS on 2022-09-14
PY - 2020/10/1
Y1 - 2020/10/1
N2 - Deep generative models seek to recover the process by which the observed data were generated. They may be used to synthesize new samples or to subsequently extract representations. Successful approaches in the domain of images are driven by several core inductive biases. However, a bias to account for the compositional way in which humans structure a visual scene in terms of objects has frequently been overlooked. In this work, we investigate object compositionality as an inductive bias for Generative Adversarial Networks (GANs). We present a minimal modification of a standard generator to incorporate this inductive bias and find that it reliably learns to generate images as compositions of objects. Using this general design as a backbone, we then propose two useful extensions to incorporate dependencies among objects and background. We extensively evaluate our approach on several multi-object image datasets and highlight the merits of incorporating structure for representation learning purposes. In particular, we find that our structured GANs are better at generating multi-object images that are more faithful to the reference distribution. Moreover, we demonstrate how, by leveraging the structure of the learned generative process, one can ‘invert’ the learned generative model to perform unsupervised instance segmentation. On the challenging CLEVR dataset, we show that our approach improves over other recent purely unsupervised object-centric approaches to image generation.
UR - https://linkinghub.elsevier.com/retrieve/pii/S0893608020302483
UR - http://www.scopus.com/inward/record.url?scp=85088638451&partnerID=8YFLogxK
U2 - 10.1016/j.neunet.2020.07.007
DO - 10.1016/j.neunet.2020.07.007
M3 - Article
SN - 1879-2782
VL - 130
SP - 309
EP - 325
JO - Neural Networks
JF - Neural Networks
ER -