Efficient Continual Adaptation for Generative Adversarial Networks

Sakshi Varshney, Vinay Kumar Verma, Lawrence Carin, Piyush Rai

Research output: Contribution to journal › Article › peer-review



We present a continual learning approach for generative adversarial networks (GANs) by designing and leveraging parameter-efficient feature-map transformations. Our approach learns a set of global and task-specific parameters. The global parameters are fixed across tasks, whereas the task-specific parameters act as local adapters for each task and efficiently transform the previous task's feature map into the new task's feature map. Moreover, we propose an element-wise residual bias in the transformed feature space that greatly stabilizes GAN training. In contrast to recent approaches to continual GANs, we do not rely on memory replay, regularization towards previous tasks' parameters, or expensive weight transformations. Through extensive experiments on challenging and diverse datasets, we show that the feature-map-transformation-based approach outperforms state-of-the-art continual GAN methods with substantially fewer parameters, and also generates high-quality samples that can be used in generative-replay-based continual learning of discriminative tasks.
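The adapter mechanism described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the per-channel affine form of the adapter, the function name, and the tensor shapes are assumptions made for clarity; the paper only specifies that lightweight task-specific parameters transform the previous task's feature map and that an element-wise residual bias is added in the transformed space.

```python
import numpy as np

def adapt_feature_map(f_prev, gamma, beta, residual_bias):
    """Transform the previous task's feature map into the new task's
    feature map using lightweight task-specific parameters.

    f_prev:        feature map from the globally shared (frozen) generator
                   layers, shape (C, H, W)
    gamma, beta:   per-channel scale and shift acting as the task-specific
                   local adapter, shape (C,)  [assumed affine form]
    residual_bias: element-wise residual bias in the transformed feature
                   space, shape (C, H, W)
    """
    # Channel-wise affine transform (the local adapter for the current task)
    f_new = gamma[:, None, None] * f_prev + beta[:, None, None]
    # Element-wise residual bias added after the transformation
    return f_new + residual_bias

# Hypothetical shapes for illustration only
C, H, W = 4, 8, 8
rng = np.random.default_rng(0)
f_prev = rng.standard_normal((C, H, W))

# Per-task parameter count is tiny compared to the global generator:
# 2*C affine parameters plus one C*H*W residual bias per task.
gamma = np.ones(C)                     # identity scale at initialization
beta = np.zeros(C)                     # zero shift at initialization
residual_bias = np.zeros((C, H, W))    # zero bias at initialization

f_new = adapt_feature_map(f_prev, gamma, beta, residual_bias)
```

With this identity initialization the adapter reproduces the previous task's feature map exactly, so each new task starts from the shared representation and only the small per-task parameters are trained.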
Original language: English (US)
Journal: arXiv preprint
State: Published - Mar 6, 2021
Externally published: Yes

Bibliographical note

Under Submission


  • cs.LG
  • cs.CV
  • stat.ML

