Abstract
There is a strong emphasis in the continual learning literature on sequential classification experiments, where each task bears little resemblance to previous ones. While certainly a form of continual learning, such tasks do not accurately represent many real-world continual learning problems, where the data distribution often evolves slowly over time. We propose using Generative Adversarial Networks (GANs) as a source of potentially unlimited datasets of this nature. We also identify that the dynamics of GAN training naturally constitute a continual learning problem, and show that leveraging continual learning methods can improve performance. As such, we show that techniques from continual learning and GANs, typically studied separately, can be used to each other's benefit.
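To make the dataset-generation idea concrete, below is a minimal sketch (not necessarily the paper's exact procedure) of how a trained GAN generator could produce a stream of tasks whose data distribution drifts slowly: interpolate the latent codes between two anchor points over time. The `generator` here is a hypothetical placeholder standing in for any pretrained GAN generator.

```python
import torch
import torch.nn as nn

latent_dim, n_steps, batch_size = 128, 10, 64

# Hypothetical stand-in; in practice this would be a trained GAN generator.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, 784)
)

# Two anchor points in latent space; intermediate steps define the drift.
z_start = torch.randn(batch_size, latent_dim)
z_end = torch.randn(batch_size, latent_dim)

tasks = []
for t in range(n_steps):
    alpha = t / (n_steps - 1)
    # Linear interpolation in latent space yields a slowly evolving
    # data distribution rather than abrupt task boundaries.
    z = (1 - alpha) * z_start + alpha * z_end
    with torch.no_grad():
        tasks.append(generator(z))  # one "task" of the continual stream
```

Linear interpolation is used here for simplicity; for Gaussian latents, spherical interpolation is a common alternative, and the choice controls how smoothly the distribution evolves.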
| Original language | English (US) |
|---|---|
| Pages (from-to) | 1-10 |
| Number of pages | 10 |
| Journal | NIPS |
| Issue number | NIPS 2018 |
| State | Published - 2018 |
| Externally published | Yes |