Continual Learning using a Bayesian Nonparametric Dictionary of Weight Factors

Nikhil Mehta, Kevin J Liang, Vinay K Verma, Lawrence Carin

Research output: Contribution to journal › Article › peer-review


Abstract

Naively trained neural networks tend to experience catastrophic forgetting in sequential task settings, where data from previous tasks are unavailable. A number of methods, using various model expansion strategies, have been proposed recently as possible solutions. However, determining how much to expand the model is left to the practitioner, and often a constant schedule is chosen for simplicity, regardless of how complex the incoming task is. Instead, we propose a principled Bayesian nonparametric approach based on the Indian Buffet Process (IBP) prior, letting the data determine how much to expand the model complexity. We pair this with a factorization of the neural network's weight matrices. Such an approach allows the number of factors of each weight matrix to scale with the complexity of the task, while the IBP prior encourages sparse weight factor selection and factor reuse, promoting positive knowledge transfer between tasks. We demonstrate the effectiveness of our method on a number of continual learning benchmarks and analyze how weight factors are allocated and reused throughout training.
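The abstract's core idea can be sketched numerically: keep a shared dictionary of rank-one weight factors, let an IBP-distributed binary vector pick which factors each task uses, and let the IBP's "new dishes" mechanism decide how many new factors a task adds. The snippet below is a minimal illustrative sketch under those assumptions, not the authors' implementation (the paper uses a variational treatment rather than direct sampling); the names sample_ibp and task_weight are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_ibp(num_tasks, alpha, rng):
    """Sample a binary factor-selection matrix Z (tasks x factors) from an
    Indian Buffet Process with concentration alpha. Existing factors are
    reused with probability proportional to their popularity; each task
    also activates Poisson(alpha / t) brand-new factors."""
    rows, counts = [], []          # counts[k] = tasks that used factor k
    for t in range(1, num_tasks + 1):
        row = [1 if rng.random() < m / t else 0 for m in counts]
        new = rng.poisson(alpha / t)
        row += [1] * new
        counts = [m + r for m, r in zip(counts, row)] + [1] * new
        rows.append(row)
    K = len(counts)
    return np.array([r + [0] * (K - len(r)) for r in rows])

def task_weight(U, V, z_t):
    """Compose a task-specific weight matrix from the shared dictionary of
    rank-one factors u_k v_k^T, keeping only the factors selected by z_t."""
    return sum(z * np.outer(u, v) for z, u, v in zip(z_t, U, V))

Z = sample_ibp(num_tasks=5, alpha=2.0, rng=rng)   # model size grows with data
K, d_out, d_in = Z.shape[1], 8, 16
U = rng.normal(size=(K, d_out))                   # shared factor dictionary
V = rng.normal(size=(K, d_in))
W_task0 = task_weight(U, V, Z[0])                 # weights used by task 0
print(Z.shape, W_task0.shape)
```

Because later tasks reuse popular factors and only add a few new ones, the selection matrix Z stays sparse and the dictionary grows sublinearly with the number of tasks, which is the behavior the abstract describes.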
Original language: English (US)
Journal: arXiv preprint
State: Published - Apr 21, 2020
Externally published: Yes

Bibliographical note

Proceedings of the 24th International Conference on Artificial Intelligence and Statistics (AISTATS) 2021

Keywords

  • cs.LG
  • stat.ML
