Make ℓ1 regularization effective in training sparse CNN

Juncai He, Xiaodong Jia, Jinchao Xu, Lian Zhang, Liang Zhao

Research output: Contribution to journal › Article › peer-review

7 Scopus citations

Abstract

Compressed sensing using ℓ1 regularization is among the most powerful and popular sparsification techniques in many applications, but why has it not been used to obtain sparse deep learning models such as convolutional neural networks (CNNs)? This paper aims to answer this question and to show how to make it work. Following Xiao (J Mach Learn Res 11(Oct):2543–2596, 2010), we first demonstrate that the commonly used stochastic gradient descent training algorithm and its variants are not an appropriate match for ℓ1 regularization, and we then replace them with a different training algorithm based on a regularized dual averaging (RDA) method. The RDA method of Xiao (J Mach Learn Res 11(Oct):2543–2596, 2010) was originally designed for convex problems, but with new theoretical insight and algorithmic modifications (proper initialization and adaptivity), we make it an effective match for ℓ1 regularization, achieving state-of-the-art sparsity for highly non-convex CNNs compared with other weight pruning methods, without compromising accuracy (for example, 95% sparsity for ResNet-18 on CIFAR-10).
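For intuition, below is a minimal sketch of the closed-form ℓ1-RDA coordinate update from Xiao (2010) that the abstract refers to: each weight is recomputed from the running average of stochastic gradients via soft-thresholding, which is what drives entries exactly to zero. This is an illustrative toy on a quadratic objective, not the authors' implementation; the hyperparameter names lam (ℓ1 weight) and gamma (proximal coefficient) are assumptions for demonstration, and the paper's added initialization and adaptivity modifications are omitted.

```python
import numpy as np

def l1_rda_step(grad_avg, t, lam=0.1, gamma=5.0):
    """Closed-form l1-RDA update (Xiao, 2010): soft-threshold the
    running average of gradients, then rescale by sqrt(t)/gamma.
    `lam` and `gamma` are illustrative hyperparameter names."""
    # Coordinates whose averaged gradient stays below `lam` become
    # exactly zero -- this is the source of the sparsity.
    shrunk = np.sign(grad_avg) * np.maximum(np.abs(grad_avg) - lam, 0.0)
    return -(np.sqrt(t) / gamma) * shrunk

# Toy run: recover a sparse target from noisy quadratic gradients.
rng = np.random.default_rng(0)
target = np.zeros(10)
target[:3] = [2.0, -1.5, 1.0]                      # only 3 nonzero entries
w = np.zeros(10)
grad_avg = np.zeros(10)
for t in range(1, 201):
    g = (w - target) + 0.1 * rng.normal(size=10)   # stochastic gradient
    grad_avg += (g - grad_avg) / t                 # running mean of gradients
    w = l1_rda_step(grad_avg, t)                   # weights rebuilt each step
print("nonzeros:", np.count_nonzero(w), "of", w.size)
```

Note that, unlike SGD with an ℓ1 penalty, the weights are rebuilt from the averaged gradients at every step rather than incrementally perturbed, which is why the iterates can be exactly sparse.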
Original language: English (US)
Pages (from-to): 163-182
Number of pages: 20
Journal: Computational Optimization and Applications
Volume: 77
Issue number: 1
State: Published - Sep 1 2020
Externally published: Yes

Bibliographical note

Generated from Scopus record by KAUST IRTS on 2023-02-15

ASJC Scopus subject areas

  • Control and Optimization
  • Computational Mathematics
  • Applied Mathematics
