TY - GEN
T1 - Pushing the Efficiency Limit Using Structured Sparse Convolutions
AU - Verma, Vinay Kumar
AU - Mehta, Nikhil
AU - Si, Shijing
AU - Henao, Ricardo
AU - Carin, Lawrence
N1 - KAUST Repository Item: Exported on 2023-03-07
PY - 2023/2/6
Y1 - 2023/2/6
N2 - Weight pruning is among the most popular approaches for compressing deep convolutional neural networks. Recent work suggests that in a randomly initialized deep neural network, there exist sparse subnetworks that achieve performance comparable to the original network. Unfortunately, finding these subnetworks involves iterative stages of training and pruning, which can be computationally expensive. We propose Structured Sparse Convolution (SSC), which leverages the inherent structure in images to reduce the parameters in the convolutional filter. This leads to improved efficiency of convolutional architectures compared to existing methods that perform pruning at initialization. We show that SSC is a generalization of commonly used layers (depthwise, groupwise, and pointwise convolution) in "efficient architectures." Extensive experiments on well-known CNN models and datasets show the effectiveness of the proposed method. Architectures based on SSC achieve state-of-the-art performance compared to baselines on CIFAR-10, CIFAR-100, Tiny-ImageNet, and ImageNet classification benchmarks.
AB - Weight pruning is among the most popular approaches for compressing deep convolutional neural networks. Recent work suggests that in a randomly initialized deep neural network, there exist sparse subnetworks that achieve performance comparable to the original network. Unfortunately, finding these subnetworks involves iterative stages of training and pruning, which can be computationally expensive. We propose Structured Sparse Convolution (SSC), which leverages the inherent structure in images to reduce the parameters in the convolutional filter. This leads to improved efficiency of convolutional architectures compared to existing methods that perform pruning at initialization. We show that SSC is a generalization of commonly used layers (depthwise, groupwise, and pointwise convolution) in "efficient architectures." Extensive experiments on well-known CNN models and datasets show the effectiveness of the proposed method. Architectures based on SSC achieve state-of-the-art performance compared to baselines on CIFAR-10, CIFAR-100, Tiny-ImageNet, and ImageNet classification benchmarks.
UR - http://hdl.handle.net/10754/686477
UR - https://ieeexplore.ieee.org/document/10030101/
UR - http://www.scopus.com/inward/record.url?scp=85149046294&partnerID=8YFLogxK
U2 - 10.1109/WACV56688.2023.00644
DO - 10.1109/WACV56688.2023.00644
M3 - Conference contribution
SN - 9781665493468
SP - 6492
EP - 6502
BT - 2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)
PB - IEEE
ER -