Abstract
Spintronics-based magnetic tunnel junction (MTJ) devices have shown the ability to work as both synapses and spiking threshold neurons, which makes them well suited for hardware implementation of spiking neural networks (SNNs). They offer the inherent advantage of high energy efficiency at ultra-low operating voltages, owing to their nanometric size and low depinning current densities. However, hardware-based SNN training typically suffers a significant performance loss compared with the original neural networks, due to device-to-device variations and the information loss incurred when weights are mapped to device synaptic conductances. Knowledge distillation is a model compression and acceleration method that transfers knowledge from a large machine learning model to a smaller model with minimal loss in performance. In this paper, we propose a novel training scheme based on spike knowledge distillation that improves the training performance of a spin-based SNN (SSNN) model by transferring knowledge from a large CNN model. We propose novel distillation methodologies and demonstrate the effectiveness of the proposed method with detailed experiments on four datasets. The experimental results indicate that our proposed training scheme consistently improves the performance of the SSNN model by a large margin.
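For context, the sketch below shows the standard soft-label distillation objective that such teacher-student schemes typically build on, written for a PyTorch setting. It is illustrative only: the abstract does not detail the paper's spike-distillation loss, and the function name, temperature `T`, and mixing weight `alpha` are assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Generic knowledge-distillation loss (illustrative, not the paper's exact method)."""
    # Soft-target term: KL divergence between temperature-softened
    # teacher and student output distributions, scaled by T^2.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-target term: standard cross-entropy against ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```

In a spike-based student, the student logits would typically be derived from accumulated output spike counts or membrane potentials over the simulation time window, while the teacher CNN provides conventional real-valued logits.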
Original language | English (US)
---|---
Title of host publication | 2023 IEEE 5th International Conference on Artificial Intelligence Circuits and Systems (AICAS)
Publisher | IEEE
DOIs |
State | Published - Jul 7 2023
Bibliographical note
Acknowledgements: This work was supported by the King Abdullah University of Science and Technology baseline fund.