Efficient Hardware Implementation for Online Local Learning in Spiking Neural Networks

Wenzhe Guo, Mohammed E. Fouda, Ahmed Eltawil, Khaled N. Salama

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution



Local learning schemes have shown promising performance in spiking neural networks and are considered a step towards more biologically plausible learning. Despite many efforts to design high-performance neuromorphic systems, a fast and efficient neuromorphic hardware system is still missing. This work proposes a scalable, fast, and efficient spiking neuromorphic hardware system with on-chip local learning capability that achieves competitive classification accuracy. We introduce an effective hardware-friendly local training algorithm that is compatible with sparse temporal input coding and binary random classification weights, and demonstrate that it delivers competitive accuracy. The proposed digital system exploits spike sparsity in communication, parallelism in vector-matrix operations, and locality of training errors, which leads to low cost and fast training. Taking energy, speed, resource usage, and accuracy into consideration, our design shows a 7.7× efficiency improvement over a recent spiking direct feedback alignment method and a 2.7× improvement over the spike-timing-dependent plasticity method.
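The abstract's combination of locality of training errors and fixed binary random classification weights is in the spirit of direct feedback alignment (DFA), where each layer receives its error through a fixed random projection of the output error instead of backpropagated gradients. As an illustration only, not the paper's actual hardware algorithm, a minimal NumPy sketch of such an update might look like the following; all dimensions, the step activation, and the box-window surrogate derivative are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions, chosen for illustration
n_in, n_hid, n_out = 20, 32, 5

W1 = rng.normal(0.0, 0.1, (n_hid, n_in))       # trainable input->hidden weights
B = rng.choice([-1.0, 1.0], (n_hid, n_out))    # fixed binary random weights:
                                               # used both as the (untrained)
                                               # classifier and as the feedback path

def step(a):
    """Spike-like threshold nonlinearity."""
    return (a > 0).astype(float)

def dfa_update(W1, x, target, lr=0.01):
    """One DFA-style local update: the hidden layer's error is a fixed
    binary random projection of the output error, so no backward pass
    through the classification weights is needed."""
    a1 = W1 @ x
    h = step(a1)
    y = B.T @ h / n_hid                  # readout via binary random weights
    e = y - target                       # output error, computed locally
    # Box-window surrogate derivative stands in for the step's gradient
    delta = (B @ e) * (np.abs(a1) < 1.0)
    return W1 - lr * np.outer(delta, x)
```

Because the feedback matrix is fixed and binary, the error projection reduces to sign-controlled additions in hardware, which is one reason such schemes are attractive for on-chip learning.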
Original language: English (US)
Title of host publication: 2022 IEEE 4th International Conference on Artificial Intelligence Circuits and Systems (AICAS)
State: Published - Sep 5 2022

Bibliographical note

KAUST Repository Item: Exported on 2022-09-09
Acknowledgements: This work was funded by the King Abdullah University of Science and Technology (KAUST) AI Initiative, Saudi Arabia.


