Abstract
Local learning schemes have shown promising performance in training spiking neural networks (SNNs) and are considered a step toward more biologically plausible learning. Despite many efforts to design high-performance neuromorphic systems, a fast and efficient on-chip training algorithm is still missing, which limits the deployment of neuromorphic systems in many real-time applications. This work proposes a scalable, fast, and efficient spiking neuromorphic hardware system with on-chip local learning capability. We introduce an effective hardware-friendly local training algorithm compatible with sparse temporal input coding and binary random classification weights. The algorithm is demonstrated to deliver competitive accuracy on different tasks. The proposed digital system exploits spike sparsity in communication, parallelism in vector–matrix operations and process-level dataflow, and locality of training errors, which leads to low cost and fast training speed. The system is optimized under various performance metrics. Taking energy, speed, resources, and accuracy into consideration, the proposed method achieves roughly 10× higher efficiency than a recent work using the direct feedback alignment (DFA) method and 4.5× higher efficiency than the spike-timing-dependent plasticity (STDP) method. Moreover, our hardware architecture scales linearly with network size. Thus, our method shows great potential for use in various applications, especially those demanding low latency.
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 1-12 |
| Number of pages | 12 |
| Journal | IEEE Transactions on Very Large Scale Integration (VLSI) Systems |
| DOIs | |
| State | Published - Sep 30 2022 |
Bibliographical note
KAUST Repository Item: Exported on 2022-10-03
Acknowledgements: This work was supported by the King Abdullah University of Science and Technology (KAUST) AI Initiative.
ASJC Scopus subject areas
- Hardware and Architecture
- Software
- Electrical and Electronic Engineering