Resistive RAM (ReRAM) crossbar arrays can implement extremely efficient matrix-vector multiplication, which has drawn much attention in deep learning accelerator research. However, a high fault rate is one of the fundamental challenges of ReRAM crossbar array-based deep learning accelerators. In this paper we propose a dataset-free, cost-free method to mitigate the impact of stuck-at faults in ReRAM crossbar arrays for deep learning applications. Our technique exploits the statistical properties of deep learning applications and is therefore complementary to previous hardware and algorithmic methods. Our experimental results on binary networks with the MNIST and CIFAR-10 datasets demonstrate that our technique is very effective, both alone and in combination with previous methods, at fault rates of up to 20%, which is higher than previous remapping methods can tolerate. We also evaluate our method in the presence of other non-idealities such as device variability and IR drop.
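To illustrate the problem the paper addresses (this sketch is not the paper's mitigation method), the snippet below maps a binary weight matrix onto a simulated crossbar, injects stuck-at-0/stuck-at-1 faults at a 20% rate, and measures how the matrix-vector product degrades. The matrix size, fault model, and helper name `inject_stuck_at` are illustrative assumptions.

```python
import numpy as np

# Illustrative fault-injection sketch (assumed fault model, not the
# paper's technique): a fraction of crossbar cells is stuck at a
# random conductance state (0 or 1), regardless of the stored weight.
rng = np.random.default_rng(0)

def inject_stuck_at(weights, fault_rate, rng):
    """Force a random fraction of cells to a stuck value (0 or 1)."""
    faulty = weights.copy()
    mask = rng.random(weights.shape) < fault_rate   # which cells are faulty
    stuck = rng.integers(0, 2, size=weights.shape)  # stuck-at-0 or stuck-at-1
    faulty[mask] = stuck[mask]
    return faulty

W = rng.integers(0, 2, size=(64, 64))   # binary weights stored on the crossbar
x = rng.integers(0, 2, size=64)         # binary input vector

W_faulty = inject_stuck_at(W, fault_rate=0.20, rng=rng)

exact = W @ x          # ideal crossbar matrix-vector product
approx = W_faulty @ x  # product computed with stuck-at faults
print("mean absolute output error:", np.abs(exact - approx).mean())
```

Since a stuck cell agrees with the stored weight about half the time, a 20% cell fault rate corrupts roughly 10% of the weights, yet the accumulated error across an entire column can still noticeably perturb each output.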
|Original language||English (US)|
|Title of host publication||2021 Design, Automation and Test in Europe Conference and Exhibition, DATE 2021|
|Publisher||Institute of Electrical and Electronics Engineers Inc.|
|Number of pages||6|
|State||Published - Feb 1 2021|
Bibliographical note: KAUST Repository Item: Exported on 2021-08-11
Acknowledgements: This work was supported by NRF grants funded by MSIT of Korea (No. 2016M3A7B4909668, No. 2017R1D1A1B03033591, and No. 2020R1A2C2015066), IITP grant funded by MSIT of Korea (No.2020-0-01336, Artificial Intelligence Graduate School Program (UNIST)), and Free Innovative Research Fund of UNIST (1.170067.01). The EDA tool was supported by the IC Design Education Center (IDEC), Korea. J. Lee is the corresponding author of this paper (Email: email@example.com).