Abstract
The bottleneck between the processor and memory is the most significant barrier to the ongoing development of efficient processing systems. Therefore, a research effort has begun to shift from processor-centric architectures to memory-centric architectures. Various in-memory processor architectures have been proposed to break this barrier and pave the way for increasingly demanding memory-bound applications. Associative in-memory processing is a promising candidate for truly in-memory computing, in which the processor and memory are combined in the same location to eliminate expensive data access costs. The architecture exhibits an unmatched advantage for data-intensive applications due to its memory-centric design principles. However, this advantage can only be fully realized with an efficient design methodology. This study advances this effort by proposing a hardware/software design methodology for associative in-memory processors. The methodology aims to reduce the energy consumption and area requirements of a processor architecture specifically programmed to perform a given task. According to the evaluation of nine different benchmarks, such as fast Fourier transform and multiply-accumulate, the proposed design flow accomplishes an average 7% reduction in memory area and 18% savings in total energy consumption.
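To make the idea of associative in-memory processing concrete, the sketch below simulates in Python how a content-addressable array can compute entirely inside the memory: every row is processed in parallel through a sequence of compare (search) and masked-write passes that sweep a full-adder truth table, bit by bit. This is a minimal illustration of the general technique only, not the processor architecture or the design flow proposed in the paper; all names (`AssociativeArray`, `add_columns`, and so on) are hypothetical.

```python
# Minimal simulation of associative (content-addressable) processing:
# computation is carried out by compare (search) and masked-write passes
# over the memory array itself, here for bit-serial addition.
import numpy as np

class AssociativeArray:
    """Each row is a word; columns hold operand bits A, B plus sum/carry bits."""

    def __init__(self, a_values, b_values, bits=8):
        self.bits = bits
        n = len(a_values)
        # Bit matrices: column j holds bit j (LSB first) of every row's operand.
        self.A = np.array([[(v >> j) & 1 for j in range(bits)] for v in a_values], dtype=np.uint8)
        self.B = np.array([[(v >> j) & 1 for j in range(bits)] for v in b_values], dtype=np.uint8)
        self.S = np.zeros((n, bits), dtype=np.uint8)   # sum bits
        self.C = np.zeros(n, dtype=np.uint8)           # carry bit per row

    def compare(self, a_bit, b_bit, c_bit, col):
        """Search pass: mask of rows whose (A[col], B[col], carry) match the key."""
        return (self.A[:, col] == a_bit) & (self.B[:, col] == b_bit) & (self.C == c_bit)

    def write(self, mask, col, s_bit, c_bit):
        """Write pass: update sum and carry bits only in the matching rows."""
        self.S[mask, col] = s_bit
        self.C[mask] = c_bit

    def add_columns(self):
        """Bit-serial add: sweep the full-adder truth table over every bit position."""
        # (a, b, cin) -> (sum, cout); all rows are handled in parallel per pass.
        truth_table = [(a, b, c, a ^ b ^ c, (a & b) | (c & (a | b)))
                       for a in (0, 1) for b in (0, 1) for c in (0, 1)]
        for col in range(self.bits):
            # Compute all match masks on the pre-write state, then apply writes.
            passes = [(self.compare(a, b, c, col), s, cout)
                      for a, b, c, s, cout in truth_table]
            for mask, s, cout in passes:
                self.write(mask, col, s, cout)

    def sums(self):
        """Read back the sum column group as integers."""
        return [int(sum(int(b) << j for j, b in enumerate(row))) for row in self.S]


if __name__ == "__main__":
    ap = AssociativeArray([3, 10, 250], [5, 7, 6], bits=9)
    ap.add_columns()
    print(ap.sums())   # [8, 17, 256]
```

Because each pass touches every row at once, the cost of an operation grows with the word width and the truth-table size rather than with the number of data elements, which is what makes associative processing attractive for data-intensive workloads.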
| Original language | English (US) |
|---|---|
| Journal | Journal of Parallel and Distributed Computing |
| DOIs | |
| State | Published - Nov 2021 |
Bibliographical note
KAUST Repository Item: Exported on 2021-11-13
Acknowledgements: We acknowledge the financial support from AI Initiative, King Abdullah University of Science and Technology (KAUST), Saudi Arabia.
ASJC Scopus subject areas
- Artificial Intelligence
- Hardware and Architecture
- Theoretical Computer Science
- Software
- Computer Networks and Communications