GradPIM: A Practical Processing-in-DRAM Architecture for Gradient Descent

Cited 14 times in Web of Science; cited 21 times in Scopus
Authors

Kim, Heesu; Park, Hanmin; Kim, Taehyun; Cho, Kwanheum; Lee, Eojin; Ryu, Soojung; Lee, Hyuk-Jae; Choi, Kiyoung; Lee, Jinho

Issue Date
2021
Publisher
IEEE COMPUTER SOC
Citation
2021 27TH IEEE INTERNATIONAL SYMPOSIUM ON HIGH-PERFORMANCE COMPUTER ARCHITECTURE (HPCA 2021), Vol.2021-February, pp.249-262
Abstract
In this paper, we present GradPIM, a processing-in-memory architecture that accelerates the parameter updates of deep neural network training. As one of the processing-in-memory techniques that could be realized in the near future, we propose an incremental, simple architectural design that does not invade the existing memory protocol. Extending DDR4 SDRAM to exploit bank-group parallelism makes our operation designs in the processing-in-memory (PIM) module efficient in terms of hardware cost and performance. Our experimental results show that the proposed architecture can improve the performance of DNN training and greatly reduce the memory bandwidth requirement, while imposing only a minimal amount of overhead on the protocol and DRAM area.
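The parameter update the abstract refers to is an element-wise read-modify-write over large weight arrays, which is memory-bandwidth-bound on conventional processors and therefore a natural candidate for near-memory execution. A minimal sketch of such an update (SGD with momentum, shown here purely for illustration; this is not code from the paper, and the function name and defaults are hypothetical):

```python
import numpy as np

def sgd_momentum_update(param, grad, velocity, lr=0.01, momentum=0.9):
    """One SGD-with-momentum parameter-update step.

    Each element of `param` is read, combined with the gradient and
    running velocity, and written back -- the streaming read-modify-write
    pattern that PIM designs like GradPIM aim to keep inside DRAM.
    (Illustrative sketch only, not the paper's implementation.)
    """
    velocity = momentum * velocity + grad   # update running velocity
    param = param - lr * velocity           # apply the scaled update
    return param, velocity

# Example: one update step on a small parameter vector
param = np.zeros(4)
grad = np.ones(4)
vel = np.zeros(4)
param, vel = sgd_momentum_update(param, grad, vel)
# param is now [-0.01, -0.01, -0.01, -0.01]
```

Because every element is independent, the update maps naturally onto the bank-group parallelism the abstract mentions: disjoint slices of the parameter array can be updated concurrently in different DRAM bank groups.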
ISSN
1530-0897
URI
https://hdl.handle.net/10371/200477
DOI
https://doi.org/10.1109/HPCA51647.2021.00030
Related Researcher

  • College of Engineering
  • Department of Electrical and Computer Engineering
Research Area: AI Accelerators, Distributed Deep Learning, Neural Architecture Search

Items in S-Space are protected by copyright, with all rights reserved, unless otherwise indicated.