Publications

Acceleration of DNN Backward Propagation by Selective Computation of Gradients

Cited 9 times in Web of Science; cited 8 times in Scopus
Authors

Lee, Gunhee; Park, Hanmin; Kim, Namhyung; Yu, Joonsang; Jo, Sujeong; Choi, Kiyoung

Issue Date
2019-06
Publisher
Association for Computing Machinery
Citation
Proceedings of the 2019 56th ACM/EDAC/IEEE Design Automation Conference (DAC), pp. 1-16
Abstract
The training process of a deep neural network commonly consists of three phases: forward propagation, backward propagation, and weight update. In this paper, we propose a hardware architecture to accelerate the backward propagation. Our approach applies to neural networks that use the rectified linear unit (ReLU). Since backward propagation produces a zero activation gradient whenever the corresponding activation is zero, the gradient calculation can be safely skipped. Based on this observation, we design an efficient hardware accelerator for training deep neural networks by selectively computing gradients. We show the effectiveness of our approach through experiments with various network models.
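
The observation stated in the abstract can be illustrated with a short software sketch. The snippet below is not the paper's hardware design; it is a minimal NumPy illustration, with hypothetical function names, of how a ReLU backward pass can skip gradient computation wherever the stored activation is zero, which is the selectivity the proposed accelerator exploits in hardware.

import numpy as np

def relu_forward(x):
    # ReLU output is zero wherever the input is non-positive.
    return np.maximum(x, 0.0)

def relu_backward_selective(upstream_grad, activation):
    # Where the activation is zero, the local ReLU derivative is zero,
    # so the corresponding gradient computation can be skipped entirely
    # instead of being multiplied by zero.
    grad = np.zeros_like(upstream_grad)
    nonzero = activation > 0                 # positions whose gradients must actually be computed
    grad[nonzero] = upstream_grad[nonzero]   # work is done only for those positions
    return grad

# Tiny usage example
x = np.array([-1.5, 0.3, 0.0, 2.0])
y = relu_forward(x)                          # activations saved during forward propagation
dy = np.array([0.7, -0.2, 0.5, 1.1])         # gradient arriving from the next layer
dx = relu_backward_selective(dy, y)
print(dx)                                    # [ 0.  -0.2  0.   1.1]: zero-activation entries are skipped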
URI
https://hdl.handle.net/10371/186954
DOI
https://doi.org/10.1145/3316781.3317755