Publications

A novel zero weight/activation-aware hardware architecture of convolutional neural network

Cited 0 times in Web of Science; cited 50 times in Scopus
Authors

Kim, D.; Ahn, J.; Yoo, S.

Issue Date
2017-05
Publisher
Institute of Electrical and Electronics Engineers Inc.
Citation
Proceedings of the 2017 Design, Automation and Test in Europe (DATE 2017), pp. 1462-1467
Abstract
It is imperative to accelerate convolutional neural networks (CNNs) due to their ever-widening application areas, from servers and mobile devices to IoT devices. Based on the observation that CNNs exhibit a significant fraction of zero values in both kernel weights and activations, we propose a novel hardware accelerator for CNNs that exploits zero weights and activations. We also report a zero-induced load imbalance problem, which exists in zero-aware parallel CNN hardware architectures, and present a zero-aware kernel allocation as a solution. According to our experiments with a cycle-accurate simulation model, RTL, and layout design of the proposed architecture running two real deep CNNs, pruned AlexNet [1] and VGG-16 [2], our architecture offers 4x/1.8x (AlexNet) and 5.2x/2.1x (VGG-16) speedups compared with state-of-the-art zero-agnostic/zero-activation-aware architectures. © 2017 IEEE.
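
The two ideas the abstract describes, skipping multiply-accumulate (MAC) operations whose weight or activation operand is zero, and allocating kernels so that nonzero work is balanced across processing elements, can be illustrated in software. The sketch below is a minimal, hypothetical Python illustration of those ideas only; it is not the paper's architecture, RTL, or allocation algorithm, and all function names, tensor shapes, and sparsity levels are assumptions made for demonstration.

```python
# Illustrative sketch (not the paper's design): counts how many MACs a zero-aware
# convolution engine would issue versus a zero-agnostic one, and shows a simple
# greedy "zero-aware" kernel allocation that balances nonzero work across PEs.
import numpy as np

def conv_mac_counts(activations, kernels):
    """Count MACs for a dense (zero-agnostic) engine vs. one that skips zero operands."""
    out_h = activations.shape[1] - kernels.shape[2] + 1
    out_w = activations.shape[2] - kernels.shape[3] + 1
    dense_macs = kernels.size * out_h * out_w          # every weight times every output position
    aware_macs = 0
    for k in kernels:                                   # one kernel per output channel
        for y in range(out_h):
            for x in range(out_w):
                window = activations[:, y:y + k.shape[1], x:x + k.shape[2]]
                # a zero-aware engine only issues MACs where both operands are nonzero
                aware_macs += np.count_nonzero((window != 0) & (k != 0))
    return dense_macs, aware_macs

def zero_aware_allocation(kernels, num_pes):
    """Greedy allocation: place each kernel on the PE with the least nonzero work so far."""
    loads = [0] * num_pes
    assignment = [[] for _ in range(num_pes)]
    order = sorted(range(len(kernels)), key=lambda i: -np.count_nonzero(kernels[i]))
    for i in order:                                     # largest nonzero count first
        pe = loads.index(min(loads))
        assignment[pe].append(i)
        loads[pe] += np.count_nonzero(kernels[i])
    return assignment, loads

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    acts = rng.random((8, 16, 16)) * (rng.random((8, 16, 16)) > 0.5)      # ~50% zero activations
    kers = rng.random((16, 8, 3, 3)) * (rng.random((16, 8, 3, 3)) > 0.7)  # ~70% pruned weights
    dense, aware = conv_mac_counts(acts, kers)
    print(f"zero-agnostic MACs: {dense}, zero-aware MACs: {aware}")
    _, loads = zero_aware_allocation(kers, num_pes=4)
    print("nonzero work per PE after zero-aware allocation:", loads)
```

Running the script prints the MAC reduction obtained by skipping zero operands and the per-PE nonzero work after greedy placement; the largest-first greedy heuristic is a generic load-balancing choice and only stands in for the zero-aware kernel allocation the paper proposes.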
ISSN
0000-0000
URI
https://hdl.handle.net/10371/192892
Files in This Item:
There are no files associated with this item.