
BitBlade: Energy-Efficient Variable Bit-Precision Hardware Accelerator for Quantized Neural Networks

Cited 14 times in Web of Science; cited 16 times in Scopus
Authors

Ryu, Sungju; Kim, Hyungjun; Yi, Wooseok; Kim, Eunhwan; Kim, Yulhwa; Kim, Taesu; Kim, Jae-Joon

Issue Date
2022-01
Publisher
Institute of Electrical and Electronics Engineers
Citation
IEEE Journal of Solid-State Circuits
Abstract
We introduce an area- and energy-efficient precision-scalable neural network accelerator architecture. Previous precision-scalable hardware accelerators have limitations such as under-utilization of multipliers for low bit-width operations and large area overhead to support various bit precisions. To mitigate these problems, we first propose a bitwise summation, which reduces the area overhead of bit-width scaling. In addition, we present a channel-wise aligning scheme (CAS) to efficiently fetch inputs and weights from on-chip SRAM buffers, and a channel-first and pixel-last tiling (CFPL) scheme to maximize the utilization of multipliers across various kernel sizes. A test chip was implemented in 28-nm CMOS technology, and the experimental results show that the throughput and energy efficiency of our chip are up to 7.7x and 1.64x higher, respectively, than those of state-of-the-art designs. Moreover, an additional 1.5-3.4x throughput gain can be achieved with the CFPL method compared to the CAS.
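
The bitwise summation named in the abstract follows the standard bit-serial decomposition of a dot product: sum_k a_k * w_k = sum_i sum_j 2^(i+j) * (sum_k bit_i(a_k) * bit_j(w_k)), so 1-bit partial products of equal significance are accumulated first and a single shift is applied per (i, j) bit pair, instead of shifting inside every multiplier. The following is a minimal Python sketch of that idea, assuming unsigned operands; the function names (dot_bitwise_summation, bit) are illustrative and not taken from the paper's implementation.

    def bit(x, i):
        """Return the i-th bit of a non-negative integer x."""
        return (x >> i) & 1

    def dot_bitwise_summation(acts, wgts, act_bits, wgt_bits):
        """Compute sum_k acts[k] * wgts[k] by accumulating 1-bit partial
        products of the same significance first, then shifting once per
        (i, j) bit pair -- rather than shifting inside each multiplier."""
        total = 0
        for i in range(act_bits):          # activation bit position
            for j in range(wgt_bits):      # weight bit position
                # Bitwise summation: add up the 1-bit AND products ...
                psum = sum(bit(a, i) & bit(w, j) for a, w in zip(acts, wgts))
                # ... then apply one shared shift for this significance.
                total += psum << (i + j)
        return total

    # Example: 4-bit activations x 2-bit weights (unsigned, for simplicity).
    acts = [3, 7, 12, 5]
    wgts = [1, 2, 3, 0]
    assert dot_bitwise_summation(acts, wgts, 4, 2) == \
        sum(a * w for a, w in zip(acts, wgts))

Because the inner accumulation is precision-agnostic, the same 1-bit product array can serve different act_bits/wgt_bits settings, which is the sense in which such a design scales bit width with little added area.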
ISSN
0018-9200
URI
https://hdl.handle.net/10371/184091
DOI
https://doi.org/10.1109/JSSC.2022.3141050