Publications


HLHLp: Quantized Neural Networks Training for Reaching Flat Minima in Loss Surface

Cited 2 times in Web of Science · Cited 4 times in Scopus
Authors

Shin, Sungho; Park, Jinhwan; Boo, Yoonho; Sung, Wonyong

Issue Date
2020-02
Publisher
ASSOC ADVANCEMENT ARTIFICIAL INTELLIGENCE
Citation
THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, Vol.34, pp.5784-5791
Abstract
Quantization of deep neural networks is essential for efficient implementation. Low-precision networks are typically designed to represent their original floating-point counterparts with high fidelity, and several elaborate quantization algorithms have been developed. We propose a novel training scheme for quantized neural networks that reaches flat minima in the loss surface with the aid of quantization noise. The proposed scheme trains the network with high and low precision in an alternating, high-low-high-low manner, and the learning rate is changed abruptly at each stage for coarse- or fine-tuning. With the proposed training technique, we show considerable performance improvements for convolutional neural networks compared to the previous fine-tuning-based quantization scheme, and we achieve state-of-the-art results for recurrent neural network based language modeling with 2-bit weights and activations.
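
The abstract describes the alternating-precision schedule only at a high level; below is a minimal sketch, in PyTorch, of a high-low-high-low training loop with abrupt per-stage learning-rate changes. The model, data, bit width, stage lengths, learning rates, and the simple weight-projection quantizer are illustrative assumptions for this sketch, not the authors' implementation.

import torch
import torch.nn as nn

def quantize_weights(model, n_bits=2):
    # Uniform symmetric weight quantization by direct projection (an assumption
    # of this sketch; the paper's quantizer may differ).
    with torch.no_grad():
        for p in model.parameters():
            scale = p.abs().max().clamp(min=1e-8) / (2 ** (n_bits - 1) - 1)
            p.copy_(torch.round(p / scale) * scale)

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
loss_fn = nn.CrossEntropyLoss()

# High-low-high-low precision stages; the learning rate changes abruptly at
# each stage boundary for coarse- or fine-tuning (values are placeholders).
stages = [
    {"precision": "high", "lr": 1e-1, "steps": 100},
    {"precision": "low",  "lr": 1e-3, "steps": 100},
    {"precision": "high", "lr": 1e-2, "steps": 100},
    {"precision": "low",  "lr": 1e-4, "steps": 100},
]

for stage in stages:
    optimizer = torch.optim.SGD(model.parameters(), lr=stage["lr"])
    for _ in range(stage["steps"]):
        x = torch.randn(16, 32)               # placeholder batch
        y = torch.randint(0, 10, (16,))       # placeholder labels
        loss = loss_fn(model(x), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if stage["precision"] == "low":
            # Re-quantize after each update so training sees quantization noise.
            quantize_weights(model, n_bits=2)

The point of the alternation, as the abstract argues, is that the quantization noise injected in the low-precision stages, combined with the abrupt learning-rate changes, steers training toward flatter minima of the loss surface.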
ISSN
2159-5399
URI
https://hdl.handle.net/10371/186404
DOI
https://doi.org/10.1609/aaai.v34i04.6035