MEMORIZATION CAPACITY OF DEEP NEURAL NETWORKS UNDER PARAMETER QUANTIZATION

Cited 4 times in Web of Science; cited 7 times in Scopus
Authors
Boo, Yoonho; Shin, Sungho; Sung, Wonyong

Issue Date
2019-05
Publisher
IEEE
Citation
2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), pp. 1383-1387
Abstract
Most deep neural networks (DNNs) require complex models to achieve high performance. Parameter quantization is widely used to reduce implementation complexity. Previous studies on quantization were mostly based on extensive simulation using training data on a specific model. We choose a different approach and attempt to measure the per-parameter capacity of DNN models and interpret the results to obtain insights on optimum quantization of parameters. This research uses artificially generated data and generic forms of fully connected DNNs, convolutional neural networks, and recurrent neural networks. We conduct memorization and classification tests to study the effects of the number and precision of the parameters on the performance. The model and per-parameter capacities are assessed by measuring the mutual information between the input and the classified output. To gain insight into parameter quantization for real tasks, the training and test performances are compared.
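
The measurement idea summarized above can be illustrated with a toy experiment: train a small network to memorize random labels attached to artificially generated inputs, quantize its weights to various bit widths, and estimate the mutual information between the true and classified labels. The sketch below is illustrative only, not the paper's exact protocol; the network size, data dimensions, bit widths, uniform symmetric post-training quantizer, and plug-in mutual-information estimator are all assumptions made here for demonstration.

import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)

# Artificially generated data: random inputs with random class labels
# (dimensions and class count are illustrative, not from the paper).
N, D, C = 2000, 32, 16
X = torch.randn(N, D)
y = torch.randint(0, C, (N,))

# A generic fully connected network, in the spirit of the abstract's setup.
model = nn.Sequential(nn.Linear(D, 256), nn.ReLU(), nn.Linear(256, C))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Memorization phase: fit the random labels.
for _ in range(500):
    opt.zero_grad()
    loss_fn(model(X), y).backward()
    opt.step()

def quantize_(m, bits):
    # Uniform symmetric post-training weight quantization (assumed scheme).
    with torch.no_grad():
        for p in m.parameters():
            scale = p.abs().max() / (2 ** (bits - 1) - 1)
            p.copy_(torch.round(p / scale) * scale)

def mutual_info_bits(y_true, y_pred, n_classes):
    # Plug-in estimate of I(Y; Yhat) in bits from the empirical joint distribution.
    joint = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        joint[t, p] += 1.0
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

# Sweep precision; reload full-precision weights before each quantization.
state = {k: v.clone() for k, v in model.state_dict().items()}
for bits in (32, 8, 4, 2):
    model.load_state_dict(state)
    if bits < 32:
        quantize_(model, bits)
    with torch.no_grad():
        pred = model(X).argmax(dim=1)
    acc = (pred == y).float().mean().item()
    mi = mutual_info_bits(y.numpy(), pred.numpy(), C)
    print(f"{bits:2d}-bit weights: memorization accuracy {acc:.3f}, "
          f"I(Y; Yhat) ~ {mi:.2f} bits/sample")

Dividing the estimated total memorized information by the number of parameters gives a per-parameter figure in the spirit of the abstract's capacity measure, which can then be compared across bit widths.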
ISSN
1520-6149
URI
https://hdl.handle.net/10371/186961
DOI
https://doi.org/10.1109/ICASSP.2019.8682462
Files in This Item:
There are no files associated with this item.