A New Approach to Binarizing Neural Networks : 신경망을 이진화하는 새로운 방법에 대한 연구

dc.contributor.advisor: 최기영 (Kiyoung Choi)
dc.contributor.author: 서정우 (Jungwoo Seo)
dc.date.accessioned: 2017-07-14T02:44:46Z
dc.date.available: 2017-07-14T02:44:46Z
dc.date.issued: 2017-02
dc.identifier.other: 000000142295
dc.identifier.uri: https://hdl.handle.net/10371/122858
dc.description: Master's thesis -- 서울대학교 대학원 (Seoul National University Graduate School): Department of Electrical and Computer Engineering, February 2017. Advisor: 최기영 (Kiyoung Choi).
dc.description.abstract: Artificial intelligence is one of the most important technologies today, and deep neural networks are one branch of it. A deep neural network consists of many neurons and synapses that mimic a mammalian brain. Over the last decade, deep neural networks have attracted much interest from academia and industry in various fields, including computer vision and speech recognition.
It is well known that deep neural networks become more powerful with more layers and neurons. However, as they grow larger, they require huge amounts of memory and computation, so reducing the overhead of handling them has become one of the key challenges in neural networks today. Many methodologies address this issue, such as weight quantization, weight pruning, and hashing.
This thesis proposes a new approach to binarizing neural networks. It prunes weights and forces the remaining weights to degenerate to binary values. Experimental results show that the proposed approach reduces the number of weights down to 5.35% in a fully connected neural network and down to 50.35% in a convolutional neural network. Compared to the floating-point convolutional neural network, the proposed approach gives a 98.9% reduction in computation and a 93.6% reduction in power consumption without any accuracy loss.
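The prune-then-binarize idea summarized in the abstract can be sketched in a few lines of NumPy. This is only an illustrative sketch, not the thesis's exact algorithm: the magnitude-quantile pruning threshold and the choice of ±α (α = mean magnitude of surviving weights) as the two binary levels are assumptions made here for demonstration.

```python
import numpy as np

def prune_and_binarize(w, prune_ratio=0.9):
    """Prune small-magnitude weights, then binarize the survivors.

    Assumed scheme (illustration only): weights whose magnitude falls
    below a quantile threshold are zeroed; each surviving weight
    collapses to +alpha or -alpha, keeping only its sign.
    """
    # Threshold chosen so that roughly `prune_ratio` of weights are dropped.
    threshold = np.quantile(np.abs(w), prune_ratio)
    mask = np.abs(w) > threshold
    # One shared magnitude for all survivors: their mean absolute value.
    alpha = np.abs(w[mask]).mean() if mask.any() else 0.0
    return np.where(mask, alpha * np.sign(w), 0.0), mask

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)
w_bin, mask = prune_and_binarize(w, prune_ratio=0.9)
# Surviving weights now take only two values, +alpha and -alpha,
# so each can be stored as a single sign bit plus one shared scale.
print(np.unique(np.abs(w_bin[mask])).size)
```

Because the surviving weights share a single magnitude, multiplications in the forward pass reduce to sign flips followed by one scaling per layer, which is the kind of computation saving the abstract reports.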
dc.description.tableofcontents:
Chapter 1 Introduction 1
1.1 Thesis organization 2
Chapter 2 Related Work 4
2.1 Weights Pruning 4
2.2 Binarized Neural Network 6
2.3 Approximate Neural Network 9
Chapter 3 Proposed Approach 12
3.1 Motivational Example 12
3.2 Weights Compression 14
3.3 Multiplication in Activation Stage 17
Chapter 4 Implementation 19
Chapter 5 Experimental Result 24
5.1 Convolutional Neural Network 24
5.2 Fully-Connected Neural Network 32
Chapter 6 Conclusion and Future work 41
Bibliography 43
국문초록 (Abstract in Korean) 46
dc.format: application/pdf
dc.format.extent: 1080641 bytes
dc.format.medium: application/pdf
dc.language.iso: en
dc.publisher: 서울대학교 대학원 (Seoul National University Graduate School)
dc.subject: Deep neural network
dc.subject: Image recognition
dc.subject: Network pruning
dc.subject: Weight compression
dc.subject: Feedforward network
dc.subject.ddc: 621
dc.title: A New Approach to Binarizing Neural Networks
dc.title.alternative: 신경망을 이진화하는 새로운 방법에 대한 연구
dc.type: Thesis
dc.contributor.AlternativeAuthor: Seo Jungwoo
dc.description.degree: Master
dc.citation.pages: 47
dc.contributor.affiliation: 공과대학 전기·정보공학부 (College of Engineering, Department of Electrical and Computer Engineering)
dc.date.awarded: 2017-02