A New Approach to Binarizing Neural Networks (Korean title: 신경망을 이진화하는 새로운 방법에 대한 연구)

Authors

서정우

Advisor
최기영
Major
College of Engineering, Department of Electrical and Computer Engineering
Issue Date
2017-02
Publisher
Graduate School, Seoul National University
Keywords
Deep neural network; Image recognition; Network pruning; Weight compression; Feedforward network
Description
Thesis (Master's) -- Graduate School, Seoul National University: Department of Electrical and Computer Engineering, February 2017. Advisor: 최기영.
Abstract
Artificial intelligence is one of the most important technologies today, and deep neural networks are one of its branches. A deep neural network consists of many neurons and synapses that mimic the mammalian brain. Over the last decade, deep neural networks have attracted great interest from academia and industry in various fields, including computer vision and speech recognition.
It is well known that deep neural networks become more powerful with more layers and neurons. However, as they grow larger, they require enormous amounts of memory and computation. Reducing this overhead has therefore become one of the key challenges in neural networks today. Many methodologies address this issue, such as weight quantization, weight pruning, and hashing.
This thesis proposes a new approach to binarizing neural networks. It prunes weights and forces the remaining weights to degenerate to binary values. Experimental results show that the proposed approach reduces the number of weights to 5.35% in a fully connected neural network and to 50.35% in a convolutional neural network. Compared to a floating-point convolutional neural network, the proposed approach yields a 98.9% reduction in computation and a 93.6% reduction in power consumption without any loss of accuracy.
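
The prune-then-binarize idea from the abstract can be illustrated with a short sketch. The Python/NumPy fragment below is a minimal illustration under stated assumptions: the function name prune_and_binarize, the magnitude-based pruning threshold, the prune_ratio value, and the sign-and-scale binarization are illustrative choices, not the thesis's exact procedure.

import numpy as np

def prune_and_binarize(weights, prune_ratio=0.9):
    # Magnitude pruning: zero out the smallest |w| (a prune_ratio
    # fraction of all weights). The ratio is an illustrative
    # assumption, not a value taken from the thesis.
    threshold = np.quantile(np.abs(weights), prune_ratio)
    mask = np.abs(weights) > threshold

    # Binarize the survivors: keep only the sign, scaled by the mean
    # magnitude of the remaining weights (one common scheme; the thesis
    # may use a different one).
    scale = np.abs(weights[mask]).mean() if mask.any() else 0.0
    return np.where(mask, scale * np.sign(weights), 0.0), mask

# Example: prune a random 4x4 weight matrix to ~10% density.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
w_bin, mask = prune_and_binarize(w)
print(w_bin)

In practice, pruning and binarization of this kind are typically interleaved with retraining to recover accuracy; the sketch omits that step.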
Language
English
URI
https://hdl.handle.net/10371/122858
