Robustness and Upper Bound of Generalization Error in Deep Neural Networks
깊은 신경망에서 강건성과 일반화 오차의 상한

Authors
이효제
Advisor
강명주
Major
College of Natural Sciences, Department of Mathematical Sciences
Issue Date
2018-08
Publisher
Graduate School, Seoul National University
Description
Thesis (Master's) -- Graduate School, Seoul National University: College of Natural Sciences, Department of Mathematical Sciences, August 2018. Advisor: 강명주.
Abstract
The generalization of Deep Neural Networks (DNNs) is a crucial issue. While DNNs perform well in many domains, it is hard to explain theoretically why they generalize well. Robustness, introduced by Xu, is a useful notion for explaining the generalization of DNNs. In this work, we build on the notions of robustness and generalization error. In addition, we assume that the input space is a bounded d-dimensional subspace of R^n and derive a new upper bound on the generalization error of DNNs. We demonstrate our results using the Jacobian regularizer suggested by Sokolic, which is based on the Jacobian matrix of the DNN, and we visualize the results using t-SNE.
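The Jacobian regularizer mentioned above penalizes the norm of the network's input-output Jacobian, which bounds how sharply the output can change under input perturbations. A minimal NumPy sketch of the idea is below; it is not the thesis code, and the two-layer ReLU network, its shapes, and the function names are illustrative assumptions.

```python
import numpy as np

# Sketch of a Jacobian-norm penalty in the spirit of Sokolic et al.:
# the regularizer is the squared Frobenius norm of the input-output
# Jacobian, averaged over a mini-batch. Network and shapes are assumed.

def forward(x, W1, b1, W2, b2):
    """Two-layer ReLU network; returns output and hidden pre-activation."""
    z1 = W1 @ x + b1
    h1 = np.maximum(z1, 0.0)
    return W2 @ h1 + b2, z1

def jacobian(x, W1, b1, W2, b2):
    """Analytic d(output)/d(input) for the ReLU network above."""
    _, z1 = forward(x, W1, b1, W2, b2)
    mask = (z1 > 0).astype(float)        # ReLU derivative is 0 or 1
    return W2 @ (mask[:, None] * W1)     # shape: (out_dim, in_dim)

def jacobian_penalty(X, W1, b1, W2, b2):
    """Mean squared Frobenius norm of the Jacobian over a batch."""
    return np.mean([np.sum(jacobian(x, W1, b1, W2, b2) ** 2) for x in X])

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(5, 3)), np.zeros(5)
W2, b2 = rng.normal(size=(2, 5)), np.zeros(2)
X = rng.normal(size=(4, 3))              # batch of 4 inputs in R^3
penalty = jacobian_penalty(X, W1, b1, W2, b2)
# During training one would minimize: task_loss + lambda * penalty.
```

In practice the penalty is added to the task loss with a weight lambda; a smaller Jacobian norm corresponds to a more robust (and, by the robustness argument, better-generalizing) network.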
Language
English
URI
http://hdl.handle.net/10371/144094
Appears in Collections:
College of Natural Sciences (자연과학대학) > Dept. of Mathematical Sciences (수리과학부) > Theses (Master's Degree_수리과학부)

Items in S-Space are protected by copyright, with all rights reserved, unless otherwise indicated.