
Generating Hierarchical Schemas from Low-level Sensory Data using Deep Neural Networks : 깊은 신경망을 이용하여 저수준 센서 데이터에 내재된 계층적 스키마를 생성하는 방법

Authors

김은솔

Advisor
장병탁
Major
Department of Electrical and Computer Engineering, College of Engineering
Issue Date
2018-02
Publisher
Seoul National University Graduate School
Keywords
Multimodal sequential learning; schema learning; knowledge schemas; deep neural network
Description
Ph.D. thesis -- Seoul National University Graduate School: Department of Electrical and Computer Engineering, College of Engineering, February 2018. Advisor: 장병탁.
Abstract
This thesis presents deep neural networks that automatically construct hierarchical schemas from low-level sensory data. It proposes a new architecture and learning method for deep neural networks that take multimodal sensory data as input and learn from them. Knowledge schemas needed to achieve given tasks are then extracted from the trained networks. As a result, three challenging tasks are solved by the proposed networks, and the corresponding knowledge schemas are automatically constructed from the trained models.

First, a new deep neural network architecture is designed to handle low-level sensory data over time. While most machine learning algorithms learn from static, unimodal inputs, humans learn continuously in noisy, complex, and changing environments: they take in noisy and diverse sensory inputs, select a small subset to attend to, and integrate the selected inputs robustly and rapidly. Inspired by these human mechanisms, new gate modules for deep neural networks are proposed to overcome the limitations of existing algorithms.

The proposed networks treat multimodal sequential input as a set of data pieces arranged along two axes: a spatial axis (modalities) and a temporal axis. At each time step, the network selects a small number of data pieces, aligns them, and combines them into a single vector representation. New gate modules are developed to perform this selection and combination, and they are incorporated into deep neural networks.
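The select-align-combine step at each time step can be sketched minimally. The function name `gate_combine`, the dot-product relevance score, and the softmax weighting below are illustrative assumptions, not the thesis's exact gate module:

```python
import numpy as np

def gate_combine(pieces, query, top_k=2):
    """Select the top_k most relevant data pieces and combine them.

    pieces: (n_pieces, d) array, one row per (modality, time) data piece
    query:  (d,) task vector used to score the relevance of each piece
    Returns a single (d,) combined representation.
    """
    scores = pieces @ query                     # relevance of each piece
    keep = np.argsort(scores)[-top_k:]          # indices of the selected pieces
    w = np.exp(scores[keep] - scores[keep].max())
    w /= w.sum()                                # softmax over the selected pieces
    return w @ pieces[keep]                     # weighted combination into one vector

rng = np.random.default_rng(0)
pieces = rng.normal(size=(6, 4))   # e.g. 3 modalities x 2 time steps, d = 4
query = rng.normal(size=4)
v = gate_combine(pieces, query)
print(v.shape)  # (4,)
```

In a trained network the query and the selection would themselves be learned; here they are fixed only to show the data flow through such a gate.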

After training, knowledge schemas needed to achieve the given tasks are automatically extracted. In contrast to conventional research on knowledge representation, which manually pre-defines the structure of the knowledge after investigating the data, this thesis discovers the knowledge schemas inherent in the input data automatically. Hierarchical causal structures are extracted from the trained networks by analyzing the hidden nodes with population-coding methods: each hidden layer, with its many hidden neurons, is treated as a population, and its activation patterns are analyzed to construct causal structures. From the causal structures of the individual hidden layers, a hierarchical knowledge schema is obtained automatically.
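One simple way to realize such a population-code analysis is sketched below, under the assumption that strongly correlated activation patterns between units of adjacent layers indicate links in the hierarchy; the thesis's exact procedure may differ, and `layer_links` is a hypothetical helper name:

```python
import numpy as np

def layer_links(act_lo, act_hi, thresh=0.6):
    """Link lower-layer units to higher-layer units whose activation
    patterns correlate strongly across the sample population.

    act_lo: (n_samples, n_lo) hidden activations of the lower layer
    act_hi: (n_samples, n_hi) hidden activations of the layer above
    Returns a list of (lo_unit, hi_unit) edges with |correlation| > thresh.
    """
    lo = (act_lo - act_lo.mean(0)) / (act_lo.std(0) + 1e-8)
    hi = (act_hi - act_hi.mean(0)) / (act_hi.std(0) + 1e-8)
    corr = lo.T @ hi / len(act_lo)          # (n_lo, n_hi) correlation matrix
    return [(int(i), int(j)) for i, j in zip(*np.where(np.abs(corr) > thresh))]

# Toy activations: higher units 0 and 1 copy lower units 0 and 1;
# lower units 2-4 are unrelated noise, so they get no edges.
rng = np.random.default_rng(1)
act_lo = rng.normal(size=(100, 5))
act_hi = act_lo[:, :2].copy()
edges = layer_links(act_lo, act_hi)
print(edges)
```

Applying this to every pair of adjacent layers and stacking the resulting edge sets would yield one hierarchical graph over the whole network, in the spirit of the schema construction described above.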

The proposed methods, namely the new deep neural network model for multimodal sequential learning and the knowledge-schema construction method, are applied to a visual dialogue dataset and a wearable sensor dataset collected in the real world. The experiments show state-of-the-art performance on the three challenging tasks. In addition, the possibility of explaining the results of the machine learning algorithms through the automatically constructed knowledge schemas is discussed.
Language
English
URI
https://hdl.handle.net/10371/140681
