Robust Feature Learning on Sequential Data : 순차 데이터에 대한 견고한 특징 학습

DC Field: Value
dc.contributor.advisor: 윤성로
dc.contributor.author: 권선영
dc.date.accessioned: 2018-11-12T00:59:46Z
dc.date.available: 2018-11-12T00:59:46Z
dc.date.issued: 2018-08
dc.identifier.other: 000000152250
dc.identifier.uri: https://hdl.handle.net/10371/143254
dc.description: Thesis (Ph.D.) -- Seoul National University Graduate School: College of Engineering, Department of Electrical and Computer Engineering, 2018. 8. Advisor: 윤성로.
dc.description.abstract:

A sequence can be defined as a series of related things: words in natural language, symbols in biological sequences, or the values of a time series such as stock prices or biological signals. Drugs can also be expressed as series of symbols representing chemical properties. The values in a sequence are not assumed to be independent and identically distributed (i.i.d.); rather, they exhibit local or long-distance dependencies and hidden patterns of variable length.
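The symbol-sequence view described above can be made concrete with a minimal sketch: before any model sees a nucleotide sequence or a SMILES string, each symbol is typically mapped to an integer index. The vocabularies and helper names below (`build_vocab`, `encode`) are hypothetical illustrations, not the dissertation's actual preprocessing pipeline.

```python
# Minimal illustration: encoding symbol sequences as integer indices.
# The vocabularies here are simplified stand-ins, not the dissertation's
# actual alphabets.

def build_vocab(alphabet):
    """Map each symbol to a unique integer index."""
    return {ch: i for i, ch in enumerate(alphabet)}

def encode(sequence, vocab):
    """Turn a symbol sequence into a list of integer indices."""
    return [vocab[ch] for ch in sequence]

dna_vocab = build_vocab("ACGT")          # nucleotide alphabet
smiles_vocab = build_vocab("CNO()=c1")   # tiny subset of SMILES characters

print(encode("GATTACA", dna_vocab))      # [2, 0, 3, 3, 0, 1, 0]
print(encode("CC(=O)N", smiles_vocab))   # [0, 0, 3, 5, 2, 4, 1]
```

The same encoding applies to both biological and chemical sequences, which is what lets sequence models treat them uniformly.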

This dissertation proposes three methodologies for uncovering hidden dependencies and patterns in sequential data. First, motivated by the demand for accurate, fast, and scalable classification of nucleotide sequences against a group of sequences, we propose a new generative classification method based on the variable-order Markov model and universal probability. Second, to learn chemical features automatically, without manual curation, for chemical-chemical interaction (CCI) prediction, we propose an end-to-end convolutional neural network (CNN) based on the Siamese architecture. Finally, to achieve robust results in quantitative structure-activity relationship (QSAR) prediction, we propose a comprehensive ensemble method that combines individual classifiers based on CNNs and recurrent neural networks (RNNs).
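As one illustration of the generative-classification idea behind the first method, here is a toy sketch. It deliberately simplifies: it uses a fixed-order (order-1) Markov model with Laplace smoothing in place of the dissertation's variable-order Markov model with universal probability, and all names and data are hypothetical.

```python
import math
from collections import defaultdict

# Toy generative classifier: train one order-1 Markov model per sequence
# family, then assign a query sequence to the family under which it has
# the highest smoothed log-likelihood. This is a simplified stand-in for
# the variable-order Markov / universal-probability approach.

ALPHABET = "ACGT"

def train(sequences, order=1):
    """Count (context, next-symbol) transitions over a family of sequences."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for i in range(order, len(seq)):
            counts[seq[i - order:i]][seq[i]] += 1
    return counts

def log_likelihood(seq, counts, order=1):
    """Laplace-smoothed log-probability of seq under the counted model."""
    ll = 0.0
    for i in range(order, len(seq)):
        ctx = seq[i - order:i]
        total = sum(counts[ctx].values())
        ll += math.log((counts[ctx][seq[i]] + 1) / (total + len(ALPHABET)))
    return ll

def classify(seq, models):
    """Pick the family whose model assigns seq the highest likelihood."""
    return max(models, key=lambda name: log_likelihood(seq, models[name]))

models = {
    "AT-rich": train(["ATATATAT", "TATATATA"]),
    "GC-rich": train(["GCGCGCGC", "CGCGCGCG"]),
}
print(classify("ATATAT", models))  # AT-rich
```

The generative setup is what enables the by-products mentioned in the table of contents, such as outlier detection within a family (sequences with uniformly low likelihood) and synthetic sequence generation (sampling from the learned transition probabilities).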

In summary, this dissertation describes methodologies for solving diverse sequential problems. Information-theoretic approaches and neural network approaches (CNNs and RNNs) were applied judiciously, and we confirmed their robust performance.
dc.description.tableofcontents:

Abstract i
List of Figures v
List of Tables vii
1 Introduction 1
2 Background 8
2.1 Sequential Data 8
2.2 Representation of Sequential Data 11
2.2.1 Nucleic Acid Sequence 11
2.2.2 Chemical Compound 15
2.3 Preprocessing of Sequential Data 19
2.4 Methods for Processed Sequential Data 21
2.4.1 Information Theoretic Approaches 21
2.4.2 Conventional Machine Learning Approaches 24
2.4.3 Deep Neural Network Based Approaches 24
3 Variable-order Markov Model Based Sequence Classifier 28
3.1 Methods 31
3.1.1 Model building 33
3.1.2 Classification 35
3.1.3 Outlier detection in a family of sequences 36
3.1.4 Synthetic sequence generation from learned CTM 37
3.2 Results 37
3.2.1 Dataset preparation and experimental setup 37
3.2.2 Classification Performance and Scalability 38
3.2.3 Other classification features and general purposes of NASCUP 45
3.3 Discussion 51
4 Siamese CNN Based Interaction Learning 56
4.1 Methods 61
4.1.1 Notations 61
4.1.2 Input Preprocessing 62
4.1.3 Weight Shared CNN Networks 64
4.1.4 Distance and Interaction Prediction 65
4.2 Results 68
4.2.1 Experimental Setup 68
4.2.2 Effects of Hyperparameter Variation 72
4.2.3 Performance Comparison with Other Methods 74
4.2.4 Analysis on Frequent SMILES Characters 79
4.3 Discussion 83
5 Comprehensive Ensemble with CNN, RNN Based Learner 86
5.1 Background 88
5.1.1 Ensemble Learning 88
5.2 Methods 90
5.2.1 Notations 92
5.2.2 First-Level: Individual Learning 92
5.2.3 Second-Level: Meta-Learning 95
5.3 Results 95
5.3.1 Experimental Setup 95
5.3.2 Performance Comparison with Other Models 98
5.3.3 Performance Comparison on Ensemble Combination 103
5.3.4 Ensemble Effects on Class Imbalance 108
5.4 Discussion 108
6 Conclusion 111
Bibliography 113
Abstract in Korean 129
dc.language.iso: en
dc.publisher: 서울대학교 대학원 (Seoul National University Graduate School)
dc.subject.ddc: 621.3
dc.title: Robust Feature Learning on Sequential Data
dc.title.alternative: 순차 데이터에 대한 견고한 특징 학습
dc.type: Thesis
dc.contributor.AlternativeAuthor: KWON SUNYOUNG
dc.description.degree: Doctor
dc.contributor.affiliation: 공과대학 전기·정보공학부 (College of Engineering, Department of Electrical and Computer Engineering)
dc.date.awarded: 2018-08
Items in S-Space are protected by copyright, with all rights reserved, unless otherwise indicated.