
Detailed Information

Distance-based Directional Self-Attention Network : 거리 및 방향 기반 Self-Attention Network

dc.contributor.advisor: 조성준
dc.contributor.author: 임진배
dc.date.accessioned: 2018-05-29T03:20:32Z
dc.date.available: 2018-05-29T03:20:32Z
dc.date.issued: 2018-02
dc.identifier.other: 000000149369
dc.identifier.uri: https://hdl.handle.net/10371/141437
dc.description: Thesis (Master's) -- Seoul National University Graduate School: College of Engineering, Department of Industrial Engineering, 2018. 2. 조성준.
dc.description.abstract: The attention mechanism has been used as an ancillary means to help RNNs and CNNs. However, the Transformer (Vaswani et al., 2017) recently achieved state-of-the-art performance in machine translation, with a dramatic reduction in training time, by using attention alone. Motivated by the Transformer, the Directional Self-Attention Network (Shen et al., 2017), a fully attention-based sentence encoder, was proposed; it performed well on various datasets by exploiting forward and backward directional information within a sentence. That study, however, did not consider the distance between words, an important feature for learning the local dependencies that help capture the context of the input text. We propose the Distance-based Directional Self-Attention Network, which accounts for word distance through a simple distance mask (sketched after this record) so as to model local dependencies without losing the ability to model global dependencies that is inherent in attention. Our model performs well on NLI data and sets a new state-of-the-art result on the SNLI dataset. Additionally, we show that our model is particularly strong on long sentences and documents.
dc.description.tableofcontents:
Chapter 1 Introduction 1
Chapter 2 Related Work 4
2.1 NLI Models 4
2.2 Attention Mechanism 5
2.2.1 Additive Attention 6
2.2.2 Dot-product Attention 6
2.3 Transformer (Vaswani et al., 2017) 7
2.3.1 Multi-Head Attention 7
2.3.2 Position-wise Feed-Forward Networks 9
2.4 Directional Self-Attention Network (Shen et al., 2017) 9
2.4.1 Directional Self-Attention 10
2.4.2 Multi-Dimensional Source2token Self-Attention 13
Chapter 3 Proposed Model 14
3.1 Overall Architecture 14
3.2 Sentence Encoder 15
3.2.1 Word Embedding Layer 16
3.2.2 Masked Multi-Head Attention 16
3.2.3 Fusion Gate 18
3.2.4 Position-wise Feed Forward Networks 19
3.2.5 Pooling Layer 19
Chapter 4 Experiments and Results 21
4.1 Dataset 21
4.2 Training Details 21
4.3 SNLI Results 22
4.4 MultiNLI Results 24
4.5 Case Study 25
Chapter 5 Conclusion 30
Bibliography 32
Abstract in Korean (국문초록) 38
dc.format: application/pdf
dc.format.extent: 2933687 bytes
dc.format.medium: application/pdf
dc.language.iso: en
dc.publisher: Seoul National University Graduate School (서울대학교 대학원)
dc.subject: Attention mechanism
dc.subject: Distance between words
dc.subject: Distance mask
dc.subject: Local dependency
dc.subject: Global dependency
dc.subject: Natural Language Inference
dc.subject.ddc: 670.42
dc.title: Distance-based Directional Self-Attention Network
dc.title.alternative: 거리 및 방향 기반 Self-Attention Network
dc.type: Thesis
dc.contributor.AlternativeAuthor: Jinbae Im
dc.description.degree: Master
dc.contributor.affiliation: College of Engineering, Department of Industrial Engineering (공과대학 산업공학과)
dc.date.awarded: 2018-02
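
The distance mask described in the abstract can be illustrated with a small NumPy sketch. This is a minimal, assumption-laden example: the linear penalty -alpha * |i - j|, the scalar alpha, and the exact directional and self-attention masking conventions are illustrative choices, not the formulation used in the thesis, which only states that a simple distance mask is combined with directional self-attention.

import numpy as np


def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)


def distance_directional_attention(x, alpha=1.0, direction="forward"):
    """Self-attention over a sentence x of shape (seq_len, d_model).

    Two additive masks are applied to the scaled dot-product logits:
      * a directional mask restricting each token to forward (or backward)
        positions, as in directional self-attention;
      * a distance mask penalizing attention by word distance |i - j|
        (the linear penalty -alpha * |i - j| is an assumed, illustrative
        choice, not the thesis's exact mask).
    """
    seq_len, d_model = x.shape
    logits = x @ x.T / np.sqrt(d_model)          # scaled dot-product scores

    idx = np.arange(seq_len)
    dist = np.abs(idx[:, None] - idx[None, :])   # word distances |i - j|

    if direction == "forward":                   # row i attends to j > i
        dir_mask = np.where(idx[None, :] > idx[:, None], 0.0, -np.inf)
    else:                                        # row i attends to j < i
        dir_mask = np.where(idx[None, :] < idx[:, None], 0.0, -np.inf)
    np.fill_diagonal(dir_mask, 0.0)              # always allow attending to self

    dist_mask = -alpha * dist                    # down-weight distant words

    weights = softmax(logits + dir_mask + dist_mask, axis=-1)
    return weights @ x                           # distance-aware context vectors


# Toy usage: encode a 5-token "sentence" of 8-dimensional embeddings in both
# directions and concatenate the results.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sent = rng.normal(size=(5, 8))
    fwd = distance_directional_attention(sent, alpha=0.5, direction="forward")
    bwd = distance_directional_attention(sent, alpha=0.5, direction="backward")
    encoded = np.concatenate([fwd, bwd], axis=-1)
    print(encoded.shape)                         # (5, 16)

The toy example at the bottom loosely mirrors the forward/backward encoder structure implied by the table of contents; the actual model additionally uses masked multi-head attention, a fusion gate, position-wise feed-forward networks, and a pooling layer.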