Detailed Information
Distance-based Directional Self-Attention Network : 거리 및 방향 기반 Self-Attention Network
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 조성준 | - |
dc.contributor.author | 임진배 | - |
dc.date.accessioned | 2018-05-29T03:20:32Z | - |
dc.date.available | 2018-05-29T03:20:32Z | - |
dc.date.issued | 2018-02 | - |
dc.identifier.other | 000000149369 | - |
dc.identifier.uri | https://hdl.handle.net/10371/141437 | - |
dc.description | Master's thesis -- Graduate School, Seoul National University : Department of Industrial Engineering, College of Engineering, February 2018. Advisor: 조성준. | - |
dc.description.abstract | The attention mechanism has been used as an ancillary means to support RNNs or CNNs. Recently, however, the Transformer (Vaswani et al., 2017) achieved state-of-the-art performance in machine translation with a dramatic reduction in training time by relying on attention alone. Motivated by the Transformer, the Directional Self-Attention Network (Shen et al., 2017), a fully attention-based sentence encoder, was proposed; it performed well on various datasets by using forward and backward directional information within a sentence. However, that study did not consider the distance between words, an important feature for learning local dependency, which helps a model understand the context of the input text. We propose the Distance-based Directional Self-Attention Network, which accounts for word distance through a simple distance mask, modeling local dependency without losing the global-dependency modeling that attention inherently provides. Our model performs well on NLI data and achieves a new state-of-the-art result on SNLI. Additionally, we show that our model is particularly strong on long sentences and documents. | - |
dc.description.tableofcontents | Chapter 1 Introduction 1 Chapter 2 Related Work 4 2.1 NLI Models 4 2.2 Attention Mechanism 5 2.2.1 Additive Attention 6 2.2.2 Dot-product Attention 6 2.3 Transformer (Vaswani et al., 2017) 7 2.3.1 Multi-Head Attention 7 2.3.2 Position-wise Feed-Forward Networks 9 2.4 Directional Self-Attention Network (Shen et al., 2017) 9 2.4.1 Directional Self-Attention 10 2.4.2 Multi-Dimensional Source2token Self-Attention 13 Chapter 3 Proposed Model 14 3.1 Overall Architecture 14 3.2 Sentence Encoder 15 3.2.1 Word Embedding Layer 16 3.2.2 Masked Multi-Head Attention 16 3.2.3 Fusion Gate 18 3.2.4 Position-wise Feed Forward Networks 19 3.2.5 Pooling Layer 19 Chapter 4 Experiments and Results 21 4.1 Dataset 21 4.2 Training Details 21 4.3 SNLI Results 22 4.4 MultiNLI Results 24 4.5 Case Study 25 Chapter 5 Conclusion 30 Bibliography 32 Abstract in Korean 38 | - |
dc.format | application/pdf | - |
dc.format.extent | 2933687 bytes | - |
dc.format.medium | application/pdf | - |
dc.language.iso | en | - |
dc.publisher | Graduate School, Seoul National University | - |
dc.subject | Attention mechanism | - |
dc.subject | Distance between words | - |
dc.subject | Distance mask | - |
dc.subject | Local dependency | - |
dc.subject | Global dependency | - |
dc.subject | Natural Language Inference | - |
dc.subject.ddc | 670.42 | - |
dc.title | Distance-based Directional Self-Attention Network | - |
dc.title.alternative | 거리 및 방향 기반 Self-Attention Network | - |
dc.type | Thesis | - |
dc.contributor.AlternativeAuthor | Jinbae Im | - |
dc.description.degree | Master | - |
dc.contributor.affiliation | Department of Industrial Engineering, College of Engineering | - |
dc.date.awarded | 2018-02 | - |
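
The abstract describes combining directional self-attention with a simple distance mask, so that attention captures local dependency between nearby words without losing its inherent ability to model global dependency. The following is a minimal NumPy sketch of that idea, not the thesis's exact formulation: the function name `directional_distance_attention`, the `alpha` weight, the additive log-distance penalty, and the choice of letting each token attend to itself are all illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def directional_distance_attention(x, forward=True, alpha=1.0):
    """Self-attention over a sentence x of shape (n, d) with
    (a) a directional mask that blocks attention to future (or past) tokens, and
    (b) a distance penalty that damps attention between far-apart tokens.
    """
    n, d = x.shape
    scores = x @ x.T / np.sqrt(d)            # scaled dot-product scores, shape (n, n)

    i = np.arange(n)
    dist = np.abs(i[:, None] - i[None, :])   # |i - j| word-distance matrix

    # Directional mask: -inf where attention is not allowed.
    if forward:                              # each token attends to itself and earlier tokens
        dir_mask = np.where(i[:, None] >= i[None, :], 0.0, -np.inf)
    else:                                    # each token attends to itself and later tokens
        dir_mask = np.where(i[:, None] <= i[None, :], 0.0, -np.inf)

    # Distance mask: penalize distant pairs so local dependency dominates,
    # while distant tokens still receive some attention (global dependency).
    dist_mask = -alpha * np.log1p(dist)

    weights = softmax(scores + dir_mask + dist_mask, axis=-1)
    return weights @ x                       # context vectors, shape (n, d)

# Toy usage: 5 "words" with 8-dimensional embeddings, encoded in both directions.
x = np.random.randn(5, 8)
out_fw = directional_distance_attention(x, forward=True)
out_bw = directional_distance_attention(x, forward=False)
print(out_fw.shape, out_bw.shape)            # (5, 8) (5, 8)
```

Because the penalty is added to the attention logits before the softmax, larger `alpha` values shift weight toward nearby words, while `alpha = 0` recovers plain directional attention; the thesis's multi-head and fusion-gate components are omitted here for brevity.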