
The Analysis of the Impact of Tokenization of Korean Pre-trained Model on Sentence Embedding (한국어 사전학습모델의 토큰화가 문장 임베딩에 끼치는 영향 분석)

Authors

김민석; 박수민; 신효필

Issue Date
2023
Publisher
The Linguistic Society of Korea (한국언어학회)
Citation
언어 (Korean Journal of Linguistics), Vol. 48, No. 4, pp. 917-947
Abstract
The pre-trained models that currently lead the field of natural language processing perform tokenization that does not respect linguistic units, using algorithms such as Byte-Pair Encoding, WordPiece, or SentencePiece. While these methods alleviate the out-of-vocabulary (OOV) problem, they split words into smaller units and thereby generate many tokens that have lost their lexical meaning. This paper analyzes how such tokens affect sentence embedding and, on that basis, points out a limitation of the tokenization used by pre-trained models. To this end, the study conducts an experiment to determine how tokens interact with sentence embedding depending on whether they preserve their semantics. The interaction between tokens and sentence embedding is measured with Self-Similarity and Intra-Similarity, two metrics proposed by Ethayarajh (2019). The study found that tokens without semantics show low Self-Similarity and low Intra-Similarity, whereas the remaining tokens score high on both indicators. Through an analysis of the word embedding layer and the Self-Attention layer, the study concludes that the tokens stripped of their meaning bias the sentence embedding, a problem pre-trained models inevitably suffer from as long as they keep their existing tokenization.
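
To make the kind of splitting described in the abstract concrete, the snippet below runs a public Korean WordPiece tokenizer; klue/bert-base is only an illustrative choice, not necessarily a model examined in the paper:

from transformers import AutoTokenizer

# Load the subword tokenizer of a public Korean pre-trained model.
# NOTE: klue/bert-base is an illustrative choice and may differ from
# the models analyzed in the paper.
tokenizer = AutoTokenizer.from_pretrained("klue/bert-base")

# A single Korean word can be split into several subword pieces, some
# of which carry no lexical meaning on their own; the exact split
# depends on the model's vocabulary.
print(tokenizer.tokenize("사전학습모델"))  # e.g. ['사전', '##학습', '##모델']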
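
The two metrics can be rendered as a minimal NumPy sketch of the definitions in Ethayarajh (2019); this is not the authors' code, and it omits the anisotropy baseline correction that Ethayarajh additionally applies:

import numpy as np

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def self_similarity(reps):
    # reps: contextual embeddings of the SAME token collected from
    # different sentences. Self-Similarity is the average pairwise
    # cosine similarity across those contexts.
    n = len(reps)
    pairs = [cosine(reps[i], reps[j]) for i in range(n) for j in range(i + 1, n)]
    return sum(pairs) / len(pairs)

def token_intra_similarity(token_rep, sentence_reps):
    # Cosine similarity between one token's embedding and the sentence
    # vector (the mean of all token embeddings in the sentence).
    # Ethayarajh's original intra-sentence similarity averages this
    # value over every token in the sentence.
    sent_vec = np.mean(sentence_reps, axis=0)
    return cosine(token_rep, sent_vec)

On the paper's account, subword tokens that have lost their lexical meaning score low on both quantities, while semantically intact tokens score high on both.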
ISSN
1229-4039
URI
https://hdl.handle.net/10371/201054
DOI
https://doi.org/10.18855/lisoko.2023.48.4.007