Detailed Information

Large Scale Video Understanding with Narrative Description : 이야기형 설명문을 활용한 대규모 비디오 학습 연구

DC Field: Value (Language)
dc.contributor.advisor: 김건희
dc.contributor.author: 유영재
dc.date.accessioned: 2021-11-30T02:39:31Z
dc.date.available: 2021-11-30T02:39:31Z
dc.date.issued: 2021-02
dc.identifier.other: 000000165275
dc.identifier.uri: https://hdl.handle.net/10371/175413
dc.identifier.uri: https://dcollection.snu.ac.kr/common/orgView/000000165275 (ko_KR)
dc.description: Dissertation (Ph.D.) -- 서울대학교 대학원 : 공과대학 컴퓨터공학부, 2021. 2. 김건희.
dc.description.abstract: Extensive efforts are being made to develop intelligent agents that can perceive the world and communicate with it. In this context, various video-language tasks have drawn considerable interest in computer vision research, including image/video captioning, video retrieval, and video question answering.
These capabilities can support high-level computer vision tasks and various future industries such as search engines, social marketing, automated driving, and robotics, through question answering and dialogue generation about the surrounding environment.

However, despite these developments, video-language learning suffers from a much higher degree of complexity than image-language learning.

This thesis investigates methodologies for learning the relationship between videos and free-form language, including descriptions, conversations, and question-answer pairs, so that a machine can easily adapt to target downstream tasks.

First, we introduce several methods for efficiently learning the relationship between long sentences and videos. We present an approach that supervises the video attention model with human attention transfer, showing that a video attention mechanism can benefit from explicit human gaze labels. Next, we introduce an end-to-end semantic attention method, which further reduces the complexity of the visual attention algorithm by using representative visual concept words detected by an attention-based detector. As a follow-up to these methods, we introduce JSFusion (Joint Sequence Fusion), which enables efficient video retrieval and question answering through many-to-many matching in the attention model.
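
To make the shared mechanism behind these attention-based models concrete, the following is a minimal illustrative sketch, not the thesis's actual architecture, of query-conditioned temporal attention over video frame features. The NumPy helper temporal_attention, the feature dimensions, and the toy inputs are assumptions made only for illustration.

import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    z = x - x.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def temporal_attention(frame_feats, query):
    """Attend over T frame features (T x d) with a sentence query vector (d,).

    Returns the attention weights over frames and the attended video vector.
    This mirrors a generic soft-attention step, not any specific model in the thesis.
    """
    scores = frame_feats @ query     # (T,) relevance of each frame to the query
    weights = softmax(scores)        # (T,) normalized attention distribution
    context = weights @ frame_feats  # (d,) attention-weighted video summary
    return weights, context

# Toy usage: 8 frames with 16-dim features, one 16-dim sentence embedding.
rng = np.random.default_rng(0)
V = rng.normal(size=(8, 16))
q = rng.normal(size=16)
w, c = temporal_attention(V, q)
print(w.shape, c.shape)  # (8,) (16,)

Roughly speaking, gaze supervision adds a loss that pulls the predicted weights toward human gaze distributions, while many-to-many matching as in JSFusion replaces the single score vector with a full word-by-frame similarity structure.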

Next, we introduce CiSIN (Character in Story Identification Network), which uses attention to improve the performance of character grounding and character re-identification in movies. Finally, we introduce Transitional Adaptation, which guides caption generation models to generate coherent narratives for long videos.
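
As an illustration of the cross-modal matching step that character grounding relies on, below is a minimal sketch that assumes pre-computed visual track embeddings and textual character embeddings in a shared space; the helper ground_characters and the shapes are hypothetical, and this is not the CiSIN architecture itself.

import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    # Normalize embeddings so that dot products become cosine similarities.
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def ground_characters(track_embs, mention_embs):
    """Match each textual character mention to its most similar person track.

    track_embs:   (N, d) visual embeddings of person tracks in a clip
    mention_embs: (M, d) textual embeddings of character mentions in the script
    Returns the (M, N) cosine-similarity matrix and, per mention, the index
    of its best-matching track.
    """
    sim = l2_normalize(mention_embs) @ l2_normalize(track_embs).T  # (M, N)
    return sim, sim.argmax(axis=1)

# Toy usage: 3 tracks, 2 mentions, 32-dim joint embedding space.
rng = np.random.default_rng(1)
tracks = rng.normal(size=(3, 32))
mentions = rng.normal(size=(2, 32))
sim, assignment = ground_characters(tracks, mentions)
print(sim.shape, assignment)  # (2, 3) and one best track index per mention

In CiSIN itself, character grounding and re-identification are trained jointly so that the two tasks reinforce each other; the sketch above only illustrates the matching step on given embeddings.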

In summary, this thesis presents novel approaches for automatic video description generation and retrieval, and shows the benefits of extracting linguistic knowledge about objects and motions in videos, as well as the advantage of multimodal audio-visual learning for understanding videos. Since the proposed methods can be easily adapted to any video-language task, they are expected to be applied to the latest models and bring additional performance improvements.
Moving forward, we plan to design an unsupervised video learning framework that can solve many challenges in industry by integrating the unlimited amount of video, audio, and free-form language data available on the web.
dc.description.abstract: Vision-language learning is an important and actively studied field because it can be applied not only to high-level computer vision tasks such as image/video captioning, visual question answering, video retrieval, scene understanding, and event detection, but also to many future industries, supporting internet search as well as the increasingly active areas of social marketing, automated driving, and robotics through question answering and dialogue generation about the surrounding environment.
Building on this importance, computer vision and natural language processing have each advanced in their own domains; with the recent rise of deep learning, they have progressed remarkably and now complement each other, producing strong synergy in learning results.

Despite this progress, however, video-language learning often struggles because the problem is considerably more complex.
This thesis aims to learn more efficiently the relationship between videos and the corresponding descriptions, dialogues, question-answer pairs, and, more broadly, free-form language, and to improve models so that they cope well with their target tasks.

First, we introduce several methods for efficiently learning the relationship between long sentences and videos, whose visual complexity is higher than that of images. We present methods for efficiently training video attention: a method that supervises a human attention model within a video-language model; a semantic attention method that further reduces the complexity of the attention algorithm by using representative visual concept words first detected from the video as an intermediary; and a joint sequence fusion method that enables efficient video retrieval and question answering based on many-to-many matching of the attention model.

Next, we introduce the Character in Story Identification Network, in which the attention model goes beyond object-word relationships and simultaneously performs person searching (character grounding) and person re-identification in video so that the two tasks reinforce each other. Finally, we introduce a method that uses self-supervised learning to guide an attention-based language model to generate coherent descriptions of long videos.

In summary, the new methods proposed in this dissertation serve as technical stepping stones for video-language tasks such as video captioning, video retrieval, and video question answering, and the attention module learned through video captioning, when transplanted into the networks for retrieval, question answering, and person search, achieved state-of-the-art performance on these new problems simultaneously. This experimentally demonstrates that transferring linguistic knowledge obtained from video-language learning greatly benefits multimodal video learning spanning vision and audio. As future work, building on these studies, we aim to build an unsupervised learning model that can solve many challenges in industry by integrating and exploiting the large-scale language, video, and audio data available on the web.
dc.description.tableofcontents:
Chapter 1 Introduction
1.1 Contributions
1.2 Outline of the thesis
Chapter 2 Related Work
2.1 Video Captioning
2.2 Video Retrieval with Natural Language
2.3 Video Question and Answering
2.4 Cross-modal Representation Learning for Vision and Language Tasks
Chapter 3 Human Attention Transfer for Video Captioning
3.1 Introduction
3.2 Video Datasets for Caption and Gaze
3.3 Approach
3.3.1 Video Pre-processing and Description
3.3.2 The Recurrent Gaze Prediction (RGP) Model
3.3.3 Construction of Visual Feature Pools
3.3.4 The Decoder for Caption Generation
3.3.5 Training
3.4 Experiments
3.4.1 Evaluation of Gaze Prediction
3.4.2 Evaluation of Video Captioning
3.4.3 Human Evaluation via AMT
3.5 Conclusion
Chapter 4 Semantic Word Attention for Video QA and Video Captioning
4.1 Introduction
4.1.1 Related Work
4.1.2 Contributions
4.2 Approach
4.2.1 Preprocessing
4.2.2 An Attention Model for Concept Detection
4.2.3 Video-to-Language Models
4.2.4 A Model for Description
4.2.5 A Model for Fill-in-the-Blank
4.2.6 A Model for Multiple-Choice Test
4.2.7 A Model for Retrieval
4.3 Experiments
4.3.1 The LSMDC Dataset and Tasks
4.3.2 Quantitative Results
4.3.3 Qualitative Results
4.4 Conclusion
Chapter 5 Joint Sequence Fusion Attention for Multimodal Sequence Data
5.1 Introduction
5.2 Related Work
5.3 Approach
5.3.1 Preprocessing
5.3.2 The Joint Semantic Tensor
5.3.3 The Convolutional Hierarchical Decoder
5.3.4 An Illustrative Example of How the JSFusion Model Works
5.3.5 Training
5.3.6 Implementation of Video-Language Models
5.4 Experiments
5.4.1 LSMDC Dataset and Tasks
5.4.2 MSR-VTT-(RET/MC) Dataset and Tasks
5.4.3 Quantitative Results
5.4.4 Qualitative Results
5.5 Conclusion
Chapter 6 Character Re-Identification and Character Grounding for Movie Understanding
6.1 Introduction
6.2 Related Work
6.3 Approach
6.3.1 Video Preprocessing
6.3.2 Visual Track Embedding
6.3.3 Textual Character Embedding
6.3.4 Character Grounding
6.3.5 Re-Identification
6.3.6 Joint Training
6.4 Experiments
6.4.1 Experimental Setup
6.4.2 Quantitative Results
6.4.3 Qualitative Results
6.5 Conclusion
Chapter 7 Transitional Adaptation of Pretrained Models for Visual Storytelling
7.1 Introduction
7.2 Related Work
7.3 Approach
7.3.1 The Visual Encoder
7.3.2 The Language Generator
7.3.3 Adaptation Training
7.3.4 The Sequential Coherence Loss
7.3.5 Training with the Adaptation Loss
7.3.6 Fine-tuning and Inference
7.4 Experiments
7.4.1 Experimental Setup
7.4.2 Quantitative Results
7.4.3 Further Analyses
7.4.4 Human Evaluation Results
7.4.5 Qualitative Results
7.5 Conclusion
Chapter 8 Conclusion
8.1 Summary
8.2 Future Works
Bibliography
요약 (Abstract in Korean)
Acknowledgements
dc.format.extent: xix, 150
dc.language.iso: eng
dc.publisher: 서울대학교 대학원
dc.subject: Deep Learning
dc.subject: Computer Vision
dc.subject: Natural Language Processing
dc.subject: Multimodal Learning
dc.subject: Video Understanding
dc.subject: Visual Question Answering
dc.subject: 딥러닝
dc.subject: 컴퓨터 비젼
dc.subject: 자연어 처리
dc.subject: 멀티모달 학습
dc.subject: 비디오 이해
dc.subject: 비디오 질의 응답
dc.subject.ddc: 621.39
dc.title: Large Scale Video Understanding with Narrative Description
dc.title.alternative: 이야기형 설명문을 활용한 대규모 비디오 학습 연구
dc.type: Thesis
dc.type: Dissertation
dc.contributor.AlternativeAuthor: Youngjae Yu
dc.contributor.department: 공과대학 컴퓨터공학부
dc.description.degree: Doctor
dc.date.awarded: 2021-02
dc.identifier.uci: I804:11032-000000165275
dc.identifier.holdings: 000000000044▲000000000050▲000000165275▲