
Video-based Visual Question Answering with Spatio-Temporal Reasoning Tasks: 시·공간 추론에 기반한 동영상 질의 응답 (Video Question Answering Based on Spatio-Temporal Reasoning)

DC Field: Value
dc.contributor.advisor: 김건희 (Gunhee Kim)
dc.contributor.author: 장윤석 (Yunseok Jang)
dc.date.accessioned: 2018-05-29T03:33:45Z
dc.date.available: 2018-05-29T03:33:45Z
dc.date.issued: 2018-02
dc.identifier.other: 000000149522
dc.identifier.uri: https://hdl.handle.net/10371/141566
dc.description: Master's thesis, Seoul National University Graduate School, College of Engineering, Dept. of Computer Science and Engineering, February 2018. Advisor: 김건희 (Gunhee Kim).
dc.description.abstract: Vision and language understanding has emerged as a subject undergoing intense study in Artificial Intelligence. Among the many tasks in this line of research, visual question answering (VQA) has been one of the most successful, where the goal is to learn a model that understands visual content at region-level detail and finds its associations with pairs of questions and answers in natural language form. Despite rapid progress in the past few years, most existing work in VQA has focused primarily on images. In this thesis, we focus on extending VQA to the video domain and contribute to the literature in three important ways. First, we propose three new tasks designed specifically for video VQA, which require spatio-temporal reasoning from videos to answer questions correctly. Next, we introduce a new large-scale dataset for video VQA named TGIF-QA that extends existing VQA work with our new tasks. Finally, we propose a dual-LSTM based approach with both spatial and temporal attention, and show its effectiveness over conventional VQA techniques through empirical evaluations. Code and the dataset are available on our project page: http://vision.snu.ac.kr/projects/tgif-qa
dc.description.tableofcontents:
Chapter 1 Introduction 1
Chapter 2 Related Work 5
2.1 Datasets 5
2.2 Tasks 6
2.3 Techniques 7
Chapter 3 TGIF-QA Dataset 8
3.1 Task Definition 8
3.2 QA Collection 10
3.2.1 Crowdsourcing 10
3.2.2 Task setup 13
3.2.3 Quality control 13
3.2.4 Post-processing 16
3.2.5 QA generation 16
3.3 Comparison with Other Video VQA Datasets 17
Chapter 4 Approach 21
4.1 Feature Representation 21
4.1.1 Video representation 21
4.1.2 Text representation 22
4.2 Video and Text Encoders 23
4.2.1 Video encoder 23
4.2.2 Text encoder 23
4.3 Answer Decoders 24
4.3.1 Multiple choice 24
4.3.2 Open-ended, number 24
4.3.3 Open-ended, word 24
4.4 Attention Mechanism 25
4.4.1 Spatial attention 25
4.4.2 Temporal attention 26
4.5 Implementation Details 26
Chapter 5 Experiments 28
5.1 Baselines 28
5.1.1 Image-based 28
5.1.2 Video-based 29
5.1.3 Variants of our method 29
5.2 Results and Analysis 30
Chapter 6 Conclusion 35
요약 (Abstract in Korean) 41
dc.format: application/pdf
dc.format.extent: 11695658 bytes
dc.format.medium: application/pdf
dc.language.iso: en
dc.publisher: 서울대학교 대학원 (Seoul National University Graduate School)
dc.subject: Neural Network
dc.subject: Deep Learning
dc.subject: Computer Vision
dc.subject: Natural Language Processing
dc.subject: Visual Question Answering
dc.subject: Visual Understanding
dc.subject: Visual Reasoning
dc.subject.ddc: 621.39
dc.title: Video-based Visual Question Answering with Spatio-Temporal Reasoning Tasks
dc.title.alternative: 시·공간 추론에 기반한 동영상 질의 응답 (Video Question Answering Based on Spatio-Temporal Reasoning)
dc.type: Thesis
dc.contributor.AlternativeAuthor: Yunseok Jang
dc.description.degree: Master
dc.contributor.affiliation: 공과대학 컴퓨터공학부 (College of Engineering, Dept. of Computer Science and Engineering)
dc.date.awarded: 2018-02
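
The abstract above describes a dual-LSTM approach with spatial and temporal attention, whose encoders, decoders, and attention mechanisms are detailed in Chapter 4 of the table of contents. As a rough illustration only, the sketch below shows what a question-guided temporal-attention encoder over per-frame video features might look like. The PyTorch framing, layer sizes, and additive attention form are assumptions of this sketch, not details taken from the thesis.

    import torch
    import torch.nn as nn

    class TemporalAttentionEncoder(nn.Module):
        # Hypothetical sketch: one LSTM over per-frame video features and one
        # over question word embeddings, with the final question state
        # attending over time. Dimensions and wiring are illustrative
        # assumptions, not the thesis's actual implementation.
        def __init__(self, video_dim=2048, text_dim=300, hidden=512):
            super().__init__()
            self.video_lstm = nn.LSTM(video_dim, hidden, batch_first=True)
            self.text_lstm = nn.LSTM(text_dim, hidden, batch_first=True)
            self.w_video = nn.Linear(hidden, hidden)
            self.w_question = nn.Linear(hidden, hidden)
            self.score = nn.Linear(hidden, 1)

        def forward(self, frames, question):
            # frames: (B, T, video_dim); question: (B, L, text_dim)
            v_states, _ = self.video_lstm(frames)        # (B, T, hidden)
            _, (q_final, _) = self.text_lstm(question)   # (1, B, hidden)
            q = q_final.squeeze(0)                       # (B, hidden)
            # Additive attention: score each time step against the question
            e = self.score(torch.tanh(
                self.w_video(v_states) + self.w_question(q).unsqueeze(1)))
            alpha = torch.softmax(e, dim=1)              # weights over T
            context = (alpha * v_states).sum(dim=1)      # (B, hidden)
            # Fused video-question representation for a downstream decoder
            return torch.cat([context, q], dim=-1)       # (B, 2 * hidden)

One plausible wiring, again an assumption rather than the thesis's design: the fused vector would be scored against each candidate answer for the multiple-choice task, or passed to a regression or classification head for the open-ended number and word tasks listed in Section 4.3.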