Video-based Visual Question Answering with Spatio-Temporal Reasoning Tasks

Authors

장윤석 (Yunseok Jang)

Advisor
김건희 (Gunhee Kim)
Major
Department of Computer Science and Engineering, College of Engineering
Issue Date
2018-02
Publisher
Seoul National University Graduate School
Keywords
Neural Network; Deep Learning; Computer Vision; Natural Language Processing; Visual Question Answering; Visual Understanding; Visual Reasoning
Description
Master's thesis -- Seoul National University Graduate School : Department of Computer Science and Engineering, College of Engineering, February 2018. Advisor: 김건희 (Gunhee Kim).
Abstract
Vision and language understanding has emerged as a subject undergoing intense study in Artificial Intelligence. Among the many tasks in this line of research, visual question answering (VQA) has been one of the most successful, where the goal is to learn a model that understands visual content at region-level detail and finds its associations with pairs of questions and answers in natural language form. Despite the rapid progress in the past few years, most existing work in VQA has focused primarily on images. In this thesis, we focus on extending VQA to the video domain and contribute to the literature in three important ways. First, we propose three new tasks designed specifically for video VQA, which require spatio-temporal reasoning from videos to answer questions correctly. Next, we introduce a new large-scale dataset for video VQA named TGIF-QA that extends existing VQA work with our new tasks. Finally, we propose a dual-LSTM based approach with both spatial and temporal attention, and show its effectiveness over conventional VQA techniques through empirical evaluations. Code and the dataset are available on our project page: http://vision.snu.ac.kr/projects/tgif-qa
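Illustrative Sketch
The dual-LSTM approach described in the abstract can be pictured with a short PyTorch sketch. This is a minimal illustration, not the thesis's released code: the class name DualLSTMVideoQA, all layer sizes, the additive (Bahdanau-style) form of the attention, and the classification head over a fixed answer vocabulary are assumptions made here; the sketch also shows only temporal attention and omits the spatial attention the abstract mentions. See the project page above for the actual implementation.

# Minimal sketch (not the thesis code): one LSTM encodes frame features,
# another encodes the question, and temporal attention conditioned on the
# question pools the frame states before answer classification.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualLSTMVideoQA(nn.Module):
    def __init__(self, feat_dim=2048, word_dim=300, hidden=512, n_answers=1000):
        super().__init__()
        self.video_lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.text_lstm = nn.LSTM(word_dim, hidden, batch_first=True)
        # Additive temporal attention: score each frame state against
        # the final question state.
        self.att_v = nn.Linear(hidden, hidden)
        self.att_q = nn.Linear(hidden, hidden)
        self.att_out = nn.Linear(hidden, 1)
        self.classifier = nn.Linear(2 * hidden, n_answers)

    def forward(self, frames, words):
        # frames: (B, T, feat_dim) pre-extracted per-frame visual features
        # words:  (B, L, word_dim) question word embeddings
        v_states, _ = self.video_lstm(frames)           # (B, T, H)
        _, (q_last, _) = self.text_lstm(words)          # (1, B, H)
        q = q_last.squeeze(0)                           # (B, H)
        scores = self.att_out(torch.tanh(
            self.att_v(v_states) + self.att_q(q).unsqueeze(1)))  # (B, T, 1)
        alpha = F.softmax(scores, dim=1)                # weights over time
        v_att = (alpha * v_states).sum(dim=1)           # (B, H) attended video
        return self.classifier(torch.cat([v_att, q], dim=1))  # answer logits

# Toy usage with random tensors.
model = DualLSTMVideoQA()
logits = model(torch.randn(2, 35, 2048), torch.randn(2, 12, 300))
print(logits.shape)  # torch.Size([2, 1000])

The attention step carries the core idea: rather than pooling all frames uniformly, the question representation determines which time steps contribute to the answer, which is what makes the spatio-temporal reasoning tasks tractable for a recurrent model.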
Language
English
URI
https://hdl.handle.net/10371/141566