Automatic Story Extraction for Photo Stream via Coherence Recurrent Convolutional Neural Network

Authors

박천성 (Chunseong Park)

Advisor
김건희 (Gunhee Kim)
Major
Department of Computer Science and Engineering, College of Engineering
Issue Date
2017-02
Publisher
Seoul National University Graduate School
Keywords
Deep learning; Recurrent Neural Network; Convolutional Neural Network; Photo stream; Story extraction; Coherence; Image captioning; Natural Language Processing
Description
Thesis (M.S.) -- Seoul National University Graduate School: Department of Computer Science and Engineering, February 2017. Advisor: Gunhee Kim.
Abstract
Thanks to advances in computing power, data collection, and research effort, artificial intelligence has improved considerably, and research on images in particular has progressed very quickly. Computers now approach human-level performance on several visual tasks: they can see, understand, and describe what they perceive. Among these abilities, we focus on visual understanding and natural language expression. Various studies have sought to understand visual information and express it in natural language; for instance, image captioning on the Flickr30K and MS COCO datasets has approached human-level performance. However, such results are still limited to simple data and tasks.
In this dissertation, we propose an approach for retrieving a sequence of natural sentences for an image stream. We deal with more complex, unrefined data than previous work. This dissertation extends the preliminary work of Park and Kim, and an amended version of it has been submitted to IEEE Transactions on Pattern Analysis and Machine Intelligence.
Since general users often take a series of pictures of their experiences, much online visual information exists in the form of image streams, and it is better to take the whole stream into consideration when producing natural language descriptions. While almost all previous studies have dealt with the relation between a single image and a single natural sentence, our work extends both the input and the output to a sequence of images and a sequence of sentences. To this end, we propose a multimodal neural architecture called the coherence recurrent convolutional network (CRCN), which consists of convolutional neural networks, bidirectional long short-term memory (LSTM) networks, and an entity-based local coherence model. Our approach learns directly from a vast user-generated resource of blog posts as text-image parallel training data. We collected more than 22K unique blog posts with 170K associated images on the topics of NYC, Disneyland, Australia, and Hawaii. We demonstrate that our approach outperforms other state-of-the-art image captioning methods, using both quantitative measures and user studies on Amazon Mechanical Turk.
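To make the architecture concrete, the following is a minimal, hypothetical PyTorch sketch of a CRCN-style sequence scorer, not the authors' released code: per-image CNN features pass through a bidirectional LSTM, the hidden states are matched against candidate sentence embeddings, and a precomputed coherence score stands in for the entity-based local coherence model. All class and parameter names (CRCNSketch, img_dim, sent_dim, hidden) are illustrative assumptions.

import torch
import torch.nn as nn

class CRCNSketch(nn.Module):
    def __init__(self, img_dim=4096, sent_dim=300, hidden=512):
        super().__init__()
        # Bidirectional LSTM over the stream of per-image CNN features
        self.blstm = nn.LSTM(img_dim, hidden, batch_first=True,
                             bidirectional=True)
        # Project BLSTM states into the sentence-embedding space
        self.proj = nn.Linear(2 * hidden, sent_dim)

    def forward(self, img_feats, sent_embs, coherence):
        # img_feats:  (batch, seq_len, img_dim)  CNN features per photo
        # sent_embs:  (batch, seq_len, sent_dim) candidate sentence embeddings
        # coherence:  (batch,) entity-grid coherence score of the candidate text
        h, _ = self.blstm(img_feats)              # (batch, seq_len, 2*hidden)
        img_proj = self.proj(h)                   # (batch, seq_len, sent_dim)
        # Per-position compatibility between image context and sentence
        compat = (img_proj * sent_embs).sum(-1)   # (batch, seq_len)
        # Sequence score = content compatibility + coherence of the text
        return compat.mean(dim=1) + coherence

# Usage: score one candidate sentence sequence for a 5-photo stream
model = CRCNSketch()
imgs = torch.randn(1, 5, 4096)   # e.g. VGG fc7 features
sents = torch.randn(1, 5, 300)   # e.g. paragraph-vector embeddings
coh = torch.zeros(1)
print(model(imgs, sents, coh))   # higher score = better sequence

In a retrieval setting, such a scorer would be applied to many candidate sentence sequences drawn from the blog-post corpus, and the highest-scoring sequence returned as the description of the photo stream.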
Language
English
URI
https://hdl.handle.net/10371/122686