Transitional adaptation of pretrained models for visual storytelling

Cited 15 times in Web of Science; cited 18 times in Scopus
Authors

Yu, Youngjae; Chung, Jiwan; Yun, Heeseung; Kim, Jongseok; Kim, Gunhee

Issue Date
2021-01
Publisher
IEEE Computer Society
Citation
Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp.12653-12663
Abstract
© 2021 IEEE. Previous models for vision-to-language generation tasks usually pretrain a visual encoder and a language generator in their respective domains and jointly finetune them on the target task. However, this direct transfer practice may suffer from discord between visual specificity and language fluency, since the two modules are often trained separately on large corpora of visual and text data with no common ground. In this work, we claim that a transitional adaptation task is required between pretraining and finetuning to harmonize the visual encoder and the language model for challenging downstream target tasks like visual storytelling. We propose a novel approach named Transitional Adaptation of Pretrained Model (TAPM), which adapts the multi-modal modules to each other with a simpler alignment task over visual inputs alone, requiring no text labels. Through extensive experiments, we show that this adaptation step significantly improves the performance of multiple language models on sequential video and image captioning tasks. We achieve new state-of-the-art performance on both language metrics and human evaluation in the multi-sentence description task of LSMDC 2019 [50] and the image storytelling task of VIST [18]. Our experiments reveal that this improvement in caption quality does not depend on the specific choice of language models.
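The abstract describes the alignment task only at a high level (an objective over visual inputs alone, with no text labels). As a rough illustration of what such a text-free sequential alignment objective could look like, the following is a minimal sketch of a contrastive loss over a sequence of clip embeddings; the function names, loss form, and temperature here are assumptions for illustration, not the paper's actual objective.

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two embedding vectors.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

def alignment_loss(visual_feats, temperature=0.1):
    """Toy contrastive alignment over a sequence of visual embeddings:
    each clip's embedding should score highest against its true next
    clip, relative to the other clips in the same sequence. No text
    labels are involved, matching the text-free setup described above."""
    n = len(visual_feats)
    loss = 0.0
    for i in range(n - 1):
        # Similarities of clip i to every other clip in the sequence.
        sims = np.array([cosine_sim(visual_feats[i], visual_feats[j])
                         for j in range(n) if j != i]) / temperature
        # After removing index i itself, the true next clip (i + 1)
        # sits at position i in the candidate list.
        pos = i
        # Softmax cross-entropy: push the true neighbor's score up.
        log_probs = sims - np.log(np.sum(np.exp(sims)))
        loss -= log_probs[pos]
    return loss / (n - 1)
```

A training loop would minimize this loss over the visual encoder's outputs before finetuning on the captioning task itself; the actual TAPM objective is defined in the paper.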
ISSN
1063-6919
URI
https://hdl.handle.net/10371/183788
DOI
https://doi.org/10.1109/CVPR46437.2021.01247
Files in This Item:
There are no files associated with this item.
Items in S-Space are protected by copyright, with all rights reserved, unless otherwise indicated.
