Publications

Detailed Information

Large-scale Lifelong Learning of In-context Instructions and How to Tackle It

Cited 0 times in Web of Science; cited 1 time in Scopus
Authors

Mok, Ji Soo; Do, Jae Young; Lee, Sung Jin; Taghavi, Tara; Yu, Seung Hak; Yoon, Sung Roh

Issue Date
2023-07
Publisher
Association for Computational Linguistics (ACL)
Citation
Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL), Vol. 1, pp. 12573-12589
Abstract
Jointly fine-tuning a Pre-trained Language Model (PLM) on a pre-defined set of tasks with in-context instructions has been shown to improve its generalization performance, allowing us to build a universal language model that can be deployed across task boundaries. In this work, we explore for the first time whether this attractive property of in-context instruction learning can be extended to a scenario in which tasks are fed to the target PLM in a sequential manner. The primary objective of so-called lifelong in-context instruction learning is to improve the target PLM's instance- and task-level generalization performance as it observes more tasks. DYNAINST, the method proposed for lifelong in-context instruction learning, achieves noticeable improvements in both types of generalization, nearly reaching the upper bound performance obtained through joint training.
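
To make the setting described in the abstract concrete, the sketch below illustrates the lifelong (sequential) instruction-learning protocol in general terms; it is not the paper's DYNAINST method, whose mechanism is not described here. The backbone PLM (assumed to be t5-small loaded via Hugging Face Transformers), the toy task data, and the hyperparameters are all assumptions made for illustration only.

# Illustrative sketch of lifelong in-context instruction learning (not DynaInst).
# A stand-in PLM is fine-tuned on a stream of instruction-annotated tasks one at
# a time, then probed on held-out instances. Model, data, and hyperparameters
# are assumptions for illustration.
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "t5-small"  # stand-in PLM; the paper's backbone may differ
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# Each task pairs a natural-language instruction with (input, output) instances.
task_stream = [
    {"instruction": "Answer the question with yes or no.",
     "train": [("Is the sky blue?", "yes")],
     "test":  [("Is grass purple?", "no")]},
    {"instruction": "Translate the sentence to French.",
     "train": [("Good morning.", "Bonjour.")],
     "test":  [("Thank you.", "Merci.")]},
]

def encode(instruction, x, y=None):
    # Prepend the task instruction to the instance input, as in instruction tuning.
    enc = tokenizer(f"{instruction} Input: {x}", return_tensors="pt", truncation=True)
    if y is not None:
        enc["labels"] = tokenizer(y, return_tensors="pt").input_ids
    return enc

# Sequential (lifelong) phase: the model sees one task at a time and never
# revisits earlier tasks, in contrast to joint multi-task training.
model.train()
for task in task_stream:
    for x, y in task["train"]:
        loss = model(**encode(task["instruction"], x, y)).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# Evaluation phase: generate answers for held-out instances of the seen tasks
# (instance-level generalization).
model.eval()
for task in task_stream:
    for x, _ in task["test"]:
        out = model.generate(**encode(task["instruction"], x))
        print(tokenizer.decode(out[0], skip_special_tokens=True))

Task-level generalization would additionally be probed by evaluating on tasks never shown during the sequential phase, and joint training over all tasks at once would give the upper-bound performance that the abstract refers to.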
ISSN
0736-587X
URI
https://hdl.handle.net/10371/201358
DOI
https://doi.org/10.18653/v1/2023.acl-long.703
Appears in Collections:

Related Researcher

  • College of Engineering
  • Department of Electrical and Computer Engineering
Research Area: Algorithm-system co-design for AI applications, AI-powered Big Data Management, Generative AI, Large Language Model, ML, High-performance large-scale AI data analysis and processing, Modal AI
