Large-scale Lifelong Learning of In-context Instructions and How to Tackle It
- Cited 0 times in Web of Science
- Cited 1 time in Scopus
- Authors
- Issue Date
- 2023-07
- Citation
- Association for Computational Linguistics (ACL). Annual Meeting Conference Proceedings, Vol.1, pp.12573-12589
- Abstract
- Jointly fine-tuning a Pre-trained Language Model (PLM) on a pre-defined set of tasks with in-context instructions has been proven to improve its generalization performance, allowing us to build a universal language model that can be deployed across task boundaries. In this work, we explore for the first time whether this attractive property of in-context instruction learning can be extended to a scenario in which tasks are fed to the target PLM in a sequential manner. The primary objective of so-called lifelong in-context instruction learning is to improve the target PLM's instance- and task-level generalization performance as it observes more tasks. DYNAINST, the proposed method for lifelong in-context instruction learning, achieves noticeable improvements in both types of generalization, nearly reaching the upper-bound performance obtained through joint training.
- ISSN
- 0736-587X
- Files in This Item:
- There are no files associated with this item.
Related Researcher
- College of Engineering
- Department of Electrical and Computer Engineering
Items in S-Space are protected by copyright, with all rights reserved, unless otherwise indicated.