
Detailed Information

Learning from a Friend: Improving Event Extraction via Self-Training with Feedback from Abstract Meaning Representation

dc.contributor.author: Xu, Zhiyang
dc.contributor.author: Lee, Jay-Yoon
dc.contributor.author: Huang, Lifu
dc.date.accessioned: 2024-05-03T07:35:16Z
dc.date.available: 2024-05-03T07:35:16Z
dc.date.created: 2024-04-29
dc.date.issued: 2023
dc.identifier.citation: Association for Computational Linguistics (ACL). Annual Meeting Conference Proceedings, pp. 10421-10437
dc.identifier.issn: 0736-587X
dc.identifier.uri: https://hdl.handle.net/10371/200907
dc.description.abstract: Data scarcity has been the main factor that hinders the progress of event extraction. To overcome this issue, we propose a Self-Training with Feedback (STF) framework that leverages large-scale unlabeled data and acquires feedback for each new event prediction from the unlabeled data by comparing it to the Abstract Meaning Representation (AMR) graph of the same sentence. Specifically, STF consists of (1) a base event extraction model trained on existing event annotations and then applied to large-scale unlabeled corpora to predict new event mentions as pseudo training samples, and (2) a novel scoring model that takes in each new predicted event trigger, an argument, its argument role, as well as their paths in the AMR graph, to estimate a compatibility score indicating the correctness of the pseudo label. The compatibility scores further act as feedback to encourage or discourage the model learning on the pseudo labels during self-training. Experimental results on three benchmark datasets, including ACE05-E, ACE05-E+, and ERE, demonstrate the effectiveness of the STF framework on event extraction, especially event argument extraction, with significant performance gains over the base event extraction models and strong baselines. Our experimental analysis further shows that STF is a generic framework, as it can be applied to improve most, if not all, event extraction models by leveraging large-scale unlabeled data, even when high-quality AMR graph annotations are not available.
dc.language: English
dc.publisher: Association for Computational Linguistics (ACL). Annual Meeting Conference Proceedings
dc.title: Learning from a Friend: Improving Event Extraction via Self-Training with Feedback from Abstract Meaning Representation
dc.type: Article
dc.citation.journaltitle: Association for Computational Linguistics (ACL). Annual Meeting Conference Proceedings
dc.identifier.scopusid: 2-s2.0-85175467074
dc.citation.endpage: 10437
dc.citation.startpage: 10421
dc.description.isOpenAccess: N
dc.contributor.affiliatedAuthor: Lee, Jay-Yoon
dc.type.docType: Conference Paper
dc.description.journalClass: 1
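
The abstract above outlines the core of STF: event mentions predicted on unlabeled text serve as pseudo labels, and each pseudo label is weighted during self-training by a compatibility score estimated from the sentence's AMR graph. As a rough illustration of that idea only, the Python sketch below shows one possible feedback-weighted self-training step; the model, scorer, optimizer, and batch fields (inputs, gold_events, amr_graphs) are hypothetical placeholders and do not reflect the authors' released implementation.

    # Minimal sketch of feedback-weighted self-training (STF-style).
    # All APIs below (model.loss, model.predict, scorer, batch fields)
    # are hypothetical placeholders, not the paper's actual code.
    import torch

    def self_training_step(model, scorer, labeled_batch, unlabeled_batch, optimizer):
        # Supervised loss on existing gold event annotations.
        sup_loss = model.loss(labeled_batch.inputs, labeled_batch.gold_events).mean()

        # Predict new event mentions on unlabeled sentences (pseudo labels).
        with torch.no_grad():
            pseudo_events = model.predict(unlabeled_batch.inputs)
            # Compatibility score in [0, 1] for each predicted
            # (trigger, argument, role), estimated from their paths
            # in the sentence's AMR graph.
            weights = scorer(pseudo_events, unlabeled_batch.amr_graphs)

        # Feedback: encourage learning on high-compatibility pseudo labels
        # and down-weight low-compatibility (likely incorrect) ones.
        per_example = model.loss(unlabeled_batch.inputs, pseudo_events, reduction="none")
        pseudo_loss = (weights * per_example).mean()

        loss = sup_loss + pseudo_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

Weighting the pseudo-label loss this way is what the abstract means by compatibility scores acting as feedback that encourages or discourages learning on each pseudo label.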
Files in This Item:
There are no files associated with this item.

Related Researcher

  • Lee, Jay-Yoon (Graduate School of Data Science)
    Research Area: Constraint injection, Energy-based models, Structured Prediction


Items in S-Space are protected by copyright, with all rights reserved, unless otherwise indicated.
