Detailed Information

Semi-Supervised Learning on Meta Structure: Multi-Task Tagging and Parsing in Low-Resource Scenarios

DC Field | Value | Language
dc.contributor.author | Lim, KyungTae | -
dc.contributor.author | Lee, Jay Yoon | -
dc.contributor.author | Carbonell, Jaime | -
dc.contributor.author | Poibeau, Thierry | -
dc.date.accessioned | 2024-05-08T01:12:46Z | -
dc.date.available | 2024-05-08T01:12:46Z | -
dc.date.created | 2024-04-29 | -
dc.date.issued | 2020 | -
dc.identifier.citation | Proceedings of the AAAI Conference on Artificial Intelligence, Vol.34, pp.8344-8351 | -
dc.identifier.issn | 2159-5399 | -
dc.identifier.uri | https://hdl.handle.net/10371/201063 | -
dc.description.abstract | Multi-view learning makes use of diverse models arising from multiple sources of input or different feature subsets for the same task. For example, a given natural language processing task can combine evidence from models arising from character, morpheme, lexical, or phrasal views. The most common strategy in multi-view learning, especially popular in the neural network community, is to unify multiple representations into one vector through concatenation, averaging, or pooling, and then build a single-view model on top of the unified representation. As an alternative, we examine whether building one model per view and then unifying the different models can lead to improvements, especially in low-resource scenarios. More specifically, taking inspiration from co-training methods, we propose a semi-supervised learning approach based on multi-view models through consensus promotion, and investigate whether this improves overall performance. To test the multi-view hypothesis, we use moderately low-resource scenarios for nine languages and test the performance of the joint model for part-of-speech tagging and dependency parsing. The proposed model shows significant improvements across the test cases, with average gains of -0.9 to +9.3 labeled attachment score (LAS) points. We also investigate the effect of unlabeled data on the proposed model by varying the amount of training data and by using different domains of unlabeled data. | -
dc.language | English | -
dc.publisher | Proceedings of the AAAI Conference on Artificial Intelligence | -
dc.title | Semi-Supervised Learning on Meta Structure: Multi-Task Tagging and Parsing in Low-Resource Scenarios | -
dc.type | Article | -
dc.citation.journaltitle | Proceedings of the AAAI Conference on Artificial Intelligence | -
dc.identifier.wosid | 000668126800088 | -
dc.identifier.scopusid | 2-s2.0-85106090764 | -
dc.citation.endpage | 8351 | -
dc.citation.startpage | 8344 | -
dc.citation.volume | 34 | -
dc.description.isOpenAccess | N | -
dc.contributor.affiliatedAuthor | Lee, Jay Yoon | -
dc.type.docType | Proceedings Paper | -
dc.description.journalClass | 1 | -
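The abstract contrasts two multi-view strategies: early fusion (concatenate, average, or pool the views into one vector, then train a single model) versus one model per view unified through consensus promotion. The following sketch illustrates that contrast under illustrative assumptions; the view vectors, weight matrices, and disagreement penalty are hypothetical stand-ins, not the paper's actual architecture.

```python
# Hypothetical sketch of the two multi-view strategies from the abstract:
# (1) unify view representations into one vector, then run a single model;
# (2) one model per view, unified by promoting consensus among predictions.
# All names and shapes here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Two "views" of the same token, e.g. character- and lexical-level vectors.
char_view = rng.normal(size=4)
lex_view = rng.normal(size=4)

# Strategy 1: early fusion -- concatenate the views, then a single linear
# "model" scores 3 candidate tags from the unified representation.
unified = np.concatenate([char_view, lex_view])  # shape (8,)
W_single = rng.normal(size=(3, 8))
single_view_scores = W_single @ unified          # shape (3,)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Strategy 2: one model per view; unify the *models* by averaging their
# tag distributions and penalizing disagreement (consensus promotion).
W_char = rng.normal(size=(3, 4))
W_lex = rng.normal(size=(3, 4))
p_char = softmax(W_char @ char_view)
p_lex = softmax(W_lex @ lex_view)
consensus = (p_char + p_lex) / 2                 # ensemble prediction

# A simple disagreement penalty (mean squared difference) of the kind a
# semi-supervised objective could minimize on unlabeled data.
disagreement = np.mean((p_char - p_lex) ** 2)
```

In the semi-supervised setting the abstract describes, a penalty like `disagreement` can be computed on unlabeled sentences, since it needs only the per-view predictions, not gold tags.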

Related Researcher

  • Graduate School of Data Science
Research Area: Constraint injection, Energy-based models, Structured Prediction

Items in S-Space are protected by copyright, with all rights reserved, unless otherwise indicated.
