Publications

Detailed Information

Robustifying multi-hop question answering through pseudo-evidentiality training

Cited 0 times in Web of Science; cited 6 times in Scopus
Authors

Lee, Kyungjae; Hwang, Seung-Won; Han, Sang-Eun; Lee, Dohyeon

Issue Date
2021-08
Publisher
Association for Computational Linguistics (ACL)
Citation
ACL-IJCNLP 2021 - 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, Proceedings of the Conference, pp.6110-6119
Abstract
© 2021 Association for Computational Linguistics. This paper studies the bias problem of multi-hop question answering models: answering correctly without correct reasoning. One way to robustify these models is to supervise them not only to answer correctly, but also to answer with the right reasoning chains. An existing direction is to annotate reasoning chains to train models, which requires expensive additional annotations. In contrast, we propose a new approach to learning evidentiality, i.e., deciding whether an answer prediction is supported by correct evidence, without such annotations. Instead, we compare counterfactual changes in answer confidence with and without evidence sentences to generate pseudo-evidentiality annotations. We validate our proposed model on the original set and a challenge set of HotpotQA, showing that our method is accurate and robust in multi-hop reasoning.
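The counterfactual comparison described in the abstract can be sketched as follows. This is an illustrative reading, not the authors' implementation: `confidence_fn`, the threshold value, and the toy data are all assumptions for the sake of the example.

```python
def pseudo_evidentiality_label(confidence_fn, sentences, idx, threshold=0.2):
    """Label sentence `sentences[idx]` as pseudo-evidence if removing it
    causes a large drop in the QA model's answer confidence.

    confidence_fn: maps a list of context sentences to the model's
    confidence (a float in [0, 1]) in its predicted answer.
    """
    conf_with = confidence_fn(sentences)
    conf_without = confidence_fn(sentences[:idx] + sentences[idx + 1:])
    # A large confidence drop without the sentence suggests the answer
    # prediction genuinely depends on it, i.e., it is positive evidence.
    return (conf_with - conf_without) >= threshold
```

A toy confidence function standing in for a trained QA model shows the intended behavior: removing a sentence the answer depends on flips the label to positive, while removing an irrelevant sentence does not.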
URI
https://hdl.handle.net/10371/183729
DOI
https://doi.org/10.48550/arXiv.2107.03242
Files in This Item:
There are no files associated with this item.
Items in S-Space are protected by copyright, with all rights reserved, unless otherwise indicated.
