Publications

Detailed Information

Case-Based Reasoning for Natural Language Queries over Knowledge Bases

DC Field | Value
dc.contributor.author | Das, Rajarshi
dc.contributor.author | Zaheer, Manzil
dc.contributor.author | Thai, Dung
dc.contributor.author | Godbole, Ameya
dc.contributor.author | Perez, Ethan
dc.contributor.author | Lee, Jay-Yoon
dc.contributor.author | Tan, Lizhen
dc.contributor.author | Polymenakos, Lazaros
dc.contributor.author | McCallum, Andrew
dc.date.accessioned | 2024-05-03T07:38:33Z
dc.date.available | 2024-05-03T07:38:33Z
dc.date.created | 2024-04-11
dc.date.issued | 2021-11
dc.identifier.citation | 2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), pp. 9594-9611
dc.identifier.uri | https://hdl.handle.net/10371/200917
dc.description.abstract | It is often challenging to solve a complex problem from scratch, but much easier if we can access other similar problems with their solutions - a paradigm known as case-based reasoning (CBR). We propose a neuro-symbolic CBR approach (CBR-KBQA) for question answering over large knowledge bases. CBR-KBQA consists of a nonparametric memory that stores cases (question and logical forms) and a parametric model that can generate a logical form for a new question by retrieving cases that are relevant to it. On several KBQA datasets that contain complex questions, CBR-KBQA achieves competitive performance. For example, on the COMPLEXWEBQUESTIONS dataset, CBR-KBQA outperforms the current state of the art by 11% on accuracy. Furthermore, we show that CBR-KBQA is capable of using new cases without any further training: by incorporating a few human-labeled examples in the case memory, CBR-KBQA is able to successfully generate logical forms containing unseen KB entities as well as relations.
dc.language | English
dc.publisher | ASSOC COMPUTATIONAL LINGUISTICS-ACL
dc.title | Case-Based Reasoning for Natural Language Queries over Knowledge Bases
dc.type | Article
dc.identifier.doi | 10.18653/v1/2021.emnlp-main.755
dc.citation.journaltitle | 2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021)
dc.identifier.wosid | 000860727003056
dc.identifier.scopusid | 2-s2.0-85121717169
dc.citation.endpage | 9611
dc.citation.startpage | 9594
dc.description.isOpenAccess | N
dc.contributor.affiliatedAuthor | Lee, Jay-Yoon
dc.type.docType | Proceedings Paper
dc.description.journalClass | 1
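The abstract above describes a retrieve-then-generate pipeline: a nonparametric memory of (question, logical form) cases is queried for neighbors of a new question, and a parametric seq2seq model conditions on the question together with the retrieved cases to produce a logical form. The sketch below illustrates only that retrieval and input-assembly pattern in plain Python; the bag-of-words encoder, the `CaseMemory` class, and the prompt layout are illustrative assumptions for this sketch, not the paper's actual implementation, which uses a learned dense retriever and a transformer generator.

```python
# Minimal sketch of the case-based retrieve-then-generate idea from the
# abstract. The encoder, class names, and prompt layout are illustrative
# assumptions, not the paper's implementation.
import math
from collections import Counter

def encode(text: str) -> Counter:
    """Toy encoder: bag-of-words term counts. CBR-KBQA trains a dense
    neural question encoder instead."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class CaseMemory:
    """Nonparametric memory of (question, logical form) cases.
    New cases can be added at any time without retraining a model,
    which is the property the abstract highlights."""
    def __init__(self):
        self.cases = []  # list of (question, logical_form) pairs

    def add(self, question: str, logical_form: str) -> None:
        self.cases.append((question, logical_form))

    def retrieve(self, query: str, k: int = 2):
        q = encode(query)
        ranked = sorted(self.cases,
                        key=lambda c: cosine(q, encode(c[0])),
                        reverse=True)
        return ranked[:k]

def build_generator_input(query: str, retrieved) -> str:
    """Assemble the text a seq2seq generator would condition on:
    the new question followed by the retrieved cases."""
    parts = [f"question: {query}"]
    for q, lf in retrieved:
        parts.append(f"case question: {q} case logical form: {lf}")
    return " | ".join(parts)

memory = CaseMemory()
memory.add("who directed Titanic", "(film.directed_by Titanic ?x)")
memory.add("who wrote Hamlet", "(book.author Hamlet ?x)")

query = "who directed Avatar"
cases = memory.retrieve(query, k=1)
print(build_generator_input(query, cases))
# A transformer seq2seq model would decode the logical form from this
# input, copying KB relations it has never seen in training
# (e.g. film.directed_by) from the retrieved cases.
```

In the full system the generated logical form is additionally revised against the knowledge base before execution; the sketch stops at input assembly because that is the step where the nonparametric memory and the parametric model meet.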

Related Researcher

Lee, Jay-Yoon
  • Graduate School of Data Science
  • Research Area: Constraint injection, Energy-based models, Structured Prediction

