Learning as Abduction: Trainable Natural Logic Theorem Prover for Natural Language Inference
Abstract
Tackling Natural Language Inference with a logic-based method is becoming less and less common. While this might have seemed counterintuitive several decades ago, nowadays it appears almost self-evident. The main reasons for this shift are that (a) logic-based methods are usually brittle when it comes to processing wide-coverage texts, and (b) instead of automatically learning from data, they require substantial manual development effort. We take a step towards overcoming these shortcomings by modeling learning from data as abduction: reversing a theorem-proving procedure to abduce semantic relations that serve as the best explanation for the gold label of an inference problem. In other words, instead of proving sentence-level inference relations with the help of lexical relations, the lexical relations are proved taking into account the sentence-level inference relations. We implement the learning method in a tableau theorem prover for natural language and show that it improves the performance of the theorem prover on the SICK dataset by 1.4% while still maintaining high precision (>94%). The obtained results are competitive with the state of the art among logic-based systems.
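The abduction idea sketched in the abstract can be illustrated with a deliberately tiny toy. The following Python sketch is hypothetical and is not the LangPro implementation: `toy_prover` stands in for a tableau prover whose verdict depends on a lexical relation, and `abduce` reverses the direction of reasoning by searching for the relation assignment whose prover verdicts best match the gold sentence-level labels.

```python
# Toy sketch (hypothetical, NOT the LangPro implementation) of learning
# lexical relations by abduction: search for the relation assignment
# under which a stand-in "prover" reproduces the gold inference labels.
from itertools import product

# Candidate relations between a premise word and a hypothesis word,
# e.g. "forward" means premise word entails hypothesis word (dog <= animal).
RELATIONS = ["forward", "reverse", "equiv", "independent"]

def toy_prover(lexical_relations, premise_word, hypothesis_word):
    """Stand-in for a tableau prover: the toy entailment
    'X is a P |- X is a Q' is proved iff P forward-entails (or equals) Q."""
    rel = lexical_relations[(premise_word, hypothesis_word)]
    return rel in ("forward", "equiv")

def abduce(problems):
    """Exhaustively search for the relation assignment that explains
    the most gold labels, i.e. the best abductive explanation."""
    pairs = sorted({(p, h) for p, h, _ in problems})
    best, best_score = None, -1
    for combo in product(RELATIONS, repeat=len(pairs)):
        assignment = dict(zip(pairs, combo))
        score = sum(
            toy_prover(assignment, p, h) == (gold == "entailment")
            for p, h, gold in problems
        )
        if score > best_score:
            best, best_score = assignment, score
    return best

# Training problems as (premise word, hypothesis word, gold label).
problems = [
    ("dog", "animal", "entailment"),
    ("animal", "dog", "neutral"),
]
learned = abduce(problems)
print(learned[("dog", "animal")])  # relation abduced from the gold labels
```

The real system works over full tableau proofs rather than single word pairs, but the direction of inference is the same: the sentence-level gold label constrains which lexical relations are acceptable, and those relations are what get learned.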
- Publication:
-
arXiv e-prints
- Pub Date:
- October 2020
- DOI:
- 10.48550/arXiv.2010.15909
- arXiv:
- arXiv:2010.15909
- Bibcode:
- 2020arXiv201015909A
- Keywords:
-
- Computer Science - Computation and Language;
- 03B65;
- 68T50;
- F.4.1;
- I.2.3;
- K.3.2;
- I.2.6;
- I.2.7
- E-Print:
- Presented at *SEM; see the official link https://www.aclweb.org/anthology/2020.starsem-1.3. The code is available at https://github.com/kovvalsky/LangPro