Towards Interpretable Reasoning over Paragraph Effects in Situation
Abstract
We focus on the task of reasoning over paragraph effects in situation, which requires a model to understand the cause and effect described in a background paragraph and apply that knowledge to a novel situation. Existing works ignore the complicated reasoning process and solve the task with a one-step "black box" model. Inspired by human cognitive processes, in this paper we propose a sequential approach that explicitly models each step of the reasoning process with neural network modules. In particular, five reasoning modules are designed and learned in an end-to-end manner, which yields a more interpretable model. Experimental results on the ROPES dataset demonstrate the effectiveness and explainability of our proposed approach.
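The abstract describes a chain of learned reasoning modules applied sequentially and trained end-to-end. The sketch below is only an illustration of that general pattern, not the authors' implementation: the module internals, names, and dimensions are hypothetical placeholders, since the abstract does not specify what each of the five modules computes.

```python
# Minimal sketch (assumed structure, not the paper's code): a sequence of
# differentiable reasoning modules refining a state over an encoded context.
import torch
import torch.nn as nn


class ReasoningModule(nn.Module):
    """One reasoning step: attends from the current state into the encoded
    background/situation/question and updates the state (placeholder design)."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads=4, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )

    def forward(self, state, context):
        # Attend into the context, then apply a residual feed-forward update.
        attended, _ = self.attn(state, context, context)
        return state + self.ffn(attended)


class SequentialReasoner(nn.Module):
    """Chains several modules end-to-end; the per-step states form a trace that
    can be inspected, which is the kind of interpretability the abstract claims."""

    def __init__(self, hidden_dim: int = 128, num_modules: int = 5):
        super().__init__()
        self.steps = nn.ModuleList(
            ReasoningModule(hidden_dim) for _ in range(num_modules)
        )

    def forward(self, question_state, context):
        states = [question_state]
        for step in self.steps:
            states.append(step(states[-1], context))
        return states[-1], states  # final representation plus the step trace


# Toy usage with random tensors standing in for encoder outputs.
context = torch.randn(2, 50, 128)        # encoded background + situation tokens
question_state = torch.randn(2, 1, 128)  # pooled question representation
answer_repr, trace = SequentialReasoner()(question_state, context)
print(answer_repr.shape, len(trace))
```

Because every module is differentiable, the whole chain can be trained with a single answer-level loss, matching the end-to-end training described in the abstract.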
- Publication:
- arXiv e-prints
- Pub Date:
- October 2020
- arXiv:
- arXiv:2010.01272
- Bibcode:
- 2020arXiv201001272R
- Keywords:
- Computer Science - Computation and Language
- E-Print:
- 14 pages. Accepted as an EMNLP 2020 long paper.