Lessons from Natural Language Inference in the Clinical Domain
Abstract
State-of-the-art models using deep neural networks have become very good at learning an accurate mapping from inputs to outputs. However, they still lack generalization capabilities in conditions that differ from the ones encountered during training. This is even more challenging in specialized, knowledge-intensive domains, where training data is limited. To address this gap, we introduce MedNLI, a dataset for natural language inference (NLI) annotated by doctors and grounded in the medical history of patients. We present strategies to: 1) leverage transfer learning using datasets from the open domain (e.g., SNLI) and 2) incorporate domain knowledge from external data and lexical sources (e.g., medical terminologies). Our results demonstrate performance gains using both strategies.
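The first strategy, transfer learning from an open-domain corpus such as SNLI followed by fine-tuning on MedNLI, can be sketched roughly as follows. The model, synthetic batches, and hyperparameters below are illustrative stand-ins, not the authors' implementation.

```python
# Minimal sketch of the transfer-learning recipe: pretrain an NLI classifier on a
# large open-domain dataset (SNLI stand-in), then fine-tune on MedNLI-style data.
import torch
import torch.nn as nn

class NLIClassifier(nn.Module):
    """Bag-of-embeddings premise/hypothesis encoder with a 3-way classifier."""
    def __init__(self, vocab_size=10000, emb_dim=128, hidden=256, n_classes=3):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.clf = nn.Sequential(
            nn.Linear(2 * emb_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, premise_ids, hypothesis_ids):
        p = self.emb(premise_ids).mean(dim=1)   # average token embeddings
        h = self.emb(hypothesis_ids).mean(dim=1)
        return self.clf(torch.cat([p, h], dim=-1))

def train(model, batches, lr, epochs=1):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for premise, hypothesis, label in batches:
            opt.zero_grad()
            loss = loss_fn(model(premise, hypothesis), label)
            loss.backward()
            opt.step()

def random_batches(n_batches, batch_size=32, seq_len=20):
    # Synthetic token-id batches standing in for real SNLI / MedNLI pairs.
    return [(torch.randint(1, 10000, (batch_size, seq_len)),
             torch.randint(1, 10000, (batch_size, seq_len)),
             torch.randint(0, 3, (batch_size,)))
            for _ in range(n_batches)]

model = NLIClassifier()
train(model, random_batches(100), lr=1e-3)  # 1) pretrain on open-domain NLI (SNLI stand-in)
train(model, random_batches(10), lr=1e-4)   # 2) fine-tune on the smaller clinical set (MedNLI stand-in)
```

A lower learning rate in the second stage is a common choice when adapting a pretrained model to a small in-domain dataset; the actual architectures and training details are described in the paper itself.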
- Publication: arXiv e-prints
- Pub Date: August 2018
- DOI: 10.48550/arXiv.1808.06752
- arXiv: arXiv:1808.06752
- Bibcode: 2018arXiv180806752R
- Keywords: Computer Science - Computation and Language
- E-Print: Extended version of the EMNLP 2018 paper. Dataset and code available at https://jgc128.github.io/mednli/