BERT for Coreference Resolution: Baselines and Analysis
Abstract
We apply BERT to coreference resolution, achieving strong improvements on the OntoNotes (+3.9 F1) and GAP (+11.5 F1) benchmarks. A qualitative analysis of model predictions indicates that, compared to ELMo and BERT-base, BERT-large is particularly better at distinguishing between related but distinct entities (e.g., President and CEO). However, there is still room for improvement in modeling document-level context, conversations, and mention paraphrasing. Our code and models are publicly available.
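The coreference systems discussed here (e.g., e2e-coref with BERT encoders) typically score each mention against candidate antecedents that precede it in the document. As an illustrative sketch only, not the paper's implementation, the pairwise antecedent scoring can be mimicked with random stand-in span embeddings and a hypothetical linear scorer:

```python
import numpy as np

rng = np.random.default_rng(0)

def antecedent_scores(span_embs, weights):
    """Score every (mention, candidate-antecedent) pair by concatenating
    their span embeddings and applying a linear scorer.
    Both the embeddings and the scorer are illustrative stand-ins."""
    n, d = span_embs.shape
    scores = np.full((n, n), -np.inf)  # scores[i, j]: mention j as antecedent of mention i
    for i in range(n):
        for j in range(i):  # antecedents must precede the mention
            pair = np.concatenate([span_embs[i], span_embs[j]])
            scores[i, j] = pair @ weights
    return scores

d = 8
span_embs = rng.normal(size=(4, d))  # stand-in for BERT span representations
weights = rng.normal(size=2 * d)     # hypothetical learned scorer weights
scores = antecedent_scores(span_embs, weights)
# Each mention links to its best-scoring preceding mention; real systems also
# allow a "dummy antecedent" so a mention can start a new cluster.
best = scores.argmax(axis=1)
```

In the actual systems, the span embeddings come from a pretrained encoder (ELMo, BERT-base, or BERT-large), and the scorer is a learned feed-forward network rather than a single linear layer.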
- Publication: arXiv e-prints
- Pub Date: August 2019
- DOI: 10.48550/arXiv.1908.09091
- arXiv: arXiv:1908.09091
- Bibcode: 2019arXiv190809091J
- Keywords: Computer Science - Computation and Language
- E-Print: Fix test set numbers for e2e-coref on GAP