Challenges to Evaluating the Generalization of Coreference Resolution Models: A Measurement Modeling Perspective
Abstract
It is increasingly common to evaluate the same coreference resolution (CR) model on multiple datasets. Do these multi-dataset evaluations allow us to draw meaningful conclusions about model generalization? Or do they instead reflect the idiosyncrasies of a particular experimental setup (e.g., the specific datasets used)? To study this, we view evaluation through the lens of measurement modeling, a framework commonly used in the social sciences for analyzing the validity of measurements. Taking this perspective, we show how multi-dataset evaluations risk conflating different factors concerning what, precisely, is being measured, which in turn makes it difficult to draw generalizable conclusions from these evaluations. For instance, we show that across seven datasets, measurements intended to reflect CR model generalization are often correlated with differences in both how coreference is defined and how it is operationalized; this limits our ability to draw conclusions about how well CR models generalize along any single dimension. We believe the measurement modeling framework provides the vocabulary needed for discussing challenges surrounding what is actually being measured by CR evaluations.
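As an illustration of the kind of analysis the abstract describes, the following is a minimal sketch, not taken from the paper: it checks whether per-dataset scores for a single CR model co-vary with a crude measure of how far each dataset's annotation guidelines diverge from the training data's definition of coreference. All dataset names, scores, and divergence counts below are hypothetical placeholders.

```python
# Illustrative sketch (not from the paper): do cross-dataset score
# differences track annotation-guideline differences? All numbers and
# dataset names here are hypothetical placeholders.
from scipy.stats import spearmanr

# Hypothetical CoNLL F1 of one CR model on several evaluation datasets.
f1_by_dataset = {
    "OntoNotes": 79.6, "LitBank": 68.2, "PreCo": 72.5, "WikiCoref": 61.0,
}

# Hypothetical count of guideline divergences from the training dataset's
# definition of coreference (e.g., singletons annotated, generic mentions).
guideline_divergence = {
    "OntoNotes": 0, "LitBank": 2, "PreCo": 1, "WikiCoref": 3,
}

datasets = sorted(f1_by_dataset)
scores = [f1_by_dataset[d] for d in datasets]
divergences = [guideline_divergence[d] for d in datasets]

# A strong negative correlation would suggest that measured "generalization"
# partly reflects definitional mismatch rather than model ability alone.
rho, p_value = spearmanr(scores, divergences)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.2f})")
```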
- Publication: arXiv e-prints
- Pub Date: March 2023
- DOI: 10.48550/arXiv.2303.09092
- arXiv: arXiv:2303.09092
- Bibcode: 2023arXiv230309092P
- Keywords: Computer Science - Computation and Language
- E-Print: ACL Findings 2024