Who Gets Recommended? Investigating Gender, Race, and Country Disparities in Paper Recommendations from Large Language Models
Abstract
This paper investigates the performance of several representative large language models (LLMs) on literature recommendation tasks and explores potential biases in research exposure. The results indicate that not only does the models' overall recommendation accuracy remain limited, but they also tend to recommend literature with higher citation counts, more recent publication dates, and larger author teams. In scholar recommendation tasks, however, there is no evidence that LLMs disproportionately recommend male, white, or developed-country authors, in contrast to known patterns of human bias.
- Publication:
- arXiv e-prints
- Pub Date:
- December 2024
- DOI:
- 10.48550/arXiv.2501.00367
- arXiv:
- arXiv:2501.00367
- Bibcode:
- 2025arXiv250100367T
- Keywords:
- Computer Science - Information Retrieval;
- Computer Science - Computers and Society;
- Computer Science - Digital Libraries