Peer Selection with Noisy Assessments
Abstract
In the peer selection problem a group of agents must select a subset of themselves as winners for, e.g., peer-reviewed grants or prizes. Here, we take a Condorcet view of this aggregation problem, i.e., that there is a ground-truth ordering over the agents and we wish to select the best set of agents, subject to the noisy assessments of the peers. Given this model, some agents may be unreliable, while others might be self-interested, attempting to influence the outcome in their favour. In this paper we extend PeerNomination, the most accurate peer reviewing algorithm to date, into WeightedPeerNomination, which is able to handle noisy and inaccurate agents. To do this, we explicitly formulate assessors' reliability weights in a way that does not violate strategyproofness, and use this information to reweight their scores. We show analytically that a weighting scheme can improve the overall accuracy of the selection significantly. Finally, we implement several instances of reweighting methods and show empirically that our methods are robust in the face of noisy assessments.
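The abstract's core idea — deriving a reliability weight for each assessor without letting their own report influence their weight, then aggregating weighted scores — can be illustrated with a minimal sketch. This is not the paper's WeightedPeerNomination rule; the weighting formula, function names, and data layout here are illustrative assumptions (weight = inverse of a reviewer's mean disagreement with the other reviewers' median score).

```python
import statistics

def reliability_weights(reviews):
    """Illustrative reweighting sketch (not the paper's exact rule).

    A reviewer's weight reflects how closely their scores agree with the
    median of the *other* reviewers' scores on the same proposals.
    reviews: dict mapping reviewer -> {proposal: score}.
    """
    weights = {}
    for reviewer, scored in reviews.items():
        errors = []
        for proposal, score in scored.items():
            # Median over the other reviewers only: leaving the reviewer's
            # own report out of their reference point is one common device
            # for keeping the scheme's incentives clean (hedged analogy to
            # the strategyproofness concern raised in the abstract).
            others = [r[proposal] for rv, r in reviews.items()
                      if rv != reviewer and proposal in r]
            if others:
                errors.append(abs(score - statistics.median(others)))
        mean_err = sum(errors) / len(errors) if errors else 0.0
        # Higher disagreement -> lower weight (simple inverse rule).
        weights[reviewer] = 1.0 / (1.0 + mean_err)
    return weights

def weighted_scores(reviews, weights):
    """Aggregate each proposal's score as a weight-normalised average."""
    totals, norm = {}, {}
    for reviewer, scored in reviews.items():
        for proposal, score in scored.items():
            totals[proposal] = totals.get(proposal, 0.0) + weights[reviewer] * score
            norm[proposal] = norm.get(proposal, 0.0) + weights[reviewer]
    return {p: totals[p] / norm[p] for p in totals}
```

Under this toy rule, a reviewer who systematically disagrees with their peers is down-weighted, so their scores move the final ranking less; the selection step (choosing the top-k by weighted score) would sit on top of these aggregates.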
- Publication: arXiv e-prints
- Pub Date: July 2021
- arXiv: arXiv:2107.10121
- Bibcode: 2021arXiv210710121L
- Keywords:
  - Computer Science - Computer Science and Game Theory;
  - Computer Science - Artificial Intelligence;
  - Computer Science - Multiagent Systems;
  - 91A80; 91B10; 91B12; 91B14;
  - J.4; I.2
- E-Print: 15 pages, 5 figures