Equilibrium Selection in Information Elicitation without Verification via Information Monotonicity
Abstract
Peer prediction is a mechanism which elicits privately held, non-verifiable information from self-interested agents: formally, truth-telling is a strict Bayes-Nash equilibrium of the mechanism. The original peer prediction mechanism suffers from two main limitations: (1) the mechanism must know the "common prior" of agents' signals; (2) additional undesirable and non-truthful equilibria exist, which often have a greater expected payoff than the truth-telling equilibrium. A series of results has successfully weakened the known-common-prior assumption, but the equilibrium multiplicity issue remains a challenge. In this paper, we address both problems. In the setting where a common prior exists but is not known to the mechanism, we (1) prove a general negative result applying to a large class of mechanisms: truth-telling can never pay strictly more in expectation than a particular set of equilibria in which agents collude to "relabel" the signals and tell the truth after relabeling; (2) provide a mechanism that has no information about the common prior but in which truth-telling pays as much in expectation as any relabeling equilibrium and strictly more than any other symmetric equilibrium; (3) show, moreover, that in our mechanism, if the number of agents is sufficiently large, truth-telling pays similarly to any equilibrium close to a "relabeling" equilibrium and strictly more than any equilibrium that is not close to a relabeling equilibrium.
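A small sketch, not from the paper, of why relabeling equilibria are hard to separate from truth-telling: under a simple output-agreement style payment (each agent is paid 1 when their report matches a peer's), a strategy in which every agent applies the same permutation to their signal and then reports truthfully yields exactly the same expected payoff as truth-telling, for any common prior. The signal set, prior, and payment rule below are hypothetical illustrations.

```python
import itertools

SIGNALS = [0, 1, 2]

# A hypothetical symmetric common prior over two agents' joint signals.
PRIOR = {
    (0, 0): 0.20, (0, 1): 0.05, (0, 2): 0.05,
    (1, 0): 0.05, (1, 1): 0.20, (1, 2): 0.05,
    (2, 0): 0.05, (2, 1): 0.05, (2, 2): 0.30,
}

def expected_payoff(strategy):
    """Expected agreement payoff when both agents play `strategy`:
    an agent earns 1 exactly when the two reports coincide."""
    return sum(p for (a, b), p in PRIOR.items() if strategy(a) == strategy(b))

truth = lambda s: s

# Every relabeling (permutation of the signal labels, applied by all agents)
# preserves report agreement, hence preserves the expected payoff.
for perm in itertools.permutations(SIGNALS):
    relabel = lambda s, perm=perm: perm[s]
    assert abs(expected_payoff(relabel) - expected_payoff(truth)) < 1e-12
```

Since a permutation is a bijection, two relabeled reports agree exactly when the underlying signals agree, so the mechanism cannot distinguish truth-telling from any relabeling on payoffs alone; this is the content of the negative result (1).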
 Publication:
 arXiv e-prints
 Pub Date:
 March 2016
 arXiv:
 arXiv:1603.07751
 Bibcode:
 2016arXiv160307751K
 Keywords:
 Computer Science - Computer Science and Game Theory