Learning Supervised PageRank with Gradient-Free Optimization Methods
Abstract
In this paper, we consider the problem of learning supervised PageRank models, which can account for properties not captured by classical approaches such as the classical PageRank algorithm. Because the optimization problem has a huge hidden dimension, we solve it with random gradient-free methods. We prove a convergence theorem and estimate the number of arithmetic operations needed to solve the problem to a given accuracy. We also find the settings of the gradient-free method that minimize the number of arithmetic operations required to achieve a given accuracy of the objective. We apply our algorithm to the web page ranking problem: we consider a parametric graph model of user behavior and use the algorithm to evaluate the relevance of web pages to queries. The experiments show that our tuned optimization method outperforms the untuned gradient-free method in ranking quality.
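The idea of a random gradient-free method can be illustrated with a minimal sketch: sample a random direction, estimate the directional derivative by a finite difference of the objective, and step against it. The toy graph, the squared-error loss, the scalar damping parameter, and all step sizes below are illustrative assumptions for a small demo, not the algorithm or model analyzed in the paper.

```python
import numpy as np

def pagerank(P, alpha, n_iter=200):
    # Stationary distribution of the alpha-damped random walk on the
    # row-stochastic matrix P, computed by power iteration.
    n = P.shape[0]
    pi = np.full(n, 1.0 / n)
    for _ in range(n_iter):
        pi = alpha * (pi @ P) + (1.0 - alpha) / n
    return pi / pi.sum()

def gradient_free_minimize(f, x0, mu=1e-4, step=1.0, n_steps=500, seed=0):
    # Two-point random gradient-free scheme (a sketch, not the paper's
    # exact method): g ~ ((f(x + mu*e) - f(x)) / mu) * e for a random
    # unit direction e, followed by a step against g.
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        e = rng.standard_normal(x.shape)
        e /= np.linalg.norm(e)
        g = (f(x + mu * e) - f(x)) / mu * e
        x = np.clip(x - step * g, 0.05, 0.95)  # keep the damping factor in (0, 1)
    return x

# Toy supervised setup: recover the damping factor that produced the
# observed ("supervision") scores on a 3-node graph.
P = np.array([[0.0, 0.5, 0.5],
              [1.0, 0.0, 0.0],
              [0.5, 0.5, 0.0]])
target = pagerank(P, alpha=0.7)  # observed relevance scores
loss = lambda a: np.sum((pagerank(P, a[0]) - target) ** 2)
alpha_hat = gradient_free_minimize(loss, np.array([0.3]))
```

Only zeroth-order information (function values) is used, which is the point of gradient-free methods: the hidden dimension of the underlying problem never has to be differentiated through.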
- Publication: arXiv e-prints
- Pub Date: November 2014
- arXiv: arXiv:1411.4282
- Bibcode: 2014arXiv1411.4282B
- Keywords: Mathematics - Optimization and Control
- E-Print: 11 pages