Orthogonal Inductive Matrix Completion
Abstract
We propose orthogonal inductive matrix completion (OMIC), an interpretable approach to inductive matrix completion based on a sum of multiple orthonormal side information terms, together with nuclear-norm regularisation. The approach allows us to inject prior knowledge about the eigenvectors of the ground truth matrix. We optimise the model with a provably convergent algorithm that updates all components simultaneously. Our method enjoys distribution-free learning guarantees that improve with the quality of the injected knowledge. As a special case of our general framework, we study a model consisting of a sum of user and item biases (generic behaviour), a non-inductive term (specific behaviour), and an inductive term using side information. Our theoretical analysis shows that $\epsilon$-recovering the ground truth matrix requires at most $O\left( \frac{n+m+(\sqrt{n}+\sqrt{m}) \sqrt{mnr}C}{\epsilon^2}\right)$ entries, where $r$ (resp. $C$) is the rank (resp. maximum entry) of the bias-free part of the ground truth matrix. We analyse the performance of OMIC on several synthetic and real datasets. On synthetic datasets with a sliding scale of user bias relevance, we show that OMIC adapts to different regimes better than other methods and can recover the ground truth. On real-life datasets containing user/item recommendations and relevant side information, we find that OMIC surpasses the state of the art, with the added benefit of greater interpretability.
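As a rough illustration of the special case described above, the ground truth matrix $M^* \in \mathbb{R}^{n \times m}$ can be written as a sum of three components. The notation here (bias vectors $b_u, b_v$, residual $R$, side-information matrices $X, Y$, and core matrix $S$) is our own shorthand for the components named in the abstract, not the paper's exact formulation:

```latex
\[
M^{*} \;\approx\;
\underbrace{b_u \mathbf{1}_m^{\top} + \mathbf{1}_n b_v^{\top}}_{\text{user/item biases (generic)}}
\;+\;
\underbrace{R}_{\text{non-inductive (specific)}}
\;+\;
\underbrace{X S Y^{\top}}_{\text{inductive (side information)}}
\]
```

Each component is fitted jointly under nuclear-norm regularisation, and the learning guarantee above bounds the number of observed entries needed to $\epsilon$-recover $M^*$.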
Publication: arXiv e-prints
Pub Date: April 2020
arXiv: arXiv:2004.01653
Bibcode: 2020arXiv200401653L
Keywords:
 Computer Science - Machine Learning;
 Statistics - Machine Learning