Sparse approximation based on a random overcomplete basis
Abstract
We discuss a strategy of sparse approximation based on the use of an overcomplete basis, and evaluate its performance when a random matrix is used as this basis. A small combination of basis vectors is chosen from a given overcomplete basis, according to a given compression rate, such that they represent the target data compactly with as small a distortion as possible. As selection methods, we study the ℓ₀- and ℓ₁-based methods, which employ exhaustive search and ℓ₁-norm regularization, respectively. The performance is assessed in terms of the tradeoff between the distortion and the compression rate. First, we evaluate the performance analytically, using methods of statistical mechanics, in the case that the methods are carried out ideally. The analytical result is then confirmed by performing numerical experiments on finite-size systems and extrapolating the results to the infinite-size limit. Our result clarifies that the ℓ₀-based method greatly outperforms the ℓ₁-based one. An interesting outcome of our analysis is that any small value of distortion is achievable for any fixed compression rate r in the large-size limit of the overcomplete basis, for both the ℓ₀- and ℓ₁-based methods. The difference between the two methods manifests in the size of the overcomplete basis required to achieve the desired value of distortion: as the desired distortion decreases, the required size grows polynomially for the ℓ₀-based method and exponentially for the ℓ₁-based one. Second, we examine the practical performance of two well-known algorithms, orthogonal matching pursuit and approximate message passing, when they are used to execute the ℓ₀- and ℓ₁-based methods, respectively.
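The ℓ₁-based selection described above can be sketched with a generic iterative soft-thresholding (ISTA) solver for the ℓ₁-regularized least-squares problem; this is a stand-in for the exact ℓ₁ procedure analyzed in the paper, and the matrix sizes, regularization strength, and iteration count below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def soft_threshold(v, t):
    """Entrywise soft-thresholding operator, the proximal map of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam=0.01, n_iter=500):
    """Minimize 0.5*||y - A x||_2^2 + lam*||x||_1 by proximal gradient descent."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)           # gradient of the quadratic term
        x = soft_threshold(x - grad / L, lam / L)
    return x

rng = np.random.default_rng(0)
M, N, K = 40, 120, 5                       # N > M: random overcomplete basis
A = rng.standard_normal((M, N)) / np.sqrt(M)
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
y = A @ x_true                             # target data: sparse combination

x_hat = ista(A, y)
distortion = np.linalg.norm(y - A @ x_hat) ** 2 / M
```

Here the distortion is measured, as in the tradeoff discussed above, by the per-component squared error of the reconstruction.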
Our examination shows that orthogonal matching pursuit achieves a much better performance than the exact execution of the ℓ₁-based method, as well as than approximate message passing. However, for the ℓ₀-based method there is still room to design greedy algorithms more effective than orthogonal matching pursuit. Finally, we evaluate the performance of the algorithms when they are applied to image data compression.
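A minimal sketch of the orthogonal matching pursuit algorithm mentioned above (an illustrative implementation, not the authors' code): greedily pick the basis column most correlated with the current residual, refit by least squares on the selected support, and repeat until K columns are chosen. The problem sizes below are assumptions for demonstration only.

```python
import numpy as np

def omp(A, y, K):
    """Greedily select up to K columns of A and return sparse coefficients."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(K):
        # column with the largest absolute correlation to the residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares refit restricted to the current support
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    x[support] = coeffs
    return x

rng = np.random.default_rng(1)
M, N, K = 40, 120, 5                       # random overcomplete basis, N > M
A = rng.standard_normal((M, N)) / np.sqrt(M)
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
y = A @ x_true

x_hat = omp(A, y, K)
distortion = np.linalg.norm(y - A @ x_hat) ** 2 / M
```

The least-squares refit at each step is what distinguishes orthogonal matching pursuit from plain matching pursuit: it keeps the residual orthogonal to all columns selected so far.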
 Publication:

Journal of Statistical Mechanics: Theory and Experiment
 Pub Date:
 June 2016
 DOI:
 10.1088/1742-5468/2016/06/063302
 arXiv:
 arXiv:1510.02189
 Bibcode:
 2016JSMTE..06.3302N
 Keywords:

 Computer Science - Information Theory;
 Condensed Matter - Disordered Systems and Neural Networks;
 Condensed Matter - Statistical Mechanics
 E-Print:
 35 pages, 11 figures