Optimal Algorithms for $L_1$-Subspace Signal Processing
Abstract
We describe ways to define and calculate $L_1$-norm signal subspaces which are less sensitive to outlying data than $L_2$-calculated subspaces. We start with the computation of the $L_1$ maximum-projection principal component of a data matrix containing $N$ signal samples of dimension $D$. We show that while the general problem is formally NP-hard in asymptotically large $N$, $D$, the case of engineering interest of fixed dimension $D$ and asymptotically large sample size $N$ is not. In particular, for the case where the sample size is less than the fixed dimension ($N<D$), we present in explicit form an optimal algorithm of computational cost $2^N$. For the case $N \geq D$, we present an optimal algorithm of complexity $\mathcal O(N^D)$. We generalize to multiple $L_1$-max-projection components and present an explicit optimal $L_1$-subspace calculation algorithm of complexity $\mathcal O(N^{DK-K+1})$ where $K$ is the desired number of $L_1$ principal components (subspace rank). We conclude with illustrations of $L_1$-subspace signal processing in the fields of data dimensionality reduction, direction-of-arrival estimation, and image conditioning/restoration.
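As a rough illustration of the exhaustive-search idea behind the $2^N$-cost result, the single $L_1$ principal component $\max_{\|\mathbf w\|_2=1}\sum_{i}|\mathbf w^\top \mathbf x_i|$ can be found by scanning the $2^N$ binary sign vectors $\mathbf b \in \{\pm 1\}^N$ and taking $\mathbf w = \mathbf X\mathbf b / \|\mathbf X\mathbf b\|_2$ for the maximizing $\mathbf b$. The sketch below (NumPy; function name and variable choices are ours, not the paper's) is a minimal brute-force version of this search, not the paper's optimized algorithm:

```python
import itertools
import numpy as np

def l1_principal_component(X):
    """Exhaustive L1-PC search: X is a D x N data matrix.

    Scans all 2^N binary vectors b in {-1, +1}^N, keeps the one that
    maximizes ||X b||_2, and returns the unit-norm component X b / ||X b||_2.
    """
    D, N = X.shape
    best_val, best_b = -1.0, None
    for signs in itertools.product((-1.0, 1.0), repeat=N):
        b = np.array(signs)
        val = np.linalg.norm(X @ b)
        if val > best_val:
            best_val, best_b = val, b
    w = X @ best_b
    return w / np.linalg.norm(w)

# Small example: three 2-D samples as columns of X.
X = np.array([[2.0, 0.5, -1.0],
              [0.0, 1.0,  0.5]])
w = l1_principal_component(X)
```

The returned unit vector maximizes the $L_1$ projection metric $\sum_i |\mathbf w^\top \mathbf x_i|$, at a cost that is exponential in the sample size $N$ but independent of the dimension $D$; this is why it suits the $N < D$ regime discussed in the abstract.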
 Publication:

IEEE Transactions on Signal Processing
 Pub Date:
 October 2014
 DOI:
 10.1109/TSP.2014.2338077
 arXiv:
 arXiv:1405.6785
 Bibcode:
 2014ITSP...62.5046M
 Keywords:

 Computer Science - Data Structures and Algorithms