A Convergent Gradient Descent Algorithm for Rank Minimization and Semidefinite Programming from Random Linear Measurements
Zheng, Qinqing and Lafferty, John D.
Neural Information Processing Systems (NIPS) 2015
The paper presents results on the recovery of low-rank semidefinite matrices from linear measurements, using nonconvex optimization. The approach is inspired by recent work on phase retrieval and combines spectral initialization with gradient descent. The connection to phase retrieval arises because measurements that are linear in the semidefinite matrix $X = ZZ'$ are quadratic in the factor $Z$. The paper proves recovery guarantees implying that exact recovery occurs when the number of measurements $m$ is essentially proportional to $nr^2$, where $n$ is the dimension and $r$ is the rank. The convergence analysis is based on a form of restricted strong convexity (restricted because there is an $r(r-1)/2$-dimensional set of equivalent solutions, corresponding to orthogonal rotations of $Z$, along which the objective is flat). This condition also yields linear convergence of the proposed algorithm.
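For concreteness, here is a minimal NumPy sketch of this kind of scheme (not the authors' code): spectral initialization from the top-$r$ eigenpairs of $\frac{1}{m}\sum_i y_i A_i$, followed by gradient descent on the factor $Z$. The symmetric Gaussian measurement model, step size, and iteration count are illustrative assumptions.

```python
import numpy as np

def recover_low_rank_psd(A, y, r, step=0.2, iters=500):
    """Hypothetical sketch: recover X = Z Z' from y_i = <A_i, X>.

    A    : (m, n, n) array of symmetric measurement matrices
    y    : (m,) linear measurements
    r    : target rank
    step : step size relative to the top eigenvalue of the initializer
    """
    m, n, _ = A.shape

    # Spectral initialization: Z0 from the top-r eigenpairs of
    # M = (1/m) * sum_i y_i A_i, whose expectation is X under
    # symmetric Gaussian measurements.
    M = np.einsum('i,ijk->jk', y, A) / m
    vals, vecs = np.linalg.eigh(M)
    top = np.argsort(vals)[::-1][:r]
    Z = vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))

    # Constant step size scaled by the squared spectral norm of Z0
    # (an assumption in the spirit of the paper's analysis).
    eta = step / np.linalg.norm(Z, 2) ** 2

    for _ in range(iters):
        # Residuals of f(Z) = (1/2m) * sum_i (<A_i, Z Z'> - y_i)^2.
        res = np.einsum('ijk,jk->i', A, Z @ Z.T) - y
        # Gradient: (2/m) * sum_i res_i * A_i Z  (A_i symmetric).
        grad = (2.0 / m) * np.einsum('i,ijk->jk', res, A) @ Z
        Z = Z - eta * grad
    return Z

# Toy usage: n = 30, r = 2, with m on the order of n r^2.
rng = np.random.default_rng(0)
n, r, m = 30, 2, 600
Z_true = rng.standard_normal((n, r))
G = rng.standard_normal((m, n, n))
A = (G + G.transpose(0, 2, 1)) / 2          # symmetrize
y = np.einsum('ijk,jk->i', A, Z_true @ Z_true.T)
Z_hat = recover_low_rank_psd(A, y, r)
rel_err = (np.linalg.norm(Z_hat @ Z_hat.T - Z_true @ Z_true.T)
           / np.linalg.norm(Z_true @ Z_true.T))
print(f"relative recovery error: {rel_err:.2e}")
```

Note that $Z$ is identifiable only up to an orthogonal rotation (the flat $r(r-1)/2$-dimensional set mentioned above), which is why the error is measured on $ZZ'$ rather than on $Z$ itself.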
The implementation seems awful: compared to recent implementations, e.g. http://arxiv.org/abs/1408.2467, the performance appears to be orders of magnitude away from the state of the art, and being an order of magnitude faster than a general-purpose SDP solver on the nuclear-norm relaxation does not make it any better. The authors should acknowledge this and compare their results with other codes on an established benchmark (e.g. Lenna), so as to show that the price paid in run time buys substantially better objective values (SNR, RMSE) -- which is plausible, but far from certain.