This paper proves fast convergence rates for Oja's well-known incremental algorithm for PCA. The proof uses a novel technique to track the progress of the algorithm by breaking it into several "epochs"; this is necessary because the PCA problem is not convex and has saddle points. The proof also borrows ideas from the analysis of stochastic gradient descent for strongly convex functions. The theoretical bounds give some insight into the practical performance of Oja's algorithm and its sensitivity to different parameter settings.
They prove an $\tilde{O}(1/n)$ finite-sample rate of convergence for estimating the leading eigenvector of the covariance matrix. Their results suggest the best choice of learning rate for incremental PCA, and their analysis provides insight into the relationship with SGD on strongly convex functions.
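For context, a minimal sketch of the Oja update being analyzed, assuming a $c/t$ learning rate schedule (the constant $c$, the function name, and the synthetic-data usage below are illustrative assumptions, not the paper's exact procedure):

```python
import numpy as np

def oja_top_eigenvector(X, c=1.0, seed=0):
    """Sketch of Oja's incremental update for the leading eigenvector,
    with an assumed c/t learning rate schedule."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = rng.standard_normal(d)
    w /= np.linalg.norm(w)
    for t, x in enumerate(X, start=1):
        eta = c / t                  # learning rate ~ 1/t (assumed schedule)
        w = w + eta * x * (x @ w)    # Oja's rank-one stochastic update
        w /= np.linalg.norm(w)       # renormalize to the unit sphere
    return w

# Usage on synthetic data with a dominant direction
rng = np.random.default_rng(1)
X = rng.standard_normal((5000, 10)) @ np.diag([3.0] + [1.0] * 9)
w_hat = oja_top_eigenvector(X, c=2.0)
print(np.abs(w_hat[0]))  # close to 1 if the leading eigenvector is recovered
```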