The paper proposes two different multiplicative algorithms for non-negative matrix factorization (NMF). Both can be viewed as rescaled gradient descent: in each iteration, the step sizes are chosen optimally so that the updates of W and H take a purely multiplicative form, e.g. W ← αW and H ← βH.
The two algorithms correspond to two cost functions that quantify the quality of the approximation. The first is based on the squared Euclidean distance, ||A - B||^2; the second is based on the divergence, D(A||B). The algorithms can then be regarded as minimizing ||V - WH||^2 with respect to W and H, and minimizing D(V||WH) with respect to W and H, via gradient descent. For each iteration, the authors design multiplicative update rules (Theorem 1 and Theorem 2) that produce the new W and H.
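The Euclidean-distance updates of Theorem 1 can be sketched in a few lines of NumPy. This is an illustrative implementation, not the authors' code; the function name, random initialization, and the small `eps` added to the denominators (to avoid division by zero) are my own choices.

```python
import numpy as np

def nmf_euclidean(V, r, n_iter=500, eps=1e-9, seed=0):
    """Multiplicative updates minimizing ||V - WH||^2 (Theorem 1).

    V : non-negative (n, m) matrix; r : inner rank of the factorization.
    Returns non-negative factors W (n, r) and H (r, m).
    """
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r))
    H = rng.random((r, m))
    for _ in range(n_iter):
        # H <- H * (W^T V) / (W^T W H)
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        # W <- W * (V H^T) / (W H H^T)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

Because the factors are initialized non-negative and only multiplied by non-negative ratios, they stay non-negative throughout, which is what makes the rescaled-gradient view attractive: no projection step is needed.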
The authors also provide proofs of convergence, showing that under the update rules of Theorem 1 and Theorem 2 the corresponding cost functions are non-increasing, so each iteration either improves the approximation or leaves it unchanged at a stationary point.
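The monotonicity claim for the divergence objective can be checked empirically with a sketch of the Theorem 2 updates. Again, this is my own illustrative code under the same assumptions as above (random initialization, a hypothetical `eps` guard in the denominators):

```python
import numpy as np

def kl_divergence(V, WH):
    """D(V || WH) = sum( V log(V/WH) - V + WH ), assuming V > 0."""
    return float(np.sum(V * np.log(V / WH) - V + WH))

def nmf_divergence(V, r, n_iter=200, eps=1e-9, seed=0):
    """Multiplicative updates minimizing D(V || WH) (Theorem 2)."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r))
    H = rng.random((r, m))
    for _ in range(n_iter):
        WH = W @ H + eps
        # H_{a mu} <- H_{a mu} * sum_i W_{ia} V_{i mu}/(WH)_{i mu} / sum_k W_{ka}
        H *= (W.T @ (V / WH)) / (W.sum(axis=0)[:, None] + eps)
        WH = W @ H + eps
        # W_{ia} <- W_{ia} * sum_mu H_{a mu} V_{i mu}/(WH)_{i mu} / sum_nu H_{a nu}
        W *= ((V / WH) @ H.T) / (H.sum(axis=1)[None, :] + eps)
    return W, H
```

Running the same seed for more iterations and comparing D(V||WH) at the two stopping points is a quick sanity check that the objective does not increase, consistent with the convergence proofs.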