Summary by NIPS Conference Reviews 8 years ago
The authors introduce a functional called the "harmonic loss" and show that (a) it characterizes smoothness, in the sense that functions with small harmonic loss change little across large cuts (to be precise, the cut has to be a level set separator), and (b) several algorithms for learning on graphs implicitly try to find functions that minimize the harmonic loss, subject to some constraints.
The "harmonic loss" they define is essentially the (signed) divergence $\nabla f$ of the function across the cut, so it's not surprising that it should be closely related to smoothness. In classical vector calculus one would take the inner product of this divergence with itself and use the identity
$\langle \nabla f, \nabla f \rangle = \langle f, -\nabla^2 f \rangle$
to argue that functions with small variation, i.e., small $|\nabla f|^2$ almost everywhere, can be found by solving the Laplace equation. On graphs, modulo some tweaking with edge weights, essentially the same identity holds, leading to minimizing the quadratic form $f^\top L f$, which is at the heart of all spectral methods. So in this sense, I am not surprised.
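To make the discrete side of that identity concrete, here is a minimal numerical sketch (my own toy example in NumPy, not code from the paper): on a small weighted graph, the quadratic form $f^\top L f$ with $L = D - W$ coincides with the weighted sum of squared differences across edges, the graph analogue of $\int |\nabla f|^2$. The weight matrix `W` and the vector `f` are arbitrary illustrative choices.

```python
# Minimal check that f^T L f equals the weighted sum of squared edge
# differences, i.e. the discrete analogue of the energy |grad f|^2.
# (Toy example, not from the paper.)
import numpy as np

# Hypothetical symmetric weight matrix on 4 nodes.
W = np.array([[0.0, 1.0, 0.5, 0.0],
              [1.0, 0.0, 2.0, 0.0],
              [0.5, 2.0, 0.0, 1.5],
              [0.0, 0.0, 1.5, 0.0]])
D = np.diag(W.sum(axis=1))   # degree matrix
L = D - W                    # combinatorial graph Laplacian

f = np.array([0.3, -1.2, 0.7, 2.0])   # arbitrary function on the nodes

quad_form = f @ L @ f
edge_sum = 0.5 * sum(W[i, j] * (f[i] - f[j]) ** 2
                     for i in range(4) for j in range(4))

# The two agree up to floating point, so minimizing f^T L f penalizes
# exactly the squared "gradient" across edges.
print(quad_form, edge_sum)
assert np.isclose(quad_form, edge_sum)
```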
Alternatively, one can minimize the integral of $|\nabla f|$, which is the total variation and leads to a different type of regularization ($\ell_1$ rather than $\ell_2$ is one way to put it). The "harmonic loss" introduced in this paper is essentially this total variation, except there is no absolute value sign.

Among all this fairly standard stuff, the interesting thing about the paper is that, for the purpose of analyzing algorithms, one can get away with only considering this flux across cuts that separate level sets of $f$; in that case all the gradients point in the same direction, so one can drop the absolute value sign. This is nice because the "harmonic loss" becomes linear and a bunch of things about it are very easy to prove. At least this is my interpretation of what the paper is about.
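The same toy graph illustrates the level-set point (again my own sketch, with an assumed threshold `t`, not anything from the paper): across a cut that separates the super-level set $\{f \ge t\}$ from its complement, every edge difference has the same sign, so the signed flux equals the $\ell_1$-style total-variation mass of that cut and the absolute values can indeed be dropped.

```python
# On a cut that separates a level set of f, every crossing edge has the
# same sign of (f_i - f_j), so the signed flux (linear in f) agrees with
# the total-variation contribution of the cut. (Toy example, assumed
# threshold; not the paper's code.)
import numpy as np

W = np.array([[0.0, 1.0, 0.5, 0.0],
              [1.0, 0.0, 2.0, 0.0],
              [0.5, 2.0, 0.0, 1.5],
              [0.0, 0.0, 1.5, 0.0]])
f = np.array([0.3, -1.2, 0.7, 2.0])

t = 0.5          # threshold defining the level-set cut
S = f >= t       # one side of the cut: the super-level set {f >= t}

# Edges crossing the cut, oriented from S to its complement.
signed_flux = sum(W[i, j] * (f[i] - f[j])
                  for i in range(4) for j in range(4)
                  if S[i] and not S[j])
tv_across_cut = sum(W[i, j] * abs(f[i] - f[j])
                    for i in range(4) for j in range(4)
                    if S[i] and not S[j])

# On a level-set cut the two coincide, which is why the "harmonic loss"
# becomes linear there.
print(signed_flux, tv_across_cut)
assert np.isclose(signed_flux, tv_across_cut)
```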