The paper seeks to establish a connection between algorithmic stability and generalization performance. Notions of algorithmic stability have previously been proposed and linked to the generalization performance of learning algorithms \cite{conf/uai/KutinN02,journals/neco/KearnsR99}, and stability has also been shown to be crucial for learnability \cite{journals/jmlr/Shalev-ShwartzSSS10}.
\cite{PoggioETAL:04} proved that, for bounded loss functions, the generalization of ERM is equivalent to a probabilistic leave-one-out stability of the learning algorithm. \cite{journals/jmlr/Shalev-ShwartzSSS10} then showed that a problem is learnable in Vapnik's general setting of learning if and only if there exists an asymptotically stable ERM procedure.
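For concreteness, one common way such a probabilistic leave-one-out stability requirement is formalized (the notation below is an illustrative sketch in the spirit of \cite{PoggioETAL:04}, not the paper's exact statement) is the following: writing $S = (z_1,\dots,z_n)$ for the training sample, $S^{\setminus i}$ for $S$ with the $i$-th example removed, $A(S)$ for the hypothesis returned by the algorithm, and $\ell$ for a bounded loss, the algorithm is $(\beta_n,\delta_n)$-stable if for every $i$
\[
  \Pr_{S \sim \mathcal{D}^n}\!\left( \big|\ell\big(A(S^{\setminus i}), z_i\big) - \ell\big(A(S), z_i\big)\big| > \beta_n \right) \le \delta_n ,
\]
with $\beta_n \to 0$ and $\delta_n \to 0$ as $n \to \infty$; that is, removing a single training point changes the loss on that point only slightly, except with vanishing probability.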
This paper first establishes that, in Vapnik's general setting of learning, a probabilistic notion of stability is necessary and sufficient for the training losses to converge to the test losses uniformly over all distributions. The paper then discusses how this notion of stability can be interpreted to yield results in terms of the capacity of the function class or the size of the population.
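As a rough sketch of the kind of statement involved (again with illustrative notation rather than the paper's exact definitions): if $H = A(S)$ is learned from $S = (Z_1,\dots,Z_n) \sim \mathcal{D}^n$ and $\ell$ is a bounded loss, uniform convergence of training to test losses asks that
\[
  \sup_{\mathcal{D}}\; \left| \, \mathbb{E}\!\left[\frac{1}{n}\sum_{i=1}^{n} \ell(H, Z_i) \;-\; \ell(H, Z)\right] \right| \;\longrightarrow\; 0
\]
as $n \to \infty$, where $Z \sim \mathcal{D}$ is a fresh test example independent of $S$. One plausible way to formalize the matching probabilistic stability notion (my reading, not necessarily the paper's exact definition) is to require that the learned hypothesis $H$ become asymptotically independent of a single randomly chosen training example $\hat{Z} \in S$, e.g.\ $\mathbb{E}_H \big\| \Pr(\hat{Z} \mid H) - \Pr(Z) \big\|_{\mathrm{TV}} \to 0$.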