Summary by Chris Murray 8 years ago
This is a fantastic paper: it presents the problem well, the algorithms are intuitive and clearly presented, and the proofs are not overly long. It describes the online allocation problem: roughly, how to combine the predictions of N "experts" into a strategy that isn't much worse than the best expert (alternatively, how to allocate wealth every period among N traders, conditioned only on their past performance, such that at the end you perform almost as well as the best trader chosen in hindsight). The paper presents the Hedge algorithm, which works with general bounded loss functions (the total loss is the sum of per-round losses, each in [0, 1]) and has only one parameter (a learning rate) to tune: it simply decreases the weight on each expert every period by a factor of (learning_rate)^(loss), where 0 < learning_rate < 1. The final bound, where L(Hedge) is the loss of the Hedge algorithm, L* is the loss of the best expert (the one with minimum loss), and $0 < B < 1$ is the learning rate, is:
$$L(\text{Hedge}) \;\le\; \frac{\ln(1/B)\, L^* + \ln(N)}{1 - B}$$
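To make the multiplicative update concrete, here is a minimal sketch of a Hedge-style loop in Python. The interface (a precomputed loss matrix and the function name `hedge`) is illustrative rather than from the paper, and it assumes each per-round loss lies in [0, 1].

```python
import numpy as np

def hedge(losses, beta=0.9):
    """Minimal sketch of a Hedge-style multiplicative-weights loop.

    losses: array of shape (T, N) -- the loss of each of N experts,
            in [0, 1], revealed after each of T rounds (assumed here).
    beta:   learning rate, 0 < beta < 1.
    Returns the total loss suffered by the allocation strategy.
    """
    T, N = losses.shape
    weights = np.ones(N)                  # start with uniform weights
    total_loss = 0.0
    for t in range(T):
        p = weights / weights.sum()       # normalized allocation over experts
        total_loss += p @ losses[t]       # loss of the mixture this round
        weights *= beta ** losses[t]      # shrink each weight by beta^loss
    return total_loss
```

For example, `hedge(np.random.rand(100, 5), beta=0.9)` runs the update on 100 rounds of random losses over 5 experts.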
This paper also describes boosting and relates it to the Hedge algorithm, though in my opinion the description of the boosting problem and AdaBoost isn't nearly as good as the explanation of the online allocation problem and Hedge. In boosting, we have a weak learner, which can learn a function $X \rightarrow Y$ given some examples, but possibly with a high error rate (the precise definition of a weak learner is, of course, in the paper). A "master algorithm" has some set of labelled examples $(x_i, y_i)$ (where $x_i$ may be multi-dimensional). It calls the weak learner many times, giving it different distributions over these examples and collecting the hypothesis the weak learner produces each time. It then combines these hypotheses into a "master" hypothesis that is guaranteed to be "good". The paper continues with several extensions of boosting to other domains, like regression.
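For comparison, here is a rough AdaBoost-style sketch in the now-standard {-1, +1}-label presentation (with hypothesis weight $\alpha_t = \tfrac{1}{2}\ln((1-\epsilon_t)/\epsilon_t)$) rather than the paper's $\beta_t$ notation; the `weak_learner` interface is an assumption for illustration.

```python
import numpy as np

def adaboost(X, y, weak_learner, rounds=50):
    """Rough AdaBoost-style sketch; labels y are assumed to be in {-1, +1}.

    weak_learner(X, y, sample_weight) is assumed to return an object with a
    .predict(X) method, e.g. a decision stump.
    """
    n = len(y)
    D = np.ones(n) / n                     # distribution over the examples
    hypotheses, alphas = [], []
    for _ in range(rounds):
        h = weak_learner(X, y, sample_weight=D)
        pred = h.predict(X)
        err = D[pred != y].sum()           # weighted error of this hypothesis
        if err >= 0.5:                     # no better than chance: stop
            break
        err = max(err, 1e-12)              # avoid division by zero
        alpha = 0.5 * np.log((1 - err) / err)
        D *= np.exp(-alpha * y * pred)     # down-weight correctly classified examples
        D /= D.sum()
        hypotheses.append(h)
        alphas.append(alpha)

    def master(Xq):                        # weighted vote of the weak hypotheses
        votes = sum(a * h.predict(Xq) for a, h in zip(alphas, hypotheses))
        return np.sign(votes)
    return master
```

A `weak_learner` here could, for instance, wrap a depth-1 decision tree fitted with the given `sample_weight`.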