Better-than-Demonstrator Imitation Learning via Automatically-Ranked Demonstrations
Daniel S. Brown, Wonjoon Goo, Scott Niekum
arXiv e-Print archive, 2019 (first published 2019/07/09)
Keywords: cs.LG, stat.ML
Abstract: The performance of imitation learning is typically upper-bounded by the
performance of the demonstrator. While recent empirical results demonstrate
that ranked demonstrations allow for better-than-demonstrator performance,
preferences over demonstrations may be difficult to obtain, and little is known
theoretically about when such methods can be expected to successfully
extrapolate beyond the performance of the demonstrator. To address these
issues, we first contribute a sufficient condition for better-than-demonstrator
imitation learning and provide theoretical results showing why preferences over
demonstrations can better reduce reward function ambiguity when performing
inverse reinforcement learning. Building on this theory, we introduce
Disturbance-based Reward Extrapolation (D-REX), a ranking-based imitation
learning method that injects noise into a policy learned through behavioral
cloning to automatically generate ranked demonstrations. These ranked
demonstrations are used to efficiently learn a reward function that can then be
optimized using reinforcement learning. We empirically validate our approach on
simulated robot and Atari imitation learning benchmarks and show that D-REX
outperforms standard imitation learning approaches and can significantly
surpass the performance of the demonstrator. D-REX is the first imitation
learning approach to achieve significant extrapolation beyond the
demonstrator's performance without additional side-information or supervision,
such as rewards or human preferences. By generating rankings automatically, we
show that preference-based inverse reinforcement learning can be applied in
traditional imitation learning settings where only unlabeled demonstrations are
available.
## General Framework
Extends T-REX (see [summary](https://www.shortscience.org/paper?bibtexKey=journals/corr/1904.06387&a=muntermulehitch)) so that preferences (rankings) over demonstrations are generated automatically, bringing us back to the common IL/IRL setting where only a set of unlabeled demonstrations is available. Also derives some theoretical requirements and guarantees for better-than-demonstrator performance.
## Motivations
* Preferences over demonstrations may be difficult to obtain in practice.
* There is no theoretical understanding of the requirements that lead to outperforming the demonstrator.
## Contributions
* Theoretical results (with a linear reward function) on when better-than-demonstrator performance is possible: 1- the demonstrator must be suboptimal (room for improvement, obviously), 2- the learned reward must be close enough to the reward the demonstrator is suboptimally optimizing for (i.e., it must accurately capture the demonstrator's intent), 3- the learned policy (optimal w.r.t. the learned reward) must be close enough to the optimal policy (w.r.t. the ground-truth reward). Obviously, if we have 2- and a good enough RL algorithm we should get 3-, so it might be interesting to see whether a requirement can be derived from 1- and 2- alone (plus a good enough RL algorithm).
* Theoretical results (with a linear reward function) showing that pairwise preferences over demonstrations reduce the error and ambiguity of reward learning (see the sketch after this list). They show that without rankings, two policies can have equal performance under a learned reward (one that makes the expert's demonstrations optimal) but very different performance under the true reward (one that makes the expert optimal everywhere). Indeed, the expert's demonstrations may reveal very little about the reward in unseen regions (suboptimal or not), which can severely hurt generalization (even with RL, since it would try to generalize to new states under a totally wrong reward). They also show that pairwise preferences over trajectories effectively impose half-space constraints on the feasible reward function domain and can thus reduce reward function ambiguity exponentially.
* Proposes a practical way to generate as many ranked demonstrations as desired.
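To make the half-space argument concrete, here is a minimal formalization in the linear-reward setting (the symbols $w$, $\phi$, $\Phi$ are standard feature-count notation chosen here for illustration, not necessarily the paper's exact ones):

```latex
% Linear reward: the per-state reward is a dot product with a feature map \phi.
R_w(s) = w^\top \phi(s),
\qquad
J_w(\tau) = \sum_{s \in \tau} R_w(s) = w^\top \Phi(\tau),
\quad \text{where } \Phi(\tau) = \sum_{s \in \tau} \phi(s).

% A single pairwise preference \tau_i \prec \tau_j restricts the feasible
% reward weights to a half-space:
w^\top \bigl( \Phi(\tau_j) - \Phi(\tau_i) \bigr) > 0.

% Each additional (non-redundant) preference intersects another half-space,
% which is why a set of rankings can shrink the feasible reward region much
% faster than demonstrations alone.
```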
## Additional Assumption
Very mild: it assumes that a Behavioral Cloning (BC) policy trained on the provided demonstrations performs better than a uniformly random policy.
## Disturbance-based Reward Extrapolation (D-REX)
![](https://i.imgur.com/9g6tOrF.png)
![](https://i.imgur.com/zSRlDcr.png)
They also show that the more noise is injected into the BC policy, the lower the performance of the generated trajectories, which is what justifies ranking trajectories by their noise level.
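For concreteness, here is a rough sketch of the D-REX pipeline as I understand it; the function names, the MLP reward architecture, the ε-greedy noise injection, and the specific noise levels are illustrative assumptions rather than the authors' exact implementation:

```python
import numpy as np
import torch
import torch.nn as nn


class RewardNet(nn.Module):
    """Small MLP mapping a state to a scalar reward (architecture is a guess)."""

    def __init__(self, obs_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, states):  # states: (T, obs_dim)
        return self.net(states).squeeze(-1)  # per-state rewards: (T,)


def rollout(env, bc_policy, epsilon, horizon=1000):
    """Roll out the BC policy, replacing its action with a random one w.p. epsilon."""
    obs, states = env.reset(), []
    for _ in range(horizon):
        action = env.action_space.sample() if np.random.rand() < epsilon else bc_policy(obs)
        obs, _, done, _ = env.step(action)  # assumes the old gym 4-tuple step API
        states.append(obs)
        if done:
            break
    return states


def generate_ranked_demos(env, bc_policy, noise_levels=(1.0, 0.75, 0.5, 0.25, 0.02), k=5):
    """More noise is assumed to mean lower return, which yields automatic rankings."""
    return {eps: [rollout(env, bc_policy, eps) for _ in range(k)] for eps in noise_levels}


def ranking_loss(reward_net, traj_lo, traj_hi):
    """Bradley-Terry (T-REX) loss: traj_hi (less noise) is preferred over traj_lo."""
    ret_lo = reward_net(torch.as_tensor(np.array(traj_lo), dtype=torch.float32)).sum()
    ret_hi = reward_net(torch.as_tensor(np.array(traj_hi), dtype=torch.float32)).sum()
    return -torch.log_softmax(torch.stack([ret_lo, ret_hi]), dim=0)[1]
```

Training then repeatedly samples two trajectories from different noise levels, treats the lower-noise one as preferred, minimizes the loss above over the reward network, and finally optimizes the learned reward with an off-the-shelf RL algorithm (PPO in the paper).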
## Results
Pretty much like T-REX: on the simulated robot and Atari benchmarks, D-REX outperforms standard imitation learning approaches (e.g., BC and GAIL) and can significantly surpass the demonstrator's performance.