1. Introduction

Goal: given a Markov Decision Process (MDP) M without its reward function R, as well as example traces D from its optimal policy, find R.

Motivations: learning policies from examples, inferring goals, specifying tasks by demonstration.

Challenge: many functions R fit the examples, but many will not generalize to unobserved states. Selecting a compact set of features that represents R is difficult.

Solution: construct features to represent R from an exhaustive list of component features, using logical conjunctions of component features represented as a regression tree.

2. Background

Markov Decision Process: M = {S, A, θ, γ, R}
  S – set of states
  A – set of actions
  γ – discount factor
  R – reward function
  θ – state transition probabilities: θ_{sas′} = P(s′ | s, a)

Optimal Policy: denoted π*, maximizes E[ Σ_{t=0}^{∞} γ^t R(s_t, a_t) | π, θ ]

Example Traces: D = {(s_{1,1}, a_{1,1}), ..., (s_{n,T}, a_{n,T})}, where s_{i,t} is the t-th state in the i-th trace, and a_{i,t} is the optimal action in s_{i,t}.

Previous Work: most existing algorithms require a set of features Φ to be provided, and find a reward function that is a linear combination of the features [1, 2, 3, 4]. Finding features that are relevant and sufficient is difficult. Furthermore, a linear combination is not always a good estimate of the reward.

Component Features: instead of a complete set of relevant features, our method accepts an exhaustive list of component features δ : S → Z. The algorithm finds a regression tree, with relevant component features acting as tests, to represent the reward.

3. Algorithm

Overview: iteratively construct the feature set Φ and the reward R, alternating between an optimization phase that determines a reward, and a fitting phase that determines the features.

Optimization Phase: find a reward R "close" to the current features Φ, under which the examples D are part of the optimal policy.
Letting Proj_Φ R denote the closest reward to R that is a linear combination of the features Φ, we find R as:

  R ← argmin_R ‖R − Proj_Φ R‖  subject to: the actions in D are optimal under R

Note that R can "step outside" of the current features to satisfy the examples, if the current features Φ are insufficient.

Fitting Phase: fit a regression tree to R, with component features δ acting as tests at the tree nodes. Indicators for the leaves of the tree are the new features Φ. Only component features that are relevant to the structure of R are selected, and leaves correspond to their logical conjunctions.

4. Illustrated Example
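The fitting phase above can be illustrated with a minimal sketch. This is not the paper's implementation: the function names, the greedy sum-of-squared-errors splitting criterion, and the threshold tests δ_j(s) ≤ c are our own simplifying assumptions, and the optimization phase (which requires solving a constrained program over the MDP) is elided.

```python
# Sketch of a fitting phase: greedily fit a regression tree to reward values
# R[s], using component features deltas[j] as tests, then read off the leaf
# indicator features Phi. Hypothetical simplification of FIRL's fitting step.

def sum_sq_err(group, R):
    """Sum of squared errors of R over a group of states, around its mean."""
    mean = sum(R[s] for s in group) / len(group)
    return sum((R[s] - mean) ** 2 for s in group)

def fit_reward_tree(states, deltas, R, depth):
    """Recursively split `states` on the test delta_j(s) <= c that most
    reduces squared error; leaves store their member states and mean reward."""
    if depth == 0 or len(states) <= 1:
        return {"leaf": True, "states": list(states),
                "value": sum(R[s] for s in states) / len(states)}
    best = None  # (sse, j, c, left, right)
    for j, delta in enumerate(deltas):
        for c in sorted({delta(s) for s in states}):
            left = [s for s in states if delta(s) <= c]
            right = [s for s in states if delta(s) > c]
            if not left or not right:
                continue
            sse = sum_sq_err(left, R) + sum_sq_err(right, R)
            if best is None or sse < best[0]:
                best = (sse, j, c, left, right)
    if best is None:  # no test separates the states: make a leaf
        return {"leaf": True, "states": list(states),
                "value": sum(R[s] for s in states) / len(states)}
    _, j, c, left, right = best
    return {"leaf": False, "test": (j, c),
            "lo": fit_reward_tree(left, deltas, R, depth - 1),
            "hi": fit_reward_tree(right, deltas, R, depth - 1)}

def leaf_indicators(tree, features=None):
    """Each leaf yields one new feature: the indicator of its member states,
    i.e. the logical conjunction of the tests on the path to that leaf."""
    if features is None:
        features = []
    if tree["leaf"]:
        features.append(set(tree["states"]))
    else:
        leaf_indicators(tree["lo"], features)
        leaf_indicators(tree["hi"], features)
    return features

# Toy usage: reward depends only on parity, so a depth-1 tree recovers it.
states = list(range(8))
R = {s: float(s % 2) for s in states}
tree = fit_reward_tree(states, [lambda s: s % 2], R, depth=1)
Phi = leaf_indicators(tree)  # two indicator features: evens and odds
```

The leaf indicators become the new feature set Φ for the next optimization phase, so only component features that actually affect R survive into Φ.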
5. Experimental Results

Gridworld transfer comparison: a 64×64 gridworld with colored objects placed at random. Component features give the distance to objects of a specific color; many colors are irrelevant. Transfer performance corresponds to learning the reward on one random gridworld and evaluating on 10 others (with random object placement). We compare FIRL (the proposed algorithm) against Abbeel & Ng [1], MMP [3], and LPAL [4]. FIRL outperforms the prior methods, which cannot distinguish relevant objects from irrelevant ones.
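A component feature of the kind used in the gridworld experiment can be sketched as follows. The representation of states as (x, y) cells, objects as (x, y, color) triples, and the choice of Manhattan distance to the nearest object of a color are our own assumptions for illustration.

```python
# Sketch of a gridworld component feature: integer distance to the nearest
# object of a given color (hypothetical Manhattan-distance variant).

def make_color_distance_feature(objects, color):
    """Return delta : state -> Z for one color, one feature per color."""
    cells = [(x, y) for x, y, c in objects if c == color]
    def delta(state):
        x, y = state
        return min(abs(x - ox) + abs(y - oy) for ox, oy in cells)
    return delta

# Usage: one component feature per color; most colors end up irrelevant,
# and the fitting phase is what filters them out.
objects = [(0, 0, "red"), (3, 3, "blue")]
d_red = make_color_distance_feature(objects, "red")
d_blue = make_color_distance_feature(objects, "blue")
```

Because every color contributes a feature regardless of relevance, methods that weight all provided features struggle here, while the regression tree simply never selects the irrelevant distances as tests.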
Highway driving: the "lawful" policy avoids going fast in the right lane; the "outlaw" policy drives fast, but slows down near police. Features indicate the presence of police, current lane, speed, distance to cars, etc. The logical connection between speed and lanes/police cars cannot be captured by linear combinations, and prior methods cannot match the expert's speed while also matching feature expectations. Videos of the learned policies can be found at: http://graphics.stanford.edu/projects/firl/index.htm.

6. References

[1] P. Abbeel and A. Y. Ng. Apprenticeship learning via inverse reinforcement learning. In ICML '04: Proceedings of the 21st International Conference on Machine Learning. ACM, 2004.
[2] A. Y. Ng and S. J. Russell. Algorithms for inverse reinforcement learning. In ICML '00: Proceedings of the 17th International Conference on Machine Learning, pages 663–670. Morgan Kaufmann Publishers Inc., 2000.
[3] N. D. Ratliff, J. A. Bagnell, and M. A. Zinkevich. Maximum margin planning. In ICML '06: Proceedings of the 23rd International Conference on Machine Learning, pages 729–736. ACM, 2006.
[4] U. Syed, M. Bowling, and R. E. Schapire. Apprenticeship learning using linear programming. In ICML '08: Proceedings of the 25th International Conference on Machine Learning, pages 1032–1039. ACM, 2008.