Modelling transition dynamics in MDPs with RKHS embeddings

Grunewalder, S., Lever, G., Baldassarre, L., Pontil, M. and Gretton, A. (2012). Modelling transition dynamics in MDPs with RKHS embeddings. In: Proceedings of the 29th International Conference on Machine Learning (ICML 2012), Edinburgh, Scotland, UK.



We propose a new, nonparametric approach to learning and representing transition dynamics in Markov decision processes (MDPs), which can be combined easily with dynamic programming methods for policy optimisation and value estimation. The approach makes use of a recently developed representation of conditional distributions as embeddings in a reproducing kernel Hilbert space (RKHS). Such representations bypass the need to estimate transition probabilities or densities, and apply to any domain on which kernels can be defined. They also avoid the need to compute intractable integrals, since expectations are represented as RKHS inner products whose computation has linear complexity in the number of points used to represent the embedding. We provide guarantees for the proposed applications in MDPs: in the context of a value iteration algorithm, we prove convergence to either the optimal policy, or to the closest projection of the optimal policy onto our model class (an RKHS), under reasonable assumptions. In experiments, we investigate a learning task in a classical control setting (the under-actuated pendulum) and in a navigation problem where only images from a sensor are observed. For policy optimisation we compare with least-squares policy iteration, where a Gaussian process is used for value function estimation. For value estimation we also compare with the recent NPDP method. Our approach achieves better performance in all experiments.
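The core computational idea in the abstract — representing the conditional distribution of the next state as an RKHS embedding, so that an expectation over transitions reduces to an inner product that is linear in the number of sample points — can be illustrated with a small sketch. This is not the authors' code; the kernel choice, the regularisation parameter `lam`, and all names are illustrative assumptions, following the standard regularised estimator of a conditional mean embedding from sampled transitions (s_i, s'_i).

```python
import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    """Gaussian RBF kernel matrix k(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

class ConditionalMeanEmbedding:
    """Embeds P(s' | s) as mu(s) = sum_i beta_i(s) phi(s'_i),
    estimated from sampled transitions (s_i, s'_i)."""

    def __init__(self, S, S_next, lam=1e-3, sigma=1.0):
        self.S, self.S_next, self.sigma = S, S_next, sigma
        n = len(S)
        K = rbf_kernel(S, S, sigma)
        # Regularised inverse (K + n*lam*I)^{-1}, precomputed once.
        self.W = np.linalg.solve(K + n * lam * np.eye(n), np.eye(n))

    def expected_value(self, s, V_samples):
        """Approximate E[V(s') | s] as beta(s)^T V(s'_1..n):
        after precomputing W, the cost per query is linear in n."""
        k_s = rbf_kernel(self.S, np.atleast_2d(s), self.sigma)[:, 0]
        beta = self.W @ k_s          # embedding weights for this state
        return beta @ V_samples      # expectation as an inner product

# Toy demo: noisy linear dynamics s' = 0.9 s + noise.
rng = np.random.default_rng(0)
S = rng.uniform(-1.0, 1.0, size=(200, 1))
S_next = 0.9 * S + 0.05 * rng.standard_normal((200, 1))

cme = ConditionalMeanEmbedding(S, S_next, lam=1e-3, sigma=0.3)
V = S_next[:, 0] ** 2                       # value function V(s') = s'^2
est = cme.expected_value(np.array([0.5]), V)  # approx E[s'^2 | s = 0.5]
```

In a value-iteration loop of the kind the abstract describes, the Bellman backup `E[V(s') | s, a]` would be replaced by exactly this inner product, evaluated per action; no transition density is ever estimated, which is what makes the method applicable to domains such as raw image observations, where only a kernel need be defined.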

Item Type:
Contribution in Book/Report/Proceedings
Deposited On:
02 Mar 2017 16:40
Last Modified:
04 Jun 2020 09:45