Conditional mean embeddings as regressors

Grünewälder, S., Lever, G., Gretton, A., Baldassarre, L., Patterson, S. and Pontil, M. (2012) Conditional mean embeddings as regressors. In: Proceedings of the 29th International Conference on Machine Learning (ICML 2012), Edinburgh, Scotland.

Abstract

We demonstrate an equivalence between reproducing kernel Hilbert space (RKHS) embeddings of conditional distributions and vector-valued regressors. This connection introduces a natural regularized loss function which the RKHS embeddings minimise, providing an intuitive understanding of the embeddings and a justification for their use. Furthermore, the equivalence allows the application of vector-valued regression methods and results to the problem of learning conditional distributions. Using this link we derive a sparse version of the embedding by considering alternative formulations. Further, by applying convergence results for vector-valued regression to the embedding problem we derive minimax convergence rates which are O(log(n)/n), compared to the current state-of-the-art rate of O(n^{-1/4}), and are valid under milder and more intuitive assumptions. These minimax upper rates coincide with lower rates up to a logarithmic factor, showing that the embedding method achieves nearly optimal rates. We study our sparse embedding algorithm in a reinforcement learning task where the algorithm shows significant improvement in sparsity over an incomplete Cholesky decomposition.
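
As background for the equivalence the abstract describes, the following is a minimal NumPy sketch (not code from the paper) of the empirical conditional mean embedding in its kernel ridge regression form, mu(x) = sum_i alpha_i(x) phi(y_i) with alpha(x) = (K + n*lambda*I)^{-1} k(x). The Gaussian kernel, the bandwidth gamma, and the regulariser lam are illustrative assumptions, not choices taken from the paper.

    import numpy as np

    def rbf_kernel(A, B, gamma=1.0):
        # Gaussian kernel matrix between the rows of A and the rows of B.
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-gamma * d2)

    def cme_weights(X, x_test, lam=1e-3, gamma=1.0):
        # Ridge weights alpha(x) = (K + n*lam*I)^{-1} k(x); the empirical
        # conditional mean embedding is mu(x) = sum_i alpha_i(x) phi(y_i).
        n = X.shape[0]
        K = rbf_kernel(X, X, gamma)
        return np.linalg.solve(K + n * lam * np.eye(n), rbf_kernel(X, x_test, gamma))

    # Pairing the embedding with a function g gives <mu(x), g> = sum_i alpha_i(x) g(y_i);
    # taking g(y) = y (as an illustration) yields a weighted-sample estimate of E[Y | X = x].
    rng = np.random.default_rng(0)
    X = rng.uniform(-3.0, 3.0, size=(200, 1))
    Y = np.sin(X) + 0.1 * rng.standard_normal(X.shape)  # noisy draws around sin(x)
    alpha = cme_weights(X, np.array([[1.0]]))
    print((Y.T @ alpha).item())  # close to sin(1.0) ~ 0.84

The weights alpha(x) are exactly a kernel ridge regression solution, which is the regressor-side reading of the embedding that the abstract's equivalence makes precise.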

Item Type: Contribution in Book/Report/Proceedings
ID Code: 76774
Deposited By:
Deposited On: 23 Nov 2015 16:46
Refereed?: Yes
Published?: Published
Last Modified: 17 Sep 2023 03:55