Coin Sampling: Gradient-Based Bayesian Inference without Learning Rates

Sharrock, Louis and Nemeth, Christopher (2023) Coin Sampling: Gradient-Based Bayesian Inference without Learning Rates. Proceedings of Machine Learning Research, 202, pp. 30850-30882. ISSN 1938-7228

Text (Coin Sampling: Gradient-Based Bayesian Inference without Learning Rates)
sharrock23a.pdf - Published Version (9 MB)
Available under License Creative Commons Attribution.

Abstract

In recent years, particle-based variational inference (ParVI) methods such as Stein variational gradient descent (SVGD) have grown in popularity as scalable methods for Bayesian inference. Unfortunately, the properties of such methods invariably depend on hyperparameters such as the learning rate, which must be carefully tuned by the practitioner in order to ensure convergence to the target measure at a suitable rate. In this paper, we introduce a suite of new particle-based methods for scalable Bayesian inference based on coin betting, which are entirely learning-rate free. We illustrate the performance of our approach on a range of numerical examples, including several high-dimensional models and datasets, demonstrating comparable performance to other ParVI algorithms with no need to tune a learning rate.
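The core idea described above, replacing the hand-tuned SVGD step size with a coin-betting wealth update, can be sketched in a few lines of NumPy. This is a simplified illustration only, not the authors' exact algorithm: the COCOB-style betting rule, the fixed-bandwidth RBF kernel, and the standard-Gaussian target are all assumptions made for the sketch.

```python
import numpy as np

def svgd_direction(x, grad_logp, bandwidth=1.0):
    """SVGD update direction with an RBF kernel (fixed bandwidth for simplicity)."""
    n = x.shape[0]
    diffs = x[:, None, :] - x[None, :, :]              # (n, n, d): x_i - x_j
    sq = (diffs ** 2).sum(-1)                          # squared pairwise distances
    k = np.exp(-sq / (2 * bandwidth ** 2))             # kernel matrix
    grads = grad_logp(x)                               # scores at each particle
    attract = k @ grads                                # kernel-weighted scores
    repulse = (k[:, :, None] * diffs).sum(axis=1) / bandwidth ** 2  # kernel grad
    return (attract + repulse) / n

def coin_svgd(x0, grad_logp, n_iter=500, alpha=100.0):
    """Learning-rate-free SVGD sketch using a COCOB-style coin-betting update.

    Each particle bets its accumulated "wealth" on the SVGD direction; no
    step size appears anywhere in the update.
    """
    x = x0.copy()
    L = np.full_like(x0, 1e-8)     # running max magnitude of the direction
    G = np.zeros_like(x0)          # running sum of |direction|
    reward = np.zeros_like(x0)     # accumulated betting reward
    theta = np.zeros_like(x0)      # running sum of directions
    for _ in range(n_iter):
        c = svgd_direction(x, grad_logp)
        L = np.maximum(L, np.abs(c))
        G += np.abs(c)
        reward = np.maximum(reward + c * (x - x0), 0.0)
        theta += c
        x = x0 + theta / (L * np.maximum(G + L, alpha * L)) * (L + reward)
    return x

# Hypothetical target: standard 2D Gaussian, so grad log p(x) = -x.
rng = np.random.default_rng(0)
particles0 = rng.normal(2.0, 0.5, size=(20, 2))    # start far from the target
particles = coin_svgd(particles0, lambda x: -x)
```

The betting update drives the particle cloud toward the target without any tuning: the effective step grows as the accumulated reward grows, mimicking what a well-chosen learning-rate schedule would achieve.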

Item Type:
Journal Article
Journal or Publication Title:
Proceedings of Machine Learning Research
Additional Information:
In: Proceedings of the 40th International Conference on Machine Learning (ICML), Hawaii, USA.
Uncontrolled Keywords:
Research Output Funding: yes (externally funded)
ID Code:
204493
Deposited By:
Deposited On:
20 Sep 2023 08:35
Refereed?:
Yes
Published?:
Published
Last Modified:
26 Sep 2024 13:35