Approximations of the Restless Bandit Problem

Grunewalder, Steffen and Khaleghi, Azadeh (2017) Approximations of the Restless Bandit Problem. arXiv.

Full text not available from this repository.

Abstract

The multi-armed restless bandit problem is studied in the case where the pay-offs are not necessarily independent, either over time or across the arms. Although this version of the problem provides a more realistic model for most real-world applications, it cannot be optimally solved in practice, since it is known to be PSPACE-hard. The objective of this paper is to characterize special sub-classes of the problem where good approximate solutions can be found using tractable approaches. Specifically, it is shown that in the case where the joint distribution over the arms is $\varphi$-mixing, and under some conditions on the $\varphi$-mixing coefficients, a modified version of UCB can prove optimal. On the other hand, it is shown that when the pay-off distributions are strongly dependent, simple switching strategies may be devised that leverage the strong inter-dependencies. To illustrate this, an example based on Gaussian Processes is provided. The techniques developed in this paper apply, more generally, to the problem of online sampling under dependence.
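For readers unfamiliar with the UCB family of index policies the abstract refers to, the sketch below shows a plain UCB1 loop run on simulated independent Gaussian arms. This is only a baseline illustration, not the paper's modified UCB for $\varphi$-mixing pay-offs; the arm means, horizon, and independent sampling model used here are hypothetical choices for the demonstration.

```python
# Minimal sketch of the standard UCB1 index policy (illustration only).
# The paper studies a modified UCB under phi-mixing dependence; here the
# arms are sampled i.i.d. Gaussian purely to keep the example self-contained.
import numpy as np

def ucb1(arm_means, horizon, rng):
    """Run UCB1 for `horizon` rounds on unit-variance Gaussian arms."""
    n_arms = len(arm_means)
    counts = np.zeros(n_arms)
    means = np.zeros(n_arms)
    total = 0.0
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1                              # initialise: play each arm once
        else:
            bonus = np.sqrt(2.0 * np.log(t) / counts)
            arm = int(np.argmax(means + bonus))      # optimism in the face of uncertainty
        reward = rng.normal(arm_means[arm], 1.0)     # i.i.d. pay-off; the paper allows dependence
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]  # running empirical mean
        total += reward
    return total

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    print(ucb1([0.1, 0.5, 0.9], horizon=10_000, rng=rng))
```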

Item Type: Journal Article
Journal or Publication Title: arXiv
Subjects: math.ST, cs.LG, math.PR, stat.ML, stat.TH
ID Code: 86678
Deposited By:
Deposited On: 13 Jun 2017 10:56
Refereed?: No
Published?: Published
Last Modified: 11 Jun 2019 04:39