Pike-Burke, Ciara; Agrawal, Shipra; Szepesvari, Csaba; Grunewalder, Steffen (2018).
*Bandits with Delayed, Aggregated Anonymous Feedback.*
In: Proceedings of the International Conference on Machine Learning, 10–15 July 2018, Stockholmsmässan, Stockholm, Sweden. Proceedings of Machine Learning Research, PMLR, pp. 4105–4123.

1709.06853.pdf - Accepted Version

Available under License Creative Commons Attribution-NonCommercial.


## Abstract

We study a variant of the stochastic K-armed bandit problem, which we call “bandits with delayed, aggregated anonymous feedback”. In this problem, when the player pulls an arm, a reward is generated; however, it is not immediately observed. Instead, at the end of each round the player observes only the sum of those previously generated rewards that happen to arrive in that round. The rewards are stochastically delayed and, due to the aggregated nature of the observations, the information about which arm led to a particular reward is lost. The question is: what is the cost of the information loss due to this delayed, aggregated anonymous feedback? Previous work has studied bandits with stochastic, non-anonymous delays and found that the regret increases only by an additive factor related to the expected delay. In this paper, we show that this additive regret increase can be maintained in the harder delayed, aggregated anonymous feedback setting when the expected delay (or a bound on it) is known. We provide an algorithm that matches the worst-case regret of the non-anonymous problem exactly when the delays are bounded, and up to logarithmic factors or an additive variance term when the delays are unbounded.
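To make the observation model concrete, here is a minimal simulation sketch of the feedback structure described above. All specifics (Bernoulli arm means, a geometric delay distribution, the uniformly random arm choices) are illustrative assumptions, not the paper's algorithm: at each round the player sees only the aggregated sum of whichever past rewards arrive that round, with no arm labels attached.

```python
import random

def simulate(K=3, horizon=20, seed=0):
    """Simulate delayed, aggregated anonymous feedback (illustrative sketch)."""
    rng = random.Random(seed)
    means = [0.2, 0.5, 0.8][:K]   # hypothetical Bernoulli arm means
    delay_p = 0.5                 # P(delay = d) = delay_p * (1 - delay_p)**d
    pending = {}                  # arrival round -> accumulated reward mass
    observations = []
    for t in range(horizon):
        arm = rng.randrange(K)    # play an arm (here: uniformly at random)
        reward = 1.0 if rng.random() < means[arm] else 0.0
        delay = 0
        while rng.random() >= delay_p:   # sample a geometric delay
            delay += 1
        arrival = t + delay
        pending[arrival] = pending.get(arrival, 0.0) + reward
        # The player observes only the sum of rewards arriving this round;
        # the identities of the arms that generated them are lost.
        observations.append(pending.pop(t, 0.0))
    return observations

obs = simulate()
```

Note that a single observation can aggregate rewards from several different rounds and arms, which is exactly why naive per-arm empirical means are no longer available in this setting.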