Learning in unknown reward games: application to sensor networks

Chapman, A. C. and Leslie, D. S. and Rogers, A. and Jennings, N. R. (2014) Learning in unknown reward games: application to sensor networks. The Computer Journal, 57 (6). pp. 875-892. ISSN 0010-4620

Full text not available from this repository.

Abstract

This paper demonstrates a decentralized method for optimization using game-theoretic multi-agent techniques, applied to a sensor network management problem. Our first major contribution is to show how the marginal contribution utility design is used to construct an unknown-reward potential game formulation of the problem. This formulation exploits the sparse structure of sensor network problems, and allows us to apply a bound to the price of anarchy of the Nash equilibria of the induced game. Furthermore, since the game is a potential game, solutions can be found using multi-agent learning techniques. The techniques we derive use Q-learning to estimate an agent's rewards, while an action adaptation process responds to an agent's opponents’ behaviour. However, there are many different algorithmic configurations that could be used to solve these games. Thus, our second major contribution is an extensive evaluation of several action adaptation processes. Specifically, we compare six algorithms across a variety of parameter settings to ascertain the quality of the solutions they produce, their speed of convergence and their robustness to pre-specified parameter choices. Our results show that they each perform similarly across a wide range of parameters. There is, however, a significant effect from moving to a learning policy with sampling probabilities that go to zero too quickly for rewards to be accurately estimated.
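The combination the abstract describes, Q-learning to estimate each agent's rewards alongside an action adaptation process, can be illustrated with a minimal sketch. The class name, the epsilon-greedy adaptation rule, and the decay schedules below are illustrative assumptions, not the paper's exact algorithms; the sketch does, however, show the failure mode the abstract highlights, where sampling probabilities that decay too quickly starve the reward estimates.

```python
import random

class LearningAgent:
    """Sketch of one agent in an unknown-reward game: Q-learning
    maintains per-action reward estimates, while an epsilon-greedy
    rule (a stand-in for the paper's action adaptation processes)
    responds to the current estimates."""

    def __init__(self, actions, alpha=0.1):
        self.actions = list(actions)
        self.alpha = alpha                       # Q-learning step size
        self.q = {a: 0.0 for a in self.actions}  # estimated reward per action
        self.t = 0                               # play count

    def choose_action(self, epsilon_schedule):
        # Action adaptation: mostly exploit the current estimates, but
        # keep sampling every action so rewards can still be estimated.
        self.t += 1
        if random.random() < epsilon_schedule(self.t):
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[a])

    def update(self, action, reward):
        # Stateless Q-learning update of the reward estimate for the
        # action just played, given the observed (unknown-form) reward.
        self.q[action] += self.alpha * (reward - self.q[action])

# The abstract's caveat: a schedule like 1/t**2 sends the sampling
# probability to zero too quickly for rewards to be estimated, whereas
# a slower 1/t decay keeps every action sampled infinitely often.
slow_decay = lambda t: 1.0 / t
fast_decay = lambda t: 1.0 / t**2
```

In a sensor network instance, `reward` would be the agent's marginal contribution utility computed from the actions of its sparse neighbourhood, which is what makes the induced game a potential game.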

Item Type: Journal Article
Journal or Publication Title: The Computer Journal
Uncontrolled Keywords: /dk/atira/pure/subjectarea/asjc/1700
ID Code: 70683
Deposited On: 08 Sep 2014 10:54
Refereed?: Yes
Published?: Published
Last Modified: 12 Aug 2020 03:42