Abstract
This paper presents a reinforcement learning algorithm and provides conditions for global convergence to Nash equilibria. For several reinforcement learning schemes, including the ones proposed here, ruling out convergence to action profiles that are not Nash equilibria can be nontrivial, unless the step-size sequence is appropriately tailored to the specifics of the game. We sidestep these issues by introducing a new class of reinforcement learning schemes in which the strategy of each agent is perturbed by a state-dependent perturbation function. In contrast to prior work on equilibrium selection in games, where perturbation functions are globally state dependent, the perturbation function here is assumed to be local, i.e., it depends only on the agent's own strategy. We provide conditions under which the agents' strategies converge almost surely to an arbitrarily small neighborhood of the set of Nash equilibria. We further specialize the results to a class of potential games.
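To make the idea concrete, the following is a minimal illustrative sketch of a perturbed reinforcement learning scheme of the general kind the abstract describes: each agent reinforces the action it just played and then applies a perturbation that depends only on its own strategy. The specific update rule, the shape and magnitude of the perturbation, the decreasing step size, and the two-player coordination game (a simple potential game) are all assumptions made for illustration, not the authors' exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-player coordination game (a potential game):
# payoff 1 if the agents choose the same action, 0 otherwise.
def payoff(a1, a2):
    return 1.0 if a1 == a2 else 0.0

def local_perturbation(x, magnitude=0.01):
    """Local, state-dependent perturbation: it depends only on the agent's
    own strategy x. This particular shape (pushing toward the uniform
    strategy, more strongly near the boundary of the simplex) is an
    illustrative assumption."""
    uniform = np.ones_like(x) / len(x)
    boundary_closeness = 1.0 - len(x) * x.min()   # 0 at uniform, ~1 near a vertex
    return magnitude * boundary_closeness * (uniform - x)

def step(x, action, reward, gamma):
    """One reinforcement step toward the played action, followed by the
    local perturbation; both increments sum to zero, so x stays on the simplex."""
    e = np.zeros_like(x)
    e[action] = 1.0
    x = x + gamma * reward * (e - x)      # reinforcement (replicator-like) update
    x = x + local_perturbation(x)         # strategy-dependent perturbation
    return x / x.sum()                    # numerical safeguard

x1 = np.array([0.5, 0.5])
x2 = np.array([0.5, 0.5])
for t in range(1, 5001):
    gamma = 1.0 / t                       # decreasing step-size sequence (assumed)
    a1 = rng.choice(2, p=x1)
    a2 = rng.choice(2, p=x2)
    x1 = step(x1, a1, payoff(a1, a2), gamma)
    x2 = step(x2, a2, payoff(a2, a1), gamma)

# In this toy game the strategies typically concentrate near one of the
# pure Nash equilibria, up to the small bias introduced by the perturbation.
print("learned strategies:", x1, x2)
```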
Original language | English
---|---
Publication status | Accepted/In press - 2011
Event | 50th IEEE Conference on Decision and Control and European Control Conference, 2011 - Orlando, Florida, United States
Duration | 2011 Dec 12 → 2011 Dec 15
Conference number | 50
Internet address | http://www.ieeecss.org/CAB/conferences/cdcecc2011/
Conference
Conference | 50th IEEE Conference on Decision and Control and European Control Conference, 2011
---|---
Abbreviated title | cdcecc2011
Country/Territory | United States
City | Orlando, Florida
Period | 2011/12/12 → 2011/12/15
Internet address | http://www.ieeecss.org/CAB/conferences/cdcecc2011/
Subject classification (UKÄ)
- Control Engineering