Learning to signal: analysis of a micro-level reinforcement model

Raffaele Argiento, Robin Pemantle, Brian Skyrms, Stanislav Volkov

Research output: Contribution to journal › Article › peer-review


Abstract

We consider the following signaling game. Nature plays first from the set {1,2}. Player 1 (the Sender) sees this and plays from the set {A,B}. Player 2 (the Receiver) sees only Player 1’s play and plays from the set {1,2}. Both players win if Player 2’s play equals Nature’s play and lose otherwise. Players are told whether they have won or lost, and the game is repeated. An urn scheme for learning coordination in this game is as follows. Each node of the decision tree for Players 1 and 2 contains an urn with balls of two colors for the two possible decisions. Players make decisions by drawing from the appropriate urns. After a win, each ball that was drawn is reinforced by adding another of the same color to the urn. A number of equilibria are possible for this game other than the optimal ones. However, we show that the urn scheme achieves asymptotically optimal coordination.
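The urn scheme from the abstract can be sketched in a short simulation: the Sender keeps one urn per state of Nature (balls labeled A/B) and the Receiver one urn per signal (balls labeled 1/2); after a win, each drawn ball is duplicated. This is a minimal illustration, not the authors' code; function names (`simulate`, `draw`) and the initial one-ball-per-color urns are our assumptions.

```python
import random

def draw(rng, urn):
    # Draw a ball with probability proportional to its count in the urn.
    total = sum(urn.values())
    r = rng.uniform(0, total)
    for color, count in urn.items():
        r -= count
        if r <= 0:
            return color
    return color  # guard against floating-point edge cases

def simulate(rounds=50000, seed=0):
    rng = random.Random(seed)
    # Sender: one urn per state of Nature; balls are the signals A/B.
    sender = {1: {"A": 1, "B": 1}, 2: {"A": 1, "B": 1}}
    # Receiver: one urn per observed signal; balls are the acts 1/2.
    receiver = {"A": {1: 1, 2: 1}, "B": {1: 1, 2: 1}}
    wins = 0
    for _ in range(rounds):
        state = rng.choice((1, 2))           # Nature plays first
        signal = draw(rng, sender[state])    # Sender sees the state
        act = draw(rng, receiver[signal])    # Receiver sees only the signal
        if act == state:                     # both players win together
            wins += 1
            sender[state][signal] += 1       # reinforce the drawn balls
            receiver[signal][act] += 1
    return wins, sender, receiver
```

Because reinforcement occurs only on wins, the total ball count in the Sender's (or Receiver's) urns equals the initial 4 plus the number of wins; over a long run the empirical win rate should climb well above the 1/2 achieved by uniform random play, consistent with the paper's convergence result.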
Original language: English
Pages (from-to): 373-390
Journal: Stochastic Processes and their Applications
Volume: 119
Issue number: 2
DOIs
Publication status: Published - 2009
Externally published: Yes

Subject classification (UKÄ)

  • Probability Theory and Statistics

Free keywords

  • Urn model
  • Stochastic approximation
  • Evolution
  • Game
  • Probability
  • Stable
  • Unstable
  • Two-player game
