Reinforcement Learning in Repeated Interaction Games

Jonathan Bendor, Dilip Mookherjee and Debraj Ray
Published/Copyright: March 30, 2001

Abstract

We study long-run implications of reinforcement learning when two players repeatedly interact with one another over multiple rounds to play a finite action game. Within each round, the players play the game many successive times with a fixed set of aspirations used to evaluate payoff experiences as successes or failures. The probability weight on successful actions is increased, while failures result in players trying alternative actions in subsequent rounds. The learning rule is supplemented by small amounts of inertia and random perturbations to the states of players. Aspirations are adjusted across successive rounds on the basis of the discrepancy between the average payoff and aspirations in the most recently concluded round. We define and characterize pure steady states of this model, and establish convergence to them under appropriate conditions. Pure steady states are shown to be individually rational, and are either Pareto-efficient or a protected Nash equilibrium of the stage game. Conversely, any Pareto-efficient and strictly individually rational action pair, or any strict protected Nash equilibrium, constitutes a pure steady state, to which the process converges from non-negligible sets of initial aspirations. Applications to games of coordination, cooperation, oligopoly, and electoral competition are discussed.
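The learning process described above can be made concrete with a small simulation. The Python sketch below is a minimal illustration under assumed functional forms, not the paper's specification: the linear probability-updating rule, the uniform-reset perturbation, the placement of the inertia parameter, the convex-combination aspiration adjustment, and all names (simulate, reinforce, step, tremble, adjust) are assumptions chosen for simplicity.

    import random


    def sample(p, rng):
        """Draw an action index from probability vector p."""
        r, acc = rng.random(), 0.0
        for i, pi in enumerate(p):
            acc += pi
            if r < acc:
                return i
        return len(p) - 1


    def reinforce(p, k, step):
        """Shift probability mass toward (step > 0) or away from (step < 0)
        action k, keeping p a valid probability distribution."""
        target = 1.0 if step > 0 else 0.0
        p[k] += abs(step) * (target - p[k])
        others = [j for j in range(len(p)) if j != k]
        rest = sum(p[j] for j in others)
        for j in others:
            # Rescale the remaining actions so the vector sums to one.
            p[j] = p[j] * (1.0 - p[k]) / rest if rest > 0 else (1.0 - p[k]) / len(others)


    def simulate(payoffs_row, payoffs_col, aspirations,
                 rounds=200, plays_per_round=500, step=0.1,
                 inertia=0.05, tremble=0.001, adjust=0.25, seed=0):
        """Aspiration-based reinforcement learning for two players in a
        finite-action stage game (illustrative functional forms only)."""
        rng = random.Random(seed)
        n = [len(payoffs_row), len(payoffs_row[0])]
        probs = [[1.0 / n[i]] * n[i] for i in range(2)]   # mixed strategies
        asp = list(aspirations)
        for _ in range(rounds):
            avg = [0.0, 0.0]
            # Within a round, aspirations stay fixed while the game is
            # played many successive times.
            for _ in range(plays_per_round):
                a = [sample(probs[0], rng), sample(probs[1], rng)]
                pay = [payoffs_row[a[0]][a[1]], payoffs_col[a[0]][a[1]]]
                for i in range(2):
                    avg[i] += pay[i] / plays_per_round
                    if pay[i] >= asp[i]:
                        reinforce(probs[i], a[i], step)    # success: reinforce the chosen action
                    elif rng.random() > inertia:
                        reinforce(probs[i], a[i], -step)   # failure: shift weight to alternatives
                    if rng.random() < tremble:             # rare random perturbation of the state
                        probs[i] = [1.0 / n[i]] * n[i]
            for i in range(2):
                # Across rounds, aspirations adjust toward the realized average payoff.
                asp[i] += adjust * (avg[i] - asp[i])
        return probs, asp


    # Example: a Prisoner's Dilemma (actions: cooperate, defect).  With
    # intermediate initial aspirations, the process may settle on the
    # Pareto-efficient (cooperate, cooperate) pair rather than the Nash outcome.
    pd_row = [[3, 0], [4, 1]]
    pd_col = [[3, 4], [0, 1]]
    probs, asp = simulate(pd_row, pd_col, aspirations=[2.5, 2.5])
    print(probs, asp)

The across-round aspiration update (a convex combination of the previous aspiration and the last round's average payoff) mirrors the adjustment described in the abstract; the remaining details are placeholders for whichever functional forms the paper actually uses.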

Published Online: 2001-03-30

©2011 Walter de Gruyter GmbH & Co. KG, Berlin/Boston
