Published February 17, 2026 | Version v1
Dissertation | Open Access

Analysis of Multi-Agent Reinforcement Learning from a Statistical Physics Perspective

Authors/Creators

Description

Multi-Agent Reinforcement Learning involves interacting agents whose learning processes are coupled through their shared environment, giving rise to emergent, collective dynamics that are sensitive to initial conditions and parameter variations. This thesis explores how a statistical physics perspective can illuminate the mechanisms governing collective behaviour, leveraging in particular the toolset of dynamical systems theory. By constructing deterministic approximation models of stochastic algorithms, this approach has uncovered some of the underlying dynamics. Nonetheless, even for the simple independent Q-learning algorithm with a Boltzmann exploration policy, significant discrepancies arise between the actual dynamics and previous approximation models. It is clarified that these models do not in fact approximate the original algorithm but rather interesting variants of it that simplify the learning dynamics. To resolve these inconsistencies, a new approximation model is proposed that explicitly incorporates the agents’ update frequencies and shows good agreement with the stochastic dynamics of the real system. The model’s utility is showcased by applying it to the question of spontaneous cooperation in social dilemmas. In the Prisoner’s Dilemma, it reveals that mutual cooperation is merely a metastable transient phase rather than a true equilibrium, and is therefore exploitable. Furthermore, a systematic analysis shows that increasing the discount factor exacerbates the “moving target” problem, preventing convergence to a joint policy by inducing oscillations. These oscillations arise from a supercritical Neimark–Sacker bifurcation, in which the unique stable fixed point of the learning dynamics turns into an unstable focus surrounded by a stable limit cycle. These phenomena are observed not only for independent learning but also in memory-one joint-action Q-learning on the iterated Prisoner’s Dilemma. Overall, these results demonstrate that even in seemingly trivial two-agent, two-action games, basic algorithms such as Q-learning can exhibit complex and unstable learning dynamics.
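The setup described above can be made concrete with a short simulation. The following is a minimal sketch, not the thesis’s own code: it runs two stateless independent Q-learners with Boltzmann (softmax) action selection on the Prisoner’s Dilemma. The payoff values, learning rate, discount factor, and temperature are illustrative assumptions, and the stateless formulation is only one of the variants the thesis considers.

```python
# Minimal sketch: two independent Q-learners with Boltzmann exploration
# on the repeated Prisoner's Dilemma. Hyperparameters and payoffs are
# illustrative assumptions, not values taken from the thesis.
import numpy as np

rng = np.random.default_rng(0)

# Prisoner's Dilemma payoffs: action 0 = cooperate, 1 = defect.
# payoff[i][a0, a1] is agent i's reward when agent 0 plays a0 and agent 1 plays a1.
R, S, T, P = 3.0, 0.0, 5.0, 1.0
payoff = [np.array([[R, S], [T, P]]),   # agent 0
          np.array([[R, T], [S, P]])]   # agent 1

alpha, gamma, tau = 0.05, 0.9, 0.5      # step size, discount factor, temperature
Q = [np.zeros(2), np.zeros(2)]          # stateless Q-values, one vector per agent

def boltzmann(q, tau):
    """Softmax (Boltzmann) action probabilities at temperature tau."""
    z = np.exp((q - q.max()) / tau)     # subtract max for numerical stability
    return z / z.sum()

for step in range(50_000):
    probs = [boltzmann(Q[i], tau) for i in range(2)]
    a = [rng.choice(2, p=probs[i]) for i in range(2)]
    for i in range(2):
        r = payoff[i][a[0], a[1]]
        # Stateless Q-learning update with a discounted bootstrap term.
        Q[i][a[i]] += alpha * (r + gamma * Q[i].max() - Q[i][a[i]])

print("Final cooperation probabilities:",
      [round(float(boltzmann(Q[i], tau)[0]), 3) for i in range(2)])
```

Sampled runs of this kind are the stochastic dynamics that the deterministic approximation models discussed in the abstract are meant to capture; varying gamma in such a simulation is one way to probe the oscillatory regime described above.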

Files (13.9 MB)

Master_thesis_Goll.pdf
13.9 MB
md5:6d609657345d2a32d57144372b38e291