Deviations from the Nash equilibrium and emergence of tacit collusion in a two-player optimal execution game with reinforcement learning

ArXiv ID: 2408.11773

Authors: Unknown

Abstract

The use of reinforcement learning algorithms in financial trading is becoming increasingly prevalent. However, the autonomous nature of these algorithms can lead to unexpected outcomes that deviate from traditional game-theoretical predictions and may even destabilize markets. In this study, we examine a scenario in which two autonomous agents, modeled with Double Deep Q-Learning, learn to liquidate the same asset optimally in the presence of market impact, using the Almgren-Chriss (2000) framework. Our results show that the strategies learned by the agents deviate significantly from the Nash equilibrium of the corresponding market impact game. Notably, the learned strategies exhibit tacit collusion, closely aligning with the Pareto-optimal solution. We further explore how different levels of market volatility influence the agents’ performance and the equilibria they discover, including scenarios where volatility differs between the training and testing phases.
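For intuition about the setting, here is a minimal sketch (not the paper's implementation) of a two-seller, discrete-time execution environment in the spirit of Almgren and Chriss (2000): the mid-price follows an arithmetic random walk with volatility `sigma`, combined selling by the two agents moves the price through a linear permanent impact `gamma`, and each agent's own trade executes at a linearly worse price through a temporary impact `eta`. All class and parameter names and the numeric values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

class TwoAgentExecutionEnv:
    """Minimal two-seller Almgren-Chriss-style environment (illustrative sketch).

    Assumed linear-impact dynamics per step:
        S_{t+1} = S_t + sigma * sqrt(dt) * Z_t - gamma * (v1 + v2)   # permanent impact
        execution price for agent i:  S_t - eta * v_i                # temporary impact
    where v_i is the number of shares agent i sells in the step.
    """

    def __init__(self, X0=1_000, T=1.0, n_steps=10, S0=100.0,
                 sigma=0.5, gamma=2.5e-5, eta=2.5e-4, seed=None):
        self.X0, self.T, self.n_steps, self.S0 = X0, T, n_steps, S0
        self.dt = T / n_steps
        self.sigma, self.gamma, self.eta = sigma, gamma, eta
        self.rng = np.random.default_rng(seed)
        self.reset()

    def reset(self):
        self.t = 0
        self.S = self.S0
        self.inventory = np.array([self.X0, self.X0], dtype=float)
        return self._state()

    def _state(self):
        # Each agent observes the remaining time fraction and both inventory fractions.
        return np.array([1.0 - self.t / self.n_steps,
                         self.inventory[0] / self.X0,
                         self.inventory[1] / self.X0])

    def step(self, v):
        """v = (v1, v2): shares each agent sells this step (clipped to inventory)."""
        v = np.minimum(np.asarray(v, dtype=float), self.inventory)
        exec_prices = self.S - self.eta * v        # temporary impact hits each agent separately
        rewards = exec_prices * v                  # per-step liquidation revenue
        # Permanent impact depends on the *combined* selling pressure of both agents.
        noise = self.sigma * np.sqrt(self.dt) * self.rng.standard_normal()
        self.S = self.S + noise - self.gamma * v.sum()
        self.inventory -= v
        self.t += 1
        done = self.t >= self.n_steps
        return self._state(), rewards, done
```

A uniform TWAP-like schedule corresponds to calling `env.step((X0/n_steps, X0/n_steps))` at every step; the interest of the paper is how learning agents deviate from such schedules given each other's behavior.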

Keywords: Reinforcement Learning, Double Deep Q-Learning, Market Impact, Almgren-Chriss Framework, Tacit Collusion, Equities / Single Asset Trading

Complexity vs Empirical Score

  • Math Complexity: 7.5/10
  • Empirical Rigor: 6.0/10
  • Quadrant: Holy Grail
  • Why: The paper combines advanced mathematical machinery, including the Almgren-Chriss market impact model, game-theoretic derivations of Nash and Pareto-optimal equilibria, and reinforcement learning theory, which accounts for the high mathematical-complexity score. Empirically, it trains Double Deep Q-Learning agents in simulated trading environments, compares the learned strategies to theoretical benchmarks, and tests robustness under volatility regimes that differ between training and testing, demonstrating substantial implementation rigor (a minimal Double DQN sketch follows the flowchart below).

```mermaid
flowchart TD
  A["Research Goal: <br>How do RL agents in optimal execution <br>deviate from Nash Equilibrium?"] --> B["Methodology: <br>Double Deep Q-Learning Agents"]
  B --> C["Data/Inputs: <br>Almgren-Chriss Framework <br>with Market Impact & Volatility"]
  C --> D["Computational Process: <br>Training two autonomous agents <br>to liquidate same asset"]
  D --> E["Outcome 1: <br>Significant deviation <br>from Nash Equilibrium"]
  D --> F["Outcome 2: <br>Emergence of Tacit Collusion <br>(Pareto-optimal strategy)"]
  D --> G["Outcome 3: <br>Volatility-dependent performance <br>and equilibria discovery"]
```
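For reference on the methodology node ("Double Deep Q-Learning Agents"), the sketch below shows the standard Double DQN target computation (van Hasselt et al.): the online network selects the next action and the target network evaluates it, which mitigates the overestimation bias of vanilla Q-learning. This is a generic sketch, not the authors' training loop; the discount factor, loss choice, and batch layout (with `dones` as a float tensor) are assumptions.

```python
import torch

def double_dqn_targets(online_net, target_net, rewards, next_states, dones, gamma=0.99):
    """Double DQN targets: the online net picks the next action, the target net scores it."""
    with torch.no_grad():
        # Action selection with the online network (argmax over Q-values).
        next_actions = online_net(next_states).argmax(dim=1, keepdim=True)
        # Action evaluation with the slow-moving target network.
        next_q = target_net(next_states).gather(1, next_actions).squeeze(1)
        # Terminal states carry no future value (dones is 1.0 at episode end, else 0.0).
        return rewards + gamma * next_q * (1.0 - dones)

def td_step(online_net, target_net, optimizer, batch, gamma=0.99):
    """One gradient step on the temporal-difference error for a sampled batch."""
    states, actions, rewards, next_states, dones = batch
    q = online_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    targets = double_dqn_targets(online_net, target_net, rewards, next_states, dones, gamma)
    loss = torch.nn.functional.smooth_l1_loss(q, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In the paper's setting, each of the two agents would run such an update independently on its own replay buffer, so any coordination that emerges is tacit rather than built into the learning rule.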