When AI Trading Agents Compete: Adverse Selection of Meta-Orders by Reinforcement Learning-Based Market Making

ArXiv ID: 2510.27334

Authors: Ali Raza Jafree, Konark Jain, Nick Firoozye

Abstract

We investigate the mechanisms by which medium-frequency trading agents are adversely selected by opportunistic high-frequency traders. We use reinforcement learning (RL) within a Hawkes Limit Order Book (LOB) model to replicate the behaviours of high-frequency market makers. In contrast to classical models with exogenous price-impact assumptions, the Hawkes model accounts for endogenous price impact and other key properties of the market (Jain et al. 2024a). Since it is impractical for a real-world market maker to update its strategy on every LOB event, we formulate the high-frequency market-making agent within an impulse-control reinforcement learning framework (Jain et al. 2025). The RL agent in the simulation is trained with Proximal Policy Optimisation (PPO) and self-imitation learning. To replicate the adverse selection phenomenon, we let the RL agent trade against a medium-frequency trader (MFT) executing a meta-order and demonstrate that, after training against the MFT execution agent, the RL market-making agent learns to capitalise on the price drift induced by the meta-order. Recent empirical studies have shown that medium-frequency traders are increasingly subject to adverse selection by high-frequency trading agents, and as high-frequency trading continues to proliferate across financial markets, the slippage costs incurred by medium-frequency traders are likely to rise. However, we do not observe that the RL market-making agent's increased profits necessarily cause significantly higher slippage for the MFT agent.
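
The self-exciting order flow at the heart of a Hawkes LOB model is what makes price impact endogenous: each order transiently raises the arrival intensity of further orders. The sketch below simulates a one-dimensional Hawkes process with an exponential kernel via Ogata's thinning algorithm; it is a minimal illustration of that clustering mechanism, not the multivariate Hawkes LOB of Jain et al. (2024a), and all parameter values are illustrative.

```python
import numpy as np

def simulate_hawkes(mu, alpha, beta, horizon, seed=0):
    """1-D Hawkes process with exponential kernel, via Ogata's thinning.

    Intensity: lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i)).
    Stationary when alpha / beta < 1.
    """
    rng = np.random.default_rng(seed)
    t, events = 0.0, []
    while True:
        # The intensity decays between events, so lambda at the current time
        # upper-bounds lambda over the waiting interval ahead.
        lam_bar = mu + sum(alpha * np.exp(-beta * (t - ti)) for ti in events)
        t += rng.exponential(1.0 / lam_bar)
        if t >= horizon:
            break
        lam_t = mu + sum(alpha * np.exp(-beta * (t - ti)) for ti in events)
        if rng.uniform() <= lam_t / lam_bar:   # thinning: accept w.p. lambda(t)/lam_bar
            events.append(t)                   # accepted event excites future arrivals
    return np.array(events)

# Each accepted order transiently raises the arrival rate of further orders,
# producing the clustered flow behind endogenous price impact.
arrivals = simulate_hawkes(mu=0.5, alpha=0.8, beta=1.2, horizon=60.0)
print(f"{arrivals.size} clustered arrivals in 60s")
```

Pushing the ratio alpha/beta toward 1 produces increasingly heavy event clustering, which is the regime where order flow feeds back into prices rather than impact being imposed exogenously.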

Keywords: High-Frequency Trading, Hawkes Limit Order Book, adverse selection, Proximal Policy Optimization, market making

Complexity vs Empirical Score

  • Math Complexity: 8.0/10
  • Empirical Rigor: 3.0/10
  • Quadrant: Lab Rats
  • Why: The paper employs advanced mathematical modeling (Hawkes processes, Hamilton-Jacobi-Bellman PDEs, impulse-control RL) but is implemented in a simulated environment using a provided codebase rather than real market data or live backtesting. A toy illustration of the impulse-control idea is sketched just below.
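
The impulse-control formulation replaces per-event decision-making with a policy that chooses *when* to act, paying a fixed cost per intervention. The toy loop below illustrates that structure with a hypothetical threshold rule standing in for the learned PPO policy; `toy_lob_step`, the cost value, and the quoting logic are illustrative assumptions, not the paper's environment.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_lob_step(mid, quotes, rng):
    """Stand-in for one Hawkes-LOB event: mid moves; resting quotes may fill."""
    mid += rng.normal(0.0, 0.01)                       # placeholder price move
    fill_pnl = 0.0
    if quotes is not None:
        bid, ask = quotes
        half_spread = 0.5 * (ask - bid)
        if mid <= bid:                                 # someone sold into our bid
            fill_pnl += half_spread
        if mid >= ask:                                 # someone bought our ask
            fill_pnl += half_spread
    return mid, fill_pnl

def threshold_policy(mid, quotes, tol=0.02):
    """Impulse rule: intervene only when quotes drift too far from the mid."""
    if quotes is None or abs(mid - 0.5 * (quotes[0] + quotes[1])) > tol:
        return True, (mid - 0.01, mid + 0.01)          # re-centre a fixed spread
    return False, quotes

COST = 0.005                                           # fixed cost per intervention
mid, quotes, pnl, n_actions = 100.0, None, 0.0, 0
for _ in range(10_000):
    act, quotes = threshold_policy(mid, quotes)
    if act:                                            # impulse control: acting is
        pnl -= COST                                    # costly, so the agent must
        n_actions += 1                                 # learn *when* to re-quote
    mid, fill = toy_lob_step(mid, quotes, rng)
    pnl += fill
print(f"interventions={n_actions}, pnl={pnl:.2f}")
```

The per-intervention cost is what distinguishes this from a standard MDP acted on at every event: the optimal policy intervenes sparsely, which is also what makes the formulation tractable for a realistic high-frequency market maker.
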
  flowchart TD
    A["Research Goal<br>How do RL-based Market Makers<br>Adversely Select MFTs?"]
    subgraph B ["Methodology"]
        B1["Hawkes LOB Model<br>Endogenous Price Impact"]
        B2["RL Market-Making Agent<br>Impulse Control + PPO"]
    end
    subgraph C ["Computational Process"]
        C1["Simulation Setup<br>RL vs MFT Meta-Order"]
        C2["Training Phase<br>Learn to Capitalize on Drift"]
    end
    subgraph D ["Key Findings"]
        D1["RL Agent Profits<br>from MFT Price Drift"]
        D2["No Significant Increase<br>in MFT Slippage Costs"]
    end
    A --> B
    B --> C
    C --> D
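
The slippage in the findings above is typically measured as implementation shortfall: the signed gap between the meta-order's volume-weighted fill price and the price at the decision time. A minimal version of that calculation, on hypothetical fills, is sketched below; the paper's exact cost definition may differ.

```python
def implementation_shortfall(fills, arrival_price, side=+1):
    """Per-share slippage of a meta-order vs. its arrival (decision) price.

    fills: iterable of (price, qty); side: +1 for a buy meta-order, -1 for a sell.
    Positive values mean the trader paid up relative to the arrival price,
    e.g. because its own child orders induced drift that a market maker rode.
    """
    total_qty = sum(q for _, q in fills)
    vwap = sum(p * q for p, q in fills) / total_qty
    return side * (vwap - arrival_price)

# Hypothetical buy meta-order whose own child orders push the price upward:
fills = [(100.00, 50), (100.02, 50), (100.05, 50), (100.08, 50)]
print(f"slippage per share: {implementation_shortfall(fills, 100.00):+.4f}")
```

Under this metric, the paper's nuance is that the RL market maker can profit from the meta-order's drift without that number for the MFT rising significantly.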