JaxMARL-HFT: GPU-Accelerated Large-Scale Multi-Agent Reinforcement Learning for High-Frequency Trading
ArXiv ID: 2511.02136
Authors: Valentin Mohl, Sascha Frey, Reuben Leyland, Kang Li, George Nigmatulin, Mihai Cucuringu, Stefan Zohren, Jakob Foerster, Anisoara Calinescu
Abstract
Agent-based modelling (ABM) approaches for high-frequency financial markets are difficult to calibrate and validate, partly due to the large parameter space created by defining fixed agent policies. Multi-agent reinforcement learning (MARL) enables more realistic agent behaviour and reduces the number of free parameters, but the heavy computational cost has so far limited research efforts. To address this, we introduce JaxMARL-HFT (JAX-based Multi-Agent Reinforcement Learning for High-Frequency Trading), the first GPU-accelerated open-source multi-agent reinforcement learning environment for high-frequency trading (HFT) on market-by-order (MBO) data. Extending the JaxMARL framework and building on the JAX-LOB implementation, JaxMARL-HFT is designed to handle a heterogeneous set of agents, enabling diverse observation/action spaces and reward functions. It is designed flexibly, so it can also be used for single-agent RL, or extended to act as an ABM with fixed-policy agents. Leveraging JAX enables up to a 240x reduction in end-to-end training time, compared with state-of-the-art reference implementations on the same hardware. This significant speed-up makes it feasible to exploit the large, granular datasets available in high-frequency trading, and to perform the extensive hyperparameter sweeps required for robust and efficient MARL research in trading. We demonstrate the use of JaxMARL-HFT with independent Proximal Policy Optimization (IPPO) for a two-player environment, with an order execution and a market making agent, using one year of LOB data (400 million orders), and show that these agents learn to outperform standard benchmarks. The code for the JaxMARL-HFT framework is available on GitHub.
Keywords: Multi-agent reinforcement learning, high-frequency trading, JAX, Limit Order Book, market making
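The abstract attributes the up-to-240x training speed-up to JAX. The core pattern behind such gains, vectorising an environment step across thousands of parallel instances with `jax.vmap` and compiling the batch with `jax.jit`, can be sketched with a toy transition function. This is an illustrative example only, not the JaxMARL-HFT API; `toy_step` and its state/reward are invented for the sketch.

```python
import jax
import jax.numpy as jnp

def toy_step(state, action):
    """Toy 'market' transition (hypothetical, for illustration only):
    state is a scalar price, action nudges it; reward penalises drift."""
    new_state = state + 0.01 * action
    reward = -jnp.abs(new_state)
    return new_state, reward

# Vectorise the single-environment step over a batch of environments,
# then JIT-compile the batched transition so it runs as fused GPU kernels.
batched_step = jax.jit(jax.vmap(toy_step))

n_envs = 1024
states = jnp.zeros(n_envs)
actions = jnp.ones(n_envs)

new_states, rewards = batched_step(states, actions)
print(new_states.shape, rewards.shape)  # (1024,) (1024,)
```

Because the whole batch is one compiled program, adding more parallel environments mostly costs GPU memory rather than Python overhead, which is what makes year-scale MBO datasets and large hyperparameter sweeps tractable.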
Complexity vs Empirical Score
- Math Complexity: 4.0/10
- Empirical Rigor: 8.0/10
- Quadrant: Street Traders
- Why: The paper focuses heavily on implementing a practical GPU-accelerated framework (JAX) and scaling experiments with real-world data (400M orders), but lacks advanced mathematical derivations or novel theory.
```mermaid
flowchart TD
A["Research Goal: Develop GPU-accelerated<br>Multi-Agent RL for HFT"] --> B["JaxMARL-HFT Framework Development"]
B --> C["Input: 1 Year MBO Data<br>400M Orders"]
C --> D{"Computational Process:<br>GPU-Accelerated Training"}
D --> E["240x Speed-Up vs SOTA<br>Enables Large-Scale Sweeps"]
E --> F["Agents: Market Making<br>& Order Execution"]
F --> G["Method: IPPO Training"]
G --> H["Key Findings:<br>Agents Outperform Benchmarks<br>Open Source GitHub Code"]
```
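The flowchart's "Method: IPPO Training" step refers to independent PPO: each agent keeps its own parameters and learns from its own reward, interacting with the others only through the shared environment. A minimal sketch of that independent-learners structure follows; it uses a simplified policy-gradient surrogate rather than the full clipped PPO objective, and the linear policies, observations, and advantages are invented for illustration, not taken from the paper.

```python
import jax
import jax.numpy as jnp

def policy_logits(params, obs):
    # Toy linear policy over 2 discrete actions (hypothetical).
    return obs @ params

def pg_loss(params, obs, action, advantage):
    # REINFORCE-style surrogate loss: -log pi(a|s) * advantage.
    logp = jax.nn.log_softmax(policy_logits(params, obs))[action]
    return -(logp * advantage)

key = jax.random.PRNGKey(0)
k1, k2 = jax.random.split(key)
# Independent parameters for the two heterogeneous agents:
# a market-making agent and an order-execution agent.
params_mm = jax.random.normal(k1, (4, 2)) * 0.1
params_ex = jax.random.normal(k2, (4, 2)) * 0.1

obs = jnp.ones(4)  # toy shared observation
# Each agent computes a gradient only through its OWN parameters,
# using its OWN (toy) advantage estimate -- the defining trait of IPPO.
grad_mm = jax.grad(pg_loss)(params_mm, obs, 0, 1.0)
grad_ex = jax.grad(pg_loss)(params_ex, obs, 1, -0.5)

lr = 0.01
params_mm = params_mm - lr * grad_mm
params_ex = params_ex - lr * grad_ex
print(params_mm.shape, params_ex.shape)  # (4, 2) (4, 2)
```

Because the updates are independent, this structure accommodates the heterogeneous observation/action spaces and reward functions the framework advertises: each agent's loss can be defined over its own spaces without any centralised critic.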