EarnHFT: Efficient Hierarchical Reinforcement Learning for High Frequency Trading
ArXiv ID: 2309.12891
Authors: Unknown
Abstract
High-frequency trading (HFT) uses computer algorithms to make trading decisions on short time scales (e.g., second-level) and is widely used in the Cryptocurrency (Crypto) market (e.g., Bitcoin). Reinforcement learning (RL) in financial research has shown stellar performance on many quantitative trading tasks. However, most methods focus on low-frequency trading, e.g., day-level, and cannot be directly applied to HFT because of two challenges. First, RL for HFT involves dealing with extremely long trajectories (e.g., 2.4 million steps per month), which are hard to optimize and evaluate. Second, the dramatic price fluctuations and market trend changes of Crypto make existing algorithms fail to maintain satisfactory performance. To tackle these challenges, we propose an Efficient hieArchical Reinforcement learNing method for High Frequency Trading (EarnHFT), a novel three-stage hierarchical RL framework for HFT. In stage I, we compute a Q-teacher, i.e., the optimal action value based on dynamic programming, to enhance the performance and training efficiency of second-level RL agents. In stage II, we construct a pool of diverse RL agents for different market trends, distinguished by return rates: hundreds of RL agents are trained with different return-rate preferences, and only a tiny fraction of them are selected into the pool based on their profitability. In stage III, we train a minute-level router that dynamically picks a second-level agent from the pool to achieve stable performance across different markets. Through extensive experiments across various market trends on Crypto markets in a high-fidelity simulated trading environment, we demonstrate that EarnHFT significantly outperforms six state-of-the-art baselines on six popular financial criteria, exceeding the runner-up by 30% in profitability.
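The Stage-I Q-teacher can be illustrated with a small backward-induction sketch: when the price path is known in hindsight (as it is on training data), dynamic programming yields the optimal action value at every step, which can then supervise the second-level RL agents. The two-position setup, commission model, and all names below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def optimal_q_table(prices, n_positions=2, commission=0.0002):
    """Backward-induction 'Q-teacher' sketch: optimal action values for a
    discrete-position trading MDP over a known price path.

    Action a = target position held over the next step; reward is the
    mark-to-market PnL of that position minus a proportional commission
    on position changes.
    """
    T = len(prices) - 1
    positions = np.arange(n_positions)      # e.g. 0 = flat, 1 = long one unit
    V = np.zeros(n_positions)               # terminal value of each position
    Q = np.zeros((T, n_positions, n_positions))
    for t in range(T - 1, -1, -1):          # backward in time
        price_change = prices[t + 1] - prices[t]
        for pos in positions:
            for a in positions:
                cost = commission * abs(a - pos) * prices[t]
                Q[t, pos, a] = a * price_change - cost + V[a]
        V = Q[t].max(axis=1)                # Bellman optimality backup
    return Q

# toy usage on a five-tick price path
prices = [100.0, 101.0, 100.5, 102.0, 101.0]
Q = optimal_q_table(prices)
greedy_first_action = int(Q[0, 0].argmax())  # optimal action starting flat
```

Because the table is exact, it can serve as a regression target (a "teacher") for the second-level agents, which is far cheaper than learning values from scratch over millions of steps.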
Keywords: High-Frequency Trading (HFT), Hierarchical Reinforcement Learning, Dynamic Programming, Cryptocurrency Trading
Complexity vs Empirical Score
- Math Complexity: 7.5/10
- Empirical Rigor: 8.0/10
- Quadrant: Holy Grail
- Why: The paper employs advanced mathematical concepts like dynamic programming, KL divergence, and hierarchical Markov Decision Processes, placing it in the high complexity range. It demonstrates strong empirical rigor through extensive experiments in a high-fidelity simulation environment, evaluating against multiple baselines and financial criteria, making it highly backtest-ready.
```mermaid
flowchart TD
A["Research Goal<br>Develop efficient RL for HFT<br>in volatile Crypto markets"] --> B["Data: BTC/ETH Market Data<br>High-frequency time series"]
B --> C["Stage 1: Q-Teacher Pre-training<br>Dynamic Programming for<br>Optimal Action Values"]
C --> D["Stage 2: Agent Pool Creation<br>Train hundreds of RL agents<br>Filter by profitability & return preference"]
D --> E["Stage 3: Minute-Level Router<br>Dynamic Agent Selection<br>for stable trend adaptation"]
E --> F["Simulated Trading Environment<br>High-fidelity backtesting"]
F --> G["Key Outcomes<br>1. 30% higher profitability than SOTA<br>2. Outperforms 6 baselines<br>3. Robust across market trends"]
```
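The Stage-3 router above can be sketched as a minute-level selector: once per minute it scores every agent in the pool from minute-level market features and hands control of the next block of second-level steps to the top-scoring agent. The class, feature shape, and placeholder linear scorer below are assumptions for illustration; in the paper the router is itself a trained RL policy:

```python
import numpy as np

rng = np.random.default_rng(0)

class MinuteRouter:
    """Minimal sketch of a minute-level router over a pool of
    second-level agents. The random linear weights stand in for a
    trained value network that would score each agent per market state."""

    def __init__(self, n_features, n_agents):
        # placeholder weights; a real router would learn these
        self.W = rng.normal(size=(n_agents, n_features))

    def select(self, minute_features):
        """Return the index of the pooled agent to run for the next minute."""
        scores = self.W @ minute_features   # one score per pooled agent
        return int(np.argmax(scores))

# toy usage: a pool of 3 agents scored from 5 minute-level features
router = MinuteRouter(n_features=5, n_agents=3)
chosen = router.select(np.ones(5))
```

The design point is the time-scale split: the pooled agents act every second, while the router re-evaluates only once per minute, which keeps the routing decision cheap and lets it track regime changes rather than tick noise.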