Learning from Expert Factors: Trajectory-level Reward Shaping for Formulaic Alpha Mining
ArXiv ID: 2507.20263
Authors: Junjie Zhao, Chengxi Zhang, Chenkai Wang, Peng Yang
Abstract
Reinforcement learning (RL) has successfully automated the complex process of mining formulaic alpha factors, enabling interpretable and profitable investment strategies. However, existing methods are hampered by the sparse rewards of the underlying Markov decision process: a meaningful signal arrives only after a complete expression has been generated and evaluated. This inefficiency limits exploration of the vast symbolic search space and destabilizes training. To address this, Trajectory-level Reward Shaping (TLRS), a novel reward shaping method, is proposed. TLRS provides dense, intermediate rewards by measuring subsequence-level similarity between partially generated expressions and a set of expert-designed formulas. A reward centering mechanism is further introduced to reduce training variance. Extensive experiments on six major Chinese and U.S. stock indices show that TLRS significantly improves the predictive power of the mined factors, boosting the Rank Information Coefficient by 9.29% over existing potential-based shaping algorithms. Notably, TLRS achieves a major leap in computational efficiency by reducing its time complexity with respect to the feature dimension from linear to constant, a significant improvement over distance-based baselines.
Keywords: Reinforcement Learning (RL), Alpha Factor Mining, Reward Shaping, Symbolic Regression, Rank Information Coefficient, Equities
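The core idea of the dense reward can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the token vocabulary, the toy expert formula, and the choice of contiguous-subsequence matching as the similarity measure are all assumptions made here for clarity.

```python
# Hedged sketch: dense intermediate reward from subsequence-level similarity
# between a partially generated expression and expert-designed formulas.
# Tokenization and the exact similarity definition are illustrative assumptions.

def subseq_similarity(partial, expert):
    """Fraction of the partial expression's contiguous token subsequences
    that also appear somewhere in the expert formula."""
    n = len(partial)
    if n == 0:
        return 0.0
    subs = [tuple(partial[i:j]) for i in range(n) for j in range(i + 1, n + 1)]
    expert_subs = {tuple(expert[i:j])
                   for i in range(len(expert))
                   for j in range(i + 1, len(expert) + 1)}
    hits = sum(1 for s in subs if s in expert_subs)
    return hits / len(subs)

def dense_reward(partial, experts):
    """Intermediate reward at each generation step: best similarity
    against the whole expert set, so partial matches are rewarded
    before the expression is complete."""
    return max(subseq_similarity(partial, e) for e in experts)

experts = [["close", "/", "delay", "close", "5"]]  # toy expert formula (hypothetical)
prefix = ["close", "/", "delay"]                   # partially generated expression
r = dense_reward(prefix, experts)                  # nonzero well before termination
```

Because the agent is scored after every token rather than only at episode end, the symbolic search receives gradient signal throughout generation, which is what distinguishes this shaping from the sparse terminal reward it replaces.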
Complexity vs Empirical Score
- Math Complexity: 8.0/10
- Empirical Rigor: 7.5/10
- Quadrant: Holy Grail
- Why: The paper presents advanced mathematics, including MDPs, reward shaping theory, and complexity analysis, while also demonstrating strong empirical validation with extensive experiments on six stock indices and significant performance metrics.
```mermaid
flowchart TD
A["Research Goal<br>Address sparse rewards in<br>Formulaic Alpha Mining via RL"] --> B{"Methodology: Trajectory-level<br>Reward Shaping (TLRS)"}
B --> C["Input: Expert Formulas<br>& Symbolic Search Space"]
C --> D["Compute Subsequence-Similarity<br>Dense Rewards"]
D --> E["Reward Centering<br>Reduce Training Variance"]
E --> F["RL Agent Optimization<br>with TLRS"]
F --> G["Evaluation on 6 Major Stock Indices"]
G --> H["Key Findings<br>9.29% Rank IC Increase<br>Constant Time Complexity"]
```
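The reward-centering step in the pipeline above can be sketched as subtracting a running mean from each shaped reward so the signal fed to the policy gradient is zero-centered. The incremental-mean update rule used here is an assumption for illustration, not necessarily the paper's exact estimator.

```python
# Hedged sketch of reward centering: subtract a running mean of observed
# rewards to zero-center the shaped signal and reduce gradient variance.
# The incremental-mean update is an illustrative choice, not the paper's spec.

class RewardCenterer:
    def __init__(self):
        self.mean = 0.0   # running estimate of the average reward
        self.count = 0    # number of rewards seen so far

    def center(self, r):
        """Update the running mean with r, then return the centered reward."""
        self.count += 1
        self.mean += (r - self.mean) / self.count  # incremental mean update
        return r - self.mean

rc = RewardCenterer()
# A constant reward stream is centered to (near) zero, removing its offset.
centered = [rc.center(r) for r in [1.0, 1.0, 1.0]]
```

Centering removes a constant offset common to all trajectories, which shrinks the variance of policy-gradient estimates without changing which actions are preferred.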