Mean–Variance Portfolio Selection by Continuous-Time Reinforcement Learning: Algorithms, Regret Analysis, and Empirical Study
ArXiv ID: 2412.16175 (https://arxiv.org/abs/2412.16175)
Authors: Unknown
Abstract
We study continuous-time mean–variance portfolio selection in markets where stock prices are diffusion processes driven by observable factors that are also diffusion processes, yet the coefficients of these processes are unknown. Based on the recently developed reinforcement learning (RL) theory for diffusion processes, we present a general data-driven RL algorithm that learns the pre-committed investment strategy directly without attempting to learn or estimate the market coefficients. For multi-stock Black–Scholes markets without factors, we further devise a baseline algorithm and prove its performance guarantee by deriving a sublinear regret bound in terms of the Sharpe ratio. For performance enhancement and practical implementation, we modify the baseline algorithm and carry out an extensive empirical study to compare its performance, in terms of a host of common metrics, with a large number of widely employed portfolio allocation strategies on S&P 500 constituents. The results demonstrate that the proposed continuous-time RL strategy is consistently among the best, especially in a volatile bear market, and decisively outperforms the model-based continuous-time counterparts by significant margins.
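For context, the pre-committed problem the abstract refers to is the classical continuous-time mean–variance formulation, sketched below in illustrative notation (the symbols $r$, $\mu$, $\sigma$, $z$, $w$ are chosen here for exposition and are not necessarily the paper's): the variance of terminal wealth is minimized subject to a target expected return, and the constraint is absorbed via a Lagrange multiplier $w$.

```latex
% Classical continuous-time mean--variance selection (illustrative notation):
% wealth x_t, dollar allocation u_t, interest rate r, drift mu, volatility sigma.
\begin{align*}
  dx_t &= \bigl[r\,x_t + u_t^{\top}(\mu - r\mathbf{1})\bigr]\,dt
          + u_t^{\top}\sigma\, dW_t,\\
  \min_{u}\ & \operatorname{Var}(x_T) \quad \text{subject to} \quad \mathbb{E}[x_T] = z.
\end{align*}
% Lagrangian reformulation: for a multiplier w, solve the unconstrained problem
\begin{equation*}
  \min_{u}\ \mathbb{E}\bigl[(x_T - w)^2\bigr] - (w - z)^2,
  \qquad \text{then choose } w \text{ so that } \mathbb{E}[x_T] = z.
\end{equation*}
```

The RL algorithms in the paper learn a strategy for this problem directly from sampled wealth paths, without ever estimating $\mu$ or $\sigma$.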
Keywords: Reinforcement Learning (RL), Mean–Variance Portfolio Selection, Diffusion Processes, Sublinear Regret Bound, Sharpe Ratio, Equities (Stocks)
Complexity vs Empirical Score
- Math Complexity: 8.5/10
- Empirical Rigor: 7.5/10
- Quadrant: Holy Grail
- Why: The paper employs advanced continuous-time stochastic analysis and proves sublinear regret bounds, indicating high mathematical density. It also features an extensive empirical study with real S&P 500 data, comparing multiple strategies across various metrics, demonstrating substantial implementation and backtesting rigor.
```mermaid
flowchart TD
A["Research Goal:<br>Mean-Variance Portfolio<br>Selection in Unknown Markets"]
B["Methodology:<br>Continuous-Time<br>Reinforcement Learning (RL)"]
C["Inputs:<br>S&P 500 Constituents<br>Real Market Data"]
D["Computational Process:<br>Direct Strategy Learning<br>No Model Estimation"]
E["Key Findings:<br>Sublinear Regret Bound<br>& Superior Sharpe Ratio"]
F["Outcomes:<br>Consistent Top Performance<br>Especially in Bear Markets"]
A --> B
B --> C
C --> D
D --> E
E --> F
```
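To make the "Direct Strategy Learning, No Model Estimation" box concrete, below is a minimal Python sketch in the spirit of exploratory (entropy-regularized) mean–variance RL: a Gaussian policy whose mean is linear in the gap between wealth and a Lagrange-multiplier target, with the multiplier updated by stochastic approximation and the policy slope updated model-free by finite differences. Everything here, including the simulated market, the step sizes, and the finite-difference update, is an illustrative assumption and not the paper's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Simulated one-stock Black-Scholes market; the learner never sees MU, SIGMA ---
MU, SIGMA, R = 0.08, 0.20, 0.02     # hypothetical market coefficients
T, N = 1.0, 50                      # horizon (years) and Euler time steps
DT = T / N
Z = 1.4                             # target expected terminal wealth

def rollout(phi1, phi2, w, x0=1.0):
    """One episode under the Gaussian exploratory policy
    u_t ~ N(-phi1 * (x_t - w), phi2); returns terminal wealth."""
    x = x0
    for _ in range(N):
        u = rng.normal(-phi1 * (x - w), np.sqrt(phi2))   # dollars in the stock
        x += (R * x + u * (MU - R)) * DT + u * SIGMA * np.sqrt(DT) * rng.standard_normal()
    return x

def terminal_cost(phi1, phi2, w, n_paths=16):
    """Monte Carlo estimate of the quadratic terminal cost E[(x_T - w)^2]."""
    return np.mean([(rollout(phi1, phi2, w) - w) ** 2 for _ in range(n_paths)])

# --- Model-free loop: learn the policy directly, no coefficient estimation ---
phi1, phi2, w = 0.5, 0.05, Z        # policy slope, exploration variance, multiplier
for episode in range(500):
    # Stochastic approximation of the Lagrange multiplier so that E[x_T] -> Z
    w -= 0.05 * (rollout(phi1, phi2, w) - Z)
    # Zeroth-order (finite-difference) update of the policy slope phi1
    eps = 0.05
    grad = (terminal_cost(phi1 + eps, phi2, w)
            - terminal_cost(phi1 - eps, phi2, w)) / (2 * eps)
    phi1 -= 0.1 * grad

print(f"learned slope phi1 = {phi1:.2f}, "
      f"oracle slope (MU - R) / SIGMA**2 = {(MU - R) / SIGMA**2:.2f}")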