Exploratory Mean-Variance Portfolio Optimization with Regime-Switching Market Dynamics

ArXiv ID: 2501.16659

Authors: Unknown

Abstract

We study the continuous-time Mean-Variance (MV) portfolio optimization problem in a regime-switching market and apply reinforcement learning (RL) techniques to guide informed exploration of the control space. We introduce and solve the Exploratory Mean-Variance with Regime Switching (EMVRS) problem, and we present a Policy Improvement Theorem. Further, we recognize that the widely applied Temporal Difference (TD) learning is not adequate in the EMVRS context, so we turn to Orthogonality Condition (OC) learning, which leverages the martingale property of the optimal value function induced by the analytical solution to EMVRS. We design an RL algorithm whose parameterization is expressed directly in terms of the market parameters, and we propose an updating scheme for each parameter. Our empirical results demonstrate the superiority of OC learning over TD learning, with the market parameters converging clearly towards their corresponding "ground truth" values in a simulated market scenario. In a real market data study, EMVRS with OC learning outperforms its counterparts, delivering the highest mean and reasonably low volatility of annualized portfolio returns.
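The abstract compresses two technical ingredients: an entropy-regularized ("exploratory") objective over randomized controls, and a martingale-based orthogonality condition that replaces the TD error in policy evaluation. The sketch below uses generic notation from the exploratory-control literature (λ is an exploration temperature, π a randomized control, V* the optimal value function, ξ an adapted test process, α the market regime); it is an illustrative formulation, not necessarily the paper's exact one, and for the MV problem the terminal reward is typically quadratic after a Lagrangian embedding of the variance constraint.

```latex
% Sketch only: generic exploratory-control notation, not the paper's exact formulation.
\begin{align*}
  % Entropy-regularized ("exploratory") objective over randomized controls \pi:
  J(\pi) &= \mathbb{E}\Big[\, R\big(X^{\pi}_T\big)
            + \lambda \int_0^T \mathcal{H}(\pi_t)\, dt \Big],
  \qquad
  \mathcal{H}(\pi_t) = -\int_{\mathcal{A}} \pi_t(a)\,\ln \pi_t(a)\, da, \\[4pt]
  % Martingale property of the optimal value process along the wealth X_t and
  % the regime \alpha_t, and the induced orthogonality condition for any
  % adapted test process \xi_t (the basis of OC learning):
  M_t &:= V^{*}\big(t, X_t, \alpha_t\big) \ \text{is a martingale}
  \;\Longrightarrow\;
  \mathbb{E}\Big[\int_0^T \xi_t \, dM_t\Big] = 0 .
\end{align*}
```

OC learning updates the parameters so that sample averages of these inner products vanish for a chosen family of test processes, rather than minimizing a squared TD error.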

Keywords: Reinforcement Learning (RL), Regime Switching, Mean-Variance Optimization, Orthogonality Condition Learning, Exploratory Control, Equities

Complexity vs Empirical Score

  • Math Complexity: 9.5/10
  • Empirical Rigor: 7.0/10
  • Quadrant: Holy Grail
  • Why: The paper uses advanced continuous-time stochastic control, Lagrangian duality, and RL theory, indicating very high math complexity. It includes both simulated convergence tests and real-market backtests with performance metrics, showing strong empirical rigor.

Methodology Flowchart

flowchart TD
  A["Research Goal: How to solve MV Portfolio Optimization<br>with Regime Switching (EMVRS) using RL?"] --> B["Methodology: Exploratory MV with Regime Switching<br>Policy Improvement Theorem"]
  B --> C["Key Innovation:<br>Orthogonality Condition (OC) Learning<br>vs Standard TD Learning"]
  C --> D["Data & Simulation:<br>Simulated Markets with 'True' Parameters"]
  D --> E["Computational Process:<br>RL Algorithm with Market Parameter Updates"]
  E --> F["Key Findings:<br>1. OC Learning converges to true parameters<br>2. OC outperforms TD in accuracy<br>3. Real market: Highest returns & low volatility"]
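To make the "Data & Simulation" step concrete, here is a minimal sketch of a two-regime switching market simulator of the kind used to generate paths with known "true" parameters. The regime count, drift/volatility values, generator matrix, and the function name simulate_regime_switching_market are illustrative assumptions, not the paper's reported setup.

```python
import numpy as np

def simulate_regime_switching_market(
    T=1.0,                          # horizon in years
    n_steps=252,                    # daily steps
    mu=(0.10, -0.05),               # hypothetical drift per regime (bull, bear)
    sigma=(0.15, 0.30),             # hypothetical volatility per regime
    Q=((-2.0, 2.0), (6.0, -6.0)),   # generator matrix of the regime Markov chain
    s0=1.0,
    seed=0,
):
    """Simulate one price path under a two-state continuous-time Markov regime chain."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    Q = np.asarray(Q)
    regime = 0
    prices = np.empty(n_steps + 1)
    regimes = np.empty(n_steps + 1, dtype=int)
    prices[0], regimes[0] = s0, regime
    for k in range(n_steps):
        # Regime switch over dt (first-order approximation of exp(Q dt)).
        if rng.random() < -Q[regime, regime] * dt:
            regime = 1 - regime
        # Log-normal price increment with regime-dependent drift and volatility.
        z = rng.standard_normal()
        prices[k + 1] = prices[k] * np.exp(
            (mu[regime] - 0.5 * sigma[regime] ** 2) * dt
            + sigma[regime] * np.sqrt(dt) * z
        )
        regimes[k + 1] = regime
    return prices, regimes

if __name__ == "__main__":
    prices, regimes = simulate_regime_switching_market()
    print(prices[:5])
    print(regimes[:20])
```

Paths generated this way serve as the ground truth against which the learned market parameters can be checked for convergence.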