Continuous-time reinforcement learning for optimal switching over multiple regimes
ArXiv ID: 2512.04697
Authors: Yijie Huang, Mengge Li, Xiang Yu, Zhou Zhou
Abstract
This paper studies continuous-time reinforcement learning (RL) for optimal switching problems across multiple regimes. We consider an exploratory formulation under entropy regularization in which the agent randomizes both the timing of switches and the selection of regimes through the generator matrix of an associated continuous-time finite-state Markov chain. We establish the well-posedness of the associated system of Hamilton-Jacobi-Bellman (HJB) equations and provide a characterization of the optimal policy. Policy improvement and the convergence of policy iteration are rigorously established by analyzing the system of equations. We also show the convergence of the value function in the exploratory formulation towards the value function in the classical formulation as the temperature parameter vanishes. Finally, a reinforcement learning algorithm is devised and implemented by invoking policy evaluation based on the martingale characterization. Our numerical examples, with the aid of neural networks, illustrate the effectiveness of the proposed RL algorithm.
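To make the exploratory formulation concrete, the display below is a minimal illustrative sketch under assumed notation (the reward $f$, switching costs $c_{ij}$, intensities $\lambda^{ij}_t$, discount rate $r$, temperature $\gamma$, and the exact form of the entropy term are all placeholders for illustration, not the paper's definitions): the agent controls the off-diagonal entries of the regime chain's generator matrix, and an entropy term with temperature $\gamma$ rewards randomization; as the abstract states, the exploratory value is shown to converge to the classical switching value as $\gamma$ vanishes.

```latex
% Illustrative entropy-regularized switching objective (schematic notation,
% not the paper's exact formulation).
\[
V^{\gamma}(x,i) \;=\; \sup_{\lambda}\;
\mathbb{E}\Big[\int_0^{\infty} e^{-r t}\Big(
   f(X_t, I_t)
   \;-\; \sum_{j \neq I_t} c_{I_t j}\,\lambda^{I_t j}_t
   \;+\; \gamma\, \mathcal{H}\big(\lambda^{I_t \cdot}_t\big)
\Big)\,dt \;\Big|\; X_0 = x,\ I_0 = i\Big],
\]
where $\lambda^{ij}_t \ge 0$ are the switching intensities (off-diagonal generator
entries) of the regime chain $I_t$, $c_{ij}$ are switching costs, and $\mathcal{H}$
is an entropy-type regularizer on the intensities. A Gibbs/softmax-type rule of the
schematic form $\lambda^{ij,*} \propto \exp\!\big((V(x,j) - V(x,i) - c_{ij})/\gamma\big)$
is the kind of optimal-policy characterization one would expect the HJB analysis to
deliver (again, schematic rather than the paper's exact result).
```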
Keywords: reinforcement learning, optimal switching, Hamilton-Jacobi-Bellman (HJB), entropy regularization, continuous-time Markov chain, Multi-Asset/Regime Switching
Complexity vs Empirical Score
- Math Complexity: 9.5/10
- Empirical Rigor: 4.0/10
- Quadrant: Lab Rats
- Why: The paper is mathematically dense, featuring advanced stochastic analysis, systems of Hamilton-Jacobi-Bellman (HJB) equations, and rigorous convergence proofs. Although it includes a numerical implementation with neural networks, the core contribution is theoretical: analytical proofs carry most of the rigor, with the focus on PDE well-posedness and convergence rather than extensive backtesting on real financial datasets.
```mermaid
flowchart TD
    A["Research Goal<br>Continuous-time RL for Optimal Switching"] --> B["Methodology: Entropy-Regularized Formulation"]
    B --> C{"Analytical Derivation"}
    C --> D["HJB Equations &<br>Optimal Policy Characterization"]
    C --> E["Convergence Analysis<br>Exploratory → Classical Value"]
    B --> F["Algorithm Design<br>RL with Policy Evaluation"]
    F --> G["Computational Process<br>Neural Network Implementation"]
    D & E & G --> H["Outcomes<br>Validated RL Algorithm &<br>Theoretical Guarantees"]
```
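The flowchart's algorithmic branch (policy evaluation via the martingale characterization, implemented with neural networks) can be illustrated with a toy sketch. The snippet below is a minimal, hypothetical Python/PyTorch example, not the authors' implementation: it simulates a two-regime toy diffusion under a fixed randomized switching policy and fits a value network by minimizing a one-step temporal-difference residual, which is one simple proxy for enforcing the martingale property of the discounted value along sample paths. The dynamics, reward, switching intensity, and network sizes are all assumptions made for illustration.

```python
# Hypothetical sketch of neural-network policy evaluation for a randomized
# switching policy; the dynamics, reward, and intensity below are toy choices.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

N_REGIMES, DT, R = 2, 0.01, 0.05        # two regimes, time step, discount rate
SWITCH_INTENSITY = 1.0                  # fixed randomized policy: constant switching intensity

class ValueNet(nn.Module):
    """V_theta(x, i): value of continuous state x in regime i."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1 + N_REGIMES, 32), nn.Tanh(), nn.Linear(32, 1))
    def forward(self, x, i):
        onehot = F.one_hot(i, N_REGIMES).float()
        return self.net(torch.cat([x.unsqueeze(-1), onehot], dim=-1)).squeeze(-1)

def simulate(n_paths=256, n_steps=100):
    """Euler paths of a toy regime-switching diffusion under the fixed policy."""
    x = torch.randn(n_paths)
    i = torch.randint(0, N_REGIMES, (n_paths,))
    xs, regimes, rewards = [x], [i], []
    for _ in range(n_steps):
        jump = torch.rand(n_paths) < SWITCH_INTENSITY * DT   # thinning approximation
        i = torch.where(jump, 1 - i, i)                      # flip regime on a jump
        mu = torch.where(i == 0, torch.tensor(0.5), torch.tensor(-0.5))
        x = x + mu * DT + 0.2 * DT**0.5 * torch.randn(n_paths)
        rewards.append(-x**2 + (i == 1).float())             # toy running reward
        xs.append(x)
        regimes.append(i)
    return xs, regimes, rewards

V = ValueNet()
opt = torch.optim.Adam(V.parameters(), lr=1e-3)
for epoch in range(100):
    xs, regimes, rewards = simulate()
    loss = torch.tensor(0.0)
    for t in range(len(rewards)):
        v_now = V(xs[t], regimes[t])
        v_next = V(xs[t + 1], regimes[t + 1]).detach()
        # One-step TD residual: along the true value function, the discounted
        # value plus accumulated running reward behaves like a martingale.
        target = rewards[t] * DT + (1.0 - R * DT) * v_next
        loss = loss + ((v_now - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    if epoch % 20 == 0:
        print(f"epoch {epoch:3d}  martingale-loss {loss.item():.4f}")
```

In this sketch the fixed policy plays the role of a single policy-evaluation step; a full policy-iteration loop in the spirit of the paper would alternate such evaluations with a Gibbs-type update of the switching intensities derived from the learned value function.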