Data-Driven Merton’s Strategies via Policy Randomization

ArXiv ID: 2312.11797

Authors: Unknown

Abstract

We study Merton’s expected utility maximization problem in an incomplete market, characterized by a factor process in addition to the stock price process, where all the model primitives are unknown. The agent under consideration is a price taker who has access only to the stock and factor value processes and the instantaneous volatility. We propose an auxiliary problem in which the agent can invoke policy randomization according to a specific class of Gaussian distributions, and prove that the mean of its optimal Gaussian policy solves the original Merton problem. With randomized policies, we are in the realm of continuous-time reinforcement learning (RL) recently developed in Wang et al. (2020) and Jia and Zhou (2022a, 2022b, 2023), enabling us to solve the auxiliary problem in a data-driven way without having to estimate the model primitives. Specifically, we establish a policy improvement theorem, based on which we design both online and offline actor-critic RL algorithms for learning Merton’s strategies. A key insight from this study is that RL in general, and policy randomization in particular, are useful beyond the purpose of exploration: they can be employed as a technical tool to solve a problem that cannot otherwise be solved by deterministic policies alone. Finally, we carry out both simulation and empirical studies in a stochastic volatility environment to demonstrate the decisive outperformance of the devised RL algorithms over the conventional model-based plug-in method.
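
For orientation, the sketch below restates the two problems the abstract contrasts, in generic notation of our own (wealth W_t, factor X_t, allocation fraction a_t, utility U, exploration weight λ); the auxiliary objective follows the entropy-regularized exploratory formulation of Wang et al. (2020) that the abstract cites, and the paper's exact formulation may differ.

```latex
% Generic notation of our own, not necessarily the paper's:
% W_t wealth, X_t factor, a_t fraction of wealth in the stock,
% U utility, B_t Brownian motion, lambda > 0 an exploration weight.
\begin{align*}
  &\textbf{Classical Merton problem:}\qquad
    \max_{\{a_t\}}\ \mathbb{E}\big[\,U(W_T)\,\big],\\
  &\qquad dW_t = W_t\big[r + a_t\,(\mu(X_t)-r)\big]\,dt
               + W_t\,a_t\,\sigma(X_t)\,dB_t,\\[6pt]
  &\textbf{Auxiliary randomized problem } (\text{Gaussian policies } \pi_t=\mathcal{N}(m_t,s_t^2)):\\
  &\qquad \max_{\pi}\ \mathbb{E}\Big[\,U\big(W_T^{\pi}\big)
        +\lambda\int_0^T \mathcal{H}(\pi_t)\,dt\,\Big],
  \qquad \mathcal{H}(\pi_t)=\tfrac{1}{2}\log\big(2\pi e\,s_t^2\big).
\end{align*}
```

Per the abstract, the structural result is that the mean process of the optimal Gaussian policy for the auxiliary problem solves the original Merton problem, and a policy improvement theorem then drives the actor updates in the data-driven algorithms.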

Keywords: Merton problem, Reinforcement Learning, Policy randomization, Incomplete markets, Stochastic volatility, Equity

Complexity vs Empirical Score

  • Math Complexity: 8.0/10
  • Empirical Rigor: 7.0/10
  • Quadrant: Holy Grail
  • Why: The paper uses advanced stochastic control (HJB, Pontryagin) and modern RL theory with convergence proofs, indicating high mathematical density. It includes simulation and empirical studies in a stochastic volatility environment but relies on structured models rather than full live backtesting, so it scores high on both the mathematical and empirical dimensions without being fully empirical.

Flowchart

```mermaid
flowchart TD
  A["Research Goal: Solve Data-Driven Merton Problem<br>in Incomplete Markets"] --> B{"Key Methodology: Policy Randomization"}
  B --> C["Formulate Auxiliary Problem with Gaussian Policies"]
  C --> D["Apply Policy Improvement Theorem<br>Derive Optimal Mean Policy"]
  D --> E["Data/Inputs: Stock & Factor Prices, Volatility<br>Without Model Primitives"]
  E --> F{"Computational Process: Actor-Critic RL Algorithms"}
  F --> G["Online RL Learning"]
  F --> H["Offline RL Learning"]
  G --> I["Outcomes & Findings: <br>1. Optimal Mean Policy Solves Original Merton Problem <br>2. RL Outperforms Plug-in Methods <br>3. Policy Randomization as Technical Tool"]
  H --> I
```
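
As a concrete, if much-simplified, illustration of the actor-critic loop in the flowchart, here is a short Python sketch of a REINFORCE-style Gaussian-policy learner for a Merton-type allocation problem with log utility. The geometric-Brownian-motion simulator, the scalar baseline playing the role of the critic, and all parameter names and values are illustrative assumptions of ours; this is not the paper's algorithm, which works directly from observed stock and factor data without knowing the model primitives.

```python
# Minimal sketch (assumed setup, not the paper's algorithm): a Gaussian policy
# over the fraction of wealth in the stock, updated by a REINFORCE-style
# policy gradient with a scalar baseline acting as the critic.
import numpy as np

rng = np.random.default_rng(0)

# Illustrative market parameters; in the paper's setting these are unknown.
MU, SIGMA, R = 0.08, 0.2, 0.02      # drift, volatility, risk-free rate
T, N = 1.0, 50                      # horizon and number of time steps
DT = T / N

def simulate_step(wealth, frac, dt):
    """One Euler step of the wealth process under allocation fraction `frac`."""
    dB = rng.normal(0.0, np.sqrt(dt))
    growth = (R + frac * (MU - R)) * dt + frac * SIGMA * dB
    return wealth * (1.0 + growth)

# Actor: Gaussian policy N(mean, std^2); critic: scalar baseline for terminal utility.
mean, log_std = 0.5, np.log(0.5)
baseline = 0.0
actor_lr, critic_lr = 0.05, 0.1

for episode in range(2000):
    wealth = 1.0
    std = np.exp(log_std)
    actions, noises = [], []
    for _ in range(N):
        eps = rng.normal()
        a = mean + std * eps            # sample allocation from the Gaussian policy
        actions.append(a)
        noises.append(eps)
        wealth = simulate_step(wealth, a, DT)
    reward = np.log(max(wealth, 1e-8))  # terminal log utility
    advantage = reward - baseline       # critic supplies the baseline
    # Policy-gradient update of the Gaussian mean and (log) standard deviation.
    for a, eps in zip(actions, noises):
        mean += actor_lr * advantage * eps / std / N
        log_std += actor_lr * advantage * (eps**2 - 1.0) / N
    baseline += critic_lr * advantage   # critic update toward observed utility

print(f"learned mean allocation ~ {mean:.3f}, "
      f"Merton fraction (log utility) = {(MU - R) / SIGMA**2:.3f}")
```

With log utility the classical Merton fraction has the closed form (μ − r)/σ², so in this toy simulator the learned mean allocation can be checked against that benchmark; in the paper's data-driven setting no such benchmark is available because the primitives are unknown.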