Optimal Capital Deployment Under Stochastic Deal Arrivals: A Continuous-Time ADP Approach
ArXiv ID: 2508.10300
Authors: Kunal Menda, Raphael S Benarrosh
Abstract
Suppose you are a fund manager with $100 million to deploy and two years to invest it. A deal comes across your desk that looks appealing but costs $50 million – half of your available capital. Should you take it, or wait for something better? The decision hinges on the trade-off between current opportunities and uncertain future arrivals. This work formulates the problem of capital deployment under stochastic deal arrivals as a continuous-time Markov decision process (CTMDP) and solves it numerically via an approximate dynamic programming (ADP) approach. We model deal economics using correlated lognormal distributions for multiples on invested capital (MOIC) and deal sizes, and model arrivals as a nonhomogeneous Poisson process (NHPP). Our approach uses quasi-Monte Carlo (QMC) sampling to efficiently approximate the continuous-time Bellman equation for the value function over a discretized capital grid. We present an interpretable acceptance policy, illustrating how selectivity evolves over time and as capital is consumed. We show in simulation that this policy outperforms a baseline that accepts any affordable deal exceeding a fixed hurdle rate.
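The deal-generating process described above can be sketched in a few lines. This is an illustrative simulation, not the paper's code: arrivals come from a nonhomogeneous Poisson process simulated by Lewis-Shedler thinning, and each deal's (MOIC, size) pair is drawn from correlated lognormals via a bivariate normal in log-space. The rate function and all distribution parameters here are assumptions chosen for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def arrival_times(rate, rate_max, horizon):
    """NHPP on [0, horizon] via Lewis-Shedler thinning."""
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / rate_max)   # candidate from homogeneous PP at rate_max
        if t > horizon:
            return np.array(times)
        if rng.random() < rate(t) / rate_max:  # accept with prob rate(t)/rate_max
            times.append(t)

def sample_deals(n, mu=(0.3, np.log(20.0)), sigma=(0.5, 0.6), rho=0.4):
    """Correlated lognormal (MOIC, deal size in $M): bivariate normal in log-space."""
    cov = [[sigma[0]**2, rho * sigma[0] * sigma[1]],
           [rho * sigma[0] * sigma[1], sigma[1]**2]]
    z = rng.multivariate_normal(mu, cov, size=n)
    return np.exp(z)                           # columns: MOIC, size

T = 2.0                                        # two-year deployment window
lam = lambda t: 12.0 * (1.0 + 0.5 * np.sin(2 * np.pi * t))  # assumed deals/year
t_arr = arrival_times(lam, rate_max=18.0, horizon=T)
deals = sample_deals(len(t_arr))
print(len(t_arr), deals.shape)
```

The thinning bound `rate_max=18.0` dominates the assumed rate (peak 12 × 1.5 = 18), which is what makes the acceptance probability valid.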
Keywords: Approximate dynamic programming, Continuous-time Markov decision process, Nonhomogeneous Poisson process, Capital deployment, Quasi-Monte Carlo, Private Equity/Venture Capital
Complexity vs Empirical Score
- Math Complexity: 8.5/10
- Empirical Rigor: 7.2/10
- Quadrant: Holy Grail
- Why: The paper employs advanced continuous-time stochastic control theory (CTMDP, Bellman equation) with QMC sampling and ADP, indicating high mathematical density. It validates the method with extensive simulation-based backtesting, comparing against a baseline and reporting performance metrics (portfolio IRR), demonstrating strong empirical rigor.
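The core numerical idea, a Bellman backup for the value function V(t, c) over a discretized capital grid with the arrival expectation approximated by quasi-Monte Carlo (Sobol) samples of (MOIC, size), can be sketched as follows. This is a simplified sketch under assumed dynamics and parameters, not the paper's exact discretization: a small-dt backward recursion with a constant arrival rate, zero terminal value for unspent capital, and profit measured as (MOIC − 1) × size.

```python
import numpy as np
from scipy.stats import norm, qmc

T, dt = 2.0, 0.05
cap_grid = np.linspace(0.0, 100.0, 41)         # remaining capital in $M
lam = lambda t: 12.0                           # constant arrival rate (assumed)

# QMC draws of (MOIC, size): Sobol points mapped through correlated lognormals
sob = qmc.Sobol(d=2, scramble=True, seed=0).random_base2(m=10)   # 1024 points
z = norm.ppf(sob)
rho, s1, s2 = 0.4, 0.5, 0.6
L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
zc = z @ L.T
moic = np.exp(0.3 + s1 * zc[:, 0])
size = np.exp(np.log(20.0) + s2 * zc[:, 1])

V = np.zeros_like(cap_grid)                    # terminal condition: no future deals
for t in np.arange(T - dt, -dt / 2, -dt):      # backward in time
    Vn = np.empty_like(V)
    for i, c in enumerate(cap_grid):
        wait = V[i]                                         # reject / no arrival
        gain = (moic - 1.0) * size                          # deal profit
        cont = np.interp(np.maximum(c - size, 0.0), cap_grid, V)
        accept = np.where(size <= c, gain + cont, -np.inf)  # unaffordable: never accept
        per_deal = np.maximum(accept, wait).mean()          # optimal accept/reject, QMC avg
        Vn[i] = (1 - lam(t) * dt) * wait + lam(t) * dt * per_deal
    V = Vn
print(V[-1])   # approximate value with the full $100M at t = 0
```

The implied acceptance rule is exactly the comparison inside the loop: take a deal when its profit plus the continuation value at the reduced capital exceeds the value of waiting, which is how selectivity tightens as time or capital runs out.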
```mermaid
flowchart TD
A["Research Goal: Optimal capital deployment under stochastic deal arrivals"] --> B["Methodology: Continuous-time Markov Decision Process<br>Approximate Dynamic Programming"]
B --> C["Data & Inputs: Correlated Lognormal Distributions for MOIC/Size<br>Nonhomogeneous Poisson Process for Arrivals"]
C --> D["Computational Process: Quasi-Monte Carlo Sampling<br>Discretized Capital Grid"]
D --> E["Key Findings: Interpretable acceptance policy<br>Adaptive selectivity over time/capital"]
E --> F["Outcome: Outperforms fixed hurdle rate benchmark"]
```
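For context on the benchmark in the last node, the baseline policy the paper compares against ("accept any affordable deal exceeding a fixed hurdle rate") is easy to simulate. The sketch below is self-contained and uses assumed parameters only; it sweeps a few hurdle levels to show how a fixed threshold trades deal count against deal quality, which is the margin the adaptive ADP policy exploits.

```python
import numpy as np

rng = np.random.default_rng(1)

def run_episode(hurdle, budget=100.0, horizon=2.0, rate=12.0):
    """One deployment window under a fixed-hurdle policy (homogeneous arrivals)."""
    t, cap, profit = 0.0, budget, 0.0
    while True:
        t += rng.exponential(1.0 / rate)
        if t > horizon:
            return profit
        moic = np.exp(rng.normal(0.3, 0.5))            # assumed lognormal MOIC
        size = np.exp(rng.normal(np.log(20.0), 0.6))   # assumed lognormal size ($M)
        if size <= cap and moic >= hurdle:             # affordable and clears hurdle
            cap -= size
            profit += (moic - 1.0) * size

profits = [np.mean([run_episode(h) for _ in range(2000)]) for h in (1.0, 1.3, 1.6)]
print(profits)   # mean profit ($M) at each hurdle level
```

A hurdle of 1.0 accepts everything affordable and exhausts capital on mediocre deals early; a very high hurdle leaves capital undeployed, so neither extreme dominates.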