
Optimal Investment with Costly Expert Opinions

ArXiv ID: 2409.11569 · View on arXiv
Authors: Unknown

Abstract: We consider the Merton problem of optimizing expected power utility of terminal wealth in the case of an unobservable Markov-modulated drift. What makes the model special is that the agent is allowed to purchase costly expert opinions of varying quality on the current state of the drift, leading to a mixed stochastic control problem with regular and impulse controls involving random consequences. Using ideas from filtering theory, we first embed the original problem with unobservable drift into a full information problem on a larger state space. The value function of the full information problem is characterized as the unique viscosity solution of the dynamic programming PDE. This characterization is achieved by a new variant of the stochastic Perron’s method, which additionally allows us to show that, in between purchases of expert opinions, the problem reduces to an exit time control problem which is known to admit an optimal feedback control. Under the assumption of sufficient regularity of this feedback map, we are able to construct optimal trading and expert opinion strategies. ...
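To make the filtering step concrete, here is a minimal sketch of how a purchased expert opinion could update the agent's belief about a drift taking finitely many values. The Gaussian-noise form of the opinion, and all names below, are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np

# Minimal sketch: Bayesian update of the filter for a drift taking
# finitely many values mu_1, ..., mu_d, when a noisy expert opinion
# Z = mu_true + sigma_expert * eps (eps ~ N(0,1)) is purchased.
# The opinion's quality is encoded by sigma_expert: a smaller value
# means a more informative (and, in this kind of model, costlier) opinion.

def expert_update(p, mu, z, sigma_expert):
    """Posterior over the d drift states after observing opinion z.

    p            -- prior probabilities over the drift states
    mu           -- array of the d possible drift values
    z            -- observed expert opinion
    sigma_expert -- noise level (quality) of the opinion
    """
    likelihood = np.exp(-0.5 * ((z - mu) / sigma_expert) ** 2)
    posterior = p * likelihood
    return posterior / posterior.sum()

# Example: two drift regimes, a flat prior, one fairly precise opinion.
p = np.array([0.5, 0.5])       # prior over {bull, bear}
mu = np.array([0.08, -0.03])   # regime drifts
print(expert_update(p, mu, z=0.06, sigma_expert=0.05))
```

Between opinion purchases, the belief would evolve continuously via the usual stock-observation filter (e.g., a Wonham-type equation); the discrete Bayes step above only captures the impulse at a purchase time.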

September 17, 2024 · 2 min · Research Team

Portfolio Optimization under Transaction Costs with Recursive Preferences

ArXiv ID: 2402.08387 · View on arXiv
Authors: Unknown

Abstract: The Merton investment-consumption problem is fundamental both in finance and in stochastic control. An important extension of the problem adds transaction costs, which is highly relevant from a financial perspective but also challenging from a control perspective because the solution now involves singular control. A further significant extension takes us from additive utility to stochastic differential utility (SDU), which allows time preferences and risk preferences to be disentangled. In this paper, we study this extended version of the Merton problem with proportional transaction costs and Epstein-Zin SDU. We fully characterise all parameter combinations for which the problem is well posed (which may depend on the level of transaction costs) and provide a full verification argument that relies on no additional technical assumptions and uses primal methods only. The case with SDU requires new mathematical techniques as duality methods break down. Even in the special case of (additive) power utility, our arguments are significantly simpler, more elegant and more far-reaching than those in the extant literature. This means that we can easily analyse aspects of the problem which have previously been very challenging, including comparative statics, boundary cases which heretofore required separate treatment, and the situation beyond the small transaction cost regime. A key and novel idea is to parametrise consumption and the value function in terms of the shadow fraction of wealth, which may be of much wider applicability. ...
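For readers new to stochastic differential utility, one common continuous-time normalization of Epstein-Zin preferences is the Duffie-Epstein aggregator (the paper's exact parametrization may differ); for ψ ≠ 1 the utility process satisfies

$$
V_t \;=\; \mathbb{E}_t\!\left[\int_t^{\infty} f(c_s, V_s)\,ds\right],
\qquad
f(c,v) \;=\; \frac{\delta}{1-\frac{1}{\psi}}\,(1-\gamma)\,v
\left[\left(\frac{c}{\big((1-\gamma)\,v\big)^{\frac{1}{1-\gamma}}}\right)^{1-\frac{1}{\psi}} - 1\right].
$$

Here γ is relative risk aversion, ψ is the elasticity of intertemporal substitution, and δ is the discount rate. Additive power utility is recovered exactly when ψ = 1/γ, which is the sense in which SDU disentangles risk preferences from time preferences.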

February 13, 2024 · 2 min · Research Team

Data-Driven Merton's Strategies via Policy Randomization

ArXiv ID: 2312.11797 · View on arXiv
Authors: Unknown

Abstract: We study Merton’s expected utility maximization problem in an incomplete market, characterized by a factor process in addition to the stock price process, where all the model primitives are unknown. The agent under consideration is a price taker who has access only to the stock and factor value processes and the instantaneous volatility. We propose an auxiliary problem in which the agent can invoke policy randomization according to a specific class of Gaussian distributions, and prove that the mean of its optimal Gaussian policy solves the original Merton problem. With randomized policies, we are in the realm of continuous-time reinforcement learning (RL) as recently developed in Wang et al. (2020) and Jia and Zhou (2022a, 2022b, 2023), enabling us to solve the auxiliary problem in a data-driven way without having to estimate the model primitives. Specifically, we establish a policy improvement theorem, based on which we design both online and offline actor-critic RL algorithms for learning Merton’s strategies. A key insight from this study is that RL in general, and policy randomization in particular, are useful beyond the purpose of exploration: they can be employed as a technical tool to solve a problem that cannot otherwise be solved by deterministic policies alone. Finally, we carry out both simulation and empirical studies in a stochastic volatility environment to demonstrate the decisive outperformance of the devised RL algorithms in comparison to the conventional model-based, plug-in method. ...
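As a toy illustration of why the mean of a Gaussian policy can recover the optimal deterministic strategy, the following sketch learns the mean of a Gaussian position policy from simulated returns. The one-period quadratic reward, the i.i.d. Gaussian returns, and the plain REINFORCE update are all illustrative assumptions, not the paper's continuous-time algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)

theta = 0.0      # actor parameter: mean of the Gaussian position policy
sigma_pi = 0.5   # policy standard deviation (exploration level)
lr = 0.01        # learning rate for the policy mean

def reward(a, r):
    # Second-order approximation of the log-wealth increment log(1 + a*r);
    # the quadratic form avoids domain issues when a*r <= -1.
    x = a * r
    return x - 0.5 * x**2

for _ in range(20_000):
    r = rng.normal(0.05, 0.2)        # simulated one-period asset return
    a = rng.normal(theta, sigma_pi)  # sample a position from the policy
    # REINFORCE: grad_theta log N(a; theta, sigma_pi^2) = (a - theta) / sigma_pi^2
    theta += lr * reward(a, r) * (a - theta) / sigma_pi**2

print(f"learned mean position: {theta:.2f}")
```

Under this quadratic reward the maximizer is E[r]/E[r²] = 0.05/0.0425 ≈ 1.18, and the learned policy mean should drift toward that value, echoing the abstract's point that the mean of the optimal Gaussian policy solves the underlying deterministic problem.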

December 19, 2023 · 2 min · Research Team