Improving Portfolio Optimization Results with Bandit Networks
ArXiv ID: 2410.04217
Authors: Unknown
Abstract
In Reinforcement Learning (RL), multi-armed Bandit (MAB) problems have found applications across diverse domains such as recommender systems, healthcare, and finance. Traditional MAB algorithms typically assume stationary reward distributions, which limits their effectiveness in real-world scenarios characterized by non-stationary dynamics. This paper addresses this limitation by introducing and evaluating novel Bandit algorithms designed for non-stationary environments. First, we present the Adaptive Discounted Thompson Sampling (ADTS) algorithm, which enhances adaptability through relaxed discounting and sliding window mechanisms to better respond to changes in reward distributions. We then extend this approach to the Portfolio Optimization problem by introducing the Combinatorial Adaptive Discounted Thompson Sampling (CADTS) algorithm, which addresses computational challenges within Combinatorial Bandits and improves dynamic asset allocation. Additionally, we propose a novel architecture called Bandit Networks, which integrates the outputs of ADTS and CADTS, thereby mitigating computational limitations in stock selection. Through extensive experiments using real financial market data, we demonstrate the potential of these algorithms and architectures in adapting to dynamic environments and optimizing decision-making processes. For instance, the proposed Bandit Network instances present superior performance when compared to classic portfolio optimization approaches, such as the capital asset pricing model, equal weights, risk parity, and Markowitz, with the best network presenting an out-of-sample Sharpe Ratio 20% higher than the best performing classical model.
Keywords: Multi-armed Bandit, Reinforcement Learning, Portfolio Optimization, Thompson Sampling, Dynamic Asset Allocation, Equities
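The core mechanism the abstract describes — decaying old reward evidence so the posterior can track non-stationary distributions — can be sketched with standard discounted Thompson Sampling for Bernoulli rewards. This is a minimal illustration of the idea ADTS builds on, not the paper's exact algorithm (its relaxed discounting and sliding window mechanisms are omitted); the class name and the `gamma` parameter are hypothetical.

```python
import numpy as np

class DiscountedThompsonSampling:
    """Discounted Thompson Sampling for non-stationary Bernoulli bandits.

    Minimal sketch: each Beta posterior's counts are multiplied by a
    discount factor before every update, so stale evidence fades and
    the policy can re-adapt after a change in reward distributions.
    """

    def __init__(self, n_arms, gamma=0.9, prior=1.0):
        self.gamma = gamma            # discount factor in (0, 1]
        self.prior = prior            # symmetric Beta prior pseudo-counts
        self.successes = np.zeros(n_arms)
        self.failures = np.zeros(n_arms)

    def select_arm(self, rng):
        # Draw one sample per arm from its Beta posterior; play the max.
        samples = rng.beta(self.successes + self.prior,
                           self.failures + self.prior)
        return int(np.argmax(samples))

    def update(self, arm, reward):
        # Decay all evidence, then record the new observation.
        self.successes *= self.gamma
        self.failures *= self.gamma
        self.successes[arm] += reward
        self.failures[arm] += 1.0 - reward


rng = np.random.default_rng(0)
bandit = DiscountedThompsonSampling(n_arms=3, gamma=0.9)
for t in range(200):
    # Simulated non-stationarity: the best arm flips halfway through.
    probs = [0.2, 0.8, 0.5] if t < 100 else [0.8, 0.2, 0.5]
    arm = bandit.select_arm(rng)
    reward = float(rng.random() < probs[arm])
    bandit.update(arm, reward)
```

Because the counts are discounted by `gamma` at every step, the total effective evidence is bounded by roughly 1/(1 - gamma), which is what keeps the posterior responsive to regime changes.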
Complexity vs Empirical Score
- Math Complexity: 7.0/10
- Empirical Rigor: 8.0/10
- Quadrant: Holy Grail
- Why: The paper introduces novel algorithms (ADTS, CADTS) with mathematical formulations involving discounting, sliding windows, and combinatorial optimization, indicating high math density. It demonstrates strong empirical rigor through extensive experiments on real market data (S&P 500), comparing against baselines and reporting metrics like Sharpe Ratio.
```mermaid
flowchart TD
A["Research Goal:<br>Optimize Portfolio in Non-Stationary Markets"] --> B["Data Input:<br>Real Financial Market Data"]
B --> C["Core Methodology:<br>Novel Bandit Algorithms"]
C --> D{"Computational Process:<br>Bandit Networks Architecture"}
D --> E["Algorithm 1:<br>ADTS<br>Adaptive Discounted TS"]
D --> F["Algorithm 2:<br>CADTS<br>Combinatorial ADTS"]
E --> G["Outcome:<br>Dynamic Asset Allocation"]
F --> G
G --> H["Key Finding:<br>20% Higher Sharpe Ratio<br>vs. Classical Models"]
```
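The combinatorial step in the flow above — turning per-asset posteriors into a dynamic allocation — can be illustrated with a simplified Thompson-sampling selection: draw one posterior sample per asset, keep the top-k, and normalize the samples into weights. This is a hypothetical simplification for illustration only; the paper's actual CADTS update rule and reward model are not reproduced here, and `sample_portfolio_weights` is an assumed name.

```python
import numpy as np

def sample_portfolio_weights(successes, failures, k, rng, prior=1.0):
    """Sketch of one combinatorial Thompson-sampling allocation step.

    Draws a Beta posterior sample per asset, selects the k assets with
    the largest samples, and normalizes those samples into weights.
    """
    samples = rng.beta(successes + prior, failures + prior)
    chosen = np.argsort(samples)[-k:]              # indices of top-k assets
    weights = samples[chosen] / samples[chosen].sum()
    return chosen, weights

rng = np.random.default_rng(1)
succ = np.array([8.0, 2.0, 5.0, 7.0, 1.0])   # toy per-asset success counts
fail = np.array([2.0, 8.0, 5.0, 3.0, 9.0])   # toy per-asset failure counts
assets, w = sample_portfolio_weights(succ, fail, k=3, rng=rng)
```

Sampling (rather than taking posterior means) is what injects exploration: assets with uncertain posteriors occasionally win a slot in the top-k, which is how the bandit keeps probing a changing market.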