Adaptive Alpha Weighting with PPO: Enhancing Prompt-Based LLM-Generated Alphas in Quant Trading
arXiv ID: 2509.01393 (https://arxiv.org/abs/2509.01393)
Authors: Qizhao Chen, Hiroaki Kawashima
Abstract
This paper proposes a reinforcement learning framework that employs Proximal Policy Optimization (PPO) to dynamically optimize the weights of multiple large language model (LLM)-generated formulaic alphas for stock trading strategies. Formulaic alphas are mathematically defined trading signals derived from price, volume, sentiment, and other data. Although recent studies have shown that LLMs can generate diverse and effective alphas, a critical challenge lies in how to adaptively integrate them under varying market conditions. To address this gap, we leverage the deepseek-r1-distill-llama-70b model to generate fifty alphas for five major stocks: Apple, HSBC, Pepsi, Toyota, and Tencent, and then use PPO to adjust their weights in real time. Experimental results demonstrate that the PPO-optimized strategy achieves strong returns and high Sharpe ratios across most stocks, outperforming both an equal-weighted alpha portfolio and traditional benchmarks such as the Nikkei 225, S&P 500, and Hang Seng Index. The findings highlight the importance of reinforcement learning in the allocation of alpha weights and show the potential of combining LLM-generated signals with adaptive optimization for robust financial forecasting and trading.
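To make the weighting mechanism concrete, below is a minimal sketch of how PPO can be used to allocate weights over a set of alpha signals. This is an illustration under stated assumptions, not the paper's exact formulation: the environment name `AlphaWeightEnv`, the softmax long-only weighting, the sign-of-signal trading rule, the random toy data, and the gymnasium / stable-baselines3 stack are all choices made here for the example.

```python
# Minimal sketch (assumptions noted above, not the paper's exact setup):
# a Gymnasium environment whose observation is the current alpha values,
# whose action is a weight vector over the alphas, and whose reward is the
# next-step return of trading in the direction of the weighted signal.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO

class AlphaWeightEnv(gym.Env):
    """State: current alpha values; action: weights over the alphas."""

    def __init__(self, alpha_matrix, returns):
        super().__init__()
        self.alphas = alpha_matrix   # shape (T, n_alphas): signal values per day
        self.returns = returns       # shape (T,): next-day asset returns
        n = alpha_matrix.shape[1]
        self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(n,), dtype=np.float32)
        self.observation_space = spaces.Box(low=-np.inf, high=np.inf, shape=(n,), dtype=np.float32)
        self.t = 0

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t = 0
        return self.alphas[self.t].astype(np.float32), {}

    def step(self, action):
        # Softmax turns the raw action into a long-only weight vector
        # (an assumption; the paper may permit signed weights).
        w = np.exp(action) / np.exp(action).sum()
        signal = float(w @ self.alphas[self.t])           # combined alpha signal
        reward = float(np.sign(signal) * self.returns[self.t])  # trade in the signal's direction
        self.t += 1
        done = self.t >= len(self.returns)
        obs = self.alphas[min(self.t, len(self.returns) - 1)].astype(np.float32)
        return obs, reward, done, False, {}

# Toy data: 50 alphas over 500 days, mirroring the paper's 50 LLM-generated
# alphas. In the paper these values would come from evaluating the generated
# formulas on price, volume, and sentiment data rather than random draws.
rng = np.random.default_rng(0)
alphas = rng.standard_normal((500, 50))
rets = rng.normal(0.0005, 0.01, 500)

model = PPO("MlpPolicy", AlphaWeightEnv(alphas, rets), verbose=0)
model.learn(total_timesteps=10_000)
```

The key design point is that the policy observes the current alpha values and reallocates weights every step, which is what lets the strategy adapt to changing market regimes rather than committing to a fixed combination.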
Keywords: Reinforcement Learning, Proximal Policy Optimization (PPO), Large Language Models (LLMs), Algorithmic Trading, Alpha Generation, Equities
Complexity vs Empirical Score
- Math Complexity: 7.5/10
- Empirical Rigor: 8.0/10
- Quadrant: Holy Grail
- Why: The paper uses advanced reinforcement learning (PPO) and mathematically formulated alpha signals, indicating moderate-to-high math complexity, while the experimental setup with specific stock tickers, Sharpe ratios, and benchmark comparisons demonstrates strong empirical rigor.
```mermaid
flowchart TD
    A["Research Goal:<br>Adaptive Alpha Weighting"] --> B["Data: 50 LLM-Generated Alphas<br>5 Stocks: Apple, HSBC, Pepsi, Toyota, Tencent"]
    B --> C["Core Methodology:<br>Proximal Policy Optimization (PPO)"]
    C --> D["Compute Portfolio Weighted by<br>PPO-Optimized Alpha Weights"]
    D --> E["Execution:<br>Trading Strategy Backtest"]
    E --> F["Outcome:<br>Higher Returns & Sharpe Ratios"]
    F --> G["Comparison:<br>Outperforms Equal-Weight & Benchmarks (S&P 500, Nikkei 225, Hang Seng)"]
```
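The flowchart's final comparison step amounts to computing risk-adjusted performance for the PPO-weighted portfolio against the equal-weighted baseline. A minimal sketch follows, assuming daily return series as inputs and the standard 252-day annualization convention; the `strategy_returns` helper and the Dirichlet stand-in for learned weights are hypothetical, introduced only for illustration.

```python
# Sketch of the comparison step: annualized Sharpe ratio of a PPO-weighted
# alpha portfolio versus an equal-weighted one, on toy data.
import numpy as np

def sharpe(daily_returns, periods=252):
    # Annualized Sharpe ratio of a daily return series (risk-free rate omitted).
    r = np.asarray(daily_returns, dtype=float)
    return np.sqrt(periods) * r.mean() / r.std(ddof=1)

def strategy_returns(alphas, asset_returns, weights):
    # Combine alphas into one signal per day, then trade long/short in its
    # direction (the same simplifying assumption as in the sketch above).
    signal = alphas @ weights
    return np.sign(signal) * asset_returns

rng = np.random.default_rng(1)
alphas = rng.standard_normal((500, 50))
rets = rng.normal(0.0005, 0.01, 500)
ppo_w = rng.dirichlet(np.ones(50))   # stand-in for PPO-learned weights
eq_w = np.full(50, 1 / 50)           # equal-weighted baseline

print("PPO-weighted Sharpe:  ", sharpe(strategy_returns(alphas, rets, ppo_w)))
print("Equal-weighted Sharpe:", sharpe(strategy_returns(alphas, rets, eq_w)))
```

In the paper's evaluation, the PPO weights would be produced by the trained policy day by day rather than drawn once, and the same Sharpe comparison extends to the market benchmarks (S&P 500, Nikkei 225, Hang Seng Index).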