Integrating Large Language Models and Reinforcement Learning for Sentiment-Driven Quantitative Trading
arXiv ID: 2510.10526
Authors: Wo Long, Wenxin Zeng, Xiaoyu Zhang, Ziyao Zhou
Abstract
This research develops a sentiment-driven quantitative trading system that leverages a large language model, FinGPT, for sentiment analysis, and explores a novel method for signal integration using a reinforcement learning algorithm, Twin Delayed Deep Deterministic Policy Gradient (TD3). We compare the performance of strategies that integrate sentiment and technical signals using both a conventional rule-based approach and a reinforcement learning framework. The results suggest that sentiment signals generated by FinGPT add value when combined with traditional technical indicators, and that reinforcement learning offers a promising approach for integrating heterogeneous signals in dynamic trading environments.
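To make the rule-based baseline concrete, here is a minimal sketch of how a sentiment score and a technical indicator might be fused into a discrete trading signal. The weights, threshold, and function name are illustrative assumptions, not the paper's actual rule:

```python
def combined_signal(sentiment: float, technical: float,
                    w_sent: float = 0.5, threshold: float = 0.2) -> str:
    """Hypothetical rule-based fusion of two signals.

    sentiment: FinGPT-style sentiment score, assumed scaled to [-1, 1]
    technical: technical-indicator signal, assumed scaled to [-1, 1]
    w_sent:    illustrative weight on the sentiment signal
    threshold: confidence band inside which no position is taken
    """
    score = w_sent * sentiment + (1.0 - w_sent) * technical
    if score > threshold:
        return "long"
    if score < -threshold:
        return "short"
    return "flat"
```

A fixed weighting like this is exactly what the RL approach replaces: TD3 can learn a state-dependent mapping from signals to positions instead of committing to static weights and thresholds.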
Keywords: FinGPT, Sentiment analysis, Twin Delayed Deep Deterministic Policy Gradient (TD3), Reinforcement learning, Signal integration, Equities
Complexity vs Empirical Score
- Math Complexity: 8.0/10
- Empirical Rigor: 8.5/10
- Quadrant: Holy Grail
- Why: The paper employs advanced mathematics, including TD3 (a sophisticated deep reinforcement learning algorithm) and quantitative finance concepts, while also demonstrating high empirical rigor through detailed data sourcing (Thomson Reuters, CRSP, Bloomberg), a backtest-ready framework, robustness checks for look-ahead bias, and performance evaluation against benchmarks.
```mermaid
flowchart TD
Start["Research Goal: Integrate Sentiment & Technical Signals for Trading"] --> Data["Inputs: Financial Data & News Sentiment via FinGPT"]
Data --> Method["Methodology: Rule-Based vs. RL TD3"]
Method --> Comp["Computation: Signal Processing & Portfolio Optimization"]
Comp --> Out["Outcome: RL Approach Outperforms in Volatile Markets"]
Out --> End["Key Finding: Sentiment Adds Value & RL is Promising"]
```
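The TD3 component named in the methodology has two signature mechanisms: clipped double-Q learning (the Bellman target uses the minimum of two critics to curb overestimation) and target-policy smoothing (clipped noise added to the target action). A minimal numpy sketch of those two pieces, independent of any particular trading environment or the paper's actual implementation:

```python
import numpy as np

def td3_target(rewards, next_q1, next_q2, gamma=0.99, dones=None):
    """Clipped double-Q Bellman target: y = r + gamma * (1 - done) * min(Q1', Q2').

    next_q1, next_q2: the two target critics' estimates for the next
    state-action pair; taking the element-wise minimum is TD3's guard
    against value overestimation.
    """
    min_q = np.minimum(next_q1, next_q2)
    if dones is None:
        dones = np.zeros_like(rewards)
    return rewards + gamma * (1.0 - dones) * min_q

def smoothed_target_action(mu, noise_std=0.2, noise_clip=0.5,
                           act_limit=1.0, rng=None):
    """Target-policy smoothing: add clipped Gaussian noise to the target
    actor's action mu, then clip to the valid action range. In a trading
    setting the action could be a position weight in [-1, 1]."""
    rng = np.random.default_rng(rng)
    noise = np.clip(rng.normal(0.0, noise_std, size=np.shape(mu)),
                    -noise_clip, noise_clip)
    return np.clip(mu + noise, -act_limit, act_limit)
```

The full algorithm also delays actor updates relative to critic updates (the "twin delayed" part); the hyperparameter values above are TD3's common defaults, not values reported by the paper.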