
Integrating Large Language Models and Reinforcement Learning for Sentiment-Driven Quantitative Trading

ArXiv ID: 2510.10526 (View on arXiv)
Authors: Wo Long, Wenxin Zeng, Xiaoyu Zhang, Ziyao Zhou
Abstract: This research develops a sentiment-driven quantitative trading system that leverages a large language model, FinGPT, for sentiment analysis, and explores a novel method for signal integration using a reinforcement learning algorithm, Twin Delayed Deep Deterministic Policy Gradient (TD3). We compare the performance of strategies that integrate sentiment and technical signals using both a conventional rule-based approach and a reinforcement learning framework. The results suggest that sentiment signals generated by FinGPT add value when combined with traditional technical indicators, and that reinforcement learning offers a promising approach for effectively integrating heterogeneous signals in dynamic trading environments. ...

October 12, 2025 · 2 min · Research Team
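As a point of reference for how sentiment and technical signals might be blended under the rule-based baseline the abstract mentions, here is a minimal Python sketch. It is not the paper's code: the function names (`technical_signal`, `rule_based_position`), the [-1, 1] sentiment scale, the momentum lookback, and the blending weight `w_sent` are all illustrative assumptions.

```python
# Minimal sketch (not the paper's implementation): rule-based blending of a
# sentiment score with a simple technical (momentum) signal.
# Assumption: sentiment is already mapped to [-1, 1], e.g., by an LLM classifier.
import numpy as np

def technical_signal(prices: np.ndarray, lookback: int = 20) -> float:
    """Return a +1 / -1 / 0 momentum signal from trailing returns."""
    if len(prices) <= lookback:
        return 0.0
    ret = prices[-1] / prices[-lookback - 1] - 1.0
    return float(np.sign(ret))

def rule_based_position(sentiment: float, prices: np.ndarray,
                        w_sent: float = 0.5) -> float:
    """Blend sentiment and momentum into a target position in [-1, 1]."""
    tech = technical_signal(prices)
    raw = w_sent * sentiment + (1.0 - w_sent) * tech
    return float(np.clip(raw, -1.0, 1.0))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic price path purely for demonstration.
    prices = 100.0 * np.cumprod(1.0 + 0.001 * rng.standard_normal(250))
    print(rule_based_position(sentiment=0.4, prices=prices))
```

In the RL variant described by the abstract, the same inputs would instead form the state fed to a TD3 policy, which learns the position sizing rather than applying a fixed weighting.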

Application of Deep Reinforcement Learning to At-the-Money S&P 500 Options Hedging

ArXiv ID: 2510.09247 (View on arXiv)
Authors: Zofia Bracha, Paweł Sakowski, Jakub Michańków
Abstract: This paper explores the application of deep Q-learning to hedging at-the-money options on the S&P 500 index. We develop an agent based on the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm, trained to simulate hedging decisions without making explicit model assumptions about price dynamics. The agent was trained on historical intraday prices of S&P 500 call options over the years 2004–2024, using a single time series of six predictor variables: option price, underlying asset price, moneyness, time to maturity, realized volatility, and current hedge position. A walk-forward procedure was applied for training, yielding nearly 17 years of out-of-sample evaluation. The performance of the deep reinforcement learning (DRL) agent is benchmarked against the Black–Scholes delta-hedging strategy over the same period. We assess both approaches using metrics such as annualized return, volatility, information ratio, and Sharpe ratio. To test the models' adaptability, we performed simulations across varying market conditions and added constraints such as transaction costs and risk-awareness penalties. Our results show that the DRL agent can outperform traditional hedging methods, particularly in volatile or high-cost environments, highlighting its robustness and flexibility in practical trading contexts. While the agent consistently outperforms delta-hedging, its performance deteriorates as the risk-awareness parameter increases. We also observed that the longer the time interval used for volatility estimation, the more stable the results. ...

October 10, 2025 · 2 min · Research Team
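For context on the Black–Scholes delta-hedging benchmark and the six-variable state the abstract lists, here is a minimal Python sketch. It is not the paper's implementation: the helper names (`bs_call_delta`, `agent_state`), the zero default interest rate, and the example inputs are illustrative assumptions; only the standard Black–Scholes call delta formula and the six predictor variables named in the abstract are taken as given.

```python
# Minimal sketch (not the paper's code): the Black-Scholes call delta used as
# the delta-hedging benchmark, plus a state vector with the six predictors
# the abstract names. All parameter values below are illustrative.
from math import erf, log, sqrt

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call_delta(spot: float, strike: float, tau: float,
                  sigma: float, rate: float = 0.0) -> float:
    """Black-Scholes delta of a European call: N(d1)."""
    if tau <= 0.0 or sigma <= 0.0:
        return 1.0 if spot > strike else 0.0
    d1 = (log(spot / strike) + (rate + 0.5 * sigma**2) * tau) / (sigma * sqrt(tau))
    return norm_cdf(d1)

def agent_state(option_price: float, spot: float, strike: float,
                tau: float, realized_vol: float, hedge_pos: float) -> list[float]:
    """Six predictors from the abstract: option price, underlying price,
    moneyness, time to maturity, realized volatility, current hedge position."""
    return [option_price, spot, spot / strike, tau, realized_vol, hedge_pos]

if __name__ == "__main__":
    # At-the-money example: delta is close to 0.5.
    print(bs_call_delta(spot=4500.0, strike=4500.0, tau=30 / 365, sigma=0.18))
```

In the paper's setup, the DRL agent replaces the closed-form delta with a learned hedge ratio conditioned on this kind of state, which is what allows it to adapt to transaction costs and the risk-awareness penalty the abstract describes.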