Learning to Generate Explainable Stock Predictions using Self-Reflective Large Language Models
ArXiv ID: 2402.03659
Authors: Unknown
Abstract
Explaining stock predictions is generally difficult for traditional non-generative deep learning models, where explanations are limited to visualizing attention weights on important texts. Large Language Models (LLMs) now offer a solution to this problem, given their known capability to generate human-readable explanations for their decision-making process. However, stock prediction remains challenging for LLMs, as it requires weighing the varying impacts of chaotic social texts on stock prices. The problem becomes harder still with the introduction of the explanation component, which requires LLMs to explain verbally why certain factors are more important than others. Moreover, fine-tuning LLMs for such a task would require expert-annotated explanations for every stock movement in the training set, which is expensive and impractical to scale. To tackle these issues, we propose the Summarize-Explain-Predict (SEP) framework, which uses a self-reflective agent and Proximal Policy Optimization (PPO) to let an LLM teach itself how to generate explainable stock predictions in a fully autonomous manner. The reflective agent learns to explain past stock movements through self-reasoning, while the PPO trainer trains the model to generate the most likely explanations from input texts. The training samples for the PPO trainer are the responses generated during the reflective process, which eliminates the need for human annotators. Using the SEP framework, we fine-tune an LLM that outperforms both traditional deep-learning and LLM methods in prediction accuracy and Matthews correlation coefficient on the stock classification task. To demonstrate the generalization capability of our framework, we further test it on the portfolio construction task and show its effectiveness through various portfolio metrics.
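The self-reflection loop the abstract describes can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation: the function name `sep_reflect`, the prompt strings, and the `StubLLM` stand-in are all invented here, with `llm` a placeholder for any chat-completion call.

```python
# Illustrative sketch of the Summarize-Explain-Predict (SEP) reflection loop.
# All names (`sep_reflect`, `llm`, the prompts) are hypothetical.

def sep_reflect(llm, texts, actual_move, max_tries=3):
    """Summarize social texts, then explain-and-predict with self-reflection.

    Returns (prediction, explanation, samples): `samples` collects the
    self-generated (prompt, response) pairs whose predictions were verified
    correct against the actual movement, which is what removes the need
    for human annotators.
    """
    summary = llm(f"Summarize the key facts in these texts: {texts}")
    reflections, samples = [], []
    prediction = response = None
    for _ in range(max_tries):
        prompt = (
            f"Facts: {summary}\n"
            + (f"Past reflections: {reflections}\n" if reflections else "")
            + "Explain and predict the stock movement (Positive/Negative)."
        )
        response = llm(prompt)
        prediction = "Positive" if "Positive" in response else "Negative"
        if prediction == actual_move:
            samples.append((prompt, response))  # verified explanation -> PPO sample
            break
        # Wrong prediction: ask the model to reflect, then retry with that feedback.
        reflections.append(llm(f"Your answer '{response}' was wrong; reflect on why."))
    return prediction, response, samples


class StubLLM:
    """Toy stand-in for a real LLM, so the loop can be exercised offline."""
    def __call__(self, prompt):
        if prompt.startswith("Summarize"):
            return "strong earnings beat"
        if "reflect on why" in prompt:
            return "I underweighted the earnings beat."
        if "Past reflections" in prompt:
            return "Positive: the earnings beat outweighs the noisy chatter."
        return "Negative: mixed social sentiment."
```

With the stub, `sep_reflect(StubLLM(), ["$AAPL beats estimates"], "Positive")` first predicts incorrectly, reflects once, and then returns a verified explanation plus one self-generated training sample, mirroring the annotation-free pipeline in the abstract.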
Keywords: Large Language Models (LLMs), Stock Prediction, Proximal Policy Optimization (PPO), Self-Reflective Agent, Portfolio Construction
Complexity vs Empirical Score
- Math Complexity: 3.0/10
- Empirical Rigor: 8.0/10
- Quadrant: Street Traders
- Why: The paper’s primary innovation lies in a novel RL/agent-based framework (SEP) rather than dense mathematical theory, focusing on LLM application and training dynamics. It is highly empirical, featuring a specific dataset, real-world portfolio construction task, backtest-ready metrics like Sharpe Ratio and MCC, and a public GitHub repository for code.
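The evaluation metrics named above are standard; as a quick reference, the Matthews correlation coefficient over binary confusion counts and an annualized Sharpe ratio over daily returns can be computed as follows (textbook formulas, not code from the paper).

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from binary confusion counts.

    Ranges from -1 (total disagreement) to +1 (perfect prediction);
    0 corresponds to random guessing.
    """
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

def sharpe_ratio(daily_returns, risk_free=0.0, periods=252):
    """Annualized Sharpe ratio from daily portfolio returns.

    Mean excess return over its sample standard deviation, scaled by
    sqrt(trading periods per year).
    """
    n = len(daily_returns)
    excess = [r - risk_free for r in daily_returns]
    mean = sum(excess) / n
    var = sum((e - mean) ** 2 for e in excess) / (n - 1)
    return mean / math.sqrt(var) * math.sqrt(periods)
```

For example, `mcc(5, 5, 0, 0)` is exactly 1.0 for a perfect classifier, while a noisy but skilled one such as `mcc(90, 80, 10, 20)` lands around 0.70.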
flowchart TD
A["Research Goal: Generate Explainable Stock Predictions<br>without expensive human annotation"] --> B["Key Methodology: SEP Framework<br>Summarize-Explain-Predict"]
B --> C["Data Input: Chaotic Social Texts<br>+ Historical Stock Prices"]
C --> D["Process 1: Self-Reflective Agent<br>Generates reasoning samples via self-reasoning"]
C --> E["Process 2: Proximal Policy Optimization (PPO)<br>Trains LLM on self-generated explanations"]
D --> F["Computational Core: Fine-tuned LLM"]
E --> F
F --> G["Outcome: High-Performance Explainable Predictor<br>Outperforms traditional models & LLMs in accuracy"]
F --> H["Application: Portfolio Construction<br>Validated via effective portfolio metrics"]
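The portfolio-construction step in the flowchart can be illustrated with a minimal ranking scheme: go long, equal-weight, on the k stocks the model scores highest. This is a generic sketch under assumed inputs (a dict of per-ticker confidence scores), not the paper's actual allocation rule.

```python
def top_k_portfolio(scores, k):
    """Equal-weight long portfolio of the k highest-scored tickers.

    `scores` maps ticker -> model confidence in a positive movement;
    weights sum to 1.0 across the selected names.
    """
    ranked = sorted(scores, key=scores.get, reverse=True)
    weight = 1.0 / k
    return {ticker: weight for ticker in ranked[:k]}
```

For instance, `top_k_portfolio({"AAPL": 0.9, "MSFT": 0.7, "TSLA": 0.2}, 2)` allocates 50% each to AAPL and MSFT; the resulting daily returns can then be scored with metrics such as the Sharpe ratio mentioned above.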