Fine-Tuning Large Language Models for Stock Return Prediction Using Newsflow

ArXiv ID: 2407.18103

Authors: Unknown

Abstract

Large language models (LLMs) and their fine-tuning techniques have demonstrated superior performance in various language understanding and generation tasks. This paper explores fine-tuning LLMs for stock return forecasting with financial newsflow. In quantitative investing, return forecasting is fundamental for subsequent tasks such as stock picking and portfolio optimization. We formulate the model as consisting of a text representation module and a forecasting module. We compare encoder-only and decoder-only LLMs, since they generate text representations in distinct ways, and the impact of these different representations on forecasting performance remains an open question. We also compare two simple methods of integrating LLMs’ token-level representations into the forecasting module. Experiments on real news and investment universes reveal that: (1) aggregated representations from LLMs’ token-level embeddings generally produce return predictions that enhance the performance of long-only and long-short portfolios; (2) in the relatively large investment universe, the decoder LLM-based prediction model leads to stronger portfolios, whereas in the smaller universes there is no consistent winner; among the three LLMs studied (DeBERTa, Mistral, Llama), Mistral performs most robustly across universes; (3) return predictions derived from LLMs’ text representations are a strong signal for portfolio construction, outperforming conventional sentiment scores.
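To make the two integration methods concrete, below is a minimal sketch in PyTorch of how token-level embeddings from a pre-trained LLM might be aggregated into a single text representation and fed to a linear forecasting head. The model name, the two pooling choices (mean pooling over all tokens vs. the last non-padding token, the natural choice for decoder-only LLMs), and the linear head are illustrative assumptions, not the paper’s exact implementation:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Illustrative model choice; the paper fine-tunes DeBERTa, Mistral, and Llama.
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-base")
encoder = AutoModel.from_pretrained("microsoft/deberta-v3-base")

def text_representation(news: list[str], pooling: str = "mean") -> torch.Tensor:
    """Encode newsflow text and aggregate token-level embeddings into one vector per item."""
    batch = tokenizer(news, padding=True, truncation=True, return_tensors="pt")
    hidden = encoder(**batch).last_hidden_state          # (batch, tokens, dim)
    mask = batch["attention_mask"].unsqueeze(-1)         # zero out padding tokens
    if pooling == "mean":                                # aggregate over all tokens
        return (hidden * mask).sum(1) / mask.sum(1)
    # "last": take the final non-padding token's embedding
    last_idx = batch["attention_mask"].sum(1) - 1
    return hidden[torch.arange(hidden.size(0)), last_idx]

# A linear forecasting head mapping the text representation to a return forecast;
# fine-tuning would minimize a regression loss (e.g., MSE) against forward returns.
head = torch.nn.Linear(encoder.config.hidden_size, 1)
preds = head(text_representation(["ACME beats Q2 earnings estimates."]))
```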

Keywords: Large Language Models (LLMs), Fine-tuning, Financial Newsflow, Text Representation, Portfolio Optimization, Equities

Complexity vs Empirical Score

  • Math Complexity: 5.0/10
  • Empirical Rigor: 8.0/10
  • Quadrant: Street Traders
  • Why: The paper exhibits moderate mathematical complexity with conceptual modeling of LLM architectures and forecasting modules, but the core methodology relies on applying existing transformer models. It demonstrates high empirical rigor through detailed backtesting on real news and investment universes, evaluating portfolio performance (long-only/long-short) with specific out-of-sample results and comparative metrics.
```mermaid
flowchart TD
  A["Research Goal: Fine-tune LLMs for Stock Return Prediction<br>using Financial Newsflow"] --> B{"Data & Models"};
  B --> B1["Input: Financial Newsflow &<br>Investment Universe Data"];
  B --> B2["LLM Architectures:<br>Encoder vs. Decoder"];
  B --> B3["Forecasting Module:<br>LLM Representation Integration"];
  B1 & B2 & B3 --> C["Computational Process:<br>Fine-tuning LLMs for Return Forecasting"];
  C --> D{"Key Outcomes"};
  D --> D1["Aggregated LLM representations<br>enhance portfolio performance"];
  D --> D2["Mistral: Most robust across universes<br>Decoder LLMs excel in large universes"];
  D --> D3["LLM predictions outperform<br>conventional sentiment scores"];
```
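The key outcomes above are judged through portfolio backtests. As a rough illustration of how long-short performance can be derived from return predictions, the sketch below ranks stocks by predicted return at each rebalance and goes long the top decile while shorting the bottom decile; the decile cutoff and equal weighting are assumptions for illustration, not the paper’s exact backtest procedure:

```python
import pandas as pd

def long_short_returns(preds: pd.DataFrame, realized: pd.DataFrame, q: float = 0.1) -> pd.Series:
    """Per-period long-short return: long the top decile of predictions, short the bottom.

    preds and realized are (date x ticker) frames of predicted and realized forward returns;
    q is the quantile cutoff for each leg.
    """
    out = {}
    for date, row in preds.iterrows():
        ranks = row.dropna().rank(pct=True)              # percentile rank of predictions
        long = ranks[ranks >= 1 - q].index               # top decile
        short = ranks[ranks <= q].index                  # bottom decile
        out[date] = realized.loc[date, long].mean() - realized.loc[date, short].mean()
    return pd.Series(out)

# Example usage: ls = long_short_returns(pred_df, fwd_return_df)
# followed by summary statistics such as ls.mean() / ls.std() for a per-period Sharpe-like ratio.
```

A long-only variant simply holds the top-decile leg, matching the two portfolio types evaluated in the paper.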