FinLlama: Financial Sentiment Classification for Algorithmic Trading Applications

ArXiv ID: 2403.12285

Authors: Unknown

Abstract

There are multiple sources of financial news online which influence market movements and traders' decisions. This highlights the need for accurate sentiment analysis, in addition to appropriate algorithmic trading techniques, to arrive at better-informed trading decisions. Standard lexicon-based sentiment approaches have demonstrated their power in aiding financial decisions. However, they are known to suffer from issues related to context sensitivity and word ordering. Large Language Models (LLMs) can also be used in this context, but they are not finance-specific and tend to require significant computational resources. To facilitate a finance-specific LLM framework, we introduce a novel approach based on the Llama 2 7B foundational model, in order to benefit from its generative nature and comprehensive language manipulation. This is achieved by fine-tuning the Llama 2 7B model on a small portion of supervised financial sentiment analysis data, so as to jointly handle the complexities of financial lexicon and context, and further equipping it with a neural-network-based decision mechanism. Such a generator-classifier scheme, referred to as FinLlama, is trained not only to classify the sentiment valence but also to quantify its strength, thus offering traders a nuanced insight into financial news articles. Complementing this, the implementation of parameter-efficient fine-tuning through LoRA optimises the number of trainable parameters, thus minimising computational and memory requirements without sacrificing accuracy. Simulation results demonstrate the ability of the proposed FinLlama to provide a framework for enhanced portfolio management decisions and increased market returns. These results underpin the ability of FinLlama to construct high-return portfolios which exhibit enhanced resilience, even during volatile periods and unpredictable market events.
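The parameter-efficient fine-tuning via LoRA described in the abstract can be illustrated with a minimal sketch (this is not the paper's training code; the dimensions and rank are illustrative). LoRA freezes the pretrained weight matrix W and learns only a low-rank update BA, so the number of trainable parameters is r(d_in + d_out) instead of d_in × d_out:

```python
import numpy as np

class LoRALinear:
    """Minimal LoRA linear layer: y = x @ W.T + (alpha / r) * x @ A.T @ B.T.

    The pretrained weight W is frozen; only the low-rank factors A and B
    are trainable. B is initialised to zero, so fine-tuning starts from
    the pretrained model's behaviour exactly.
    """

    def __init__(self, d_in, d_out, r=8, alpha=16, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((d_out, d_in)) / np.sqrt(d_in)  # frozen
        self.A = rng.standard_normal((r, d_in)) * 0.01               # trainable
        self.B = np.zeros((d_out, r))                                # trainable
        self.scale = alpha / r

    def __call__(self, x):
        # Frozen base projection plus scaled low-rank correction.
        return x @ self.W.T + self.scale * (x @ self.A.T) @ self.B.T

    def trainable_params(self):
        return self.A.size + self.B.size

# For a 4096x4096 projection (typical of Llama 2 7B attention layers),
# rank-8 LoRA trains 65,536 parameters instead of ~16.8 million (~0.4%).
layer = LoRALinear(d_in=4096, d_out=4096, r=8)
print(layer.trainable_params(), layer.W.size)
```

Because B starts at zero, the adapted layer initially reproduces the frozen model, which is what allows fine-tuning on only a small portion of supervised financial data without destabilising the pretrained weights.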

Keywords: Large Language Models (LLMs), Sentiment Analysis, Fine-tuning, LoRA, Algorithmic Trading, General Financial Markets

Complexity vs Empirical Score

  • Math Complexity: 3.5/10
  • Empirical Rigor: 7.5/10
  • Quadrant: Street Traders
  • Why: The paper focuses on implementing and evaluating a specific NLP architecture (LLM fine-tuning) using concrete datasets and portfolio backtests, indicating high empirical rigor but relatively low mathematical formalism compared to theoretical derivations.
```mermaid
flowchart TD
  A["Research Goal<br>Develop Finance-Specific<br>Sentiment Classification LLM"] --> B
  subgraph B ["Methodology"]
      direction LR
      B1["Fine-tune Llama 2 7B<br>via LoRA"] --> B2["Generator-Classifier Scheme<br>Sentiment & Strength"]
  end
  B --> C["Input<br>Financial News Text"]
  C --> D["Computational Process<br>FinLlama Analysis"]
  D --> E["Output<br>Sentiment Valence &<br>Strength Quantification"]
  E --> F["Outcome<br>Enhanced Portfolio Management<br>Higher Market Returns"]
```
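The "Sentiment Valence & Strength Quantification" step in the flowchart can be read as follows (an illustrative sketch, not the paper's exact classifier head): the classifier produces logits over {negative, neutral, positive}; a softmax converts them to probabilities, the argmax gives the valence, and a signed score p(positive) − p(negative) quantifies strength on [−1, 1]:

```python
import math

LABELS = ("negative", "neutral", "positive")

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sentiment(logits):
    """Map 3-class logits to (valence label, signed strength in [-1, 1])."""
    probs = softmax(logits)
    valence = LABELS[max(range(len(probs)), key=probs.__getitem__)]
    strength = probs[2] - probs[0]  # >0 bullish, <0 bearish; magnitude = conviction
    return valence, strength

# A headline whose positive logit dominates yields a strongly positive score.
print(sentiment([0.1, 0.2, 2.5]))
```

The signed strength, rather than the label alone, is what allows the downstream portfolio construction to weight positions by the conviction of each news item.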