Comparing LLMs for Sentiment Analysis in Financial Market News

ArXiv ID: 2510.15929

Authors: Lucas Eduardo Pereira Teles, Carlos M. S. Figueiredo

Abstract

This article presents a comparative study of large language models (LLMs) for sentiment analysis of financial market news. The goal is to quantify how these models differ in performance on this important natural language processing task in the financial domain. The LLMs are compared against classical approaches, allowing the benefit of each tested model or approach to be quantified. Results show that large language models outperform classical models in the vast majority of cases.

Keywords: Sentiment Analysis, Large Language Models (LLMs), Natural Language Processing, Financial News, Market Sentiment, General Financial Markets

Complexity vs Empirical Score

  • Math Complexity: 3.5/10
  • Empirical Rigor: 8.0/10
  • Quadrant: Street Traders
  • Why: The paper involves practical data processing, model training, and performance evaluation on specific datasets, reflecting strong empirical rigor. Math complexity is moderate, focusing on applying existing techniques like TF-IDF and SVD rather than developing novel mathematical theory.
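The classical baseline described above builds TF-IDF features (optionally compressed with SVD) before feeding a classifier such as an SVM or logistic regression. As a minimal sketch of the TF-IDF step, assuming toy headlines rather than the paper's dataset, and omitting the smoothing and normalization that production vectorizers (e.g. scikit-learn's `TfidfVectorizer`) apply:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Compute unsmoothed TF-IDF weights for tokenized documents.

    Illustrative only: real pipelines add IDF smoothing and L2
    normalization, then pass the vectors to SVD and a classifier.
    """
    n = len(docs)
    # Document frequency: how many documents contain each term.
    df = Counter(term for doc in docs for term in set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        # TF-IDF = (term frequency in doc) * log(N / document frequency).
        vec = {term: (count / len(doc)) * math.log(n / df[term])
               for term, count in tf.items()}
        vectors.append(vec)
    return vectors

# Toy financial headlines (illustrative, not the paper's data).
docs = [
    "shares surge after strong earnings".split(),
    "shares fall after weak earnings".split(),
]
vecs = tfidf_vectors(docs)
# A term appearing in every document gets IDF log(n/n) = 0.
print(vecs[0]["shares"])     # 0.0
print(vecs[0]["surge"] > 0)  # True
```

Terms shared by all documents (here "shares", "after", "earnings") receive zero weight, so the classifier focuses on discriminative words like "surge" versus "fall".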
Paper overview (Mermaid flowchart):

```mermaid
flowchart TD
    A["Research Goal: Compare LLMs vs. Classical Models for Financial Sentiment Analysis"] --> B["Data: Financial Market News Dataset"]
    B --> C["Methodology: Model Training & Evaluation"]
    subgraph C ["Computational Process"]
        C1["Classical Models e.g., SVM/LR"]
        C2["LLMs e.g., BERT/FinBERT"]
    end
    C --> D["Performance Metrics: Accuracy, F1-Score"]
    D --> E["Key Finding: LLMs consistently outperform classical models"]
    E --> F["Outcome: Superior performance of LLMs for financial sentiment analysis"]
```
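The evaluation step compares models by accuracy and F1-score. A minimal sketch of both metrics, assuming illustrative three-class sentiment labels (positive/neutral/negative) rather than the paper's actual predictions:

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions matching the gold labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def macro_f1(y_true, y_pred):
    """Macro-averaged F1: per-class F1 averaged with equal class weight."""
    classes = set(y_true) | set(y_pred)
    f1s = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

# Illustrative labels only (not the paper's results).
y_true = ["pos", "neg", "neu", "pos", "neg"]
y_pred = ["pos", "neg", "pos", "pos", "neu"]
print(accuracy(y_true, y_pred))  # 0.6
```

Macro-averaging is a common choice for financial sentiment because the neutral class often dominates; averaging per-class F1 with equal weight keeps minority classes from being ignored.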