SeQwen at the Financial Misinformation Detection Challenge Task: Sequential Learning for Claim Verification and Explanation Generation in Financial Domains

ArXiv ID: 2412.00549

Authors: Unknown

Abstract

This paper presents the system description of our entry to the COLING 2025 Financial Misinformation Detection (FMD) challenge, focusing on misinformation detection in financial domains. We experimented with a combination of large language models, including Qwen, Mistral, and Gemma-2, and leveraged pre-processing and sequential learning not only to identify fraudulent financial content but also to generate coherent, concise explanations that clarify the rationale behind the classifications. Our approach achieved competitive results, with an F1-score of 0.8283 for classification and a ROUGE-1 of 0.7253 for explanation generation. This work highlights the potential of LLMs in financial applications, offering insights into their capabilities for combating misinformation and enhancing transparency, while identifying areas for future improvement in robustness and domain adaptation.
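The two reported metrics are standard: a binary F1-score for the misinformation label and ROUGE-1 (unigram overlap) between generated and reference explanations. As a rough sketch of how such numbers are computed (the helper names below are illustrative, not from the paper; official evaluations typically use library implementations such as `rouge-score`):

```python
from collections import Counter

def f1_score(y_true, y_pred):
    """Binary F1 where label 1 = misinformation."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def rouge1_f(reference, candidate):
    """ROUGE-1 F-measure: unigram overlap between two explanation strings."""
    ref = Counter(reference.lower().split())
    cand = Counter(candidate.lower().split())
    overlap = sum((ref & cand).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

A perfect explanation match yields `rouge1_f == 1.0`; the paper's 0.7253 indicates substantial but imperfect lexical overlap with reference rationales.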

Keywords: Large Language Models (LLMs), Misinformation Detection, Sequential Learning, Explanation Generation, Financial NLP, General Financial Information

Complexity vs Empirical Score

  • Math Complexity: 1.5/10
  • Empirical Rigor: 7.5/10
  • Quadrant: Street Traders
  • Why: The paper uses standard NLP metrics (F1, ROUGE) and detailed fine-tuning procedures with specific datasets and model configurations, indicating strong empirical implementation. The math is primarily focused on model application rather than novel theoretical developments.
```mermaid
flowchart TD
    Start["Research Goal:<br>Financial Misinformation Detection<br>& Explanation Generation"] --> Methodology
    subgraph Methodology["Key Methodology"]
        Data["Input: Financial Claims & Dataset<br>(FMD Challenge)"]
        Proc["Sequential Learning Pipeline:<br>Preprocessing & Sequential Processing"]
        Models["Model Ensemble:<br>Qwen, Mistral, Gemma-2"]
        Data --> Proc --> Models
    end
    Models --> Computation["Computational Process:<br>Claim Verification &<br>Coherent Explanation Generation"]
    Computation --> Outcomes["Key Outcomes:<br>Class. F1: 0.8283 | Expl. ROUGE-1: 0.7253<br>LLM Potential for Financial Transparency"]
```