CatMemo at the FinLLM Challenge Task: Fine-Tuning Large Language Models using Data Fusion in Financial Applications
ArXiv ID: 2407.01953
Authors: Unknown
Abstract
The integration of Large Language Models (LLMs) into financial analysis has garnered significant attention in the NLP community. This paper presents our solution to the IJCAI-2024 FinLLM challenge, investigating the capabilities of LLMs within three critical areas of financial tasks: financial classification, financial text summarization, and single stock trading. We adopted Llama3-8B and Mistral-7B as base models and fine-tuned them using Parameter Efficient Fine-Tuning (PEFT) with Low-Rank Adaptation (LoRA). To enhance model performance, we combined the datasets from Task 1 and Task 2 for data fusion. Our approach tackles these diverse tasks in a comprehensive and integrated manner, showcasing LLMs’ capacity to address complex financial tasks with improved accuracy and decision-making capabilities.
Keywords: Large Language Models (LLM), Parameter Efficient Fine-Tuning (PEFT), Low-Rank Adaptation (LoRA), Financial sentiment analysis, Trading strategy, Equities
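The fine-tuning setup described in the abstract can be illustrated with a short sketch. Below is a minimal example of LoRA-based PEFT using the Hugging Face `transformers` and `peft` libraries; the model identifier is the public Llama3-8B checkpoint, and all hyperparameters (rank, alpha, dropout, target modules) are assumed for illustration, not the paper's reported settings.

```python
# Minimal sketch of LoRA fine-tuning setup with Hugging Face PEFT.
# Hyperparameters are illustrative assumptions, not the paper's settings.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, TaskType

base_model = "meta-llama/Meta-Llama-3-8B"  # or "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.bfloat16)

# LoRA injects trainable low-rank matrices into selected projection layers
# while the base weights stay frozen -- the core of parameter-efficient tuning.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16,                                 # rank of the low-rank update (assumed)
    lora_alpha=32,                        # scaling factor (assumed)
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections (assumed)
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters
```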
Complexity vs Empirical Score
- Math Complexity: 1.5/10
- Empirical Rigor: 8.0/10
- Quadrant: Street Traders
- Why: The paper is heavily implementation-focused, detailing specific model architectures (Llama3-8B, Mistral-7B), parameter-efficient fine-tuning (PEFT/LoRA), hardware specifications (RTX-A6000 GPUs), and data fusion strategies with clear validation metrics. However, it contains almost no advanced mathematical derivations or theoretical proofs, relying instead on empirical testing.
```mermaid
flowchart TD
    A["Research Goal:<br>Evaluate LLMs in Financial Analysis"] --> B{"Base Models:<br>Llama3-8B & Mistral-7B"}
    B --> C{"Data Fusion Strategy"}
    C --> D["Parameter-Efficient<br>Fine-Tuning (PEFT)"]
    D --> E["Low-Rank<br>Adaptation (LoRA)"]
    E --> F["Task Execution:<br>Classify, Summarize, Trade"]
    F --> G["Outcomes:<br>Enhanced Accuracy &<br>Financial Decision Making"]
```