NIFTY Financial News Headlines Dataset
ArXiv ID: 2405.09747
Authors: Unknown
Abstract
We introduce and make publicly available the NIFTY Financial News Headlines dataset, designed to facilitate and advance research in financial market forecasting using large language models (LLMs). This dataset comprises two distinct versions tailored for different modeling approaches: (i) NIFTY-LM, which targets supervised fine-tuning (SFT) of LLMs with an auto-regressive, causal language-modeling objective, and (ii) NIFTY-RL, formatted specifically for alignment methods (such as reinforcement learning from human feedback, RLHF) that align LLMs via rejection sampling and reward modeling. Each dataset version provides curated, high-quality data incorporating comprehensive metadata, market indices, and deduplicated financial news headlines systematically filtered and ranked to suit modern LLM frameworks. We also include experiments demonstrating some applications of the dataset, such as stock price movement prediction and the role of LLM embeddings in information acquisition/richness. The NIFTY dataset, along with utilities (such as systematically truncating a prompt's context length), is available on Hugging Face at https://huggingface.co/datasets/raeidsaqur/NIFTY.
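As a rough illustration, the snippet below sketches how one might load the two dataset versions with the Hugging Face `datasets` library. The configuration and split names ("nifty-lm", "nifty-rl", "train") are assumptions for illustration, not taken from the dataset card; consult the card at the URL above for the actual layout.

```python
# Minimal sketch (not the authors' exact API): loading NIFTY from Hugging Face.
# Configuration/split names below are assumptions; check the dataset card
# at https://huggingface.co/datasets/raeidsaqur/NIFTY for the real ones.
from datasets import load_dataset

# NIFTY-LM: records intended for supervised fine-tuning (SFT) with a causal LM objective.
nifty_lm = load_dataset("raeidsaqur/NIFTY", name="nifty-lm", split="train")

# NIFTY-RL: records formatted for alignment (reward modeling / rejection sampling).
nifty_rl = load_dataset("raeidsaqur/NIFTY", name="nifty-rl", split="train")

print(nifty_lm.column_names)  # inspect metadata, market-index, and headline fields
print(nifty_lm[0])            # one dated example with headlines and market context
```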
Keywords: large language models, financial dataset, reinforcement learning, NIFTY, stock prediction, equity
Complexity vs Empirical Score
- Math Complexity: 2.0/10
- Empirical Rigor: 6.5/10
- Quadrant: Street Traders
- Why: The paper introduces a publicly available, structured dataset with accompanying experiments and code for training LLMs, demonstrating a practical focus on implementation and data utility. While the methodology involves complex LLM training paradigms (SFT/RLHF), the mathematics is primarily algorithmic and architectural rather than theoretically dense, with no heavy derivations or novel proofs.
```mermaid
flowchart TD
A["Research Goal:<br>Forecast Markets using LLMs"] --> B["Data Curation<br>NIFTY Dataset Creation"]
B --> C["Model Alignment Methods"]
C --> D["Supervised Fine-Tuning SFT<br>NIFTY-LM"]
C --> E["Alignment via RLHF<br>NIFTY-RL"]
D --> F["Computational Process:<br>Embedding Extraction &<br>Market Prediction"]
E --> F
F --> G["Key Findings:<br>Dataset Efficacy &<br>LLM Embedding Utility"]
```
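The "Embedding Extraction & Market Prediction" step in the flowchart can be pictured with the hedged sketch below: a frozen language model produces headline embeddings, and a lightweight classifier predicts a movement label. The backbone ("gpt2"), the label convention, and the toy examples are placeholders, not the paper's actual configuration.

```python
# Hedged sketch of an embedding-based movement-prediction experiment.
# Model name, label set, and example records are illustrative assumptions.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("gpt2")   # placeholder backbone
model = AutoModel.from_pretrained("gpt2").eval()

def embed(text: str) -> torch.Tensor:
    """Mean-pool the final hidden states as a fixed-length text embedding."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0)            # (dim,)

# Toy stand-ins for (headline, movement-label) pairs drawn from the dataset.
days = [("Markets rally on rate-cut hopes", "Rise"),
        ("Bank earnings miss estimates", "Fall")]
X = torch.stack([embed(text) for text, _ in days]).numpy()
y = [label for _, label in days]

clf = LogisticRegression(max_iter=1000).fit(X, y)   # simple movement classifier
print(clf.predict(X))
```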