NIFTY Financial News Headlines Dataset

NIFTY Financial News Headlines Dataset ArXiv ID: 2405.09747 “View on arXiv” Authors: Unknown Abstract We introduce and make publicly available the NIFTY Financial News Headlines dataset, designed to facilitate and advance research in financial market forecasting using large language models (LLMs). This dataset comprises two distinct versions tailored for different modeling approaches: (i) NIFTY-LM, which targets supervised fine-tuning (SFT) of LLMs with an auto-regressive, causal language-modeling objective, and (ii) NIFTY-RL, formatted specifically for alignment methods (like reinforcement learning from human feedback (RLHF)) to align LLMs via rejection sampling and reward modeling. Each dataset version provides curated, high-quality data incorporating comprehensive metadata, market indices, and deduplicated financial news headlines systematically filtered and ranked to suit modern LLM frameworks. We also include experiments demonstrating applications of the dataset in tasks such as stock price movement prediction and the role of LLM embeddings in information acquisition/richness. The NIFTY dataset, along with utilities (such as systematically truncating a prompt's context length), is available on Hugging Face at https://huggingface.co/datasets/raeidsaqur/NIFTY. ...
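As a quick orientation, here is a minimal sketch of pulling the dataset from the Hugging Face Hub at the path given above and applying a simple context-length truncation of the kind the bundled utilities automate. The split layout and the `prompt` field name are assumptions for illustration, not confirmed by the abstract.

```python
# Minimal sketch: load NIFTY from the Hugging Face Hub and truncate prompts.
# The "prompt" field name is an assumption for illustration.
from datasets import load_dataset
from transformers import AutoTokenizer

ds = load_dataset("raeidsaqur/NIFTY")        # dataset path given in the abstract
tok = AutoTokenizer.from_pretrained("gpt2")  # any causal-LM tokenizer works here

MAX_TOKENS = 1024

def truncate_prompt(example):
    # Keep the most recent tokens, on the view that the newest headlines
    # matter most for next-day movement; this mirrors, but is not,
    # the dataset's own truncation utility.
    ids = tok(example["prompt"])["input_ids"][-MAX_TOKENS:]
    example["prompt"] = tok.decode(ids)
    return example

ds = ds.map(truncate_prompt)
```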

May 16, 2024 · 2 min · Research Team

Construction of Domain-specified Japanese Large Language Model for Finance through Continual Pre-training

Construction of Domain-specified Japanese Large Language Model for Finance through Continual Pre-training ArXiv ID: 2404.10555 “View on arXiv” Authors: Unknown Abstract Large language models (LLMs) are now widely used in various fields, including finance. However, no Japanese financial-specific LLMs have been proposed yet. Hence, this study aims to construct a Japanese financial-specific LLM through continual pre-training. Before tuning, we constructed Japanese financial-focused datasets for continual pre-training. As a base model, we employed a Japanese LLM that achieved state-of-the-art performance on Japanese financial benchmarks among 10-billion-parameter-class models. After continual pre-training using the datasets and the base model, the tuned model performed better than the original model on the Japanese financial benchmarks. Moreover, a comparison of outputs reveals that the tuned model's answers tend to be better than the original model's in both quality and length. These findings indicate that domain-specific continual pre-training is also effective for LLMs. The tuned model is publicly available on Hugging Face. ...
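For readers unfamiliar with the technique, this is a minimal sketch of continual pre-training with a causal language-modeling objective using the Hugging Face Trainer. The base checkpoint name, corpus file, and hyperparameters are placeholders, not the paper's actual choices.

```python
# Sketch of domain continual pre-training with a causal-LM objective.
# Model name, corpus path, and hyperparameters are illustrative placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "your-japanese-10b-base-model"  # hypothetical base checkpoint
tok = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

corpus = load_dataset("text", data_files={"train": "finance_ja.txt"})

def tokenize(batch):
    return tok(batch["text"], truncation=True, max_length=2048)

tokenized = corpus["train"].map(tokenize, batched=True, remove_columns=["text"])
collator = DataCollatorForLanguageModeling(tok, mlm=False)  # causal LM, no masking

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="ckpt",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        num_train_epochs=1,
        learning_rate=1e-5,  # a low LR helps limit catastrophic forgetting
    ),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
```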

April 16, 2024 · 2 min · Research Team

Construction of a Japanese Financial Benchmark for Large Language Models

Construction of a Japanese Financial Benchmark for Large Language Models ArXiv ID: 2403.15062 “View on arXiv” Authors: Unknown Abstract With the recent development of large language models (LLMs), the necessity of models that focus on specific domains and languages has been discussed. There is also a growing need for benchmarks to evaluate the performance of current LLMs in each domain. Therefore, in this study, we constructed a benchmark comprising multiple tasks specific to the Japanese and financial domains and performed benchmark measurements on several models. Consequently, we confirmed that GPT-4 is currently outstanding and that the constructed benchmark functions effectively. According to our analysis, the benchmark can differentiate scores among models across all performance ranges by combining tasks of different difficulties. ...
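The design point about mixing difficulties can be made concrete with a toy aggregation: an easy task separates weak models while a hard task separates strong ones, so a combined score stays discriminative across the whole range. Task names and numbers below are invented for illustration.

```python
# Toy illustration: combining tasks of different difficulty keeps the
# aggregate benchmark score discriminative across all performance ranges.
# Task names and scores are invented, not from the paper.
scores = {
    "model_a": {"easy_qa": 0.95, "medium_cls": 0.70, "hard_audit": 0.30},
    "model_b": {"easy_qa": 0.90, "medium_cls": 0.55, "hard_audit": 0.10},
}

def benchmark_score(per_task: dict[str, float]) -> float:
    # Simple macro-average over tasks; easy tasks separate weak models,
    # hard tasks separate strong ones.
    return sum(per_task.values()) / len(per_task)

for name, per_task in scores.items():
    print(name, round(benchmark_score(per_task), 3))
```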

March 22, 2024 · 2 min · Research Team

Stress index strategy enhanced with financial news sentiment analysis for the equity markets

Stress index strategy enhanced with financial news sentiment analysis for the equity markets ArXiv ID: 2404.00012 “View on arXiv” Authors: Unknown Abstract This paper introduces a new risk-on/risk-off strategy for the stock market, which combines a financial stress indicator with a sentiment analysis done by ChatGPT reading and interpreting Bloomberg daily market summaries. Forecasts of market stress derived from volatility and credit spreads are enhanced when combined with the financial news sentiment derived from GPT-4. As a result, the strategy shows improved performance, evidenced by a higher Sharpe ratio and reduced maximum drawdowns. The improved performance is consistent across the NASDAQ, the S&P 500 and the six major equity markets, indicating that the method generalises across equities markets. ...
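A minimal sketch of the combination rule described above might look as follows: go risk-off only when the stress forecast and the news sentiment agree that conditions are deteriorating. The input series, thresholds, and signal convention are assumptions, not the paper's calibrated values.

```python
import pandas as pd

# Hypothetical inputs: a daily market-stress forecast in [0, 1] built from
# volatility and credit spreads, and a GPT-derived news sentiment in [-1, 1].
# Thresholds are illustrative, not the paper's calibrated values.
def risk_signal(stress: pd.Series, sentiment: pd.Series,
                stress_cut: float = 0.6, sent_cut: float = -0.2) -> pd.Series:
    # De-risk only when high predicted stress coincides with negative news.
    risk_off = (stress > stress_cut) & (sentiment < sent_cut)
    return (~risk_off).astype(int)  # 1 = hold equities (risk-on), 0 = de-risk

# Usage: shift the signal by one day before applying it, to avoid lookahead:
# exposure = risk_signal(stress, sentiment).shift(1).fillna(0)
```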

March 12, 2024 · 2 min · Research Team

Ploutos: Towards interpretable stock movement prediction with financial large language model

Ploutos: Towards interpretable stock movement prediction with financial large language model ArXiv ID: 2403.00782 “View on arXiv” Authors: Unknown Abstract Recent advancements in large language models (LLMs) have opened new pathways for many domains. However, the full potential of LLMs in financial investments remains largely untapped. There are two main challenges for typical deep learning-based methods for quantitative finance. First, they struggle to fuse textual and numerical information flexibly for stock movement prediction. Second, traditional methods lack clarity and interpretability, which impedes their application in scenarios where the justification for predictions is essential. To solve the above challenges, we propose Ploutos, a novel financial LLM framework that consists of PloutosGen and PloutosGPT. PloutosGen contains multiple primary experts that can analyze different modal data, such as text and numbers, and provide quantitative strategies from different perspectives. PloutosGPT then combines their insights and predictions and generates interpretable rationales. To generate accurate and faithful rationales, the training strategy of PloutosGPT leverages a rearview-mirror prompting mechanism to guide GPT-4 in generating rationales, and a dynamic token weighting mechanism to fine-tune the LLM by increasing the weight of key tokens. Extensive experiments show our framework outperforms the state-of-the-art methods on both prediction accuracy and interpretability. ...
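The dynamic token weighting idea can be sketched as a weighted cross-entropy in which key rationale tokens receive a larger loss weight. How Ploutos actually identifies key tokens is not specified in the abstract, so the key-token mask below is assumed given.

```python
import torch
import torch.nn.functional as F

# Sketch of a dynamic token-weighting loss: key rationale tokens get a
# larger weight in the cross-entropy. The key-token mask is assumed given;
# this is an illustration of the idea, not Ploutos's exact formulation.
def weighted_lm_loss(logits: torch.Tensor,    # (batch, seq, vocab)
                     labels: torch.Tensor,    # (batch, seq)
                     key_mask: torch.Tensor,  # (batch, seq), bool
                     key_weight: float = 2.0) -> torch.Tensor:
    # Per-token cross-entropy; transpose to (batch, vocab, seq) as expected.
    per_token = F.cross_entropy(logits.transpose(1, 2), labels,
                                reduction="none")  # (batch, seq)
    weights = torch.where(key_mask, key_weight, 1.0)
    return (per_token * weights).sum() / weights.sum()
```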

February 18, 2024 · 2 min · Research Team

Can ChatGPT Compute Trustworthy Sentiment Scores from Bloomberg Market Wraps?

Can ChatGPT Compute Trustworthy Sentiment Scores from Bloomberg Market Wraps? ArXiv ID: 2401.05447 “View on arXiv” Authors: Unknown Abstract We used a dataset of daily Bloomberg Financial Market Summaries from 2010 to 2023, reposted on large financial media, to determine how global news headlines may affect stock market movements using ChatGPT and a two-stage prompt approach. We document a statistically significant positive correlation between the sentiment score and future equity market returns over the short to medium term, which reverts to a negative correlation over longer horizons. Validation of this correlation pattern across multiple equity markets indicates its robustness across equity regions and its resilience to non-linearity, as evidenced by a comparison of Pearson and Spearman correlations. Finally, we provide an estimate of the optimal horizon that strikes a balance between reactivity to new information and correlation. ...
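The horizon analysis can be reproduced in outline: correlate a daily sentiment score with forward market returns at several horizons and compare Pearson against Spearman, the latter being robust to non-linearity. The inputs are assumed to be aligned daily series; the horizon grid is an arbitrary example.

```python
import pandas as pd
from scipy.stats import pearsonr, spearmanr

# Sketch of the horizon analysis: correlate daily sentiment with h-day
# forward returns and compare Pearson vs Spearman correlations.
# Inputs are assumed aligned daily series; horizons are illustrative.
def horizon_correlations(sentiment: pd.Series, prices: pd.Series,
                         horizons=(1, 5, 21, 63, 252)) -> pd.DataFrame:
    rows = []
    for h in horizons:
        fwd = prices.shift(-h) / prices - 1.0  # h-day forward return
        df = pd.concat([sentiment, fwd], axis=1).dropna()
        rows.append({
            "horizon_days": h,
            "pearson": pearsonr(df.iloc[:, 0], df.iloc[:, 1])[0],
            "spearman": spearmanr(df.iloc[:, 0], df.iloc[:, 1])[0],
        })
    return pd.DataFrame(rows)
```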

January 9, 2024 · 2 min · Research Team

Can Large Language Models Beat Wall Street? Unveiling the Potential of AI in Stock Selection

Can Large Language Models Beat Wall Street? Unveiling the Potential of AI in Stock Selection ArXiv ID: 2401.03737 “View on arXiv” Authors: Unknown Abstract This paper introduces MarketSenseAI, an innovative framework leveraging GPT-4's advanced reasoning for selecting stocks in financial markets. By integrating Chain of Thought and In-Context Learning, MarketSenseAI analyzes diverse data sources, including market trends, news, fundamentals, and macroeconomic factors, to emulate expert investment decision-making. The development, implementation, and validation of the framework are discussed in detail, underscoring its capability to generate actionable and interpretable investment signals. A notable feature of this work is employing GPT-4 both as a predictive mechanism and signal evaluator, revealing the significant impact of AI-generated explanations on signal accuracy, reliability, and acceptance. Through empirical testing on the competitive S&P 100 stocks over a 15-month period, MarketSenseAI demonstrated exceptional performance, delivering excess alpha of 10% to 30% and achieving a cumulative return of up to 72% over the period, while maintaining a risk profile comparable to the broader market. Our findings highlight the transformative potential of Large Language Models in financial decision-making, marking a significant leap in integrating generative AI into financial analytics and investment strategies. ...
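The dual use of GPT-4 as both predictor and evaluator can be sketched as two chained calls: one produces a signal with a chain-of-thought rationale, the second scores that rationale. The prompts below are paraphrased illustrations, not the paper's actual prompts.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Sketch of the predictor/evaluator pattern: a first GPT-4 call produces
# a signal with a chain-of-thought rationale, a second call evaluates it.
# Prompts are paraphrased illustrations, not the paper's prompts.
def predict_and_evaluate(context: str) -> tuple[str, str]:
    signal = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content":
                   "Think step by step about the evidence below, then output "
                   "BUY, HOLD, or SELL with a short rationale.\n\n" + context}],
    ).choices[0].message.content
    verdict = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content":
                   "Rate the reasoning of the following investment signal "
                   "from 1-10 and flag any unsupported claims.\n\n" + signal}],
    ).choices[0].message.content
    return signal, verdict
```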

January 8, 2024 · 2 min · Research Team

Deficiency of Large Language Models in Finance: An Empirical Examination of Hallucination

Deficiency of Large Language Models in Finance: An Empirical Examination of Hallucination ArXiv ID: 2311.15548 “View on arXiv” Authors: Unknown Abstract The hallucination issue is recognized as a fundamental deficiency of large language models (LLMs), especially when applied to fields such as finance, education, and law. Despite the growing concerns, there has been a lack of empirical investigation. In this paper, we provide an empirical examination of LLMs' hallucination behaviors in financial tasks. First, we empirically investigate LLMs' ability to explain financial concepts and terminology. Second, we assess LLMs' capacity to query historical stock prices. Third, to alleviate the hallucination issue, we evaluate the efficacy of four practical methods: few-shot learning, Decoding by Contrasting Layers (DoLa), Retrieval-Augmented Generation (RAG), and prompt-based tool learning, in which the model generates a query command for an external function. Finally, our major finding is that off-the-shelf LLMs experience serious hallucination behaviors in financial tasks. Therefore, there is an urgent need for research efforts to mitigate LLMs' hallucination. ...
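The prompt-based tool-learning pattern evaluated here can be sketched simply: rather than answering a price question from parametric memory (where LLMs hallucinate), the model is prompted to emit a structured query command, which a trusted data function executes. The price store and command schema below are stubs with a made-up illustrative value.

```python
import json

# Sketch of prompt-based tool learning: the LLM emits a structured query
# command instead of a price, and a trusted function executes it.
# The price store is a stub with an invented illustrative value.
PRICE_DB = {("AAPL", "2023-11-24"): 189.97}  # illustrative value only

def run_tool(command_json: str) -> str:
    cmd = json.loads(command_json)
    price = PRICE_DB.get((cmd["ticker"], cmd["date"]))
    if price is None:
        return "no data"
    return f'{cmd["ticker"]} closed at {price} on {cmd["date"]}'

# The LLM is instructed to answer only with a command like the following,
# which the application then executes against real data:
llm_output = '{"tool": "get_close_price", "ticker": "AAPL", "date": "2023-11-24"}'
print(run_tool(llm_output))
```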

November 27, 2023 · 2 min · Research Team

Benchmarking Large Language Model Volatility

Benchmarking Large Language Model Volatility ArXiv ID: 2311.15180 “View on arXiv” Authors: Unknown Abstract The impact of non-deterministic outputs from Large Language Models (LLMs) is not well examined for financial text understanding tasks. Through a compelling case study on investing in the US equity market via news sentiment analysis, we uncover substantial variability in sentence-level sentiment classification results, underscoring the innate volatility of LLM outputs. These uncertainties cascade downstream, leading to more significant variations in portfolio construction and return. While tweaking the temperature parameter in the language model decoder presents a potential remedy, it comes at the expense of stifled creativity. Similarly, while ensembling multiple outputs mitigates the effect of volatile outputs, it demands a notable computational investment. This work furnishes practitioners with invaluable insights for adeptly navigating uncertainty in the integration of LLMs into financial decision-making, particularly in scenarios dictated by non-deterministic information. ...
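The two mitigations weighed above trade off differently: lowering temperature reduces output variance at the cost of diversity, while ensembling stabilizes labels at N times the inference cost. A minimal sketch of the ensembling approach, where `classify` stands in for any sampled LLM sentiment call:

```python
from collections import Counter

# Sketch of output ensembling: sample the same classification n times and
# take a majority vote. `classify` is a stand-in for any non-deterministic
# LLM sentiment call; the cost is n inference calls per headline.
def ensemble_sentiment(classify, headline: str, n: int = 5,
                       temperature: float = 0.7) -> str:
    votes = Counter(classify(headline, temperature=temperature)
                    for _ in range(n))
    return votes.most_common(1)[0][0]
```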

November 26, 2023 · 2 min · Research Team

Towards reducing hallucination in extracting information from financial reports using Large Language Models

Towards reducing hallucination in extracting information from financial reports using Large Language Models ArXiv ID: 2310.10760 “View on arXiv” Authors: Unknown Abstract For a financial analyst, the question and answer (Q&A) segment of a company's financial report is a crucial piece of information for various analysis and investment decisions. However, extracting valuable insights from the Q&A section has posed considerable challenges: conventional methods such as detailed reading and note-taking lack scalability and are susceptible to human error, while Optical Character Recognition (OCR) and similar techniques encounter difficulties in accurately processing unstructured transcript text, often missing subtle linguistic nuances that drive investor decisions. Here, we demonstrate the use of Large Language Models (LLMs) to efficiently and rapidly extract information from earnings report transcripts while ensuring high accuracy, transforming the extraction process and reducing hallucination by combining a retrieval-augmented generation technique with metadata. We evaluate the outcomes of various LLMs with and without our proposed approach using objective metrics for Q&A systems, and empirically demonstrate the superiority of our method. ...
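A minimal sketch of the RAG-plus-metadata pattern described above: restrict retrieval to transcript chunks whose metadata matches the question's company and quarter, then answer only from the retrieved text. The chunk schema, `relevance` scorer, and `llm` call are stubs; the prompt wording is an assumption.

```python
# Sketch of RAG with a metadata filter for earnings-call Q&A extraction.
# `relevance` and `llm` are injected stubs; the chunk schema is assumed.
def extract_answer(question: str, chunks: list[dict], relevance, llm,
                   company: str, quarter: str, top_k: int = 4) -> str:
    # Metadata filter first: wrong-company or wrong-quarter chunks are a
    # major hallucination source, so they never reach the prompt.
    pool = [c for c in chunks
            if c["company"] == company and c["quarter"] == quarter]
    top = sorted(pool, key=lambda c: relevance(question, c["text"]),
                 reverse=True)[:top_k]
    context = "\n\n".join(c["text"] for c in top)
    prompt = ("Answer strictly from the transcript excerpts below; "
              "reply 'not stated' if the excerpts do not contain it.\n\n"
              f"{context}\n\nQ: {question}")
    return llm(prompt)
```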

October 16, 2023 · 2 min · Research Team