
Re(Visiting) Time Series Foundation Models in Finance

Re(Visiting) Time Series Foundation Models in Finance ArXiv ID: 2511.18578 “View on arXiv” Authors: Eghbal Rahimikia, Hao Ni, Weiguan Wang Abstract Financial time series forecasting is central to trading, portfolio optimization, and risk management, yet it remains challenging due to noisy, non-stationary, and heterogeneous data. Recent advances in time series foundation models (TSFMs), inspired by large language models, offer a new paradigm for learning generalizable temporal representations from large and diverse datasets. This paper presents the first comprehensive empirical study of TSFMs in global financial markets. Using a large-scale dataset of daily excess returns across diverse markets, we evaluate zero-shot inference, fine-tuning, and pre-training from scratch against strong benchmark models. We find that off-the-shelf pre-trained TSFMs perform poorly in zero-shot and fine-tuning settings, whereas models pre-trained from scratch on financial data achieve substantial forecasting and economic improvements, underscoring the value of domain-specific adaptation. Increasing the dataset size, incorporating synthetic data augmentation, and applying hyperparameter tuning further enhance performance. ...
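To make the evaluation protocol concrete, here is a minimal sketch of the zero-shot setting: a rolling window of daily excess returns is fed to a pre-trained TSFM and its forecasts are scored against a naive zero-return benchmark. `tsfm_predict` is a placeholder for whatever inference call the chosen checkpoint exposes; the window length, horizon, and MSE metric are assumptions, not the paper's exact setup.

```python
import numpy as np

def naive_forecast(context: np.ndarray, horizon: int) -> np.ndarray:
    """Benchmark: predict zero excess return over the horizon."""
    return np.zeros(horizon)

def evaluate_zero_shot(tsfm_predict, panel: dict[str, np.ndarray],
                       context_len: int = 252, horizon: int = 1) -> dict[str, float]:
    """Rolling zero-shot evaluation of a TSFM against the naive benchmark.

    `tsfm_predict(context, horizon)` stands in for the inference call of any
    pre-trained time series foundation model; `panel` maps market name to a
    1-D array of daily excess returns.
    """
    err_tsfm, err_naive = [], []
    for returns in panel.values():
        for t in range(context_len, len(returns) - horizon):
            ctx = returns[t - context_len:t]
            target = returns[t:t + horizon]
            err_tsfm.append(np.mean((tsfm_predict(ctx, horizon) - target) ** 2))
            err_naive.append(np.mean((naive_forecast(ctx, horizon) - target) ** 2))
    return {"mse_tsfm": float(np.mean(err_tsfm)),
            "mse_naive": float(np.mean(err_naive))}
```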

November 23, 2025 · 2 min · Research Team

One More Question is Enough: Expert Question Decomposition (EQD) Model for Domain Quantitative Reasoning

One More Question is Enough: Expert Question Decomposition (EQD) Model for Domain Quantitative Reasoning ArXiv ID: 2510.01526 “View on arXiv” Authors: Mengyu Wang, Sotirios Sabanis, Miguel de Carvalho, Shay B. Cohen, Tiejun Ma Abstract Domain-specific quantitative reasoning remains a major challenge for large language models (LLMs), especially in fields requiring expert knowledge and complex question answering (QA). In this work, we propose Expert Question Decomposition (EQD), an approach designed to balance the use of domain knowledge with computational efficiency. EQD is built on a two-step fine-tuning framework and guided by a reward function that measures the effectiveness of generated sub-questions in improving QA outcomes. It requires only a few thousand training examples and a single A100 GPU for fine-tuning, with inference time comparable to zero-shot prompting. Beyond its efficiency, EQD outperforms state-of-the-art domain-tuned models and advanced prompting strategies. We evaluate EQD in the financial domain, characterized by specialized knowledge and complex quantitative reasoning, across four benchmark datasets. Our method consistently improves QA performance by 0.6% to 10.5% across different LLMs. Our analysis reveals an important insight: in domain-specific QA, a single supporting question often provides greater benefit than detailed guidance steps. ...
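A minimal sketch of the EQD inference pattern, assuming a generic `generate(prompt)` completion call: one supporting question is generated and answered before the original question is attempted. The prompt wording below is illustrative, not the paper's template.

```python
def eqd_answer(generate, question: str, context: str = "") -> str:
    """Answer a domain QA question with one expert sub-question (EQD-style sketch).

    `generate(prompt)` is a placeholder for any LLM completion call.
    """
    # Step 1: ask the (fine-tuned) decomposition model for a single supporting question.
    sub_q = generate(
        f"Question: {question}\n"
        "Write ONE supporting question whose answer would help solve the question above."
    )
    # Step 2: answer the supporting question, then the original question, conditioning on both.
    sub_a = generate(f"{context}\nQuestion: {sub_q}\nAnswer:")
    return generate(
        f"{context}\n"
        f"Supporting question: {sub_q}\nSupporting answer: {sub_a}\n"
        f"Original question: {question}\nFinal answer:"
    )
```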

October 1, 2025 · 2 min · Research Team

DELPHYNE: A Pre-Trained Model for General and Financial Time Series

DELPHYNE: A Pre-Trained Model for General and Financial Time Series ArXiv ID: 2506.06288 “View on arXiv” Authors: Xueying Ding, Aakriti Mittal, Achintya Gopal Abstract Time-series data is a vital modality within data science communities. This is particularly valuable in financial applications, where it helps in detecting patterns, understanding market behavior, and making informed decisions based on historical data. Recent advances in language modeling have led to the rise of time-series pre-trained models that are trained on vast collections of datasets and applied to diverse tasks across financial domains. However, across financial applications, existing time-series pre-trained models have not shown boosts in performance over simple finance benchmarks in both zero-shot and fine-tuning settings. This phenomenon occurs because of a i) lack of financial data within the pre-training stage, and ii) the negative transfer effect due to inherently different time-series patterns across domains. Furthermore, time-series data is continuous, noisy, and can be collected at varying frequencies and with varying lags across different variables, making this data more challenging to model than languages. To address the above problems, we introduce a Pre-trained MoDEL for FINance TimE-series (Delphyne). Delphyne achieves competitive performance to existing foundation and full-shot models with few fine-tuning steps on publicly available datasets, and also shows superior performances on various financial tasks. ...
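As a rough illustration of the few-step fine-tuning regime, the sketch below runs a handful of gradient steps on a pre-trained forecaster. The MSE loss, optimizer, and step count are assumptions rather than Delphyne's actual training recipe.

```python
import torch

def few_step_finetune(model: torch.nn.Module, batches, steps: int = 50,
                      lr: float = 1e-4) -> torch.nn.Module:
    """Generic few-step fine-tuning loop for a pre-trained time-series forecaster.

    `batches` is assumed to yield (context, target) tensors, and `model` is
    assumed to map a context tensor directly to a forecast tensor.
    """
    optim = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _, (context, target) in zip(range(steps), batches):
        optim.zero_grad()
        prediction = model(context)                      # context -> forecast
        loss = torch.nn.functional.mse_loss(prediction, target)
        loss.backward()
        optim.step()
    return model
```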

May 12, 2025 · 2 min · Research Team

FinRLlama: A Solution to LLM-Engineered Signals Challenge at FinRL Contest 2024

FinRLlama: A Solution to LLM-Engineered Signals Challenge at FinRL Contest 2024 ArXiv ID: 2502.01992 “View on arXiv” Authors: Unknown Abstract In response to Task II of the FinRL Challenge at ACM ICAIF 2024, this study proposes a novel prompt framework for fine-tuning large language models (LLMs) with Reinforcement Learning from Market Feedback (RLMF). Our framework incorporates market-specific features and short-term price dynamics to generate more precise trading signals. Traditional LLMs, while competent in sentiment analysis, lack contextual alignment for financial market applications. To bridge this gap, we fine-tune the LLaMA-3.2-3B-Instruct model using a custom RLMF prompt design that integrates historical market data and reward-based feedback. Our evaluation shows that this RLMF-tuned framework outperforms baseline methods in signal consistency and achieves tighter trading outcomes; it was awarded the winner of Task II. You can find the code for this project on GitHub. ...
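A hedged sketch of the two ingredients described above: a prompt that places recent market features next to the news text, and a market-feedback reward based on sign agreement with the realised next-day return. The prompt wording, feature set, and reward definition are illustrative, not the contest submission's.

```python
import numpy as np

def build_rlmf_prompt(ticker: str, closes: np.ndarray, news: str) -> str:
    """Compose a market-aware prompt (illustrative, not the contest template)."""
    returns = np.diff(closes) / closes[:-1]
    return (
        f"Ticker: {ticker}\n"
        f"Last 5 daily returns: {np.round(returns[-5:], 4).tolist()}\n"
        f"5-day volatility: {returns[-5:].std():.4f}\n"
        f"News: {news}\n"
        "Output one trading signal: BUY, SELL, or HOLD."
    )

def market_feedback_reward(signal: str, next_day_return: float) -> float:
    """Market-feedback reward: sign agreement between the signal and the realised return."""
    direction = {"BUY": 1.0, "SELL": -1.0, "HOLD": 0.0}.get(signal.strip().upper(), 0.0)
    return direction * float(np.sign(next_day_return))
```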

February 4, 2025 · 2 min · Research Team

Generating long-horizon stock buy signals with a neural language model

Generating long-horizon stock “buy” signals with a neural language model ArXiv ID: 2410.18988 “View on arXiv” Authors: Unknown Abstract This paper describes experiments on fine-tuning a small language model to generate forecasts of long-horizon stock price movements. Inputs to the model are narrative text from 10-K reports of large market capitalization companies in the S&P 500 index; the output is a forward-looking buy or sell decision. Price direction is predicted at discrete horizons up to 12 months after the report filing date. The results reported here demonstrate good out-of-sample statistical performance (F1-macro = 0.62) at medium to long investment horizons. In particular, the buy signals generated from 10-K text are found to be most precise at 6 and 9 months in the future. As measured by the F1 score, the buy signal provides between 4.8 and 9 percent improvement over a random stock selection model. In contrast, sell signals generated by the models do not perform well. This may be attributed to the highly imbalanced out-of-sample data, or perhaps to management drafting annual reports with a bias toward positive language. Cross-sectional analysis of performance by economic sector suggests that idiosyncratic reporting styles within industries are correlated with varying degrees and time scales of price movement predictability. ...
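The labeling and scoring steps can be made concrete with a short sketch: forward returns at a fixed horizon yield buy/sell labels, and predictions are scored with macro-averaged F1 via scikit-learn. The horizon in trading days and the toy labels below are illustrative, not the paper's data.

```python
import numpy as np
from sklearn.metrics import f1_score

def label_forward_return(prices: np.ndarray, filing_idx: int, horizon_days: int) -> str:
    """Label a filing 'buy' or 'sell' from the forward return at a fixed horizon
    (e.g. roughly 126 trading days for a 6-month horizon)."""
    fwd = prices[filing_idx + horizon_days] / prices[filing_idx] - 1.0
    return "buy" if fwd > 0 else "sell"

# Macro-averaged F1 over buy/sell predictions, the metric quoted in the abstract (toy data).
y_true = ["buy", "buy", "sell", "buy", "sell"]
y_pred = ["buy", "sell", "sell", "buy", "buy"]
print(f1_score(y_true, y_pred, average="macro", labels=["buy", "sell"]))
```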

October 9, 2024 · 2 min · Research Team

Optimizing Performance: How Compact Models Match or Exceed GPT's Classification Capabilities through Fine-Tuning

Optimizing Performance: How Compact Models Match or Exceed GPT’s Classification Capabilities through Fine-Tuning ArXiv ID: 2409.11408 “View on arXiv” Authors: Unknown Abstract In this paper, we demonstrate that non-generative, small-sized models such as FinBERT and FinDRoBERTa, when fine-tuned, can outperform GPT-3.5 and GPT-4 models in zero-shot learning settings in sentiment analysis for financial news. These fine-tuned models show comparable results to GPT-3.5 when it is fine-tuned on the task of determining market sentiment from daily financial news summaries sourced from Bloomberg. To fine-tune and compare these models, we created a novel database, which assigns a market score to each piece of news without human interpretation bias, systematically identifying the mentioned companies and analyzing whether their stocks have gone up, down, or remained neutral. Furthermore, the paper shows that the assumptions of Condorcet’s Jury Theorem do not hold, suggesting that fine-tuned small models are not independent of the fine-tuned GPT models, indicating behavioural similarities. Lastly, the resulting fine-tuned models are made publicly available on HuggingFace, providing a resource for further research in financial sentiment analysis and text classification. ...
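The paper's bias-free labeling idea, assigning each news item an up/down/neutral market score from the subsequent move of the mentioned company's stock, can be sketched as follows; the next-day horizon and the 0.5% neutrality threshold are assumptions, not the paper's exact rules.

```python
import numpy as np

def market_score(prices: np.ndarray, news_day: int, threshold: float = 0.005) -> str:
    """Assign an 'up'/'down'/'neutral' label to a news item from the next-day
    move of the mentioned company's stock (threshold is an assumed cut-off)."""
    move = prices[news_day + 1] / prices[news_day] - 1.0
    if move > threshold:
        return "up"
    if move < -threshold:
        return "down"
    return "neutral"
```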

August 22, 2024 · 2 min · Research Team

Fine-Tuning Large Language Models for Stock Return Prediction Using Newsflow

Fine-Tuning Large Language Models for Stock Return Prediction Using Newsflow ArXiv ID: 2407.18103 “View on arXiv” Authors: Unknown Abstract Large language models (LLMs) and their fine-tuning techniques have demonstrated superior performance in various language understanding and generation tasks. This paper explores fine-tuning LLMs for stock return forecasting with financial newsflow. In quantitative investing, return forecasting is fundamental for subsequent tasks like stock picking, portfolio optimization, etc. We formulate the model to include text representation and forecasting modules. We propose to compare the encoder-only and decoder-only LLMs, considering they generate text representations in distinct ways. The impact of these different representations on forecasting performance remains an open question. Meanwhile, we compare two simple methods of integrating LLMs’ token-level representations into the forecasting module. The experiments on real news and investment universes reveal that: (1) aggregated representations from LLMs’ token-level embeddings generally produce return predictions that enhance the performance of long-only and long-short portfolios; (2) in the relatively large investment universe, the decoder LLMs-based prediction model leads to stronger portfolios, whereas in the small universes, there are no consistent winners. Among the three LLMs studied (DeBERTa, Mistral, Llama), Mistral performs more robustly across different universes; (3) return predictions derived from LLMs’ text representations are a strong signal for portfolio construction, outperforming conventional sentiment scores. ...
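A minimal sketch of one simple way to integrate token-level LLM representations into the forecasting module: masked mean-pooling of the token embeddings followed by a single linear head predicting the forward return. The encoder interface follows the Hugging Face convention of returning `last_hidden_state`; the pooling and head choices are illustrative, not necessarily the paper's exact module.

```python
import torch
from torch import nn

class NewsReturnForecaster(nn.Module):
    """Aggregate token-level LLM embeddings into a stock-return forecast.

    `encoder` is any language model (encoder-only or decoder-only) whose output
    exposes last_hidden_state of shape (batch, seq_len, hidden).
    """
    def __init__(self, encoder: nn.Module, hidden_size: int):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        hidden = out.last_hidden_state                          # (B, T, H) token-level embeddings
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (hidden * mask).sum(dim=1) / mask.sum(dim=1)   # masked mean over tokens
        return self.head(pooled).squeeze(-1)                    # predicted forward return
```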

July 25, 2024 · 2 min · Research Team

FinLlama: Financial Sentiment Classification for Algorithmic Trading Applications

FinLlama: Financial Sentiment Classification for Algorithmic Trading Applications ArXiv ID: 2403.12285 “View on arXiv” Authors: Unknown Abstract There are multiple sources of financial news online which influence market movements and traders’ decisions. This highlights the need for accurate sentiment analysis, in addition to having appropriate algorithmic trading techniques, to arrive at better informed trading decisions. Standard lexicon-based sentiment approaches have demonstrated their power in aiding financial decisions. However, they are known to suffer from issues related to context sensitivity and word ordering. Large Language Models (LLMs) can also be used in this context, but they are not finance-specific and tend to require significant computational resources. To facilitate a finance-specific LLM framework, we introduce a novel approach based on the Llama 2 7B foundational model, in order to benefit from its generative nature and comprehensive language manipulation. This is achieved by fine-tuning the Llama 2 7B model on a small portion of supervised financial sentiment analysis data, so as to jointly handle the complexities of financial lexicon and context, and further equipping it with a neural network-based decision mechanism. Such a generator-classifier scheme, referred to as FinLlama, is trained not only to classify the sentiment valence but also quantify its strength, thus offering traders a nuanced insight into financial news articles. Complementing this, the implementation of parameter-efficient fine-tuning through LoRA optimises trainable parameters, thus minimising computational and memory requirements, without sacrificing accuracy. Simulation results demonstrate the ability of the proposed FinLlama to provide a framework for enhanced portfolio management decisions and increased market returns. These results underpin the ability of FinLlama to construct high-return portfolios which exhibit enhanced resilience, even during volatile periods and unpredictable market events. ...
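A sketch of the parameter-efficient setup described above, using the `peft` library to attach LoRA adapters to a Llama 2 7B backbone with a 3-class sentiment head. The rank, alpha, dropout, and target modules are illustrative defaults, not FinLlama's reported configuration.

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSequenceClassification

# Llama 2 7B backbone with a 3-class (positive/neutral/negative) classification head.
# Note: the meta-llama checkpoint is gated and requires accepting the license on the Hub.
base = AutoModelForSequenceClassification.from_pretrained(
    "meta-llama/Llama-2-7b-hf", num_labels=3)

# LoRA adapters on the attention projections; values are illustrative defaults.
lora = LoraConfig(task_type=TaskType.SEQ_CLS, r=16, lora_alpha=32,
                  lora_dropout=0.05, target_modules=["q_proj", "v_proj"])
model = get_peft_model(base, lora)
model.print_trainable_parameters()   # only the adapters (and head) remain trainable
```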

March 18, 2024 · 2 min · Research Team

Shai: A large language model for asset management

Shai: A large language model for asset management ArXiv ID: 2312.14203 “View on arXiv” Authors: Unknown Abstract This paper introduces “Shai,” a 10B-level large language model specifically designed for the asset management industry, built upon an open-source foundational model. With continuous pre-training and fine-tuning using a targeted corpus, Shai demonstrates enhanced performance in tasks relevant to its domain, outperforming baseline models. Our research includes the development of an innovative evaluation framework, which integrates professional qualification exams, tailored tasks, open-ended question answering, and safety assessments, to comprehensively assess Shai’s capabilities. Furthermore, we discuss the challenges and implications of utilizing large language models like GPT-4 for performance assessment in asset management, suggesting a combination of automated evaluation and human judgment. Shai’s development showcases the potential and versatility of 10B-level large language models in the financial sector, with significant performance and modest computational requirements, and aims to provide practical insights and methodologies to assist industry peers in their similar endeavors. ...
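A minimal sketch of the continued pre-training step, assuming any open-source causal LM checkpoint and a tokenized domain corpus; the hyperparameters below are placeholders, not Shai's reported settings.

```python
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

def continued_pretrain(base_ckpt: str, corpus_dataset, output_dir: str = "cpt-out"):
    """Continued pre-training of an open-source causal LM on a targeted domain corpus.

    `base_ckpt` is any Hub checkpoint; `corpus_dataset` is assumed to be a
    tokenized datasets.Dataset built from the domain text.
    """
    tokenizer = AutoTokenizer.from_pretrained(base_ckpt)
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(base_ckpt)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir=output_dir, per_device_train_batch_size=1,
                               gradient_accumulation_steps=16, num_train_epochs=1,
                               learning_rate=2e-5, bf16=True),
        train_dataset=corpus_dataset,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()
    return model
```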

December 21, 2023 · 2 min · Research Team