
Advancing Financial Engineering with Foundation Models: Progress, Applications, and Challenges

ArXiv ID: 2507.18577
Authors: Liyuan Chen, Shuoling Liu, Jiangpeng Yan, Xiaoyu Wang, Henglin Liu, Chuang Li, Kecheng Jiao, Jixuan Ying, Yang Veronica Liu, Qiang Yang, Xiu Li

Abstract: The advent of foundation models (FMs), large-scale pre-trained models with strong generalization capabilities, has opened new frontiers for financial engineering. While general-purpose FMs such as GPT-4 and Gemini have demonstrated promising performance in tasks ranging from financial report summarization to sentiment-aware forecasting, many financial applications remain constrained by unique domain requirements such as multimodal reasoning, regulatory compliance, and data privacy. These challenges have spurred the emergence of financial foundation models (FFMs): a new class of models explicitly designed for finance. This survey presents a comprehensive overview of FFMs, with a taxonomy spanning three key modalities: financial language foundation models (FinLFMs), financial time-series foundation models (FinTSFMs), and financial visual-language foundation models (FinVLFMs). We review their architectures, training methodologies, datasets, and real-world applications. Furthermore, we identify critical challenges in data availability, algorithmic scalability, and infrastructure constraints, and offer insights into future research opportunities. We hope this survey can serve as both a comprehensive reference for understanding FFMs and a practical roadmap for future innovation. ...

July 7, 2025 · 2 min · Research Team

Benchmarking Pre-Trained Time Series Models for Electricity Price Forecasting

ArXiv ID: 2506.08113
Authors: Timothée Hornek, Amir Sartipi, Igor Tchappi, Gilbert Fridgen

Abstract: Accurate electricity price forecasting (EPF) is crucial for effective decision-making in power trading on the spot market. While recent advances in generative artificial intelligence (GenAI) and pre-trained large language models (LLMs) have inspired the development of numerous time series foundation models (TSFMs) for time series forecasting, their effectiveness in EPF remains uncertain. To address this gap, we benchmark several state-of-the-art pre-trained models (Chronos-Bolt, Chronos-T5, TimesFM, Moirai, Time-MoE, and TimeGPT) against established statistical and machine learning (ML) methods for EPF. Using 2024 day-ahead auction (DAA) electricity prices from Germany, France, the Netherlands, Austria, and Belgium, we generate daily forecasts with a one-day horizon. Chronos-Bolt and Time-MoE emerge as the strongest among the TSFMs, performing on par with traditional models. However, the biseasonal MSTL model, which captures daily and weekly seasonality, stands out for its consistent performance across countries and evaluation metrics, with no TSFM statistically outperforming it. ...

June 9, 2025 · 2 min · Research Team

Foundation Time-Series AI Model for Realized Volatility Forecasting

ArXiv ID: 2505.11163
Authors: Anubha Goel, Puneet Pasricha, Martin Magris, Juho Kanniainen

Abstract: Time series foundation models (FMs) have emerged as a popular paradigm for zero-shot multi-domain forecasting. These models are trained on numerous diverse datasets and claim to be effective forecasters across multiple different time series domains, including financial data. In this study, we evaluate the effectiveness of FMs, specifically the TimesFM model, for volatility forecasting, a core task in financial risk management. We first evaluate TimesFM in its pretrained (zero-shot) form, followed by our custom fine-tuning procedure based on incremental learning, and compare the resulting models against standard econometric benchmarks. While the pretrained model provides a reasonable baseline, our findings show that incremental fine-tuning, which allows the model to adapt to new financial return data over time, is essential for learning volatility patterns effectively. Fine-tuned variants not only improve forecast accuracy but also statistically outperform traditional models, as demonstrated through Diebold-Mariano and Giacomini-White tests. These results highlight the potential of foundation models as scalable and adaptive tools for financial forecasting, capable of delivering strong performance in dynamic market environments when paired with targeted fine-tuning strategies. ...
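The Diebold-Mariano test cited above compares two forecasters' loss series for equal predictive accuracy. A minimal one-step-ahead sketch with squared-error loss and a normal approximation follows; the paper's actual implementation (and its Giacomini-White companion test) may use HAC variance corrections for longer horizons, which this toy version omits.

```python
import numpy as np
from scipy import stats

def diebold_mariano(err_a, err_b, loss=np.square):
    """One-step-ahead Diebold-Mariano test: H0 = equal predictive accuracy.

    err_a, err_b: forecast error arrays from two competing models.
    Returns (DM statistic, two-sided p-value, normal approximation).
    A significantly positive statistic favors model B (smaller losses).
    """
    d = loss(np.asarray(err_a)) - loss(np.asarray(err_b))  # loss differential
    n = d.size
    dm = d.mean() / np.sqrt(d.var(ddof=1) / n)
    p = 2 * (1 - stats.norm.cdf(abs(dm)))
    return dm, p

# Toy example: model B's errors are systematically smaller than model A's,
# so the test should reject equal accuracy in B's favor.
rng = np.random.default_rng(42)
err_a = rng.normal(0, 1.5, 500)
err_b = rng.normal(0, 1.0, 500)
dm, p = diebold_mariano(err_a, err_b)
```

For multi-step forecasts the variance of the loss differential is usually estimated with a Newey-West style long-run variance instead of the simple sample variance used here.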

May 16, 2025 · 2 min · Research Team