
Portfolio Optimization via Transfer Learning

Portfolio Optimization via Transfer Learning ArXiv ID: 2511.21221 Authors: Kexin Wang, Xiaomeng Zhang, Xinyu Zhang Abstract Recognizing that asset markets generally exhibit shared informational characteristics, we develop a portfolio strategy based on transfer learning that leverages cross-market information, selected by forward validation, to enhance investment performance in the market of interest. Our strategy asymptotically identifies and utilizes informative datasets, selectively incorporating valid information while discarding misleading information; this enables it to achieve the maximum Sharpe ratio asymptotically. The promising performance is demonstrated by numerical studies and case studies of two portfolios: one consisting of stocks dual-listed in A-shares and H-shares, and another comprising equities from various industries in the United States. ...
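The forward-validation idea — keep a source market only if pooling it with the target's past improves out-of-sample performance on chronological splits — can be sketched as below. This is an illustrative simplification, not the paper's estimator: the plug-in tangency weights (w proportional to the inverse covariance times the mean) and the simple pooling rule are assumptions for the sketch.

```python
import numpy as np

def sharpe(weights, returns):
    """In-sample Sharpe ratio of a fixed-weight portfolio (per period)."""
    port = returns @ weights
    return port.mean() / (port.std() + 1e-12)

def max_sharpe_weights(returns):
    """Plug-in tangency portfolio: w ∝ Σ^{-1} μ, normalized to unit gross exposure."""
    mu = returns.mean(axis=0)
    cov = np.cov(returns, rowvar=False)
    w = np.linalg.solve(cov + 1e-8 * np.eye(len(mu)), mu)
    return w / np.abs(w).sum()

def select_sources_by_forward_validation(target, sources, n_splits=3):
    """Keep a source dataset only when pooling it with the target's past
    improves the Sharpe ratio on forward (chronological) validation folds."""
    fold = len(target) // (n_splits + 1)
    kept = []
    for name, src in sources.items():
        gains = []
        for k in range(1, n_splits + 1):
            train = target[: k * fold]
            test = target[k * fold:(k + 1) * fold]
            gain = (sharpe(max_sharpe_weights(np.vstack([train, src])), test)
                    - sharpe(max_sharpe_weights(train), test))
            gains.append(gain)
        if np.mean(gains) > 0:
            kept.append(name)
    return kept
```

A source whose return distribution contradicts the target's (e.g. opposite mean returns) drags the pooled weights away from the target optimum, so its validation gain is negative and it is discarded — the "misleading information" case in the abstract.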

November 26, 2025 · 2 min · Research Team

It Looks All the Same to Me: Cross-index Training for Long-term Financial Series Prediction

“It Looks All the Same to Me”: Cross-index Training for Long-term Financial Series Prediction ArXiv ID: 2511.08658 Authors: Stanislav Selitskiy Abstract We investigate a number of Artificial Neural Network architectures (well-known and more “exotic”) applied to long-term financial time-series forecasts of indexes on different global markets. The particular interest of this research is to examine the correlation of these indexes’ behaviour in terms of cross-training of Machine Learning algorithms. Would training an algorithm on an index from one global market produce similar or even better accuracy when such a model is applied to predict another index from a different market? The demonstrated predominantly positive answer to this question is another argument in favour of the long-debated Efficient Market Hypothesis of Eugene Fama. ...
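The cross-training experiment reduces to: fit a forecaster on one index's history, then evaluate it unchanged on another index. A minimal linear-autoregression sketch (the paper's ANN architectures are not reproduced here; the lag count and linear model are assumptions):

```python
import numpy as np

def lagged_dataset(series, lags=5):
    """Supervised pairs: predict the next value from the last `lags` values."""
    X = np.lib.stride_tricks.sliding_window_view(series[:-1], lags)
    y = series[lags:]
    return X, y

def fit_linear(X, y):
    """Least-squares linear forecaster with an intercept."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def cross_index_mse(train_series, test_series, lags=5):
    """Train a linear forecaster on one index, evaluate it on another."""
    coef = fit_linear(*lagged_dataset(train_series, lags))
    Xt, yt = lagged_dataset(test_series, lags)
    pred = np.column_stack([np.ones(len(Xt)), Xt]) @ coef
    return float(np.mean((pred - yt) ** 2))
```

When the two series share the same underlying dynamics (the hypothesis tested in the paper), the cross-market error is close to the in-market error.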

November 11, 2025 · 2 min · Research Team

Meta-Learning Neural Process for Implied Volatility Surfaces with SABR-induced Priors

Meta-Learning Neural Process for Implied Volatility Surfaces with SABR-induced Priors ArXiv ID: 2509.11928 Authors: Jirong Zhuang, Xuan Wu Abstract We treat implied volatility surface (IVS) reconstruction as a learning problem guided by two principles. First, we adopt a meta-learning view that trains across trading days to learn a procedure mapping sparse option quotes to a full IVS via conditional prediction, avoiding per-day calibration at test time. Second, we impose a structural prior via transfer learning: pre-train on a SABR-generated dataset to encode a geometric prior, then fine-tune on a historical market dataset to align with empirical patterns. We implement both principles in a single attention-based Neural Process (Volatility Neural Process, VolNP) that produces a complete IVS from a sparse context set in one forward pass. On SPX options, the VolNP outperforms SABR, SSVI, and Gaussian process baselines. Relative to an ablation trained only on market data, the SABR-induced prior reduces RMSE by about 40% and suppresses large errors, with pronounced gains at long maturities where quotes are sparse. The resulting model is fast (single pass), stable (no daily recalibration), and practical for deployment at scale. ...
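The SABR side of the pre-training pipeline is standard: synthetic surfaces can be generated with Hagan et al.'s (2002) lognormal implied-volatility approximation. A minimal implementation of that formula (the VolNP itself is not reproduced; how the paper samples SABR parameters is not shown here):

```python
import math

def sabr_implied_vol(F, K, T, alpha, beta, rho, nu):
    """Hagan et al. (2002) lognormal implied-vol approximation for SABR."""
    one_b = 1.0 - beta
    if abs(F - K) < 1e-12:                      # ATM branch
        Fb = F ** one_b
        return (alpha / Fb) * (1.0 + T * (
            one_b ** 2 / 24.0 * alpha ** 2 / Fb ** 2
            + rho * beta * nu * alpha / (4.0 * Fb)
            + (2.0 - 3.0 * rho ** 2) / 24.0 * nu ** 2))
    logFK = math.log(F / K)
    FKb = (F * K) ** (one_b / 2.0)
    z = (nu / alpha) * FKb * logFK
    if abs(z) < 1e-12:                          # nu -> 0 limit: z/x(z) -> 1
        z_over_x = 1.0
    else:
        x = math.log((math.sqrt(1.0 - 2.0 * rho * z + z * z) + z - rho)
                     / (1.0 - rho))
        z_over_x = z / x
    denom = FKb * (1.0 + one_b ** 2 / 24.0 * logFK ** 2
                   + one_b ** 4 / 1920.0 * logFK ** 4)
    corr = 1.0 + T * (one_b ** 2 / 24.0 * alpha ** 2 / FKb ** 2
                      + rho * beta * nu * alpha / (4.0 * FKb)
                      + (2.0 - 3.0 * rho ** 2) / 24.0 * nu ** 2)
    return (alpha / denom) * z_over_x * corr
```

Sweeping (K, T) grids over sampled (alpha, beta, rho, nu) yields arbitrarily many synthetic surfaces whose smile geometry the network can absorb before fine-tuning on market quotes.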

September 15, 2025 · 3 min · Research Team

Evaluating Transfer Learning Methods on Real-World Data Streams: A Case Study in Financial Fraud Detection

Evaluating Transfer Learning Methods on Real-World Data Streams: A Case Study in Financial Fraud Detection ArXiv ID: 2508.02702 Authors: Ricardo Ribeiro Pereira, Jacopo Bono, Hugo Ferreira, Pedro Ribeiro, Carlos Soares, Pedro Bizarro Abstract When the available data for a target domain is limited, transfer learning (TL) methods can be used to develop models on related data-rich domains before deploying them on the target domain. However, these TL methods are typically designed with specific, static assumptions about the amount of available labeled and unlabeled target data. This is in contrast with many real-world applications, where the availability of data and corresponding labels varies over time. Since the evaluation of TL methods is typically also performed under the same static data-availability assumptions, this leads to unrealistic expectations about their performance in real-world settings. To support a more realistic evaluation and comparison of TL algorithms and models, we propose a data manipulation framework that (1) simulates varying data availability scenarios over time, (2) creates multiple domains through resampling of a given dataset, and (3) introduces inter-domain variability by applying realistic domain transformations, e.g., creating a variety of potentially time-dependent covariate and concept shifts. These capabilities enable simulation of a large number of realistic variants of the experiments, in turn providing more information about the potential behavior of algorithms when deployed in dynamic settings. We demonstrate the usefulness of the proposed framework by performing a case study on a proprietary real-world suite of card payment datasets. Given the confidential nature of the case study, we also illustrate the use of the framework on the publicly available Bank Account Fraud (BAF) dataset.
By providing a methodology for evaluating TL methods over time and in realistic data availability scenarios, our framework facilitates understanding of the behavior of models and algorithms. This leads to better decision making when deploying models for new domains in real-world environments. ...
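A toy version of steps (2) and (3) — resampling one dataset into several related domains and applying a covariate shift — plus a delayed-label mask in the spirit of step (1) might look like this. The offset-based shift and fixed verification lag are simplifying assumptions, not the framework's actual transformations:

```python
import numpy as np

def make_domains(X, y, n_domains, shift_scale=0.5, seed=0):
    """Create related domains by bootstrap-resampling one dataset and
    applying a per-domain covariate shift (a random feature-wise offset)."""
    rng = np.random.default_rng(seed)
    domains = []
    for _ in range(n_domains):
        idx = rng.choice(len(X), size=len(X), replace=True)
        offset = rng.normal(0.0, shift_scale, size=X.shape[1])
        domains.append((X[idx] + offset, y[idx]))
    return domains

def delayed_label_mask(times, current_time, delay):
    """Streaming label availability: a row's label is observable only once
    it is at least `delay` time units old (e.g. a fraud-verification lag)."""
    return np.asarray(times) <= current_time - delay
```

Replaying a TL method against many such domain/availability combinations is what lets the framework probe behavior that a single static train/test split would hide.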

July 29, 2025 · 3 min · Research Team

Transfer Learning Across Fixed-Income Product Classes

Transfer Learning Across Fixed-Income Product Classes ArXiv ID: 2505.07676 Authors: Nicolas Camenzind, Damir Filipovic Abstract We propose a framework for transfer learning of discount curves across different fixed-income product classes. Motivated by challenges in estimating discount curves from sparse or noisy data, we extend kernel ridge regression (KR) to a vector-valued setting, formulating a convex optimization problem in a vector-valued reproducing kernel Hilbert space (RKHS). Each component of the solution corresponds to the discount curve implied by a specific product class. We introduce an additional regularization term motivated by economic principles, promoting smoothness of spread curves between product classes, and show that it leads to a valid separable kernel structure. A main theoretical contribution is a decomposition of the vector-valued RKHS norm induced by separable kernels. We further provide a Gaussian process interpretation of vector-valued KR, enabling quantification of estimation uncertainty. Illustrative examples demonstrate that transfer learning significantly improves extrapolation performance and tightens confidence intervals compared to single-curve estimation. ...
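The separable-kernel construction, K((x, i), (x', j)) = B[i, j] · k(x, x'), can be sketched directly with a Kronecker product. The RBF kernel, the hand-picked output matrix B, and the naive dense solve are illustrative assumptions — the paper works with economically motivated regularizers and its own kernel choices:

```python
import numpy as np

def rbf(X1, X2, ell=1.0):
    """Gaussian (RBF) kernel matrix between two sets of inputs."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

def separable_krr_fit(X, Y, B, lam=1e-3, ell=1.0):
    """Vector-valued kernel ridge regression with a separable kernel:
    solve (B ⊗ k(X, X) + λI) α = vec(Y), outputs stacked block-wise."""
    n, m = Y.shape
    K = np.kron(B, rbf(X, X, ell))
    return np.linalg.solve(K + lam * np.eye(m * n), Y.T.reshape(-1))

def separable_krr_predict(Xs, X, alpha, B, ell=1.0):
    """Predict all m curves at new inputs Xs; returns shape (len(Xs), m)."""
    m = B.shape[0]
    Ks = np.kron(B, rbf(Xs, X, ell))
    return (Ks @ alpha).reshape(m, len(Xs)).T
```

Off-diagonal entries of B let data observed for one product class inform the curve of another — the transfer mechanism the abstract describes.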

May 12, 2025 · 2 min · Research Team

Realized Volatility Forecasting for New Issues and Spin-Offs using Multi-Source Transfer Learning

Realized Volatility Forecasting for New Issues and Spin-Offs using Multi-Source Transfer Learning ArXiv ID: 2503.12648 Authors: Unknown Abstract Forecasting the volatility of financial assets is essential for various financial applications. This paper addresses the challenging task of forecasting the volatility of financial assets with limited historical data, such as new issues or spin-offs, by proposing a multi-source transfer learning approach. Specifically, we exploit complementary source data of assets with a substantial historical data record by selecting source time series instances that are most similar to the limited target data of the new issue/spin-off. Based on these instances and the target data, we estimate linear and non-linear realized volatility models and compare their forecasting performance to forecasts of models trained exclusively on the target data, and models trained on the entire source and target data. The results show that our transfer learning approach outperforms the alternative models and that the integration of complementary data is also beneficial immediately after the initial trading day of the new issue/spin-off. ...
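The instance-selection step — pick the source windows most similar to the target's short history, then pool transition pairs to fit a volatility model — can be sketched as follows. The Euclidean distance and the AR(1) model are stand-ins for the paper's actual similarity measure and realized-volatility models:

```python
import numpy as np

def select_similar_instances(target_window, source_series, k=50):
    """Pick the k source windows (same length as the target history)
    closest to the target window in Euclidean distance."""
    w = len(target_window)
    windows = np.lib.stride_tricks.sliding_window_view(source_series, w)
    d = np.linalg.norm(windows - target_window, axis=1)
    return windows[np.argsort(d)[:k]]

def fit_ar1(windows):
    """Pool (x_t, x_{t+1}) pairs from all windows and fit an AR(1) model
    by least squares; returns (intercept, slope)."""
    x = np.concatenate([win[:-1] for win in windows])
    y = np.concatenate([win[1:] for win in windows])
    A = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef
```

Because only source instances resembling the target's brief record are pooled in, the model can be estimated from day one of trading, which is where the abstract reports the approach already pays off.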

March 16, 2025 · 2 min · Research Team

Transfer learning for financial data predictions: a systematic review

Transfer learning for financial data predictions: a systematic review ArXiv ID: 2409.17183 Authors: Unknown Abstract The literature highlights that financial time series data pose significant challenges for accurate stock price prediction, because these data are noisy and susceptible to news. Traditional statistical methodologies rely on assumptions, such as linearity and normality, that are unsuitable for the non-linear nature of financial time series, whereas machine learning methodologies are able to capture non-linear relationships in the data. To date, neural networks are considered the main machine learning tools for financial price prediction. Transfer learning, as a method for transferring knowledge from source tasks to target tasks, can be a very useful methodological tool for improving financial prediction capability. Current reviews of this body of knowledge focus mainly on neural network architectures for financial prediction, with very little emphasis on transfer learning; this paper therefore goes deeper into the topic by developing a systematic review of the application of transfer learning to financial market prediction and of the challenges and potential future directions of transfer learning methodologies for stock market prediction. ...

September 24, 2024 · 2 min · Research Team

Enhancement of price trend trading strategies via image-induced importance weights

Enhancement of price trend trading strategies via image-induced importance weights ArXiv ID: 2408.08483 Authors: Unknown Abstract We open up the “black box” to identify predictive general price patterns in price chart images via deep learning image analysis techniques. The identified price patterns lead to the construction of image-induced importance (triple-I) weights, which are used to take a weighted moving average of the existing price trend trading signals according to their importance in predicting price movements. From an extensive empirical analysis of the Chinese stock market, we show that the triple-I weighting scheme can significantly enhance the price trend trading signals for proposing portfolios, with a thorough robustness study in terms of network specifications, image structures, and stock sizes. Moreover, we demonstrate that the triple-I weighting scheme is able to propose long-term portfolios through time-scale transfer learning, enhance news-based trading strategies through non-technical transfer learning, and increase the overall strength of numerous trading rules for portfolio selection. ...
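The weighting step can be illustrated with simple moving-average trend signals. Here the importance weights are supplied directly, as a stand-in for the CNN-derived, image-induced (triple-I) weights the paper actually learns from chart images:

```python
import numpy as np

def ma_signal(prices, span):
    """+1/-1 trend signal: price above/below its trailing moving average."""
    prices = np.asarray(prices, float)
    ma = np.convolve(prices, np.ones(span) / span, mode="valid")
    return np.sign(prices[span - 1:] - ma)

def triple_i_combine(prices, spans, importance):
    """Importance-weighted average of several moving-average trend signals;
    the weights stand in for the image-induced (triple-I) weights."""
    sigs = [ma_signal(prices, s) for s in spans]
    L = min(len(s) for s in sigs)            # align to the shortest signal
    S = np.vstack([s[-L:] for s in sigs])
    w = np.asarray(importance, float)
    return (w / w.sum()) @ S
```

Rules judged more predictive get larger weights, so the combined signal leans toward them instead of treating every trend rule equally.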

August 16, 2024 · 2 min · Research Team

Transfer Learning for Portfolio Optimization

Transfer Learning for Portfolio Optimization ArXiv ID: 2307.13546 Authors: Unknown Abstract In this work, we explore the possibility of utilizing transfer learning techniques to address the financial portfolio optimization problem. We introduce a novel concept called “transfer risk” within the optimization framework of transfer learning. A series of numerical experiments are conducted across three categories: cross-continent transfer, cross-sector transfer, and cross-frequency transfer. In particular, (1) a strong correlation between transfer risk and the overall performance of transfer learning methods is established, underscoring the significance of transfer risk as a viable indicator of “transferability”; (2) transfer risk is shown to provide a computationally efficient way to identify appropriate source tasks in transfer learning, enhancing the efficiency and effectiveness of the transfer learning approach; (3) additionally, the numerical experiments offer valuable new insights for portfolio management across these different settings. ...

July 25, 2023 · 2 min · Research Team

Deep into The Domain Shift: Transfer Learning through Dependence Regularization

Deep into The Domain Shift: Transfer Learning through Dependence Regularization ArXiv ID: 2305.19499 Authors: Unknown Abstract Classical Domain Adaptation methods acquire transferability by regularizing the overall distributional discrepancies between features in the source domain (labeled) and features in the target domain (unlabeled). They often do not differentiate whether the domain differences come from the marginals or the dependence structures. In many business and financial applications, the labeling function usually has different sensitivities to the changes in the marginals versus changes in the dependence structures. Measuring the overall distributional differences will not be discriminative enough in acquiring transferability. Without the needed structural resolution, the learned transfer is less optimal. This paper proposes a new domain adaptation approach in which one can measure the differences in the internal dependence structure separately from those in the marginals. By optimizing the relative weights among them, the new regularization strategy greatly relaxes the rigidness of the existing approaches. It allows a learning machine to pay special attention to places where the differences matter the most. Experiments on three real-world datasets show that the improvements are quite notable and robust compared to various benchmark domain adaptation models. ...
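The core idea — measure marginal and dependence discrepancies separately, then weight them — can be sketched with rank correlations as a copula-style dependence summary. These particular statistics and the linear weighting are illustrative assumptions, not the paper's actual regularizers:

```python
import numpy as np

def marginal_discrepancy(Xs, Xt):
    """Average per-feature gap between marginals (mean and std differences)."""
    return np.mean(np.abs(Xs.mean(0) - Xt.mean(0))
                   + np.abs(Xs.std(0) - Xt.std(0)))

def dependence_discrepancy(Xs, Xt):
    """Gap between dependence structures via rank-correlation (Spearman)
    matrices — a copula-style summary that ignores the marginals."""
    def rank_corr(X):
        ranks = np.argsort(np.argsort(X, axis=0), axis=0).astype(float)
        return np.corrcoef(ranks, rowvar=False)
    return np.abs(rank_corr(Xs) - rank_corr(Xt)).mean()

def domain_gap(Xs, Xt, w_marg=1.0, w_dep=1.0):
    """Weighted domain discrepancy with separate marginal/dependence terms."""
    return (w_marg * marginal_discrepancy(Xs, Xt)
            + w_dep * dependence_discrepancy(Xs, Xt))
```

A pure location shift leaves the dependence term at zero while the marginal term is large, and vice versa — exactly the structural resolution that a single overall distributional distance cannot provide.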

May 31, 2023 · 2 min · Research Team