
Dynamic Investment Strategies Through Market Classification and Volatility: A Machine Learning Approach

Dynamic Investment Strategies Through Market Classification and Volatility: A Machine Learning Approach ArXiv ID: 2504.02841 “View on arXiv” Authors: Unknown Abstract This study introduces a dynamic investment framework to enhance portfolio management in volatile markets, offering clear advantages over traditional static strategies. It evaluates four conventional approaches under dynamic conditions: equal weighting, minimum variance, maximum diversification, and equal risk contribution. Using K-means clustering, the market is segmented into ten volatility-based states, with transitions forecasted by a Bayesian Markov-switching model employing Dirichlet priors and Gibbs sampling. This enables real-time asset allocation adjustments. Tested across two asset sets, the dynamic portfolio consistently achieves significantly higher risk-adjusted returns and substantially higher total returns, outperforming most static methods. By integrating classical optimization with machine learning and Bayesian techniques, this research provides a robust strategy for optimizing investment outcomes in unpredictable market environments. ...
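
A minimal Python sketch of the state-classification step described in the abstract: rolling volatility is clustered into ten regimes with K-means, and a transition matrix is built from the resulting state sequence. The `returns` input, the 21-day window, and the add-one smoothing are illustrative assumptions; the paper infers transitions with a Bayesian Markov-switching model using Dirichlet priors and Gibbs sampling rather than the plug-in estimate shown here.

```python
# Sketch: cluster rolling volatility into market states and count transitions.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans

def classify_volatility_states(returns: pd.Series, window: int = 21, n_states: int = 10):
    """Cluster rolling volatility into discrete market states."""
    vol = returns.rolling(window).std().dropna()
    km = KMeans(n_clusters=n_states, n_init=10, random_state=0)
    states = km.fit_predict(vol.to_numpy().reshape(-1, 1))
    return pd.Series(states, index=vol.index), km

def transition_matrix(states: pd.Series, n_states: int = 10) -> np.ndarray:
    """Empirical transition matrix; the paper instead samples it with Gibbs
    sampling under Dirichlet priors on each row."""
    counts = np.zeros((n_states, n_states))
    for prev, nxt in zip(states[:-1], states[1:]):
        counts[prev, nxt] += 1
    # Adding a flat Dirichlet(1,...,1) prior corresponds to add-one smoothing.
    return (counts + 1) / (counts + 1).sum(axis=1, keepdims=True)
```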

March 19, 2025 · 2 min · Research Team

HQNN-FSP: A Hybrid Classical-Quantum Neural Network for Regression-Based Financial Stock Market Prediction

HQNN-FSP: A Hybrid Classical-Quantum Neural Network for Regression-Based Financial Stock Market Prediction ArXiv ID: 2503.15403 “View on arXiv” Authors: Unknown Abstract Financial time-series forecasting remains a challenging task due to complex temporal dependencies and market fluctuations. This study explores the potential of hybrid quantum-classical approaches to assist in financial trend prediction by leveraging quantum resources for improved feature representation and learning. A custom Quantum Neural Network (QNN) regressor is introduced, designed with a novel ansatz tailored for financial applications. Two hybrid optimization strategies are proposed: (1) a sequential approach where classical recurrent models (RNN/LSTM) extract temporal dependencies before quantum processing, and (2) a joint learning framework that optimizes classical and quantum parameters simultaneously. Systematic evaluation using TimeSeriesSplit, k-fold cross-validation, and predictive error analysis highlights the ability of these hybrid models to integrate quantum computing into financial forecasting workflows. The findings demonstrate how quantum-assisted learning can contribute to financial modeling, offering insights into the practical role of quantum resources in time-series analysis. ...
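
The abstract names TimeSeriesSplit and k-fold cross-validation as the evaluation protocol. The sketch below illustrates only the walk-forward part of that protocol with a purely classical stand-in regressor; the synthetic data and the MLP model are placeholders, not the paper's hybrid quantum-classical network.

```python
# Sketch: walk-forward evaluation with sklearn's TimeSeriesSplit.
import numpy as np
from sklearn.model_selection import TimeSeriesSplit
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

X = np.random.randn(500, 8)   # hypothetical lagged-feature matrix
y = np.random.randn(500)      # hypothetical next-step returns

tscv = TimeSeriesSplit(n_splits=5)
errors = []
for train_idx, test_idx in tscv.split(X):
    model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    errors.append(mean_squared_error(y[test_idx], model.predict(X[test_idx])))

print("fold MSEs:", np.round(errors, 4))
```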

March 19, 2025 · 2 min · Research Team

Optimal Data Splitting for Holdout Cross-Validation in Large Covariance Matrix Estimation

Optimal Data Splitting for Holdout Cross-Validation in Large Covariance Matrix Estimation ArXiv ID: 2503.15186 “View on arXiv” Authors: Unknown Abstract Cross-validation is a statistical tool that can be used to improve large covariance matrix estimation. Although its efficiency is observed in practical applications and a convergence result towards the error of the nonlinear shrinkage is available in the high-dimensional regime, formal proofs that take into account finite sample size effects are currently lacking. To carry out an analytical analysis, we focus on the holdout method, a single iteration of cross-validation, rather than the traditional $k$-fold approach. We derive a closed-form expression for the expected estimation error when the population matrix follows a white inverse Wishart distribution, and we observe that the optimal train-test split scales as the square root of the matrix dimension. For general population matrices, we connect the error to the variance of the eigenvalue distribution, but approximations are necessary. In this framework and in the high-dimensional asymptotic regime, both the holdout and $k$-fold cross-validation methods converge to the optimal estimator when the train-test ratio scales with the square root of the matrix dimension, which is consistent with existing theory. ...
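
A minimal numpy sketch of the holdout covariance-cleaning scheme the paper analyses: eigenvectors are taken from the training block, eigenvalues are re-estimated on the test block, and the train-test ratio is set proportional to the square root of the dimension, as the abstract suggests. The exact proportionality constant used below is an assumption.

```python
# Sketch: holdout (single-split) cross-validated covariance estimator.
import numpy as np

def holdout_covariance(X: np.ndarray) -> np.ndarray:
    """X has observations in rows and variables in columns."""
    n, p = X.shape
    n_test = max(int(n / (1.0 + np.sqrt(p))), 1)    # train/test ratio ~ sqrt(p)
    X_train, X_test = X[:-n_test], X[-n_test:]

    cov_train = np.cov(X_train, rowvar=False)
    _, vecs = np.linalg.eigh(cov_train)             # eigenvectors from the train block

    cov_test = np.cov(X_test, rowvar=False)
    # Re-estimated eigenvalues: diagonal of V^T C_test V.
    cleaned_eigs = np.einsum("ij,jk,ki->i", vecs.T, cov_test, vecs)
    return vecs @ np.diag(cleaned_eigs) @ vecs.T
```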

March 19, 2025 · 2 min · Research Team

Statistical applications of the 20/60/20 rule in risk management and portfolio optimization

Statistical applications of the 20/60/20 rule in risk management and portfolio optimization ArXiv ID: 2504.02840 “View on arXiv” Authors: Unknown Abstract This paper explores the applications of the 20/60/20 rule, a heuristic method that segments data into top-performing, average-performing, and underperforming groups, in mathematical finance. We review the statistical foundations of this rule and demonstrate its usefulness in risk management and portfolio optimization. Our study highlights three key applications. First, we apply the rule to stock market data, showing that it enables effective population clustering. Second, we introduce a novel, easy-to-implement method for extracting heavy-tail characteristics in risk management. Third, we integrate spatial reasoning based on the 20/60/20 rule into portfolio optimization, enhancing robustness and improving performance. To support our findings, we develop a new measure for quantifying tail heaviness and employ conditional statistics to reconstruct the unconditional distribution from the core data segment. This reconstructed distribution is tested on real financial data to evaluate whether the 20/60/20 segmentation effectively balances capturing extreme risks with maintaining the stability of central returns. Our results offer insights into financial data behavior under heavy-tailed conditions and demonstrate the potential of the 20/60/20 rule as a complementary tool for decision-making in finance. ...
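
A minimal sketch of the 20/60/20 segmentation applied to a return series, assuming a 1-D numpy array of returns as input. The Student-t example and the dispersion comparison are illustrative, in the spirit of the tail-heaviness diagnostics discussed in the paper, not the authors' actual measure.

```python
# Sketch: split returns into bottom 20%, central 60%, and top 20% by quantile.
import numpy as np

def split_20_60_20(returns: np.ndarray):
    lower, upper = np.quantile(returns, [0.2, 0.8])
    bottom = returns[returns <= lower]                         # underperforming 20%
    core = returns[(returns > lower) & (returns < upper)]      # central 60%
    top = returns[returns >= upper]                            # top-performing 20%
    return bottom, core, top

# Example: compare core-segment dispersion with full-sample dispersion on
# heavy-tailed synthetic data.
r = np.random.standard_t(df=3, size=10_000)
bottom, core, top = split_20_60_20(r)
print("full-sample std:", r.std(), "core std:", core.std())
```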

March 19, 2025 · 2 min · Research Team

A Note on the Asymptotic Properties of the GLS Estimator in Multivariate Regression with Heteroskedastic and Autocorrelated Errors

A Note on the Asymptotic Properties of the GLS Estimator in Multivariate Regression with Heteroskedastic and Autocorrelated Errors ArXiv ID: 2503.13950 “View on arXiv” Authors: Unknown Abstract We study the asymptotic properties of the GLS estimator in multivariate regression with heteroskedastic and autocorrelated errors. We derive Wald statistics for linear restrictions and assess their performance. The statistics remain robust to heteroskedasticity and autocorrelation. Keywords: Generalized Least Squares (GLS), Wald Statistics, Heteroskedasticity and Autocorrelation Consistency (HAC), Multivariate Regression, Linear Restrictions, Equities ...
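
An illustrative sketch of a HAC-robust Wald test for a linear restriction in a single-equation regression with statsmodels; the paper's setting is the multivariate GLS estimator, which this simple stand-in does not implement. The simulated data and the five-lag HAC bandwidth are assumptions.

```python
# Sketch: Wald test of a linear restriction with HAC (Newey-West) standard errors.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = sm.add_constant(rng.standard_normal((300, 2)))
beta = np.array([0.5, 1.0, 0.0])
y = X @ beta + rng.standard_normal(300)

res = sm.OLS(y, X).fit(cov_type="HAC", cov_kwds={"maxlags": 5})

R = np.array([[0.0, 0.0, 1.0]])    # restriction: third coefficient equals zero
print(res.wald_test(R, use_f=False))
```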

March 18, 2025 · 1 min · Research Team

Determining a credit transition matrix from cumulative default probabilities

Determining a credit transition matrix from cumulative default probabilities ArXiv ID: 2503.14646 “View on arXiv” Authors: Unknown Abstract Quantifying the changes in the credit rating of a bond is an important mathematical problem for the credit rating industry. Thinking of the credit rating as the state of a Markov chain is an interesting proposal that leads to challenges in mathematical modeling. Since cumulative default rates are more readily measurable than credit migrations, a natural question is whether the credit transition matrix (CTM) can be determined from knowledge of the cumulative default probabilities. Here we use a connection between the CTM and the cumulative default probabilities to set up an ill-posed, linear inverse problem with box constraints, which we solve by an entropy minimization procedure. This approach is interesting on several counts. On the one hand, we may have fewer data points than unknowns; on the other hand, even when we have as much data as unknowns, the matrix connecting them may not be invertible, which makes the problem ill-posed. Besides developing the tools to solve the problem, we apply it to several test cases to check the performance of the method. The results are quite satisfactory. ...
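
A generic sketch of the optimisation machinery the abstract describes: a maximum-entropy solution of an ill-posed linear system $Ax = b$ under box constraints, via scipy. The specific mapping from cumulative default probabilities to transition-matrix entries is problem-specific and not reproduced here; `A`, `b`, and the unit box are placeholders.

```python
# Sketch: entropy minimization for an ill-posed linear inverse problem with box constraints.
import numpy as np
from scipy.optimize import minimize

def max_entropy_solve(A: np.ndarray, b: np.ndarray) -> np.ndarray:
    n = A.shape[1]
    x0 = np.full(n, 0.5)

    def neg_entropy(x):
        x = np.clip(x, 1e-12, 1.0)          # avoid log(0)
        return np.sum(x * np.log(x))

    constraints = [{"type": "eq", "fun": lambda x: A @ x - b}]
    bounds = [(0.0, 1.0)] * n               # box constraints on each entry
    res = minimize(neg_entropy, x0, bounds=bounds,
                   constraints=constraints, method="SLSQP")
    return res.x
```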

March 18, 2025 · 2 min · Research Team

Rolling Forward: Enhancing LightGCN with Causal Graph Convolution for Credit Bond Recommendation

Rolling Forward: Enhancing LightGCN with Causal Graph Convolution for Credit Bond Recommendation ArXiv ID: 2503.14213 “View on arXiv” Authors: Unknown Abstract Graph Neural Networks have significantly advanced research in recommender systems over the past few years. These methods typically capture global interests using aggregated past interactions and rely on static embeddings of users and items over extended periods of time. While effective in some domains, these methods fall short in many real-world scenarios, especially in finance, where user interests and item popularity evolve rapidly over time. To address these challenges, we introduce a novel extension to Light Graph Convolutional Network (LightGCN) designed to learn temporal node embeddings that capture dynamic interests. Our approach employs causal convolution to maintain a forward-looking model architecture. By preserving the chronological order of user-item interactions and introducing a dynamic update mechanism for embeddings through a sliding window, the proposed model generates well-timed and contextually relevant recommendations. Extensive experiments on a real-world dataset from BNP Paribas demonstrate that our approach significantly enhances the performance of LightGCN while maintaining the simplicity and efficiency of its architecture. Our findings provide new insights into designing graph-based recommender systems in time-sensitive applications, particularly for financial product recommendations. ...
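
A minimal sketch of the forward-looking, sliding-window regime described above: interactions are kept in chronological order and each training window ends strictly before its evaluation period. This shows only the windowing, not the LightGCN propagation or the embedding update itself; the `interactions` DataFrame, its column names, and the 30-day/7-day window sizes are assumptions.

```python
# Sketch: chronological sliding-window splits for time-sensitive recommendation.
import pandas as pd

def rolling_windows(interactions: pd.DataFrame, window: str = "30D", step: str = "7D"):
    """Yield (train, test) pairs; embeddings would be refreshed per window."""
    interactions = interactions.sort_values("timestamp")
    t = interactions["timestamp"].min()
    end = interactions["timestamp"].max()
    w, s = pd.Timedelta(window), pd.Timedelta(step)
    while t + w <= end:
        train = interactions[(interactions["timestamp"] >= t) &
                             (interactions["timestamp"] < t + w)]
        test = interactions[(interactions["timestamp"] >= t + w) &
                            (interactions["timestamp"] < t + w + s)]
        yield train, test
        t += s
```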

March 18, 2025 · 2 min · Research Team

Deep Hedging of Green PPAs in Electricity Markets

Deep Hedging of Green PPAs in Electricity Markets ArXiv ID: 2503.13056 “View on arXiv” Authors: Unknown Abstract In power markets, Green Power Purchase Agreements have become an important contractual tool of the energy transition from fossil fuels to renewable sources such as wind or solar radiation. Trading Green PPAs exposes agents to price risks and weather risks. Also, developed electricity markets feature the so-called cannibalisation effect: large infeeds induce low prices and vice versa. As weather is a non-tradable entity, the question arises how to hedge and risk-manage in this highly incomplete setting. We propose a “deep hedging” framework utilising machine learning methods to construct hedging strategies. The resulting strategies outperform static and dynamic benchmark strategies with respect to different risk measures. ...
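
A minimal deep-hedging sketch in PyTorch, assuming pre-simulated price paths `S` of shape (n_paths, n_steps + 1) and a user-supplied terminal `payoff` function. The network maps the current price to a position, and training minimises a variance-penalised hedged P&L; this is a generic stand-in for the risk measures and market model used in the paper, not the authors' implementation.

```python
# Sketch: neural hedging strategy trained on simulated paths.
import torch
import torch.nn as nn

def deep_hedge(S: torch.Tensor, payoff, epochs: int = 200, lam: float = 1.0):
    net = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(epochs):
        pos = net(S[:, :-1].unsqueeze(-1)).squeeze(-1)    # position at each step
        gains = (pos * (S[:, 1:] - S[:, :-1])).sum(dim=1) # hedging gains per path
        pnl = gains - payoff(S)                           # hedged P&L per path
        loss = -pnl.mean() + lam * pnl.var()              # mean-variance objective
        opt.zero_grad()
        loss.backward()
        opt.step()
    return net
```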

March 17, 2025 · 2 min · Research Team

The deep multi-FBSDE method: a robust deep learning method for coupled FBSDEs

The deep multi-FBSDE method: a robust deep learning method for coupled FBSDEs ArXiv ID: 2503.13193 “View on arXiv” Authors: Unknown Abstract We introduce the deep multi-FBSDE method for robust approximation of coupled forward-backward stochastic differential equations (FBSDEs), focusing on cases where the deep BSDE method of Han, Jentzen, and E (2018) fails to converge. To overcome the convergence issues, we consider a family of FBSDEs that are equivalent to the original problem in the sense that they satisfy the same associated partial differential equation (PDE). Our algorithm proceeds in two phases: first, we approximate the initial condition for the FBSDE family, and second, we approximate the original FBSDE using the initial condition approximated in the first phase. Numerical experiments show that our method converges even when the standard deep BSDE method does not. ...
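
A compact sketch of the standard deep BSDE scheme of Han, Jentzen, and E that this paper builds on, not the two-phase multi-FBSDE method itself: $Y_0$ and the $Z$-process are parametrised, $X$ and $Y$ are stepped forward with an Euler scheme, and the loss penalises the terminal mismatch $|Y_T - g(X_T)|^2$. The coefficient functions `mu`, `sigma`, `f`, `g` and the initial point `x0` are user-supplied; `sigma` is treated elementwise (diagonal diffusion) for simplicity.

```python
# Sketch: standard deep BSDE solver (forward Euler in X and Y, terminal-mismatch loss).
import torch
import torch.nn as nn

def deep_bsde(mu, sigma, f, g, x0, T=1.0, n_steps=20, n_paths=512, epochs=200, dim=1):
    dt = T / n_steps
    y0 = nn.Parameter(torch.zeros(1))                       # learnable Y_0
    z_net = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, dim))
    opt = torch.optim.Adam([y0, *z_net.parameters()], lr=1e-2)
    for _ in range(epochs):
        x = x0.expand(n_paths, dim).clone()                 # x0: float tensor of shape (dim,)
        y = y0.expand(n_paths, 1).clone()
        for _ in range(n_steps):
            dw = torch.randn(n_paths, dim) * dt ** 0.5
            z = z_net(x)
            y = y - f(x, y, z) * dt + (z * dw).sum(dim=1, keepdim=True)
            x = x + mu(x) * dt + sigma(x) * dw              # diagonal diffusion assumed
        loss = ((y.squeeze(-1) - g(x).view(-1)) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return y0.item()
```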

March 17, 2025 · 2 min · Research Team

Decision by Supervised Learning with Deep Ensembles: A Practical Framework for Robust Portfolio Optimization

Decision by Supervised Learning with Deep Ensembles: A Practical Framework for Robust Portfolio Optimization ArXiv ID: 2503.13544 “View on arXiv” Authors: Unknown Abstract We propose Decision by Supervised Learning (DSL), a practical framework for robust portfolio optimization. DSL reframes portfolio construction as a supervised learning problem: models are trained to predict optimal portfolio weights, using cross-entropy loss and portfolios constructed by maximizing the Sharpe or Sortino ratio. To further enhance stability and reliability, DSL employs Deep Ensemble methods, substantially reducing variance in portfolio allocations. Through comprehensive backtesting across diverse market universes and neural architectures, DSL shows superior performance compared to both traditional strategies and leading machine learning-based methods, including Prediction-Focused Learning and End-to-End Learning. We show that increasing the ensemble size leads to higher median returns and more stable risk-adjusted performance. The code is available at https://github.com/DSLwDE/DSLwDE. ...
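
A minimal sketch of the deep-ensemble step described above: several independently trained networks each map features to portfolio weights, and the ensemble allocation is their average, which reduces allocation variance. Training is omitted, and the architecture and feature shapes are placeholders, not the authors' implementation (see https://github.com/DSLwDE/DSLwDE for the original code).

```python
# Sketch: averaging portfolio-weight predictions over a deep ensemble.
import numpy as np
import torch
import torch.nn as nn

def make_model(n_features: int, n_assets: int) -> nn.Module:
    """One ensemble member: features -> simplex-valued portfolio weights."""
    return nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(),
                         nn.Linear(64, n_assets), nn.Softmax(dim=-1))

def ensemble_weights(models, features: torch.Tensor) -> np.ndarray:
    """Average the weight vectors predicted by each (trained) ensemble member."""
    with torch.no_grad():
        preds = torch.stack([m(features) for m in models])  # (n_models, n_assets)
    return preds.mean(dim=0).numpy()
```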

March 16, 2025 · 2 min · Research Team