
Physics-Informed Singular-Value Learning for Cross-Covariance Forecasting in Financial Markets

Physics-Informed Singular-Value Learning for Cross-Covariance Forecasting in Financial Markets ArXiv ID: 2601.07687 “View on arXiv” Authors: Efstratios Manolakis, Christian Bongiorno, Rosario Nunzio Mantegna Abstract A new wave of work on covariance cleaning and nonlinear shrinkage has delivered asymptotically optimal analytical solutions for large covariance matrices. Building on this progress, these ideas have been generalized to empirical cross-covariance matrices, whose singular-value shrinkage characterizes comovements between one set of assets and another. Existing analytical cross-covariance cleaners are derived under strong stationarity and large-sample assumptions, and they typically rely on mesoscopic regularity conditions such as bounded spectra; macroscopic common modes (e.g., a global market factor) violate these conditions. When applied to real equity returns, where dependence structures drift over time and global modes are prominent, we find that these theoretically optimal formulas do not translate into robust out-of-sample performance. We address this gap by designing a random-matrix-inspired neural architecture that operates in the empirical singular-vector basis and learns a nonlinear mapping from empirical singular values to their corresponding cleaned values. By construction, the network can recover the analytical solution as a special case, yet it remains flexible enough to adapt to non-stationary dynamics and mode-driven distortions. Trained on a long history of equity returns, the proposed method achieves a more favorable bias-variance trade-off than purely analytical cleaners and delivers systematically lower out-of-sample cross-covariance prediction errors. Our results demonstrate that combining random-matrix theory with machine learning makes asymptotic theories practically effective in realistic time-varying markets. ...
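The core operation described in the abstract — keep the empirical singular vectors, replace the empirical singular values with cleaned ones — can be sketched as follows. The soft-threshold map and the noise scale `tau` below are illustrative stand-ins for the learned nonlinear mapping; the paper trains a neural network to produce the cleaned values.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n, m = 500, 20, 15
X = rng.standard_normal((T, n))  # returns of the first asset set
Y = rng.standard_normal((T, m))  # returns of the second asset set

# Empirical cross-covariance between the two asset sets
C = (X - X.mean(0)).T @ (Y - Y.mean(0)) / T

# Work in the empirical singular-vector basis
U, s, Vt = np.linalg.svd(C, full_matrices=False)

# Hypothetical cleaning map: soft-threshold the singular values at a
# rough noise scale.  This is purely for illustration; the paper learns
# the map s -> s_clean from data.
tau = np.sqrt(n * m) / T
s_clean = np.maximum(s - tau, 0.0)

C_clean = U @ np.diag(s_clean) @ Vt
```

Any cleaner of this family differs only in the map applied to `s`; the analytical shrinkage formulas the paper benchmarks against are one particular choice of that map.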

January 12, 2026 · 2 min · Research Team

Convergence of the generalization error for deep gradient flow methods for PDEs

Convergence of the generalization error for deep gradient flow methods for PDEs ArXiv ID: 2512.25017 “View on arXiv” Authors: Chenguang Liu, Antonis Papapantoleon, Jasper Rou Abstract The aim of this article is to provide a firm mathematical foundation for the application of deep gradient flow methods (DGFMs) for the solution of (high-dimensional) partial differential equations (PDEs). We decompose the generalization error of DGFMs into an approximation and a training error. We first show that the solution of PDEs that satisfy reasonable and verifiable assumptions can be approximated by neural networks, thus the approximation error tends to zero as the number of neurons tends to infinity. Then, we derive the gradient flow that the training process follows in the "wide network limit" and analyze the limit of this flow as the training time tends to infinity. These results combined show that the generalization error of DGFMs tends to zero as the number of neurons and the training time tend to infinity. ...
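A minimal picture of the gradient-flow viewpoint: solve -u'' = f on (0, 1) with zero boundary values by flowing the parameters of an ansatz down the Dirichlet energy. The sine expansion below is a toy stand-in for a neural network — in the "wide network limit" the training dynamics are likewise linear in the parameters — and the explicit Euler loop discretizes the gradient flow; as training time grows the error shrinks, mirroring the article's convergence statement.

```python
import numpy as np

K = 8
x = np.linspace(0.0, 1.0, 201)
dx = x[1] - x[0]
w = np.full_like(x, dx); w[0] = w[-1] = dx / 2  # trapezoid weights

f = np.pi**2 * np.sin(np.pi * x)  # chosen so the exact solution is sin(pi x)
phi = np.stack([np.sin(k * np.pi * x) for k in range(1, K + 1)])
dphi = np.stack([k * np.pi * np.cos(k * np.pi * x) for k in range(1, K + 1)])

# Gradient flow on the Dirichlet energy E(u) = ∫ (u'^2 / 2 - f u) dx,
# discretized by explicit Euler steps on the coefficients theta
theta = np.zeros(K)
dt = 1e-3
for _ in range(5000):
    du = theta @ dphi
    grad = (dphi * du) @ w - (phi * f) @ w  # dE/dtheta by quadrature
    theta -= dt * grad

err = np.max(np.abs(theta @ phi - np.sin(np.pi * x)))
```

The two error sources in the paper's decomposition are visible here: the basis size `K` controls the approximation error, the number of Euler steps (training time) controls the training error.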

December 31, 2025 · 2 min · Research Team

An Efficient Machine Learning Framework for Option Pricing via Fourier Transform

An Efficient Machine Learning Framework for Option Pricing via Fourier Transform ArXiv ID: 2512.16115 “View on arXiv” Authors: Liying Zhang, Ying Gao Abstract The increasing need for rapid recalibration of option pricing models in dynamic markets places stringent computational demands on data generation and valuation algorithms. In this work, we propose a hybrid algorithmic framework that integrates the smooth offset algorithm (SOA) with supervised machine learning models for the fast pricing of multiple path-independent options under exponential Lévy dynamics. Building upon the SOA-generated dataset, we train neural networks, random forests, and gradient boosted decision trees to construct surrogate pricing operators. Extensive numerical experiments demonstrate that, once trained, these surrogates achieve order-of-magnitude acceleration over direct SOA evaluation. Importantly, the proposed framework overcomes key numerical limitations inherent to fast Fourier transform-based methods, including the consistency of input data and the instability in deep out-of-the-money option pricing. ...
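The surrogate idea can be sketched in a few lines. Below, a closed-form Black-Scholes pricer stands in for the SOA-generated exponential-Lévy prices, and polynomial least squares stands in for the paper's neural networks and tree ensembles; both substitutions are only for illustration.

```python
import numpy as np
from math import log, sqrt, exp, erf

def norm_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    # Closed-form Black-Scholes price, standing in for the slower reference pricer
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d1 - sigma * sqrt(T))

# Offline stage: generate a training set with the reference pricer
rng = np.random.default_rng(0)
Ks = rng.uniform(80, 120, 2000)
vols = rng.uniform(0.1, 0.5, 2000)
prices = np.array([bs_call(100.0, K, 1.0, 0.02, v) for K, v in zip(Ks, vols)])

# Surrogate pricing operator: cheap regression in (strike, vol)
A = np.stack([np.ones_like(Ks), Ks, vols, Ks**2, vols**2, Ks * vols], axis=1)
coef, *_ = np.linalg.lstsq(A, prices, rcond=None)

rmse = np.sqrt(np.mean((A @ coef - prices) ** 2))
baseline = prices.std()  # error of always predicting the mean price
```

Once `coef` is fitted, pricing a new contract is a single dot product — this is the order-of-magnitude acceleration over re-running the reference pricer that the abstract reports.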

December 18, 2025 · 2 min · Research Team

Stochastic Dominance Constrained Optimization with S-shaped Utilities: Poor-Performance-Region Algorithm and Neural Network

Stochastic Dominance Constrained Optimization with S-shaped Utilities: Poor-Performance-Region Algorithm and Neural Network ArXiv ID: 2512.00299 “View on arXiv” Authors: Zeyun Hu, Yang Liu Abstract We investigate the static portfolio selection problem of S-shaped and non-concave utility maximization under first-order and second-order stochastic dominance (SD) constraints. In many S-shaped utility optimization problems, one should require a liquidation boundary to guarantee the existence of a finite concave envelope function. A first-order SD (FSD) constraint can replace this requirement and provide an alternative for risk management. We explicitly solve the optimal solution under a general S-shaped utility function with a first-order stochastic dominance constraint. However, the second-order SD (SSD) constrained problem under non-concave utilities is difficult to solve analytically due to the invalidity of Sion's minimax theorem. To this end, we propose a numerical algorithm to obtain a plausible and sub-optimal solution for general non-concave utilities. The key idea is to detect the poor performance region with respect to the SSD constraints, characterize its structure and modify the distribution on that region to obtain (sub-)optimality. A key financial insight is that the decision maker should follow the SD constraint on the poor performance scenario while conducting the unconstrained optimal strategy otherwise. We provide numerical experiments to show that our algorithm effectively finds a sub-optimal solution in many cases. Finally, we develop an algorithm-guided piecewise-neural-network framework to learn the solution of the SSD problem, which demonstrates accelerated convergence compared to standard neural network approaches. ...
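Detecting where an SSD constraint binds starts from checking the constraint itself. A minimal empirical check — not the paper's algorithm, just the underlying criterion — compares integrated CDFs on a grid: `a` second-order dominates `b` when its integrated CDF lies below `b`'s at every threshold.

```python
import numpy as np

def ssd_dominates(a, b, grid):
    """Empirical check that sample `a` second-order stochastically dominates
    sample `b`: the integrated CDF of `a` must lie below that of `b` at
    every threshold (equivalently, lower expected shortfall everywhere)."""
    Fa = (a[None, :] <= grid[:, None]).mean(axis=1)  # empirical CDFs on grid
    Fb = (b[None, :] <= grid[:, None]).mean(axis=1)
    step = grid[1] - grid[0]
    return bool(np.all(np.cumsum(Fa) * step <= np.cumsum(Fb) * step + 1e-12))

rng = np.random.default_rng(0)
base = rng.standard_normal(10_000)
better = base + 1.0  # same shape, uniformly higher outcomes
grid = np.linspace(-6.0, 8.0, 400)
```

Thresholds where the inequality is tight or violated are exactly the "poor performance region" the algorithm isolates and corrects.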

November 29, 2025 · 2 min · Research Team

“It Looks All the Same to Me”: Cross-index Training for Long-term Financial Series Prediction

“It Looks All the Same to Me”: Cross-index Training for Long-term Financial Series Prediction ArXiv ID: 2511.08658 “View on arXiv” Authors: Stanislav Selitskiy Abstract We investigate a number of Artificial Neural Network architectures (well-known and more "exotic") in application to the long-term financial time-series forecasts of indexes on different global markets. The particular area of interest of this research is to examine the correlation of these indexes' behaviour in terms of Machine Learning algorithms cross-training. Would training an algorithm on an index from one global market produce similar or even better accuracy when such a model is applied for predicting another index from a different market? The demonstrated predominantly positive answer to this question is another argument in favour of the long-debated Efficient Market Hypothesis of Eugene Fama. ...
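The cross-training protocol itself is simple: fit on one index, evaluate on another. A toy version with a lagged linear predictor and two synthetic "indexes" sharing the same AR(1) dynamics (the networks and real market data of the paper are replaced by these stand-ins) shows the mechanics:

```python
import numpy as np

def make_index(seed, n=2000, phi=0.9):
    # Synthetic "index" with AR(1) dynamics shared across markets
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return x

def lag_matrix(series, p=5):
    X = np.stack([series[i:len(series) - p + i] for i in range(p)], axis=1)
    return X, series[p:]

index_a, index_b = make_index(0), make_index(1)

X_a, y_a = lag_matrix(index_a)
coef, *_ = np.linalg.lstsq(X_a, y_a, rcond=None)  # train on market A

X_b, y_b = lag_matrix(index_b)
cross_mse = np.mean((X_b @ coef - y_b) ** 2)      # test on market B
naive_mse = np.var(index_b)                        # always predict the mean
```

When the two series share their generating dynamics, the cross-trained model transfers and beats the naive baseline — the synthetic analogue of the paper's "predominantly positive answer".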

November 11, 2025 · 2 min · Research Team

Can Machine Learning Algorithms Outperform Traditional Models for Option Pricing?

Can Machine Learning Algorithms Outperform Traditional Models for Option Pricing? ArXiv ID: 2510.01446 “View on arXiv” Authors: Georgy Milyushkov Abstract This study investigates the application of machine learning techniques, specifically Neural Networks, Random Forests, and CatBoost for option pricing, in comparison to traditional models such as Black-Scholes and Heston Model. Using both synthetically generated data and real market option data, each model is evaluated in predicting the option price. The results show that machine learning models can capture complex, non-linear relationships in option prices and, in several cases, outperform both Black-Scholes and Heston models. These findings highlight the potential of data-driven methods to improve pricing accuracy and better reflect market dynamics. ...
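The traditional baseline in such comparisons is the Black-Scholes closed form. A self-contained implementation, validated against a Monte Carlo simulation of the same lognormal dynamics (parameter values below are arbitrary examples):

```python
import numpy as np
from math import log, sqrt, exp, erf

def bs_call(S, K, T, r, sigma):
    """Black-Scholes European call price, the classical benchmark model."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))
    return S * N(d1) - K * exp(-r * T) * N(d2)

# Monte Carlo check under the same lognormal terminal distribution
rng = np.random.default_rng(1)
S0, K, T, r, sigma = 100.0, 100.0, 1.0, 0.02, 0.2
ST = S0 * np.exp((r - 0.5 * sigma**2) * T
                 + sigma * sqrt(T) * rng.standard_normal(400_000))
mc_price = exp(-r * T) * np.maximum(ST - K, 0.0).mean()
```

The ML models in the study are trained to reproduce (and, on market data, improve on) exactly this kind of pricing map from contract parameters to price.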

October 1, 2025 · 2 min · Research Team

Neural Network-Based Algorithmic Trading Systems: Multi-Timeframe Analysis and High-Frequency Execution in Cryptocurrency Markets

Neural Network-Based Algorithmic Trading Systems: Multi-Timeframe Analysis and High-Frequency Execution in Cryptocurrency Markets ArXiv ID: 2508.02356 “View on arXiv” Authors: Wěi Zhāng Abstract This paper explores neural network-based approaches for algorithmic trading in cryptocurrency markets. Our approach combines multi-timeframe trend analysis with high-frequency direction prediction networks, achieving positive risk-adjusted returns through statistical modeling and systematic market exploitation. The system integrates diverse data sources including market data, on-chain metrics, and orderbook dynamics, translating these into unified buy/sell pressure signals. We demonstrate how machine learning models can effectively capture cross-timeframe relationships, enabling sub-second trading decisions with statistical confidence. ...
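The "unified buy/sell pressure" idea can be illustrated with a deliberately crude fusion rule: compute a trend sign at several timeframes and average them into one score in [-1, 1]. Everything below — the horizons, the moving-average trend proxy, the averaging — is a hypothetical stand-in for the paper's learned networks over market, on-chain, and orderbook inputs.

```python
import numpy as np

rng = np.random.default_rng(7)
prices = 100.0 * np.exp(np.cumsum(0.001 * rng.standard_normal(5000)))

def trend_sign(prices, window):
    """+1 / 0 / -1 trend at one timeframe: price vs. its moving average."""
    ma = np.convolve(prices, np.ones(window) / window, mode="valid")
    return np.sign(prices[window - 1:] - ma)

# Hypothetical multi-timeframe fusion: average per-timeframe trend signs
# into a single buy/sell pressure in [-1, 1]
horizons = [20, 100, 500]
n = len(prices) - max(horizons) + 1
pressure = np.mean([trend_sign(prices, h)[-n:] for h in horizons], axis=0)
```

A score near +1 means all timeframes agree on upward pressure; values near zero flag cross-timeframe disagreement, which is where a learned model has the most room to add value over a fixed rule.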

August 4, 2025 · 2 min · Research Team

Hedging with memory: shallow and deep learning with signatures

Hedging with memory: shallow and deep learning with signatures ArXiv ID: 2508.02759 “View on arXiv” Authors: Eduardo Abi Jaber, Louis-Amand Gérard Abstract We investigate the use of path signatures in a machine learning context for hedging exotic derivatives under non-Markovian stochastic volatility models. In a deep learning setting, we use signatures as features in feedforward neural networks and show that they outperform LSTMs in most cases, with orders of magnitude less training compute. In a shallow learning setting, we compare two regression approaches: the first directly learns the hedging strategy from the expected signature of the price process; the second models the dynamics of volatility using a signature volatility model, calibrated on the expected signature of the volatility. Solving the hedging problem in the calibrated signature volatility model yields more accurate and stable results across different payoffs and volatility dynamics. ...
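The features in question are path signatures: iterated integrals of the path. For a piecewise-linear path the depth-2 signature can be computed exactly in closed form, which is all a feedforward network needs as input (truncation depth 2 here is for brevity; practical pipelines go deeper):

```python
import numpy as np

def signature_depth2(path):
    """Depth-2 signature of a piecewise-linear path of shape (T, d):
    level 1 is the total increment S1[i] = ∫ dX^i, level 2 the iterated
    integrals S2[i, j] = ∫∫ dX^i dX^j, computed exactly per segment."""
    dX = np.diff(path, axis=0)
    S1 = dX.sum(axis=0)
    # path value (relative to the start) before each increment
    X = np.vstack([np.zeros(path.shape[1]), np.cumsum(dX, axis=0)[:-1]])
    S2 = X.T @ dX + 0.5 * dX.T @ dX
    return S1, S2

rng = np.random.default_rng(3)
path = rng.standard_normal((50, 2))
S1, S2 = signature_depth2(path)
```

A useful correctness check is the shuffle identity: the symmetric part of the level-2 tensor equals the outer product of the level-1 increments, S2 + S2.T = S1 ⊗ S1, which holds exactly for this segment-wise computation.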

August 3, 2025 · 2 min · Research Team

Joint deep calibration of the 4-factor PDV model

Joint deep calibration of the 4-factor PDV model ArXiv ID: 2507.09412 “View on arXiv” Authors: Fabio Baschetti, Giacomo Bormetti, Pietro Rossi Abstract Joint calibration to SPX and VIX market data is a delicate task that requires sophisticated modeling and incurs significant computational costs. The latter is especially true when pricing of volatility derivatives hinges on nested Monte Carlo simulation. One such example is the 4-factor Markov Path-Dependent Volatility (PDV) model of Guyon and Lekeufack (2023). Nonetheless, its realism has earned it considerable attention in recent years. Gazzani and Guyon (2025) marked a relevant contribution by learning the VIX as a random variable, i.e., a measurable function of the model parameters and the Markovian factors. A neural network replaces the inner simulation and makes the joint calibration problem accessible. However, the minimization loop remains slow due to expensive outer simulation. The present paper overcomes this limitation by learning SPX implied volatilities, VIX futures, and VIX call option prices. The pricing functions reduce to simple matrix-vector products that can be evaluated on the fly, shrinking calibration times to just a few seconds. ...
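The abstract's closing claim — pricing reduced to matrix-vector products makes calibration near-instant — is easy to see in a stripped-down form. Below, a random linear map `A` and synthetic "market" quotes are placeholders, not the 4-factor PDV model or its learned pricing functions; the point is only that, once prices are (approximately) linear in a feature vector of parameters, calibration collapses to a least-squares solve.

```python
import numpy as np

rng = np.random.default_rng(0)
n_quotes, n_params = 40, 4
A = rng.standard_normal((n_quotes, n_params))   # stand-in learned pricing map
theta_true = np.array([0.7, -0.3, 1.2, 0.1])
market = A @ theta_true + 0.01 * rng.standard_normal(n_quotes)

# Calibration: one linear solve instead of a simulation-in-the-loop search
theta_hat, *_ = np.linalg.lstsq(A, market, rcond=None)
model_prices = A @ theta_hat   # repricing the whole quote set: one matvec
```

In the paper the map from parameters to SPX/VIX quotes is nonlinear and learned, so the outer loop is an iterative minimizer rather than one solve, but each evaluation inside it has exactly this matvec cost.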

July 12, 2025 · 2 min · Research Team

Neural Functionally Generated Portfolios

Neural Functionally Generated Portfolios ArXiv ID: 2506.19715 “View on arXiv” Authors: Michael Monoyios, Olivia Pricilia Abstract We introduce a novel neural-network-based approach to learning the generating function $G(\cdot)$ of a functionally generated portfolio (FGP) from synthetic or real market data. In the neural network setting, the generating function is represented as $G_\theta(\cdot)$, where $\theta$ is a trainable neural network parameter vector, and $G_\theta(\cdot)$ is trained to maximise investment return relative to the market portfolio. We compare the performance of the Neural FGP approach against classical FGP benchmarks. FGPs provide a robust alternative to classical portfolio optimisation by bypassing the need to estimate drifts or covariances. The neural FGP framework extends this by introducing flexibility in the design of the generating function, enabling it to learn from market dynamics while preserving self-financing and pathwise decomposition properties. ...
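A concrete example of a classical FGP benchmark is the entropy-weighted portfolio: with generating function $G(m) = -\sum_i m_i \log m_i$ applied to market weights $m$, the master formula reduces to $\pi_i = -m_i \log m_i / G(m)$. The neural approach replaces this fixed $G$ with a trained $G_\theta$; the example weights below are arbitrary.

```python
import numpy as np

def entropy_fgp(m):
    """Classical functionally generated portfolio with the entropy
    generating function G(m) = -sum_i m_i log m_i, for which the
    master formula gives pi_i = -m_i log m_i / G(m)."""
    G = -(m * np.log(m)).sum()
    return -(m * np.log(m)) / G

market_weights = np.array([0.5, 0.3, 0.2])
weights = entropy_fgp(market_weights)
```

Note that no drift or covariance estimate appears anywhere: the portfolio is a deterministic function of current market weights, which is exactly the robustness property the abstract highlights.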

June 24, 2025 · 2 min · Research Team