Physics-Informed Singular-Value Learning for Cross-Covariance Forecasting in Financial Markets
ArXiv ID: 2601.07687
Authors: Efstratios Manolakis, Christian Bongiorno, Rosario Nunzio Mantegna
Abstract
A new wave of work on covariance cleaning and nonlinear shrinkage has delivered asymptotically optimal analytical solutions for large covariance matrices. These ideas have since been generalized to empirical cross-covariance matrices, whose singular-value shrinkage characterizes the comovements between one set of assets and another. Existing analytical cross-covariance cleaners are derived under strong stationarity and large-sample assumptions, and they typically rely on mesoscopic regularity conditions such as bounded spectra; macroscopic common modes (e.g., a global market factor) violate these conditions. When applied to real equity returns, where dependence structures drift over time and global modes are prominent, we find that these theoretically optimal formulas do not translate into robust out-of-sample performance. We address this gap by designing a random-matrix-inspired neural architecture that operates in the empirical singular-vector basis and learns a nonlinear mapping from empirical singular values to their cleaned counterparts. By construction, the network can recover the analytical solution as a special case, yet it remains flexible enough to adapt to non-stationary dynamics and mode-driven distortions. Trained on a long history of equity returns, the proposed method achieves a more favorable bias-variance trade-off than purely analytical cleaners and delivers systematically lower out-of-sample cross-covariance prediction errors. Our results demonstrate that combining random-matrix theory with machine learning makes asymptotic theories practically effective in realistic, time-varying markets.
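The cleaning step described in the abstract has the structure of a rotationally invariant estimator: keep the empirical singular vectors, transform only the singular values. Below is a minimal Python sketch of that structure, with the learned network abstracted as a generic callable `f` (the function name and interface are illustrative assumptions, not the paper's API):

```python
import numpy as np

def clean_cross_covariance(X, Y, f):
    """Clean the empirical cross-covariance of two return panels.

    X : (T, n) array, returns of the first asset set
    Y : (T, m) array, returns of the second asset set
    f : callable mapping the vector of empirical singular values
        to cleaned singular values (stand-in for the learned network)
    """
    T = X.shape[0]
    E = X.T @ Y / T                        # empirical cross-covariance, (n, m)
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    return U @ np.diag(f(s)) @ Vt          # empirical singular vectors are kept
```

Passing the identity for `f` returns the raw empirical estimator unchanged; an analytical random-matrix shrinkage formula or the trained network plugs into the same slot.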
Keywords: Covariance Matrix Cleaning, Nonlinear Shrinkage, Random Matrix Theory, Neural Networks, Cross-Covariance
Complexity vs Empirical Score
- Math Complexity: 8.5/10
- Empirical Rigor: 6.5/10
- Quadrant: Holy Grail
- Why: The paper is mathematically dense, featuring advanced random matrix theory derivations, singular value decomposition, and the formulation of a physics-informed neural network with symmetry constraints. Empirically, it proposes a specific neural architecture trained on historical equity returns and evaluates out-of-sample prediction errors, indicating a data-heavy, implementation-focused approach.
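The symmetry constraints mentioned above are not spelled out in this summary; one natural reading is a network shared across all singular values, so the learned map is equivariant to their ordering, with spectrum-wide context features letting each value react to macroscopic modes such as a global market factor. A hedged PyTorch sketch under that assumption (class name, layer sizes, and context features are all guesses, not the paper's architecture):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SingularValueCleaner(nn.Module):
    """Shared MLP mapping empirical singular values to cleaned values.

    Applying the same network to every singular value makes the map
    equivariant to their ordering; global context features (mean, max)
    expose the whole spectrum to each per-value decision.
    """
    def __init__(self, hidden=32):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, s):                       # s: (k,) singular values
        ctx = torch.stack(
            [s, s.mean().expand_as(s), s.max().expand_as(s)], dim=-1
        )                                       # (k, 3) per-value features
        factor = F.softplus(self.mlp(ctx)).squeeze(-1)
        # Multiplicative correction: a factor of 1 passes the empirical
        # values through unchanged, so fixed shrinkage formulas are, in
        # principle, representable as one setting of the weights.
        return s * factor
```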
```mermaid
flowchart TD
A["Research Goal<br>Address drift & macro modes<br>in cross-covariance cleaning"] --> B["Methodology<br>Physics-Informed Singular-Value Learning<br>(Neural Architecture)"]
B --> C["Data Input<br>Long history of equity returns"]
C --> D["Computational Process<br>Learn nonlinear mapping in<br>empirical singular-vector basis"]
D --> E{"Outcome Analysis"}
E -->|Analytical Cleaner| F["High Out-of-Sample Error<br>Non-stationary violation"]
E -->|Proposed Method| G["Lower Prediction Error<br>Optimal Bias-Variance Trade-off"]
```
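The outcome comparison in the flowchart, and the abstract's claim of "systematically lower out-of-sample cross-covariance prediction errors", suggest a rolling evaluation of roughly the following shape. This is a sketch only: the window lengths and the Frobenius criterion are assumptions, not the paper's exact protocol.

```python
import numpy as np

def oos_frobenius_error(X, Y, cleaner, t_in=252, t_out=21):
    """Rolling out-of-sample test of a cross-covariance cleaner.

    cleaner : callable (X_in, Y_in) -> cleaned cross-covariance matrix,
              e.g. a wrapper around clean_cross_covariance above
    """
    errors = []
    for t in range(t_in, X.shape[0] - t_out, t_out):
        C_hat = cleaner(X[t - t_in:t], Y[t - t_in:t])      # in-sample estimate
        X_out, Y_out = X[t:t + t_out], Y[t:t + t_out]
        C_real = X_out.T @ Y_out / t_out                   # realized target
        errors.append(np.linalg.norm(C_hat - C_real, "fro") ** 2)
    return float(np.mean(errors))
```

Under this kind of criterion, the analytical cleaner and the learned map can be compared on identical rolling windows, which is what the two outcome branches of the flowchart contrast.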