Denoising Complex Covariance Matrices with Hybrid ResNet and Random Matrix Theory: Cryptocurrency Portfolio Applications

ArXiv ID: 2510.19130
Authors: Andres Garcia-Medina

Abstract: Covariance matrices estimated from short, noisy, and non-Gaussian financial time series are notoriously unstable. Empirical evidence suggests that such covariance structures often exhibit power-law scaling, reflecting complex, hierarchical interactions among assets. Motivated by this observation, we introduce a power-law covariance model to characterize collective market dynamics and propose a hybrid estimator that integrates Random Matrix Theory (RMT) with deep Residual Neural Networks (ResNets). The RMT component regularizes the eigenvalue spectrum in high-dimensional noisy settings, while the ResNet learns data-driven corrections that recover latent structural dependencies encoded in the eigenvectors. Monte Carlo simulations show that the proposed ResNet-based estimators consistently minimize both Frobenius and minimum-variance losses across a range of population covariance models. Empirical experiments on 89 cryptocurrencies over the period 2020-2025, using a training window ending at the local Bitcoin peak in November 2021 and testing through the subsequent bear market, demonstrate that a two-step estimator combining hierarchical filtering with ResNet corrections produces the most profitable and well-balanced portfolios, remaining robust across market regime shifts. Beyond finance, the proposed hybrid framework applies broadly to high-dimensional systems described by low-rank deformations of Wishart ensembles, where incorporating eigenvector information enables the detection of multiscale and hierarchical structure that is inaccessible to purely eigenvalue-based methods.
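The RMT component of the abstract refers to a standard eigenvalue-clipping filter: eigenvalues of the sample correlation matrix that fall below the Marchenko-Pastur upper edge are treated as noise. The sketch below shows only that classical RMT step, not the paper's hybrid ResNet estimator; the function name and the noisy-eigenvalue replacement rule (averaging) are illustrative assumptions.

```python
import numpy as np

def rmt_clip(returns):
    """Denoise a sample covariance matrix by clipping eigenvalues of the
    correlation matrix that lie below the Marchenko-Pastur upper edge.
    This is a conventional RMT filter, not the paper's hybrid estimator."""
    T, N = returns.shape                # T observations, N assets
    X = returns - returns.mean(axis=0)
    corr = np.corrcoef(X, rowvar=False)
    q = N / T                           # aspect ratio of the data matrix
    lam_max = (1 + np.sqrt(q)) ** 2     # MP upper edge for unit-variance noise
    w, V = np.linalg.eigh(corr)
    noise = w < lam_max                 # eigenvalues consistent with pure noise
    if noise.any():
        w[noise] = w[noise].mean()      # flatten the noise bulk (preserves trace)
    denoised = V @ np.diag(w) @ V.T
    np.fill_diagonal(denoised, 1.0)     # restore exact unit diagonal
    std = X.std(axis=0, ddof=1)
    return denoised * np.outer(std, std)  # back to a covariance matrix

rng = np.random.default_rng(0)
R = rng.normal(size=(250, 50)) * 0.01   # 250 days, 50 assets of pure noise
Sigma = rmt_clip(R)
print(Sigma.shape)
```

On pure-noise input like this, nearly all eigenvalues fall inside the Marchenko-Pastur bulk and are flattened, which is exactly the regularization the abstract says the ResNet then corrects using eigenvector information.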

October 21, 2025 · 2 min · Research Team

Schur Complementary Allocation: A Unification of Hierarchical Risk Parity and Minimum Variance Portfolios

ArXiv ID: 2411.05807
Authors: Unknown

Abstract: Despite many attempts to make optimization-based portfolio construction in the spirit of Markowitz robust and approachable, it is far from universally adopted. Meanwhile, the collection of more heuristic divide-and-conquer approaches was revitalized by Lopez de Prado's introduction of Hierarchical Risk Parity (HRP). This paper reveals the hidden connection between these seemingly disparate approaches.
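For context, the two building blocks the paper connects can be sketched directly: the Schur complement of a covariance sub-block (the conditional covariance of one asset group given the other) and unconstrained global minimum-variance weights. This is background only, assuming a simple 3-asset covariance; it is not the paper's allocation scheme.

```python
import numpy as np

def schur_complement(Sigma, k):
    """Schur complement of the trailing block: A - B D^{-1} B^T, where
    Sigma = [[A, B], [B^T, D]] and A is k x k. Statistically, this is the
    covariance of the first k assets conditional on the remaining ones."""
    A = Sigma[:k, :k]
    B = Sigma[:k, k:]
    D = Sigma[k:, k:]
    return A - B @ np.linalg.solve(D, B.T)

def min_variance_weights(Sigma):
    """Unconstrained global minimum-variance portfolio: w proportional
    to Sigma^{-1} 1, normalized to sum to one."""
    ones = np.ones(Sigma.shape[0])
    w = np.linalg.solve(Sigma, ones)
    return w / w.sum()

Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])
print(schur_complement(Sigma, 2))   # 2x2 conditional covariance of assets 0-1
print(min_variance_weights(Sigma))  # weights summing to 1
```

HRP, by contrast, ignores off-diagonal blocks when splitting the asset universe; replacing the raw blocks with Schur complements is the kind of bridge between the two families that the paper formalizes.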

October 29, 2024 · 2 min · Research Team

NeuralFactors: A Novel Factor Learning Approach to Generative Modeling of Equities

ArXiv ID: 2408.01499
Authors: Unknown

Abstract: The use of machine learning for statistical modeling (and thus, generative modeling) has grown in popularity with the proliferation of time series models, text-to-image models, and especially large language models. Fundamentally, the goal of classical factor modeling is statistical modeling of stock returns, and in this work, we explore using deep generative modeling to enhance classical factor models. Prior work has explored the use of deep generative models to model hundreds of stocks, leading to accurate risk forecasting and alpha portfolio construction; however, that specific model does not allow for easy factor-modeling interpretation, in that the factor exposures cannot be deduced. In this work, we introduce NeuralFactors, a novel machine-learning-based approach to factor analysis where a neural network outputs factor exposures and factor returns, trained using the same methodology as variational autoencoders. We show that this model outperforms prior approaches both in terms of log-likelihood performance and computational efficiency. Further, we show that this method is competitive with prior work in generating realistic synthetic data, covariance estimation, risk analysis (e.g., value at risk, or VaR, of portfolios), and portfolio optimization. Finally, due to the connection to classical factor analysis, we analyze how the factors our model learns cluster together and show that the factor exposures could be used for embedding stocks.
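The classical structure NeuralFactors parameterizes with a neural network is the standard factor-model covariance: with exposures B, factor covariance F, and idiosyncratic variances d, asset covariance is Sigma = B F B^T + diag(d). The sketch below shows only that classical reconstruction with random stand-in values; all names and dimensions are hypothetical, not the paper's code.

```python
import numpy as np

def factor_covariance(B, F, d):
    """Classical factor-model covariance: systematic part B F B^T
    plus a diagonal idiosyncratic part diag(d)."""
    return B @ F @ B.T + np.diag(d)

rng = np.random.default_rng(1)
N, K = 100, 5                             # 100 stocks, 5 latent factors
B = rng.normal(size=(N, K))               # exposures (a network output in the paper)
F = np.diag(rng.uniform(0.01, 0.05, K))   # factor covariance (diagonal here)
d = rng.uniform(0.001, 0.01, N)           # idiosyncratic variances
Sigma = factor_covariance(B, F, d)
print(Sigma.shape)
```

Because the N x N covariance is built from an N x K exposure matrix with K much smaller than N, estimation and inversion scale with K rather than N; this low-rank-plus-diagonal structure is also what makes the learned exposures usable as stock embeddings.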

August 2, 2024 · 2 min · Research Team

Portfolio Optimization with Robust Covariance and Conditional Value-at-Risk Constraints

ArXiv ID: 2406.00610
Authors: Unknown

Abstract: The measure of portfolio risk is an important input of the Markowitz framework. In this study, we explored various methods to obtain robust covariance estimators that are less susceptible to financial data noise. We evaluated the performance of a large-cap portfolio using various forms of Ledoit shrinkage covariance and the robust Gerber covariance matrix over the period 2012 to 2022. Out-of-sample performance indicates that robust covariance estimators can outperform the market-capitalization-weighted benchmark portfolio, particularly during bull markets. The Gerber covariance with Mean Absolute Deviation (MAD) emerged as the top performer. However, robust estimators do not manage tail risk well under extreme market conditions, such as the Covid-19 period. To control for tail risk, we add a constraint on Conditional Value-at-Risk (CVaR) to make more conservative decisions on risk exposure. Additionally, we incorporated the unsupervised K-means clustering algorithm into the optimization algorithm (i.e., Nested Clustered Optimization, NCO). It not only helps mitigate numerical instability of the optimization algorithm, but also contributes to lower drawdowns.
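The tail-risk measure the abstract adds as a constraint can be estimated historically: CVaR (expected shortfall) at level alpha is the average loss in the worst alpha fraction of scenarios, i.e., the mean of losses at or beyond the VaR quantile. A minimal sketch, assuming simulated daily returns rather than the study's data:

```python
import numpy as np

def historical_cvar(pnl, alpha=0.05):
    """Historical CVaR at level alpha: the mean loss in the worst
    alpha fraction of outcomes (losses = negated returns)."""
    losses = -np.asarray(pnl)
    var = np.quantile(losses, 1 - alpha)  # Value-at-Risk threshold
    tail = losses[losses >= var]          # worst-case scenarios
    return tail.mean()

rng = np.random.default_rng(2)
daily_returns = rng.normal(0.0005, 0.01, 2500)  # ~10 years of synthetic daily P&L
print(historical_cvar(daily_returns, 0.05))
```

In an optimizer, this quantity (or its linear-programming reformulation due to Rockafellar and Uryasev) is bounded from above, which caps the expected loss in the tail rather than just its frequency, exactly the conservatism the abstract calls for in Covid-19-style regimes.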

June 2, 2024 · 2 min · Research Team