
Decomposition Pipeline for Large-Scale Portfolio Optimization with Applications to Near-Term Quantum Computing

ArXiv ID: 2409.10301 · View on arXiv · Authors: Unknown

Abstract: Industrially relevant constrained optimization problems, such as portfolio optimization and portfolio rebalancing, are often intractable or difficult to solve exactly. In this work, we propose and benchmark a decomposition pipeline targeting portfolio optimization and rebalancing problems with constraints. The pipeline decomposes the optimization problem into constrained subproblems, which are then solved separately and aggregated to give a final result. Our pipeline includes three main components: preprocessing of correlation matrices based on random matrix theory, modified spectral clustering based on Newman’s algorithm, and risk rebalancing. Our empirical results show that our pipeline consistently decomposes real-world portfolio optimization problems into subproblems with a size reduction of approximately 80%. Since subproblems are then solved independently, our pipeline drastically reduces the total computation time for state-of-the-art solvers. Moreover, by decomposing large problems into several smaller subproblems, the pipeline enables the use of near-term quantum devices as solvers, providing a path toward practical utility of quantum computers in portfolio optimization. ...
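
As a rough illustration of the first two stages (not the authors' implementation), the sketch below denoises a correlation matrix with a Marchenko–Pastur eigenvalue clip and then partitions assets by spectral clustering; scikit-learn's SpectralClustering stands in for the paper's modified Newman algorithm, and the risk-rebalancing stage is omitted. Function names and parameters are assumptions of this sketch.

```python
# Minimal sketch, assuming scikit-learn's spectral clustering as a stand-in
# for the paper's modified Newman algorithm; risk rebalancing is omitted.
import numpy as np
from sklearn.cluster import SpectralClustering

def denoise_correlation(returns: np.ndarray) -> np.ndarray:
    """Clip eigenvalues below the Marchenko-Pastur edge to suppress noise."""
    T, N = returns.shape
    corr = np.corrcoef(returns, rowvar=False)
    lam_max = (1.0 + np.sqrt(N / T)) ** 2          # MP upper edge for q = N / T
    vals, vecs = np.linalg.eigh(corr)
    noise = vals < lam_max
    vals[noise] = vals[noise].mean()               # flatten the noise band
    denoised = vecs @ np.diag(vals) @ vecs.T
    d = np.sqrt(np.diag(denoised))
    return denoised / np.outer(d, d)               # restore a unit diagonal

def decompose(returns: np.ndarray, n_subproblems: int = 5) -> np.ndarray:
    """Assign each asset to a subproblem via spectral clustering on |corr|."""
    affinity = np.abs(denoise_correlation(returns))    # nonnegative similarity
    model = SpectralClustering(n_clusters=n_subproblems,
                               affinity="precomputed", random_state=0)
    return model.fit_predict(affinity)

labels = decompose(np.random.default_rng(0).normal(size=(1000, 50)))
```

Each label set defines one constrained subproblem that can then be solved independently and aggregated, which is where the roughly 80% size reduction reported in the abstract comes from.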

September 16, 2024 · 2 min · Research Team

Longitudinal market structure detection using a dynamic modularity-spectral algorithm

ArXiv ID: 2407.04500 · View on arXiv · Authors: Unknown

Abstract: In this paper, we introduce the Dynamic Modularity-Spectral Algorithm (DynMSA), a novel approach to identify clusters of stocks with high intra-cluster correlations and low inter-cluster correlations by combining Random Matrix Theory with modularity optimisation and spectral clustering. The primary objective is to uncover hidden market structures and find diversifiers based on return correlations, thereby achieving a more effective risk-reducing portfolio allocation. We applied DynMSA to constituents of the S&P 500 and compared the results to sector- and market-based benchmarks. Besides the conception of this algorithm, our contributions further include implementing a sector-based calibration for modularity optimisation and a correlation-based distance function for spectral clustering. Testing revealed that DynMSA outperforms baseline models in intra- and inter-cluster correlation differences, particularly over medium-term correlation look-backs. It also identifies stable clusters and detects regime changes due to exogenous shocks, such as the COVID-19 pandemic. Portfolios constructed using our clusters showed higher Sortino and Sharpe ratios, lower downside volatility, reduced maximum drawdown and higher annualised returns compared to an equally weighted market benchmark. ...
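
The sketch below shows only the modularity ingredient of this kind of approach on a correlation graph; networkx's greedy modularity routine is an assumed stand-in for the paper's calibrated modularity optimisation, and the spectral, sector-calibration and dynamic (longitudinal) steps are not reproduced.

```python
# Hedged sketch of modularity-based clustering on a correlation graph.
# networkx's greedy routine stands in for DynMSA's calibrated optimisation.
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def modularity_clusters(corr: np.ndarray) -> list:
    """Communities of assets on a graph weighted by positive correlations."""
    n = corr.shape[0]
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if corr[i, j] > 0:                 # keep only positively correlated pairs
                G.add_edge(i, j, weight=corr[i, j])
    return [set(c) for c in greedy_modularity_communities(G, weight="weight")]

rng = np.random.default_rng(1)
returns = rng.normal(size=(500, 30))           # 500 days, 30 stocks (toy data)
clusters = modularity_clusters(np.corrcoef(returns, rowvar=False))
```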

July 5, 2024 · 2 min · Research Team

Systematic Comparable Company Analysis and Computation of Cost of Equity using Clustering

ArXiv ID: 2405.12991 · View on arXiv · Authors: Unknown

Abstract: Computing cost of equity for private corporations and performing comparable company analysis (comps) for both public and private corporations is an integral but tedious and time-consuming task, with important applications spanning the finance world, from valuations to internal planning. Performing comps traditionally often involves high ambiguity and subjectivity, leading to unreliability and inconsistency. In this paper, I will present a systematic and faster approach to compute cost of equity for private corporations and perform comps for both public and private corporations using spectral and agglomerative clustering. This reduces the time required to perform comps by orders of magnitude and makes the entire process more consistent and reliable. ...
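
A minimal sketch of the idea, under assumed inputs: peers are selected by agglomerative clustering on standardised fundamentals, and a CAPM cost of equity is read off the peer-median beta. The feature set, risk-free rate and equity risk premium below are illustrative, not the paper's calibration.

```python
# Illustrative sketch only: peers via agglomerative clustering on standardised
# fundamentals, then a CAPM cost of equity from the peer-median beta.
# Feature set, risk-free rate and equity risk premium are assumed values.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import AgglomerativeClustering

def peer_labels(features: np.ndarray, n_clusters: int = 8) -> np.ndarray:
    """Group companies with similar fundamentals (e.g. margins, growth, size)."""
    X = StandardScaler().fit_transform(features)
    return AgglomerativeClustering(n_clusters=n_clusters).fit_predict(X)

def capm_cost_of_equity(peer_betas: np.ndarray, rf: float = 0.04,
                        erp: float = 0.05) -> float:
    """CAPM: r_e = r_f + beta * equity risk premium, with the peer-median beta."""
    return rf + float(np.median(peer_betas)) * erp

rng = np.random.default_rng(2)
fundamentals = rng.normal(size=(200, 6))       # 200 companies, 6 toy features
betas = rng.uniform(0.5, 1.8, size=200)        # observed betas of public peers
labels = peer_labels(fundamentals)
peers = labels == labels[0]                    # companies in the target's cluster
r_e = capm_cost_of_equity(betas[peers])
```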

April 25, 2024 · 2 min · Research Team

Random matrix theory and nested clustered portfolios on Mexican markets

ArXiv ID: 2306.05667 · View on arXiv · Authors: Unknown

Abstract: This work aims to deal with the optimal allocation instability problem of Markowitz’s modern portfolio theory in high dimensionality. We propose a combined strategy that considers covariance matrix estimators from Random Matrix Theory (RMT) and the machine learning allocation methodology known as Nested Clustered Optimization (NCO). The latter methodology is modified and reformulated in terms of the spectral clustering algorithm and Minimum Spanning Tree (MST) to solve internal problems inherent to the original proposal. Markowitz’s classical mean-variance allocation and the modified NCO machine learning approach are tested on financial instruments listed on the Mexican Stock Exchange (BMV) in a moving window analysis from 2018 to 2022. The modified NCO algorithm achieves stable allocations by incorporating RMT covariance estimators. In particular, the allocation weights are positive, and their absolute values add up to the total capital without explicit restrictions in the formulation. Our results suggest that the risky “short position” investment strategy can be avoided by means of RMT inference and statistical learning techniques. ...
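
The sketch below captures only the nested allocation idea behind NCO, minimum-variance weights within each cluster and then across cluster portfolios, on a toy covariance; the RMT covariance estimator and the spectral-clustering/MST modifications described in the abstract are assumed to happen upstream.

```python
# Toy sketch of the nested clustered optimisation (NCO) allocation step:
# min-variance weights inside each cluster, then across cluster portfolios.
# The RMT covariance estimator and spectral/MST clustering are assumed upstream.
import numpy as np

def min_var_weights(cov: np.ndarray) -> np.ndarray:
    """Unconstrained minimum-variance weights, proportional to cov^{-1} * 1."""
    w = np.linalg.solve(cov, np.ones(cov.shape[0]))
    return w / w.sum()

def nco_weights(cov: np.ndarray, labels: np.ndarray) -> np.ndarray:
    clusters = [np.where(labels == k)[0] for k in np.unique(labels)]
    intra = np.zeros((cov.shape[0], len(clusters)))
    for k, idx in enumerate(clusters):                  # allocate within clusters
        intra[idx, k] = min_var_weights(cov[np.ix_(idx, idx)])
    reduced = intra.T @ cov @ intra                     # cluster-level covariance
    return intra @ min_var_weights(reduced)             # allocate across clusters

rng = np.random.default_rng(3)
A = rng.normal(size=(60, 20))
cov = A.T @ A / 60 + 1e-3 * np.eye(20)                  # well-conditioned toy covariance
labels = rng.integers(0, 4, size=20)                    # stand-in cluster assignment
weights = nco_weights(cov, labels)                      # sums to 1 by construction
```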

June 9, 2023 · 2 min · Research Team

Robust Detection of Lead-Lag Relationships in Lagged Multi-Factor Models

ArXiv ID: 2305.06704 · View on arXiv · Authors: Unknown

Abstract: In multivariate time series systems, key insights can be obtained by discovering lead-lag relationships inherent in the data, which refer to the dependence between two time series shifted in time relative to one another, and which can be leveraged for the purposes of control, forecasting or clustering. We develop a clustering-driven methodology for robust detection of lead-lag relationships in lagged multi-factor models. Within our framework, the envisioned pipeline takes as input a set of time series and creates an enlarged universe of extracted subsequence time series from each input time series via a sliding window approach. This is then followed by the application of various clustering techniques (such as k-means++ and spectral clustering), employing a variety of pairwise similarity measures, including nonlinear ones. Once the clusters have been extracted, lead-lag estimates across clusters are robustly aggregated to enhance the identification of the consistent relationships in the original universe. We establish connections to the multireference alignment problem for both the homogeneous and heterogeneous settings. Since multivariate time series are ubiquitous in a wide range of domains, we demonstrate that our method is not only able to robustly detect lead-lag relationships in financial markets, but can also yield insightful results when applied to an environmental data set. ...
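
A hedged sketch of the sliding-window ingredient follows: subsequences are extracted from one series, a lead-lag estimate is taken per window via cross-correlation, and the estimates are aggregated with a median. The clustering step (k-means++ / spectral) and the multireference-alignment connection from the paper are not reproduced; helper names are hypothetical.

```python
# Hedged sketch of sliding-window lead-lag estimation; the clustering of
# subsequences and robust cross-cluster aggregation from the paper are
# reduced here to a simple median over windows.
import numpy as np

def sliding_windows(x: np.ndarray, width: int, step: int) -> np.ndarray:
    """Turn one series into a universe of overlapping subsequences."""
    starts = range(0, len(x) - width + 1, step)
    return np.stack([x[s:s + width] for s in starts])

def lead_lag(x: np.ndarray, y: np.ndarray, max_lag: int = 10) -> int:
    """Shift of x (in steps) that maximises its correlation with y."""
    def corr_at(lag):
        if lag >= 0:
            a, b = x[lag:], y[:len(y) - lag]
        else:
            a, b = x[:len(x) + lag], y[-lag:]
        return np.corrcoef(a, b)[0, 1]
    return max(range(-max_lag, max_lag + 1), key=corr_at)

rng = np.random.default_rng(4)
leader = rng.normal(size=500).cumsum()
follower = np.roll(leader, 3) + 0.1 * rng.normal(size=500)   # trails by 3 steps
step, width = 20, 60
estimates = [lead_lag(w, follower[i * step:i * step + width])
             for i, w in enumerate(sliding_windows(leader, width, step))]
robust_lag = int(np.median(estimates))   # about -3: negative means the first series leads
```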

May 11, 2023 · 2 min · Research Team