
Decomposition Pipeline for Large-Scale Portfolio Optimization with Applications to Near-Term Quantum Computing

ArXiv ID: 2409.10301 · Authors: Unknown

Abstract: Industrially relevant constrained optimization problems, such as portfolio optimization and portfolio rebalancing, are often intractable or difficult to solve exactly. In this work, we propose and benchmark a decomposition pipeline targeting portfolio optimization and rebalancing problems with constraints. The pipeline decomposes the optimization problem into constrained subproblems, which are then solved separately and aggregated to give a final result. Our pipeline includes three main components: preprocessing of correlation matrices based on random matrix theory, modified spectral clustering based on Newman's algorithm, and risk rebalancing. Our empirical results show that our pipeline consistently decomposes real-world portfolio optimization problems into subproblems with a size reduction of approximately 80%. Since subproblems are then solved independently, our pipeline drastically reduces the total computation time for state-of-the-art solvers. Moreover, by decomposing large problems into several smaller subproblems, the pipeline enables the use of near-term quantum devices as solvers, providing a path toward practical utility of quantum computers in portfolio optimization.
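The random-matrix-theory preprocessing step mentioned in the abstract is typically a Marchenko-Pastur eigenvalue filter applied to the correlation matrix before clustering. A minimal sketch of that standard denoising step (the paper's exact preprocessing may differ):

```python
import numpy as np

def mp_filter(returns):
    """Keep only eigenvalues of the correlation matrix above the
    Marchenko-Pastur upper edge, treating the rest as noise.
    Illustrative RMT preprocessing; not the paper's exact pipeline."""
    T, N = returns.shape
    corr = np.corrcoef(returns, rowvar=False)
    lam_max = (1 + np.sqrt(N / T)) ** 2           # MP upper edge for q = N/T
    vals, vecs = np.linalg.eigh(corr)
    keep = vals > lam_max                         # "signal" eigenmodes
    filtered = (vecs[:, keep] * vals[keep]) @ vecs[:, keep].T
    np.fill_diagonal(filtered, 1.0)               # restore unit diagonal
    return filtered

rng = np.random.default_rng(0)
R = rng.standard_normal((500, 50))                # T=500 observations, N=50 assets
C = mp_filter(R)
```

The filtered matrix is then a natural input to a spectral clustering step, since its surviving eigenmodes carry the cross-asset structure used to split the portfolio into subproblems.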

September 16, 2024 · 2 min · Research Team

Robust Reinforcement Learning with Dynamic Distortion Risk Measures

ArXiv ID: 2409.10096 · Authors: Unknown

Abstract: In a reinforcement learning (RL) setting, the agent's optimal strategy heavily depends on her risk preferences and the underlying model dynamics of the training environment. These two aspects influence the agent's ability to make well-informed and time-consistent decisions when facing testing environments. In this work, we devise a framework to solve robust risk-aware RL problems where we simultaneously account for environmental uncertainty and risk with a class of dynamic robust distortion risk measures. Robustness is introduced by considering all models within a Wasserstein ball around a reference model. We estimate such dynamic robust risk measures using neural networks by making use of strictly consistent scoring functions, derive policy gradient formulae using the quantile representation of distortion risk measures, and construct an actor-critic algorithm to solve this class of robust risk-aware RL problems. We demonstrate the performance of our algorithm on a portfolio allocation example.
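The quantile representation used for the policy gradients writes a distortion risk measure as an integral of the quantile function against a distortion weight. A small sketch, using CVaR as the distortion (static and sample-based here, whereas the paper estimates dynamic robust versions with neural networks):

```python
import numpy as np

def distortion_risk(samples, weight_fn, n_grid=1000):
    """Estimate rho(X) = \\int_0^1 F^{-1}(u) w(u) du via the empirical
    quantile function and a midpoint-rule integral, where w is the
    derivative of the distortion function. Illustrative only."""
    u = (np.arange(n_grid) + 0.5) / n_grid
    q = np.quantile(samples, u)                  # empirical quantile function
    return np.mean(q * weight_fn(u))             # midpoint-rule integral over u

alpha = 0.9
# CVaR_alpha corresponds to a flat distortion weight on the upper tail:
cvar_weight = lambda u: np.where(u >= alpha, 1.0 / (1.0 - alpha), 0.0)
x = np.random.default_rng(1).standard_normal(100_000)
cvar = distortion_risk(x, cvar_weight)           # approx. 1.75 for a standard normal
```

Other choices of `weight_fn` (e.g. Wang or power distortions) drop into the same estimator, which is what makes the distortion class convenient for policy-gradient formulae.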

September 16, 2024 · 2 min · Research Team

Shocks-adaptive Robust Minimum Variance Portfolio for a Large Universe of Assets

ArXiv ID: 2410.01826 · Authors: Unknown

Abstract: This paper proposes a robust, shocks-adaptive portfolio in a large-dimensional asset universe where the number of assets can be comparable to, or even larger than, the sample size. It is well documented that portfolios based on optimization are sensitive to outliers in return data. We deal with outliers by proposing a robust factor model, contributing methodologically through the development of a robust principal component analysis (PCA) for factor model estimation and a shrinkage estimator for the random error covariance matrix. This approach extends the well-regarded Principal Orthogonal Complement Thresholding (POET) method (Fan et al., 2013), enabling it to effectively handle heavy tails and sudden shocks in data. The novelty of the proposed robust method is its adaptiveness to both global and idiosyncratic shocks, without the need to distinguish them, which is useful in forming portfolio weights when facing outliers. We develop the theoretical results of the robust factor model and the robust minimum variance portfolio. Numerical and empirical results show the superior performance of the new portfolio.
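The POET method being extended here estimates a covariance matrix by keeping a low-rank factor part from the top principal components and thresholding the residual. A minimal sketch of that baseline, using ordinary PCA where the paper substitutes its robust variant:

```python
import numpy as np

def poet_cov(returns, k=3, tau=0.1):
    """POET-style covariance estimate (Fan et al., 2013): top-k principal
    components as factors, soft-thresholded residual covariance. The paper
    replaces the PCA step with a robust version; this is the non-robust
    baseline for illustration, with a fixed threshold tau."""
    T, N = returns.shape
    X = returns - returns.mean(axis=0)
    S = X.T @ X / T                               # sample covariance
    vals, vecs = np.linalg.eigh(S)                # ascending eigenvalues
    top = vecs[:, -k:] * np.sqrt(vals[-k:])       # factor loadings (top-k PCs)
    low_rank = top @ top.T
    resid = S - low_rank
    off = np.sign(resid) * np.maximum(np.abs(resid) - tau, 0.0)
    np.fill_diagonal(off, np.diag(resid))         # never threshold variances
    return low_rank + off

rng = np.random.default_rng(2)
Sigma = poet_cov(rng.standard_normal((200, 40)))
```

The resulting estimate plugs directly into the minimum-variance weights w ∝ Σ⁻¹1, which is where robustness of the covariance estimate translates into robustness of the portfolio.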

September 16, 2024 · 2 min · Research Team

Return Prediction for Mean-Variance Portfolio Selection: How Decision-Focused Learning Shapes Forecasting Models

ArXiv ID: 2409.09684 · Authors: Unknown

Abstract: Markowitz laid the foundation of portfolio theory through the mean-variance optimization (MVO) framework. However, the effectiveness of MVO is contingent on the precise estimation of expected returns, variances, and covariances of asset returns, which are typically uncertain. Machine learning models are becoming useful in estimating uncertain parameters, and such models are trained to minimize prediction errors, such as mean squared errors (MSE), which treat prediction errors uniformly across assets. Recent studies have pointed out that this approach would lead to suboptimal decisions and proposed Decision-Focused Learning (DFL) as a solution, integrating prediction and optimization to improve decision-making outcomes. While studies have shown DFL's potential to enhance portfolio performance, the detailed mechanisms of how DFL modifies prediction models for MVO remain unexplored. This study investigates how DFL adjusts stock return prediction models to optimize decisions in MVO. Theoretically, we show that DFL's gradient can be interpreted as tilting the MSE-based prediction errors by the inverse covariance matrix, effectively incorporating inter-asset correlations into the learning process, while MSE treats each asset's error independently. This tilting mechanism leads to systematic prediction biases where DFL overestimates returns for assets included in portfolios while underestimating excluded assets. Our findings reveal why DFL achieves superior portfolio performance despite higher prediction errors. The strategic biases are features, not flaws.
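The tilting mechanism described in the abstract can be seen in two lines of linear algebra: the MSE gradient is proportional to the raw error vector, while the DFL gradient is that same vector premultiplied by the inverse covariance matrix, coupling assets through their correlations. A toy illustration with made-up numbers:

```python
import numpy as np

# Two correlated assets; covariance and prediction errors are arbitrary
# toy values chosen only to show the direction change.
Sigma = np.array([[0.040, 0.018],
                  [0.018, 0.090]])               # asset return covariance
err = np.array([0.01, -0.02])                    # prediction errors (mu_hat - mu)

grad_mse = err                                   # MSE: each asset's error independently
grad_dfl = np.linalg.inv(Sigma) @ err            # DFL: errors tilted by Sigma^{-1}

# The tilt rotates the gradient away from the raw-error direction:
cos = grad_mse @ grad_dfl / (np.linalg.norm(grad_mse) * np.linalg.norm(grad_dfl))
```

When `Sigma` is diagonal the two directions coincide (up to scale); any off-diagonal correlation makes `cos < 1`, which is exactly the inter-asset coupling the paper identifies as the source of DFL's systematic biases.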

September 15, 2024 · 2 min · Research Team

Disentangling the sources of cyber risk premia

ArXiv ID: 2409.08728 · Authors: Unknown

Abstract: We use a methodology based on a machine learning algorithm to quantify firms' cyber risks based on their disclosures and a dedicated cyber corpus. The model can identify paragraphs related to determined cyber-threat types and accordingly attribute several related cyber scores to the firm. The cyber scores are unrelated to other firms' characteristics. Stocks with high cyber scores significantly outperform other stocks. The long-short cyber risk factors have positive risk premia, are robust to all factors' benchmarks, and help price returns. Furthermore, we suggest the market does not distinguish between different types of cyber risks but instead views them as a single, aggregate cyber risk.

September 13, 2024 · 2 min · Research Team

Interpool: a liquidity pool designed for interoperability that mints, exchanges, and burns

ArXiv ID: 2410.00011 · Authors: Unknown

Abstract: The lack of proper interoperability poses a significant challenge in leveraging use cases within the blockchain industry. Unlike typical solutions that rely on third parties such as oracles and witnesses, the interpool design operates as a standalone solution that mints, exchanges, and burns (MEB) within the same liquidity pool. This MEB approach ensures that minting is backed by the locked capital supplied by liquidity providers. During the exchange process, the order of transactions in the mempool is optimized to maximize returns, effectively transforming the front-running issue into a solution that forges an external blockchain hash. This forged hash enables a novel protocol, Listrack (Listen and Track), which ensures that ultimate liquidity is always enforced through a solid burning procedure, strengthening a trustless design. Supported by Listrack, atomic swaps become feasible even outside the interpool, thereby enhancing the current design into a comprehensive interoperability solution.

September 13, 2024 · 2 min · Research Team

KodeXv0.1: A Family of State-of-the-Art Financial Large Language Models

ArXiv ID: 2409.13749 · Authors: Unknown

Abstract: Although powerful, current cutting-edge LLMs may not fulfil the needs of highly specialised sectors. We introduce KodeXv0.1, a family of large language models that outclass GPT-4 in financial question answering. We utilise the base variants of Llama 3.1 8B and 70B and adapt them to the financial domain through a custom training regime. To this end, we collect and process a large number of publicly available financial documents such as earnings calls and business reports. These are used to generate a high-quality, synthetic dataset consisting of Context-Question-Answer triplets which closely mirror real-world financial tasks. Using the train split of this dataset, we perform RAG-aware 4-bit LoRA instruction tuning runs of Llama 3.1 base variants to produce KodeX-8Bv0.1 and KodeX-70Bv0.1. We then complete extensive model evaluations using FinanceBench, FinQABench and the withheld test split of our dataset. Our results show that KodeX-8Bv0.1 is more reliable in financial contexts than cutting-edge instruct models in the same parameter regime, surpassing them by up to 9.24%. In addition, it is even capable of outperforming state-of-the-art proprietary models such as GPT-4 by up to 7.07%. KodeX-70Bv0.1 represents a further improvement upon this, exceeding GPT-4's performance on every tested benchmark.
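"RAG-aware" tuning means each training sample pairs retrieved context with the question, so the model learns to ground its answer in supplied documents rather than parametric memory. A sketch of turning a Context-Question-Answer triplet into a training example; the prompt template and field names here are hypothetical, since the paper does not publish its exact format:

```python
def format_cqa(context: str, question: str, answer: str) -> dict:
    """Render a Context-Question-Answer triplet as a prompt/completion
    pair for instruction tuning. Template is an illustrative assumption,
    not the paper's actual format."""
    prompt = (
        "Use the following context to answer the question.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )
    return {"prompt": prompt, "completion": " " + answer}

sample = format_cqa(
    context="Q3 revenue rose 12% year over year to $4.1B.",
    question="How did Q3 revenue change year over year?",
    answer="It rose 12% to $4.1B.",
)
```

During fine-tuning, the loss is typically computed only on the completion tokens, so the model is rewarded for answers conditioned on the provided context.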

September 13, 2024 · 2 min · Research Team

Tuning into Climate Risks: Extracting Innovation from Television News for Clean Energy Firms

ArXiv ID: 2409.08701 · Authors: Unknown

Abstract: This article develops multiple novel climate risk measures (or variables) based on television news coverage by Bloomberg, CNBC, and Fox Business, and examines how they affect the systematic and idiosyncratic risks of clean energy firms in the United States. The measures are built on climate-related keywords and cover the volume of coverage, type of coverage (climate crisis, renewable energy, and government & human initiatives), and media sentiments. We show that an increase in the aggregate measure of climate risk, as indicated by coverage volume, reduces idiosyncratic risk while increasing systematic risk. When climate risk is segregated, we find that systematic risk is positively affected by the physical risk of climate crises and transition risk from government & human initiatives, but no such impact is evident for idiosyncratic risk. Additionally, we observe an asymmetry in risk behavior: negative sentiments tend to decrease idiosyncratic risk and increase systematic risk, while positive sentiments have no significant impact. These findings remain robust to including print media and climate policy uncertainty variables, though some deviations are noted during the COVID-19 period.

September 13, 2024 · 2 min · Research Team

On the macroeconomic fundamentals of long-term volatilities and dynamic correlations in COMEX copper futures

ArXiv ID: 2409.08355 · Authors: Unknown

Abstract: This paper examines the influence of low-frequency macroeconomic variables on the high-frequency returns of copper futures and their long-term correlation with the S&P 500 index, employing GARCH-MIDAS and DCC-MIDAS modeling frameworks. The GARCH-MIDAS estimates show that realized volatility (RV), the levels of interest rates (IR), industrial production (IP), and the producer price index (PPI), and the volatilities of Slope, PPI, the consumer sentiment index (CSI), and the dollar index (DI) have significant impacts on copper futures returns, with PPI being the most informative macroeconomic variable. A comparison between the DCC-GARCH and DCC-MIDAS models shows that adding a MIDAS filter for PPI improves model fit and outperforms RV in capturing the long-run relationship between copper futures and the S&P 500.
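For reference, the GARCH-MIDAS framework decomposes conditional variance into a short-run daily GARCH component and a long-run component driven by a low-frequency macroeconomic variable. Its standard specification (following Engle, Ghysels & Sohn, 2013) for day $i$ in low-frequency period $t$:

```latex
% return equation: volatility = long-run (tau) times short-run (g)
r_{i,t} = \mu + \sqrt{\tau_t\, g_{i,t}}\;\varepsilon_{i,t},
\qquad \varepsilon_{i,t} \sim \mathcal{N}(0,1)

% short-run component: unit-mean GARCH(1,1)
g_{i,t} = (1-\alpha-\beta) + \alpha\,\frac{(r_{i-1,t}-\mu)^2}{\tau_t} + \beta\, g_{i-1,t}

% long-run component: MIDAS filter over K lags of a macro variable X
\log \tau_t = m + \theta \sum_{k=1}^{K} \varphi_k(\omega_1,\omega_2)\, X_{t-k}
```

The Beta-polynomial weights $\varphi_k$ let a monthly or quarterly series such as PPI shape daily volatility, which is the channel the paper's significance results refer to.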

September 12, 2024 · 2 min · Research Team

Portfolio Stress Testing and Value at Risk (VaR) Incorporating Current Market Conditions

ArXiv ID: 2409.18970 · Authors: Unknown

Abstract: Value at Risk (VaR) and stress testing are two of the most widely used approaches in portfolio risk management to estimate potential market value losses under adverse market moves. VaR quantifies the potential loss in value over a specified horizon (such as one day or ten days) at a desired confidence level (such as the 95th percentile). In scenario design and stress testing, the goal is to construct extreme market scenarios, such as those involving a severe recession or a specific event of concern (such as a rapid increase in rates or a geopolitical event), and to quantify the potential impact of such scenarios on the portfolio. This paper proposes an approach for incorporating prevailing market conditions into stress scenario design and VaR estimation so that they provide more accurate and realistic insights about portfolio risk over the near term. The approach is based on historical data, where historical observations of market changes are given more weight if the corresponding period is "more similar" to the prevailing market conditions. Clusters of market conditions are identified using a machine learning approach called Variational Inference (VI), in which future changes in portfolio value are similar within each cluster. The VI-based algorithm uses optimization techniques to obtain analytical approximations of the posterior probability density of cluster assignments (market regimes) and of the probabilities of different outcomes for changes in portfolio value. The volatile Covid-related period around 2020 is used to illustrate the performance of the proposed approach, and in particular to show how VaR and stress scenarios adapt quickly to changing market conditions. Another advantage of the proposed approach is that the classification of market conditions into clusters can provide useful insights about portfolio performance under different market conditions.
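The core weighting idea reduces to a weighted historical-simulation VaR: sort past P&L observations and read off the quantile of the similarity-weighted empirical distribution. A minimal sketch, with the similarity weights as a simple stand-in for the paper's VI-based cluster probabilities:

```python
import numpy as np

def similarity_weighted_var(pnl, weights, alpha=0.95):
    """Historical VaR where each past observation is weighted by how
    similar its market regime is to today's. The weights here are a
    simplified stand-in for posterior cluster probabilities."""
    order = np.argsort(pnl)                       # worst losses first
    w = weights[order] / weights.sum()
    cdf = np.cumsum(w)
    idx = np.searchsorted(cdf, 1 - alpha)         # weighted left-tail cutoff
    return -pnl[order][idx]                       # VaR as a positive loss number

rng = np.random.default_rng(3)
pnl = rng.standard_normal(1000)                   # toy daily P&L history
w_flat = np.ones(1000)                            # plain historical simulation
w_recent = np.linspace(0.1, 1.0, 1000)            # up-weight recent regimes
var_flat = similarity_weighted_var(pnl, w_flat)
var_tilt = similarity_weighted_var(pnl, w_recent)
```

With flat weights this recovers ordinary historical VaR; concentrating weight on periods similar to current conditions is what lets the estimate adapt quickly when the market regime shifts, as in the 2020 illustration.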

September 12, 2024 · 3 min · Research Team