
Simulate and Optimise: A two-layer mortgage simulator for designing novel mortgage assistance products

Simulate and Optimise: A two-layer mortgage simulator for designing novel mortgage assistance products ArXiv ID: 2411.00563 “View on arXiv” Authors: Unknown Abstract We develop a novel two-layer approach for optimising mortgage relief products through a simulated multi-agent mortgage environment. While the approach is generic, here the environment is calibrated to the US mortgage market based on publicly available census data and regulatory guidelines. Through the simulation layer, we assess the resilience of households to exogenous income shocks, while the optimisation layer explores strategies to improve the robustness of households to these shocks by making novel mortgage assistance products available to households. Households in the simulation are adaptive, learning to make mortgage-related decisions (such as product enrolment or strategic foreclosures) that maximize their utility, balancing their available liquidity and equity. We show how this novel two-layer simulation approach can successfully design novel mortgage assistance products to improve household resilience to exogenous shocks, and balance the costs of providing such products through post-hoc analysis. Previously, such analysis could only be conducted through expensive pilot studies involving real participants, demonstrating the benefit of the approach for designing and evaluating financial products. ...
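
The two-layer structure described above can be illustrated with a toy sketch: an inner agent-based simulation of households hit by income shocks, and an outer loop that searches over a single hypothetical product parameter (a payment reduction for enrolled households) and reads off the default rate and the cost of providing the product. All numbers, decision rules and the `payment_cut` parameter below are illustrative assumptions, not the paper's calibration to US census data.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_households(payment_cut, n_households=5_000, months=60,
                        shock_prob=0.02, shock_depth=0.6):
    """Inner layer: simulate households facing income shocks.

    `payment_cut` is the fraction by which a hypothetical assistance product
    reduces the monthly mortgage payment for enrolled households.
    """
    income = rng.lognormal(mean=8.5, sigma=0.4, size=n_households) / 12
    payment = 0.3 * income                      # stylised payment burden
    liquidity = 3 * income                      # months of cash buffer
    defaulted = np.zeros(n_households, dtype=bool)
    assistance_cost = 0.0

    for _ in range(months):
        shocked = rng.random(n_households) < shock_prob
        cash_in = np.where(shocked, (1 - shock_depth) * income, income)
        # Myopic enrolment rule: households that cannot cover the full
        # payment from income plus buffer enrol if the product exists.
        shortfall = payment - cash_in
        enrol = (shortfall > liquidity) & (payment_cut > 0) & ~defaulted
        due = np.where(enrol, (1 - payment_cut) * payment, payment)
        assistance_cost += (payment[enrol] - due[enrol]).sum()
        liquidity = liquidity + cash_in - due
        defaulted |= liquidity < 0              # crude foreclosure proxy
        liquidity = np.maximum(liquidity, 0.0)

    return defaulted.mean(), assistance_cost

# Outer layer: search product designs, trading default rate against cost.
for cut in [0.0, 0.1, 0.2, 0.3]:
    d, c = simulate_households(cut)
    print(f"payment cut {cut:.0%}: default rate {d:.2%}, cost {c:,.0f}")
```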

November 1, 2024 · 2 min · Research Team

Deep Learning in Long-Short Stock Portfolio Allocation: An Empirical Study

Deep Learning in Long-Short Stock Portfolio Allocation: An Empirical Study ArXiv ID: 2411.13555 “View on arXiv” Authors: Unknown Abstract This paper presents an empirical study exploring the application of deep learning algorithms (Multilayer Perceptron (MLP), Convolutional Neural Networks (CNN), Long Short-Term Memory (LSTM), and Transformer) in constructing long-short stock portfolios. Two datasets comprising randomly selected stocks from the S&P500 and NASDAQ indices, each spanning a decade of daily data, are utilized. The models predict daily stock returns based on historical features such as past returns, Relative Strength Index (RSI), trading volume, and volatility. Portfolios are dynamically adjusted by taking long positions in stocks with positive predicted returns and short positions in those with negative predictions, with equal asset weights. Performance is evaluated over a two-year testing period, focusing on return, Sharpe ratio, and maximum drawdown metrics. The results demonstrate the efficacy of deep learning models in enhancing long-short stock portfolio performance. ...
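
A minimal sketch of the portfolio-construction and evaluation step described above: given a matrix of predicted daily returns (from whichever model, abstracted away here), go long the stocks predicted up and short those predicted down with equal weights, then report Sharpe ratio and maximum drawdown. The equal-weight reading of the weighting scheme and the toy data are assumptions for illustration.

```python
import numpy as np

def long_short_returns(predicted, realized):
    """predicted, realized: (days, stocks) arrays of daily returns.

    Long stocks with positive predicted return, short those with negative,
    with equal weights across active positions each day.
    """
    signs = np.sign(predicted)
    n_active = np.maximum(np.abs(signs).sum(axis=1, keepdims=True), 1)
    weights = signs / n_active
    return (weights * realized).sum(axis=1)

def sharpe(daily_returns, periods=252):
    return np.sqrt(periods) * daily_returns.mean() / daily_returns.std()

def max_drawdown(daily_returns):
    wealth = np.cumprod(1 + daily_returns)
    peak = np.maximum.accumulate(wealth)
    return (wealth / peak - 1).min()

# Toy usage with random data standing in for model predictions.
rng = np.random.default_rng(1)
realized = rng.normal(0, 0.01, size=(504, 50))                    # two years, 50 stocks
predicted = realized + rng.normal(0, 0.02, size=realized.shape)   # noisy signal
port = long_short_returns(predicted, realized)
print(f"Sharpe {sharpe(port):.2f}, max drawdown {max_drawdown(port):.1%}")
```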

October 31, 2024 · 2 min · Research Team

Moments by Integrating the Moment-Generating Function

Moments by Integrating the Moment-Generating Function ArXiv ID: 2410.23587 “View on arXiv” Authors: Unknown Abstract We introduce a novel method for obtaining a wide variety of moments of any random variable with a well-defined moment-generating function (MGF). We derive new expressions for fractional moments and fractional absolute moments, both central and non-central. The expressions are relatively simple integrals that involve the MGF, but do not require its derivatives. We label the new method CMGF because it uses a complex extension of the MGF and can be used to obtain complex moments. We illustrate the new method with three applications where the MGF is available in closed-form, while the corresponding densities and the derivatives of the MGF are either unavailable or very difficult to obtain. ...
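
For intuition on obtaining moments by integrating the MGF rather than differentiating it, a classical identity for non-negative random variables makes the point; note this is a standard textbook result used here for illustration, not the paper's CMGF expression, which extends the MGF into the complex plane.

```latex
% Classical identity (not the paper's CMGF formula): for a non-negative
% random variable X with MGF M_X(t) = E[e^{tX}] and 0 < \nu < 1,
\mathbb{E}\!\left[X^{\nu}\right]
  = \frac{\nu}{\Gamma(1-\nu)} \int_{0}^{\infty}
    t^{-\nu-1}\,\bigl(1 - M_X(-t)\bigr)\,\mathrm{d}t ,
\qquad 0 < \nu < 1,\; X \ge 0,
% which recovers a fractional moment by integrating the MGF itself,
% with no derivatives of M_X required.
```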

October 31, 2024 · 2 min · Research Team

On Cost-Sensitive Distributionally Robust Log-Optimal Portfolio

On Cost-Sensitive Distributionally Robust Log-Optimal Portfolio ArXiv ID: 2410.23536 “View on arXiv” Authors: Unknown Abstract This paper addresses a novel “cost-sensitive” distributionally robust log-optimal portfolio problem, where the investor faces “ambiguous” return distributions, and a general convex transaction cost model is incorporated. The uncertainty in the return distribution is quantified using the “Wasserstein” metric, which captures distributional ambiguity. We establish conditions that ensure robustly survivable trades for all distributions in the Wasserstein ball under convex transaction costs. By leveraging duality theory, we approximate the infinite-dimensional distributionally robust optimization problem with a finite convex program, enabling computational tractability for mid-sized portfolios. Empirical studies using S&P 500 data validate our theoretical framework: without transaction costs, the optimal portfolio converges to an equal-weighted allocation, while with transaction costs, the portfolio shifts slightly towards the risk-free asset, reflecting the trade-off between cost considerations and optimal allocation. ...
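
As a point of reference, the nominal (non-robust) counterpart of the cost-sensitive log-optimal problem over an empirical return sample is a small convex program; the sketch below uses cvxpy with toy data, a long-only constraint and an assumed proportional transaction cost, whereas the paper replaces the empirical expectation with a worst case over a Wasserstein ball via a finite convex reformulation.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(2)
T, n = 250, 10
R = rng.normal(0.0005, 0.01, size=(T, n))    # empirical return scenarios (toy data)
w_prev = np.full(n, 1.0 / n)                 # current holdings
kappa = 0.001                                # proportional transaction cost (assumed)

w = cp.Variable(n, nonneg=True)
cost = kappa * cp.norm1(w - w_prev)
growth = cp.sum(cp.log(1 + R @ w - cost)) / T   # expected log-growth net of costs
prob = cp.Problem(cp.Maximize(growth), [cp.sum(w) == 1])
prob.solve()
print(np.round(w.value, 3))
```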

October 31, 2024 · 2 min · Research Team

AI in Investment Analysis: LLMs for Equity Stock Ratings

AI in Investment Analysis: LLMs for Equity Stock Ratings ArXiv ID: 2411.00856 “View on arXiv” Authors: Unknown Abstract Investment Analysis is a cornerstone of the Financial Services industry. The rapid integration of advanced machine learning techniques, particularly Large Language Models (LLMs), offers opportunities to enhance the equity rating process. This paper explores the application of LLMs to generate multi-horizon stock ratings by ingesting diverse datasets. Traditional stock rating methods rely heavily on the expertise of financial analysts, and face several challenges such as data overload, inconsistencies in filings, and delayed reactions to market events. Our study addresses these issues by leveraging LLMs to improve the accuracy and consistency of stock ratings. Additionally, we assess the efficacy of using different data modalities with LLMs for the financial domain. We utilize varied datasets comprising fundamental financial, market, and news data from January 2022 to June 2024, along with GPT-4-32k (v0613) (with a training cutoff in Sep. 2021 to prevent information leakage). Our results show that our benchmark method outperforms traditional stock rating methods when assessed by forward returns, especially when incorporating financial fundamentals. While integrating news data improves short-term performance, substituting detailed news summaries with sentiment scores reduces token use without loss of performance. In many cases, omitting news data entirely enhances performance by reducing bias. Our research shows that LLMs can be leveraged to effectively utilize large amounts of multimodal financial data, as showcased by their effectiveness at the stock rating prediction task. Our work provides a reproducible and efficient framework for generating accurate stock ratings, serving as a cost-effective alternative to traditional methods. Future work will extend to longer timeframes, incorporate diverse data, and utilize newer models for enhanced insights. ...
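
A hedged sketch of how such a pipeline might assemble a multi-horizon rating prompt from fundamentals, market data and aggregated news sentiment scores (used in place of full summaries to save tokens, as discussed above). The field names, rating scale and the `call_llm` placeholder are illustrative assumptions, not the paper's prompt or API.

```python
import json

def build_rating_prompt(ticker, fundamentals, market_stats, news_sentiment):
    """Assemble a multi-modality rating prompt; field names are illustrative only."""
    return (
        "You are an equity analyst. Using the data below, assign a rating "
        "from 1 (strong sell) to 5 (strong buy) for each horizon "
        "(1, 3, 6, 12 months) and reply as JSON.\n"
        f"Ticker: {ticker}\n"
        f"Fundamentals: {json.dumps(fundamentals)}\n"
        f"Market data: {json.dumps(market_stats)}\n"
        f"News sentiment (aggregated scores): {json.dumps(news_sentiment)}\n"
    )

def call_llm(prompt: str) -> str:
    """Placeholder for whichever chat-completion client is available."""
    raise NotImplementedError

prompt = build_rating_prompt(
    "ACME",
    {"pe_ratio": 18.2, "revenue_growth_yoy": 0.07, "net_margin": 0.11},
    {"trailing_12m_return": 0.15, "realised_vol": 0.22},
    {"last_30d_sentiment": 0.3},
)
# ratings = json.loads(call_llm(prompt))
```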

October 30, 2024 · 2 min · Research Team

Continuous Risk Factor Models: Analyzing Asset Correlations through Energy Distance

Continuous Risk Factor Models: Analyzing Asset Correlations through Energy Distance ArXiv ID: 2410.23447 “View on arXiv” Authors: Unknown Abstract This paper introduces a novel approach to financial risk analysis that does not rely on traditional price and market data, instead using market news to model assets as distributions over a metric space of risk factors. By representing asset returns as integrals over the scalar field of these risk factors, we derive the covariance structure between asset returns. Utilizing encoder-only language models to embed this news data, we explore the relationships between asset return distributions through the concept of Energy Distance, establishing connections between distributional differences and excess returns co-movements. This data-agnostic approach provides new insights into portfolio diversification, risk management, and the construction of hedging strategies. Our findings have significant implications for both theoretical finance and practical risk management, offering a more robust framework for modelling complex financial systems without depending on conventional market data. ...
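
The energy distance between two empirical distributions of embeddings has a simple plug-in estimator; the sketch below computes it for two sets of toy embedding vectors standing in for the news-derived risk-factor representations of two assets (the mapping from news to assets is the paper's construction and is not reproduced here).

```python
import numpy as np
from scipy.spatial.distance import cdist

def energy_distance(X, Y):
    """Energy distance between the empirical distributions of X and Y.

    X, Y: (n, d) and (m, d) arrays of embedding vectors, e.g. news-article
    embeddings associated with two different assets.
    """
    d_xy = cdist(X, Y).mean()
    d_xx = cdist(X, X).mean()
    d_yy = cdist(Y, Y).mean()
    return 2 * d_xy - d_xx - d_yy

rng = np.random.default_rng(3)
emb_a = rng.normal(0.0, 1.0, size=(200, 32))
emb_b = rng.normal(0.3, 1.0, size=(150, 32))
print(f"energy distance: {energy_distance(emb_a, emb_b):.3f}")
```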

October 30, 2024 · 2 min · Research Team

Emerging countries' counter-currency cycles in the face of crises and dominant currencies

Emerging countries’ counter-currency cycles in the face of crises and dominant currencies ArXiv ID: 2410.23002 “View on arXiv” Authors: Unknown Abstract This article examines how emerging economies use countercyclical monetary policies to manage economic crises and fluctuations in dominant currencies, such as the US dollar and the euro. Global economic cycles are marked by phases of expansion and recession, often exacerbated by major financial crises. These crises, such as those of 1997, 2008 and the disruption caused by the COVID-19 pandemic, have a particular impact on emerging economies due to their heightened vulnerability to foreign capital flows and exports. Counter-cyclical monetary policies, including interest rate adjustments, foreign exchange interventions and capital controls, are essential to stabilize these economies. These measures aim to mitigate the effects of economic shocks, maintain price stability and promote sustainable growth. This article presents a theoretical analysis of economic cycles and financial crises, highlighting the role of dominant currencies in global economic stability. Currencies such as the dollar and the euro strongly influence emerging economies, notably through exchange rate variations and international capital movements. Analysis of the monetary strategies of emerging economies, through case studies of Brazil, India and Nigeria, reveals how these countries use tools such as interest rates, foreign exchange interventions and capital controls to manage the impacts of crises and fluctuations in dominant currencies. The article also highlights the challenges and limitations faced by these countries, including structural and institutional constraints and the reactions of international financial markets. Finally, an econometric analysis using a Vector AutoRegression (VAR) model illustrates the impact of monetary policies on key economic variables, such as GDP, interest rates, inflation and exchange rates. The results show that emerging economies, although sensitive to external shocks, can adjust their policies to stabilize economic growth in the medium and long term. ...
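
A minimal sketch of the kind of VAR exercise mentioned above, using statsmodels with toy data standing in for GDP growth, the policy rate, inflation and the exchange rate of one emerging economy; the variable set, frequency and lag selection are illustrative assumptions, not the article's specification.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Toy quarterly data; column names are illustrative stand-ins for the
# macro variables discussed in the abstract.
rng = np.random.default_rng(4)
data = pd.DataFrame(
    rng.normal(size=(80, 4)),
    columns=["gdp_growth", "policy_rate", "inflation", "fx_change"],
)

model = VAR(data)
results = model.fit(maxlags=4, ic="aic")    # lag order chosen by AIC
irf = results.irf(10)                       # impulse responses over 10 periods
print(results.summary())
# irf.plot(impulse="policy_rate")           # responses to a policy-rate shock
```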

October 30, 2024 · 2 min · Research Team

Rebalancing-versus-Rebalancing: Improving the fidelity of Loss-versus-Rebalancing

Rebalancing-versus-Rebalancing: Improving the fidelity of Loss-versus-Rebalancing ArXiv ID: 2410.23404 “View on arXiv” Authors: Unknown Abstract Automated Market Makers (AMMs) hold assets and are constantly being rebalanced by external arbitrageurs to match external market prices. Loss-versus-rebalancing (LVR) is a pivotal metric for measuring how an AMM pool performs for its liquidity providers (LPs) relative to an idealised benchmark where rebalancing is done not via the action of arbitrageurs but instead by trading with a perfect centralised exchange with no fees, spread or slippage. This renders it an imperfect tool for judging rebalancing efficiency between execution platforms. We introduce Rebalancing-versus-rebalancing (RVR), a higher-fidelity model that better captures the frictions present in centralised rebalancing. We perform a battery of experiments comparing managing a portfolio on AMMs vs this new and more realistic centralised exchange benchmark, RVR. We are also particularly interested in dynamic AMMs that run strategies beyond fixed weight allocations: Temporal Function Market Makers. This is particularly important for asset managers evaluating execution management systems. In this paper we simulate more than 1000 different strategy settings as well as testing hundreds of different variations in centralised exchange (CEX) fees, AMM fees & gas costs. We find that, under this modeling approach, AMM pools (even with no retail/noise traders) often offer superior execution and rebalancing efficiency compared to centralised rebalancing, for all but the lowest CEX fee levels. We also take a simple approach to model noise traders & find that even a small amount of noise volume increases modeled AMM performance such that CEX rebalancing finds it hard to compete. This indicates that decentralised AMM-based asset management can offer superior performance and execution management for asset managers looking to rebalance portfolios, offering an alternative use case for dynamic AMMs beyond core liquidity providing. ...
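
A toy illustration of the underlying idea of benchmarking against a frictionful rather than frictionless rebalancer: rebalance a 50/50 two-asset portfolio daily on a simulated price path, once with a proportional CEX fee and once without, and read off the drag. The paper's RVR additionally models AMM arbitrage flows, pool fees, gas and spreads, none of which this sketch attempts to reproduce.

```python
import numpy as np

rng = np.random.default_rng(5)
days, sigma, fee = 365, 0.05, 0.001          # daily vol and proportional CEX fee (assumed)
price = np.cumprod(1 + rng.normal(0, sigma, size=days))

def rebalance_5050(price, fee):
    """Daily rebalance to 50% risky asset / 50% numeraire, paying `fee`
    on the notional traded (fee=0 gives the frictionless benchmark)."""
    units, cash = 0.5, 0.5                    # start with $1 split 50/50 at price 1
    for p in price:
        value = units * p + cash
        target_units = 0.5 * value / p
        traded_notional = abs(target_units - units) * p
        cash -= (target_units - units) * p + fee * traded_notional
        units = target_units
    return units * price[-1] + cash

frictionless = rebalance_5050(price, 0.0)
with_fees = rebalance_5050(price, fee)
print(f"rebalancing drag vs frictionless benchmark: {frictionless - with_fees:.4f}")
```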

October 30, 2024 · 3 min · Research Team

Evaluating utility in synthetic banking microdata applications

Evaluating utility in synthetic banking microdata applications ArXiv ID: 2410.22519 “View on arXiv” Authors: Unknown Abstract Financial regulators such as central banks collect vast amounts of data, but access to the resulting fine-grained banking microdata is severely restricted by banking secrecy laws. Recent developments have resulted in mechanisms that generate faithful synthetic data, but current evaluation frameworks lack a focus on the specific challenges of banking institutions and microdata. We develop a framework that considers the utility and privacy requirements of regulators, and apply this to financial usage indices, term deposit yield curves, and credit card transition matrices. Using the Central Bank of Paraguay’s data, we provide the first implementation of synthetic banking microdata using a central bank’s collected information, with the resulting synthetic datasets for all three domain applications being publicly available and featuring information not yet released in statistical disclosure. We find that applications less susceptible to post-processing information loss, which are based on frequency tables, are particularly suited for this approach, and that marginal-based inference mechanisms outperform generative adversarial network models for these applications. Our results demonstrate that synthetic data generation is a promising privacy-enhancing technology for financial regulators seeking to complement their statistical disclosure, while highlighting the crucial role of evaluating such endeavors in terms of utility and privacy requirements. ...
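
One simple utility metric of the frequency-table flavour discussed above is the total variation distance between a real and a synthetic marginal; the sketch below computes it for a single categorical column with made-up deposit-term categories (the column and categories are illustrative, not the Central Bank of Paraguay's data).

```python
import pandas as pd

def marginal_tvd(real: pd.DataFrame, synthetic: pd.DataFrame, column: str) -> float:
    """Total variation distance between real and synthetic marginal
    frequency tables for one categorical column."""
    p = real[column].value_counts(normalize=True)
    q = synthetic[column].value_counts(normalize=True)
    support = p.index.union(q.index)
    return 0.5 * (p.reindex(support, fill_value=0) -
                  q.reindex(support, fill_value=0)).abs().sum()

# Toy usage with made-up deposit-term categories.
real = pd.DataFrame({"term_bucket": ["<3m"] * 50 + ["3-12m"] * 30 + [">12m"] * 20})
synth = pd.DataFrame({"term_bucket": ["<3m"] * 55 + ["3-12m"] * 28 + [">12m"] * 17})
print(f"marginal TVD: {marginal_tvd(real, synth, 'term_bucket'):.3f}")
```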

October 29, 2024 · 2 min · Research Team

Fast Deep Hedging with Second-Order Optimization

Fast Deep Hedging with Second-Order Optimization ArXiv ID: 2410.22568 “View on arXiv” Authors: Unknown Abstract Hedging exotic options in the presence of market frictions is an important risk management task. Deep hedging can solve such hedging problems by training neural network policies in realistic simulated markets. Training these neural networks may be delicate and suffer from slow convergence, particularly for options with long maturities and complex sensitivities to market parameters. To address this, we propose a second-order optimization scheme for deep hedging. We leverage pathwise differentiability to construct a curvature matrix, which we approximate as block-diagonal and Kronecker-factored to efficiently precondition gradients. We evaluate our method on a challenging and practically important problem: hedging a cliquet option on a stock with stochastic volatility by trading in the spot and vanilla options. We find that our second-order scheme can optimize the policy in 1/4 of the number of steps that standard adaptive moment-based optimization takes. ...
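
A minimal numpy sketch of the generic Kronecker-factored preconditioning step for a single linear layer, assuming curvature factors built from per-sample activations and output gradients; the paper instead constructs its factors from pathwise derivatives of the hedging objective, which this toy does not reproduce.

```python
import numpy as np

def kfac_precondition(grad_W, acts, out_grads, damping=1e-3):
    """Kronecker-factored preconditioning of one linear layer's gradient.

    grad_W:    (d_out, d_in) averaged gradient of the loss w.r.t. the weights
    acts:      (n, d_in) layer inputs per sample
    out_grads: (n, d_out) gradients w.r.t. the layer outputs per sample

    The curvature block is approximated as A x G with A = E[a a^T] and
    G = E[g g^T]; the preconditioned step is G^-1 * grad_W * A^-1.
    """
    n = acts.shape[0]
    A = acts.T @ acts / n + damping * np.eye(acts.shape[1])
    G = out_grads.T @ out_grads / n + damping * np.eye(out_grads.shape[1])
    return np.linalg.solve(G, grad_W) @ np.linalg.inv(A)

rng = np.random.default_rng(6)
acts, out_grads = rng.normal(size=(128, 16)), rng.normal(size=(128, 8))
grad_W = out_grads.T @ acts / 128
step = kfac_precondition(grad_W, acts, out_grads)
print(step.shape)   # (8, 16)
```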

October 29, 2024 · 2 min · Research Team