
Quantum Risk Analysis of Financial Derivatives

Quantum Risk Analysis of Financial Derivatives ArXiv ID: 2404.10088 Authors: Unknown Abstract We introduce two quantum algorithms to compute the Value at Risk (VaR) and Conditional Value at Risk (CVaR) of financial derivatives using quantum computers: the first by applying existing ideas from quantum risk analysis to derivative pricing, and the second based on a novel approach using Quantum Signal Processing (QSP). Previous work in the literature has shown that quantum advantage is possible in the context of individual derivative pricing and that advantage can be leveraged in a straightforward manner in the estimation of the VaR and CVaR. The algorithms we introduce in this work aim to provide an additional advantage by encoding the derivative price over multiple market scenarios in superposition and computing the desired values by applying appropriate transformations to the quantum system. We perform complexity and error analysis of both algorithms, and show that while the two algorithms have the same asymptotic scaling, the QSP-based approach requires significantly fewer quantum resources for the same target accuracy. Additionally, by numerically simulating both quantum and classical VaR algorithms, we demonstrate that the quantum algorithm can extract additional advantage from a quantum computer compared to individual derivative pricing. Specifically, we show that under certain conditions VaR estimation can lower the latest published estimates of the logical clock rate required for quantum advantage in derivative pricing by up to $\sim 30$x. In light of these results, we are encouraged that our formulation of derivative pricing in the QSP framework may be further leveraged for quantum advantage in other relevant financial applications, and that quantum computers could be harnessed more efficiently by considering problems in the financial sector at a higher level.
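As a point of reference for the quantities the quantum algorithms estimate, the classical empirical VaR and CVaR of a sampled profit-and-loss distribution can be sketched in a few lines. The helper name `var_cvar` and the sign conventions are illustrative assumptions, not code from the paper:

```python
import math

def var_cvar(pnl, alpha=0.95):
    """Empirical VaR and CVaR (expected shortfall) from P&L samples.

    pnl: list of profit-and-loss outcomes (losses are negative).
    alpha: confidence level, e.g. 0.95.
    Returns (VaR, CVaR) as positive loss magnitudes.
    """
    losses = sorted(-x for x in pnl)        # convert P&L to losses, ascending
    k = math.ceil(alpha * len(losses)) - 1  # index of the alpha-quantile
    var = losses[k]                         # VaR: alpha-quantile of the loss
    tail = losses[k:]                       # losses at or beyond the VaR
    cvar = sum(tail) / len(tail)            # CVaR: mean loss in the tail
    return var, cvar
```

A quantum approach would aim to estimate these same tail statistics with fewer effective samples than classical Monte Carlo requires.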

April 15, 2024 · 3 min · Research Team

Developing An Attention-Based Ensemble Learning Framework for Financial Portfolio Optimisation

Developing An Attention-Based Ensemble Learning Framework for Financial Portfolio Optimisation ArXiv ID: 2404.08935 Authors: Unknown Abstract In recent years, deep learning and reinforcement learning approaches have been applied to optimise investment portfolios by learning the spatial and temporal structure of the dynamic financial market. Yet in most cases, existing approaches may produce biased trading signals from conventional price data due to considerable market noise, and thus fail to balance investment returns and risks. Accordingly, this work proposes MASAAT, a multi-agent and self-adaptive portfolio optimisation framework integrating attention mechanisms and time-series analysis, in which multiple trading agents are created to observe and analyse both price series and directional-change data that captures significant asset-price movements at different levels of granularity, enhancing the signal-to-noise ratio of the price series. By reconstructing the tokens of financial data in a sequence, the attention-based cross-sectional analysis module and temporal analysis module of each agent can effectively capture the correlations between assets and the dependencies between time points. In addition, a portfolio generator is integrated into the proposed framework to fuse the spatial-temporal information and summarise the portfolios suggested by all trading agents into a new ensemble portfolio, reducing biased trading actions and balancing overall returns and risks. The experimental results demonstrate that the MASAAT framework achieves clear improvements over many well-known portfolio optimisation approaches on three challenging data sets: DJIA, S&P 500, and CSI 300. More importantly, our proposal has potential strengths in many possible applications for future study.
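The directional-change representation mentioned above (recording only price reversals that exceed a threshold, filtering out smaller moves as noise) can be illustrated with a minimal sketch; the threshold value and the function name `directional_changes` are illustrative assumptions, not the paper's implementation:

```python
def directional_changes(prices, theta=0.02):
    """Detect directional-change (DC) events in a price series.

    A DC event is confirmed when the price reverses by at least a
    fraction `theta` from the running extremum of the current trend.
    Returns a list of (index, 'up' | 'down') confirmation points.
    """
    events = []
    mode = 'up'                  # assume an initial up trend
    ext = prices[0]              # running extremum of the current trend
    for i, p in enumerate(prices[1:], start=1):
        if mode == 'up':
            if p > ext:
                ext = p                          # new high extends the trend
            elif p <= ext * (1 - theta):
                events.append((i, 'down'))       # downturn confirmed
                mode, ext = 'down', p
        else:
            if p < ext:
                ext = p                          # new low extends the trend
            elif p >= ext * (1 + theta):
                events.append((i, 'up'))         # upturn confirmed
                mode, ext = 'up', p
    return events
```

Larger thresholds yield coarser event series, which is how multiple granularities could be produced for the different agents.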

April 13, 2024 · 2 min · Research Team

DEX Specs: A Mean Field Approach to DeFi Currency Exchanges

DEX Specs: A Mean Field Approach to DeFi Currency Exchanges ArXiv ID: 2404.09090 Authors: Unknown Abstract We investigate the behavior of liquidity providers (LPs) by modeling a decentralized cryptocurrency exchange (DEX) based on Uniswap v3. LPs with heterogeneous characteristics choose optimal liquidity positions subject to uncertainty regarding the size of exogenous incoming transactions and the prices of assets in the wider market. They engage in a game among themselves, and the resulting liquidity distribution determines the exchange rate dynamics and potential arbitrage opportunities of the pool. We calibrate the distribution of LP characteristics based on Uniswap data and the equilibrium strategy resulting from this mean-field game produces pool exchange rate dynamics and liquidity evolution consistent with observed pool behavior. We subsequently introduce Maximal Extractable Value (MEV) bots who perform Just-In-Time (JIT) liquidity attacks, and develop a Stackelberg game between LPs and bots. This addition results in more accurate simulated pool exchange rate dynamics and stronger predictive power regarding the evolution of the pool liquidity distribution.
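As background for the pool mechanics being modeled, a minimal constant-product swap can be sketched. Uniswap v3 concentrates liquidity in price ranges, but within a single active range the reserves trace the same x·y = k curve; the helper below is an illustrative v2-style simplification, not the paper's model:

```python
def swap_out(x_reserve, y_reserve, dx, fee=0.003):
    """Output of swapping dx of asset X into a constant-product pool.

    After deducting the fee, the trade moves the reserves along the
    invariant curve x * y = k; the caller receives the drop in Y.
    """
    dx_net = dx * (1 - fee)               # fee portion stays in the pool
    k = x_reserve * y_reserve             # invariant before the trade
    new_y = k / (x_reserve + dx_net)      # Y reserve after the trade
    return y_reserve - new_y              # amount of Y paid out
```

The price impact visible here (larger trades get worse rates) is what creates the arbitrage and JIT-liquidity opportunities the paper studies.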

April 13, 2024 · 2 min · Research Team

Enhancing path-integral approximation for non-linear diffusion with neural network

Enhancing path-integral approximation for non-linear diffusion with neural network ArXiv ID: 2404.08903 Authors: Unknown Abstract We enhance the existing path-integral solution for pricing fixed-income instruments within the Black-Karasinski model structure with a neural network at various parameterisation points, demonstrating that the method achieves superior outcomes for multiple calibrations across extended projection horizons.

Keywords: Black-Karasinski Model, Fixed Income Pricing, Neural Networks, Interest Rate Models, Fixed Income

Complexity vs Empirical Score
Math Complexity: 8.5/10
Empirical Rigor: 3.0/10
Quadrant: Lab Rats
Why: The paper employs advanced mathematical concepts including path integrals, Taylor series expansions, and PDE approximations, but lacks empirical validation with backtests or statistical metrics, focusing instead on theoretical model formulation.

```mermaid
flowchart TD
    A["Research Goal"] --> B["Data & Calibration"]
    A --> C["Methodology"]
    B --> D["Path-Integral Approx."]
    C --> D
    D --> E["Neural Network Enh."]
    E --> F["Computational Process"]
    F --> G["Key Outcomes"]
    subgraph Inputs
        A
        B
        C
    end
    subgraph Processing
        D
        E
        F
    end
    subgraph Results
        G
    end
```

April 13, 2024 · 1 min · Research Team

A backward differential deep learning-based algorithm for solving high-dimensional nonlinear backward stochastic differential equations

A backward differential deep learning-based algorithm for solving high-dimensional nonlinear backward stochastic differential equations ArXiv ID: 2404.08456 Authors: Unknown Abstract In this work, we propose a novel backward differential deep learning-based algorithm for solving high-dimensional nonlinear backward stochastic differential equations (BSDEs), where the deep neural network (DNN) models are trained not only on the inputs and labels but also on the differentials of the corresponding labels. This is motivated by the fact that differential deep learning can provide an efficient approximation of the labels and their derivatives with respect to inputs. The BSDEs are reformulated as differential deep learning problems by using Malliavin calculus. The Malliavin derivatives of the solution to a BSDE themselves satisfy another BSDE, thus resulting in a system of BSDEs. This formulation requires the estimation of the solution, its gradient, and the Hessian matrix, represented by the triple of processes $\left(Y, Z, \Gamma\right)$. All the integrals within this system are discretized using the Euler-Maruyama method. Subsequently, DNNs are employed to approximate these three unknown processes. The DNN parameters are optimized backward in time at each step by minimizing a differential learning-type loss function, defined as a weighted sum of the dynamics of the discretized BSDE system, with the first term providing the dynamics of the process $Y$ and the second those of the process $Z$. An error analysis is carried out to show the convergence of the proposed algorithm. Various numerical experiments in up to $50$ dimensions demonstrate its high efficiency. Both theoretically and numerically, we show that the proposed scheme is more efficient than other contemporary deep learning-based methodologies, especially in the computation of the process $\Gamma$.
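The Euler-Maruyama scheme used to discretize the integrals in the BSDE system is easiest to see on a generic forward SDE $dX_t = \mu(X_t)\,dt + \sigma(X_t)\,dW_t$; this generic one-path simulator is a sketch, not the paper's scheme for the $(Y, Z, \Gamma)$ system:

```python
import math
import random

def euler_maruyama(x0, mu, sigma, T, n, seed=0):
    """One path of dX_t = mu(X_t) dt + sigma(X_t) dW_t via
    the Euler-Maruyama scheme on n uniform steps over [0, T]."""
    rng = random.Random(seed)
    dt = T / n
    path = [x0]
    x = x0
    for _ in range(n):
        dw = rng.gauss(0.0, math.sqrt(dt))    # Brownian increment ~ N(0, dt)
        x = x + mu(x) * dt + sigma(x) * dw    # explicit Euler update
        path.append(x)
    return path
```

With the diffusion term switched off (`sigma = 0`), the scheme reduces to explicit Euler for an ODE, which is a handy sanity check on the step size.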

April 12, 2024 · 2 min · Research Team

Strategic Informed Trading and the Value of Private Information

Strategic Informed Trading and the Value of Private Information ArXiv ID: 2404.08757 Authors: Unknown Abstract We consider a market of risky financial assets whose participants are an informed trader, a representative uninformed trader, and noisy liquidity providers. We prove the existence of a market-clearing equilibrium when the insider internalizes her power to impact prices, but the uninformed trader takes prices as given. Compared to the associated competitive economy, in equilibrium the insider strategically reveals a noisier signal, and prices are less reactive to publicly available information. Additionally, and in direct contrast to the related literature, in equilibrium the insider’s indirect utility monotonically increases in the signal precision. Therefore, the insider is motivated not only to obtain, but also to refine, her signal. Lastly, we show that compared to the competitive economy, the insider’s internalization of price impact is utility improving for the uninformed trader, but somewhat surprisingly may be utility decreasing for the insider herself. This utility reduction occurs provided the insider is sufficiently risk averse compared to the uninformed trader, and provided the signal is of sufficiently low quality.

April 12, 2024 · 2 min · Research Team

Exponentially Weighted Moving Models

Exponentially Weighted Moving Models ArXiv ID: 2404.08136 Authors: Unknown Abstract An exponentially weighted moving model (EWMM) for a vector time series fits a new data model each time period, based on an exponentially fading loss function on past observed data. The well-known and widely used exponentially weighted moving average (EWMA) is a special case that estimates the mean using a square loss function. For quadratic loss functions, EWMMs can be fit using a simple recursion that updates the parameters of a quadratic function. For other loss functions, the entire past history must be stored, and the fitting problem grows in size as time increases. We propose a general method for computing an approximation of EWMM, which requires storing only a window of a fixed number of past samples, and uses an additional quadratic term to approximate the loss associated with the data before the window. This approximate EWMM relies on convex optimization, and solves problems that do not grow with time. We compare the estimates produced by our approximation with the estimates from the exact EWMM method.
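The quadratic-loss special case admits exactly the constant-memory recursion the abstract refers to: the EWMA is the minimizer of the exponentially faded square loss and can be updated in O(1) per step. A debiased sketch (the half-life parameterisation is an assumption, not from the paper):

```python
def ewma(xs, half_life=10.0):
    """Exponentially weighted moving average via the O(1) recursion.

    With forgetting factor beta = 2**(-1/half_life), each estimate is
    the minimizer of the exponentially faded square loss over the past:
    the EWMM special case with quadratic loss.
    """
    beta = 2.0 ** (-1.0 / half_life)   # per-step forgetting factor
    m, w = 0.0, 0.0                    # faded sum of data, faded total weight
    out = []
    for x in xs:
        m = beta * m + x               # fade old data, add the new point
        w = beta * w + 1.0
        out.append(m / w)              # normalized (debiased) mean estimate
    return out
```

For non-quadratic losses no such finite recursion exists in general, which is what motivates the paper's fixed-window approximation with a quadratic tail term.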

April 11, 2024 · 2 min · Research Team

RiskLabs: Predicting Financial Risk Using Large Language Model based on Multimodal and Multi-Sources Data

RiskLabs: Predicting Financial Risk Using Large Language Model based on Multimodal and Multi-Sources Data ArXiv ID: 2404.07452 Authors: Unknown Abstract The integration of Artificial Intelligence (AI) techniques, particularly large language models (LLMs), in finance has garnered increasing academic attention. Despite this progress, existing studies predominantly focus on tasks such as financial text summarization, question answering, and stock-movement prediction (binary classification), while the application of LLMs to financial risk prediction remains underexplored. Addressing this gap, we introduce RiskLabs, a novel framework that leverages LLMs to analyze and predict financial risks. RiskLabs uniquely integrates multimodal financial data, including textual and vocal information from Earnings Conference Calls (ECCs), market-related time series data, and contextual news data, to improve financial risk prediction. Empirical results demonstrate RiskLabs’ effectiveness in forecasting both market volatility and variance. Through comparative experiments, we examine the contributions of different data sources to financial risk assessment and highlight the crucial role of LLMs in this process. We also discuss the challenges associated with using LLMs for financial risk prediction and explore the potential of combining them with multimodal data for this purpose.

April 11, 2024 · 2 min · Research Team

A Deep Learning Method for Predicting Mergers and Acquisitions: Temporal Dynamic Industry Networks

A Deep Learning Method for Predicting Mergers and Acquisitions: Temporal Dynamic Industry Networks ArXiv ID: 2404.07298 Authors: Unknown Abstract Merger and Acquisition (M&A) activities play a vital role in market consolidation and restructuring. For acquiring companies, M&A serves as a key investment strategy, with one primary goal being to attain complementarities that enhance market power in competitive industries. In addition to intrinsic factors, a firm's M&A behavior is influenced by the M&A activities of its peers, a phenomenon known as the “peer effect.” However, existing research often fails to capture the rich interdependencies among M&A events within industry networks. An effective M&A predictive model should offer deal-level predictions without requiring ad-hoc feature engineering or data rebalancing. Such a model would predict the M&A behaviors of rival firms and provide specific recommendations for both bidder and target firms. However, most current models only predict one side of an M&A deal, lack firm-specific recommendations, and rely on arbitrary time intervals that impair predictive accuracy. Additionally, due to the sparsity of M&A events, existing models require data rebalancing, which introduces bias and limits their real-world applicability. To address these challenges, we propose a Temporal Dynamic Industry Network (TDIN) model, leveraging temporal point processes and deep learning to capture complex M&A interdependencies without ad-hoc data adjustments. The temporal point process framework inherently models event sparsity, eliminating the need for data rebalancing. Empirical evaluations on M&A data from January 1997 to December 2020 validate the effectiveness of our approach in predicting M&A events and offering actionable, deal-level recommendations.
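A temporal point process captures peer effects through a conditional intensity that rises after each observed event. A minimal exponential-kernel (Hawkes-style) intensity, one common choice and not necessarily the paper's exact specification, can be sketched as:

```python
import math

def hawkes_intensity(t, history, mu=0.1, alpha=0.5, beta=1.0):
    """Conditional intensity of a self-exciting point process:
    lambda(t) = mu + sum over past events t_i < t of alpha * exp(-beta * (t - t_i)).

    Each past event (e.g. a peer's M&A deal) transiently raises the rate
    of new events, decaying at rate beta back toward the baseline mu.
    """
    return mu + sum(alpha * math.exp(-beta * (t - ti))
                    for ti in history if ti < t)
```

Because the likelihood of such a process accounts for the long empty stretches between events directly, sparse event data needs no rebalancing, as the abstract notes.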

April 10, 2024 · 2 min · Research Team

Hedonic Models Incorporating ESG Factors for Time Series of Average Annual Home Prices

Hedonic Models Incorporating ESG Factors for Time Series of Average Annual Home Prices ArXiv ID: 2404.07132 Authors: Unknown Abstract Using data from 2000 through 2022, we analyze the predictive capability of the annual numbers of new home constructions and four available environmental, social, and governance factors on the average annual price of homes sold in eight major U.S. cities. We contrast the predictive capability of a P-spline generalized additive model (GAM) against a strictly linear version of the commonly used generalized linear model (GLM). As the data for the annual price and predictor variables constitute non-stationary time series, to avoid spurious correlations in the analysis we transform each time series appropriately to produce stationary series for use in the GAM and GLM models. While arithmetic returns or first differences are adequate transformations for the predictor variables, for the average price response variable we utilize the series of innovations obtained from AR(q)-ARCH(1) fits. Based on the GAM results, we find that the influence of ESG factors varies markedly by city, reflecting geographic diversity. Notably, the presence of air conditioning emerges as a strong factor. Despite limitations on the length of available time series, this study represents a pivotal step toward integrating ESG considerations into predictive real estate models.
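The two stationarity transformations named for the predictor variables are simple to state; a minimal sketch (the AR(q)-ARCH(1) innovation series used for the price response is not reproduced here):

```python
def arithmetic_returns(series):
    """Arithmetic returns r_t = (p_t - p_{t-1}) / p_{t-1}: a standard
    transformation of a strictly positive level series toward stationarity."""
    return [(b - a) / a for a, b in zip(series, series[1:])]

def first_differences(series):
    """First differences d_t = x_t - x_{t-1}, suitable for series that
    may take zero or negative values (e.g. counts of new constructions)."""
    return [b - a for a, b in zip(series, series[1:])]
```

Both transforms shorten the series by one observation, which matters for the short annual samples (2000-2022) the study works with.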

April 10, 2024 · 2 min · Research Team