
Hopfield Networks for Asset Allocation

Hopfield Networks for Asset Allocation ArXiv ID: 2407.17645 “View on arXiv” Authors: Unknown Abstract We present the first application of modern Hopfield networks to the problem of portfolio optimization. We performed an extensive study based on combinatorial purged cross-validation over several datasets and compared our results to both traditional and deep-learning-based methods for portfolio selection. We find that the proposed approach performs on par with or better than state-of-the-art deep-learning methods such as Long Short-Term Memory networks and Transformers, while providing faster training times and better stability. Our results show that Modern Hopfield Networks represent a promising approach to portfolio optimization, allowing for an efficient, scalable, and robust solution for asset allocation, risk management, and dynamic rebalancing. ...
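
The abstract does not spell out the architecture, but the retrieval rule of a modern Hopfield network (Ramsauer et al., 2020) is an attention-style update over stored patterns. Below is a minimal numpy sketch of that update; reading the stored patterns as asset return profiles is our own illustrative assumption, not a detail taken from the paper.

```python
import numpy as np

def hopfield_retrieve(patterns, query, beta=8.0, n_steps=1):
    """Modern Hopfield update: xi <- X softmax(beta * X^T xi).

    patterns: (d, N) matrix whose columns are stored patterns (here imagined as
    return profiles of N candidate assets); query: (d,) state vector.
    """
    X = np.asarray(patterns, dtype=float)
    xi = np.asarray(query, dtype=float)
    for _ in range(n_steps):
        scores = beta * X.T @ xi          # similarity of the state to each stored pattern
        p = np.exp(scores - scores.max())
        p /= p.sum()                      # softmax over stored patterns
        xi = X @ p                        # new state: convex combination of patterns
    return xi, p                          # retrieved state and pattern weights

# Toy usage with made-up numbers: the weights concentrate on the closest pattern.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))
state, weights = hopfield_retrieve(X, X[:, 0] + 0.1 * rng.standard_normal(5))
print(weights)
```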

July 24, 2024 · 2 min · Research Team

Alleviating Non-identifiability: a High-fidelity Calibration Objective for Financial Market Simulation with Multivariate Time Series Data

Alleviating Non-identifiability: a High-fidelity Calibration Objective for Financial Market Simulation with Multivariate Time Series Data ArXiv ID: 2407.16566 “View on arXiv” Authors: Unknown Abstract The non-identifiability issue has been frequently reported in social simulation works, where different parameters of an agent-based simulation model yield indistinguishable simulated time series data under certain discrepancy metrics. This issue largely undermines the simulation fidelity yet lacks dedicated investigations. This paper theoretically demonstrates that incorporating multiple time series data features during the model calibration phase can exponentially alleviate non-identifiability as the number of features increases. To implement this theoretical finding, a maximization-based aggregation function is proposed based on existing discrepancy metrics to form a new calibration objective function. For verification, the task of calibrating the Financial Market Simulation (FMS), a typical yet complex social simulation, is considered. Empirical studies confirm the significant improvements in alleviating the non-identifiability of calibration tasks. Furthermore, as a model-agnostic method, it achieves much higher simulation fidelity of the chosen FMS model on both synthetic and real market data. Moreover, we show both theoretically and empirically that as long as the selected features are not linearly correlated, they contribute to alleviating non-identifiability, which demonstrates the robustness of the proposed objective. Hence, this work is expected to provide not only a rigorous understanding of non-identifiability in social simulation but also an off-the-shelf high-fidelity calibration objective function for FMS. ...
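
The key construct here is the aggregation of several per-feature discrepancies into one calibration objective by taking their maximum, so a parameter set only scores well if it matches every feature of the real data. A minimal sketch under our own illustrative choices of features and metric (the paper's exact choices may differ):

```python
import numpy as np

def max_aggregated_discrepancy(sim, real, feature_fns, metric):
    """Maximization-based aggregation: one discrepancy per feature, keep the worst."""
    return max(metric(f(sim), f(real)) for f in feature_fns)

# Illustrative features, not the paper's: raw path, absolute returns, integrated path.
features = [
    lambda x: np.asarray(x, float),
    lambda x: np.abs(np.diff(np.asarray(x, float))),
    lambda x: np.cumsum(np.asarray(x, float)),
]
rmse = lambda a, b: float(np.sqrt(np.mean((a - b) ** 2)))

rng = np.random.default_rng(1)
print(max_aggregated_discrepancy(rng.normal(size=500), rng.normal(size=500), features, rmse))
```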

July 23, 2024 · 2 min · Research Team

Automated Market Making and Decentralized Finance

Automated Market Making and Decentralized Finance ArXiv ID: 2407.16885 “View on arXiv” Authors: Unknown Abstract Automated market makers (AMMs) are a new type of trading venue that is revolutionising the way market participants interact. At present, the majority of AMMs are constant function market makers (CFMMs) where a deterministic trading function determines how markets are cleared. Within CFMMs, we focus on constant product market makers (CPMMs) that implement the concentrated liquidity (CL) feature. In this thesis we formalise and study the trading mechanism of CPMMs with CL, and we develop liquidity provision and liquidity taking strategies. Our models are motivated and tested with market data. We derive optimal strategies for liquidity takers (LTs) who trade orders of large size and execute statistical arbitrages. First, we consider an LT who trades in a CPMM with CL and uses the dynamics of prices in competing venues as market signals. We use Uniswap v3 data to study price, liquidity, and trading cost dynamics, and to motivate the model. Next, we consider an LT who trades a basket of cryptocurrencies whose constituents co-move. We use market data to study lead-lag effects, spillover effects, and causality between trading venues. We derive optimal strategies for strategic liquidity providers (LPs) who provide liquidity in CPMMs with CL. First, we use stochastic control tools to derive a self-financing and closed-form optimal liquidity provision strategy where the width of the LP’s liquidity range is determined by the profitability of the pool, the dynamics of the LP’s position, and concentration risk. Next, we use a model-free approach to solve the problem of an LP who provides liquidity in multiple CPMMs with CL. We do not specify a model for the stochastic processes observed by LPs, and use a long short-term memory (LSTM) neural network to approximate the optimal liquidity provision strategy. ...
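
For readers new to CFMMs, the constant product rule underlying CPMMs is simple to state: a swap must leave the product of the pool reserves unchanged (after fees). The sketch below shows a Uniswap v2-style swap; concentrated liquidity (v3) applies the same invariant to "virtual" reserves within each price range, which this sketch omits. The pool sizes are made up.

```python
def cpmm_swap_out(x_reserve, y_reserve, dx, fee=0.003):
    """Trade dx of asset X into a constant-product pool and receive dy of asset Y,
    keeping x * y = k on the fee-adjusted input amount."""
    dx_eff = dx * (1.0 - fee)          # the fee stays in the pool for liquidity providers
    k = x_reserve * y_reserve          # pool invariant before the trade
    dy = y_reserve - k / (x_reserve + dx_eff)
    return dy

# Illustrative pool: 1,000 ETH against 3,000,000 USDC.
print(cpmm_swap_out(1_000.0, 3_000_000.0, 10.0))   # USDC received for selling 10 ETH
```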

July 23, 2024 · 3 min · Research Team

Multi-Industry Simplex 2.0: Temporally-Evolving Probabilistic Industry Classification

Multi-Industry Simplex 2.0: Temporally-Evolving Probabilistic Industry Classification ArXiv ID: 2407.16437 “View on arXiv” Authors: Unknown Abstract Accurate industry classification is critical for many areas of portfolio management, yet the traditional single-industry framework of the Global Industry Classification Standard (GICS) struggles to comprehensively represent risk for highly diversified multi-sector conglomerates like Amazon. Previously, we introduced the Multi-Industry Simplex (MIS), a probabilistic extension of GICS that utilizes topic modeling, a natural language processing approach. Although our initial version, MIS-1, was able to improve upon GICS by providing multi-industry representations, it relied on an overly simple architecture that required prior knowledge about the number of industries and relied on the unrealistic assumption that industries are uncorrelated and independent over time. We improve upon this model with MIS-2, which addresses three key limitations of MIS-1: we utilize Bayesian Non-Parametrics to automatically infer the number of industries from data, we employ Markov Updating to account for industries that change over time, and we adjust for correlated and hierarchical industries allowing for both broad and niche industries (similar to GICS). Further, we provide an out-of-sample test directly comparing MIS-2 and GICS on the basis of future correlation prediction, where we find evidence that MIS-2 provides a measurable improvement over GICS. MIS-2 provides portfolio managers with a more robust tool for industry classification, empowering them to more effectively identify and manage risk, particularly around multi-sector conglomerates in a rapidly evolving market in which new industries periodically emerge. ...
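
To make the "probabilistic multi-industry representation" concrete, the sketch below shows how soft industry exposures (rather than a single GICS label) can feed a pairwise co-movement forecast; the exposure vectors and the similarity measure are illustrative assumptions, not the MIS-2 inference procedure itself.

```python
import numpy as np

def exposure_similarity(w_a, w_b):
    """Cosine similarity of two firms' industry-exposure vectors, used here as a
    toy proxy for predicted future correlation."""
    a, b = np.asarray(w_a, float), np.asarray(w_b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical exposures over [retail, cloud, media, logistics].
conglomerate = [0.35, 0.40, 0.10, 0.15]   # diversified, Amazon-like
retailer     = [0.80, 0.05, 0.00, 0.15]   # mostly single-industry
print(exposure_similarity(conglomerate, retailer))
```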

July 23, 2024 · 2 min · Research Team

On Deep Learning for computing the Dynamic Initial Margin and Margin Value Adjustment

On Deep Learning for computing the Dynamic Initial Margin and Margin Value Adjustment ArXiv ID: 2407.16435 “View on arXiv” Authors: Unknown Abstract The present work addresses the challenge of training neural networks for Dynamic Initial Margin (DIM) computation in counterparty credit risk, a task traditionally burdened by the high costs associated with generating training datasets through nested Monte Carlo (MC) simulations. By condensing the initial market state variables into an input vector, determined through an interest rate model and a parsimonious parameterization of the current interest rate term structure, we construct a training dataset where labels are noisy but unbiased DIM samples derived from single MC paths. A multi-output neural network structure is employed to handle DIM as a time-dependent function, facilitating training across a mesh of monitoring times. The methodology offers significant advantages: it reduces the dataset generation cost to a single MC execution and parameterizes the neural network by initial market state variables, obviating the need for repeated training. Experimental results demonstrate the approach’s convergence properties and robustness across different interest rate models (Vasicek and Hull-White) and portfolio complexities, validating its general applicability and efficiency in more realistic scenarios. ...
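
The training setup described above amounts to a standard multi-output regression: the network maps an initial-market-state vector to DIM values on a mesh of monitoring times, and an MSE loss against noisy-but-unbiased single-path labels averages the noise out. A hedged PyTorch sketch with illustrative sizes and placeholder data:

```python
import torch
import torch.nn as nn

n_state, n_times = 6, 20             # state dimension and number of monitoring times (assumed)

model = nn.Sequential(
    nn.Linear(n_state, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, n_times),          # one DIM output per monitoring time
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()               # unbiased noisy labels + MSE -> the fit targets the mean

# Placeholder data: market-state vectors and noisy single-path DIM samples.
x = torch.randn(4096, n_state)
y = torch.randn(4096, n_times).abs()

for epoch in range(5):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```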

July 23, 2024 · 2 min · Research Team

Reinforcement Learning Pair Trading: A Dynamic Scaling approach

Reinforcement Learning Pair Trading: A Dynamic Scaling approach ArXiv ID: 2407.16103 “View on arXiv” Authors: Unknown Abstract Cryptocurrency is a cryptography-based digital asset with extremely volatile prices. Around USD 70 billion worth of cryptocurrency is traded daily on exchanges. Trading cryptocurrency is difficult due to the inherent volatility of the crypto market. This study investigates whether Reinforcement Learning (RL) can enhance decision-making in cryptocurrency algorithmic trading compared to traditional methods. In order to address this question, we combined reinforcement learning with a statistical arbitrage trading technique, pair trading, which exploits the price difference between statistically correlated assets. We constructed RL environments and trained RL agents to determine when and how to trade pairs of cryptocurrencies. We developed new reward shaping and observation/action spaces for reinforcement learning. We performed experiments with the developed reinforcement learner on pairs of BTC-GBP and BTC-EUR data at 1-minute intervals (n=263,520). The traditional non-RL pair trading technique achieved an annualized profit of 8.33%, while the proposed RL-based pair trading technique achieved annualized profits from 9.94% to 31.53%, depending upon the RL learner. Our results show that RL can significantly outperform manual and traditional pair trading techniques when applied to volatile markets such as cryptocurrencies. ...
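
The statistical-arbitrage signal underneath both the baseline and the RL formulation is the normalised spread between the two legs. A minimal sketch of that signal follows; the hedge ratio, window, and synthetic prices are illustrative, and the paper's observation/action design and reward shaping are richer than this.

```python
import numpy as np

def spread_zscore(price_a, price_b, window=60):
    """Rolling z-score of the log-price spread between two co-moving assets."""
    a, b = np.log(np.asarray(price_a, float)), np.log(np.asarray(price_b, float))
    beta = np.polyfit(b, a, 1)[0]                          # static OLS hedge ratio
    spread = a - beta * b
    mu = np.convolve(spread, np.ones(window) / window, mode="valid")
    sd = np.array([spread[i:i + window].std() for i in range(len(mu))])
    return (spread[window - 1:] - mu) / sd

# Toy usage: two synthetic co-moving price series; trade when |z| is large, exit near 0.
rng = np.random.default_rng(2)
base = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 1000)))
z = spread_zscore(base * np.exp(rng.normal(0, 0.005, 1000)), base)
print(z[-5:])
```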

July 23, 2024 · 2 min · Research Team

Stablecoin Runs and Disclosure Policy in the Presence of Large Sales

Stablecoin Runs and Disclosure Policy in the Presence of Large Sales ArXiv ID: 2408.07227 “View on arXiv” Authors: Unknown Abstract Stablecoins have historically depegged from par due to large sales, possibly of a speculative nature, or due to poor reserve asset quality. Using a global game which addresses both concerns, we show that the selling pressure on stablecoin holders increases in the presence of a large sale. While precise public knowledge reduces (increases) the probability of a run when fundamentals are strong (weak), interestingly, more precise private signals increase (reduce) the probability of a run when fundamentals are strong (weak), potentially explaining the stability of opaque stablecoins. The total run probability can be decomposed into components representing risks from large sales and poor collateral. By analyzing how these risk components vary with respect to information uncertainty and fundamentals, we can split the fundamental space into regions based on the type of risk a stablecoin issuer is more prone to. We suggest testable implications and connect our model’s implications to real-world applications, including depegging events and the no-questions-asked property of money. ...
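
To fix ideas about the information structure the abstract refers to, the display below states the canonical global-game ingredients (Morris–Shin style): holders observe noisy private signals of the fundamental and redeem below a threshold, and a depeg occurs when redemptions plus the large sale exceed what the fundamental can absorb. This is only the textbook skeleton; the paper's exact specification of the large sale and the public signal may differ.

```latex
% Canonical global-game skeleton, stated only to fix ideas (not the paper's exact model).
\[
  x_i = \theta + \sigma\,\varepsilon_i, \qquad \varepsilon_i \sim \mathcal{N}(0,1),
  \qquad \text{holder } i \text{ redeems} \iff x_i < x^\ast,
\]
\[
  \text{depeg} \iff
  \underbrace{\Phi\!\left(\frac{x^\ast - \theta}{\sigma}\right)}_{\text{mass of redeeming holders}}
  \;+\; \underbrace{\lambda}_{\text{large sale}} \;>\; \theta .
\]
```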

July 23, 2024 · 2 min · Research Team

The Hybrid Forecast of S&P 500 Volatility ensembled from VIX, GARCH and LSTM models

The Hybrid Forecast of S&P 500 Volatility ensembled from VIX, GARCH and LSTM models ArXiv ID: 2407.16780 “View on arXiv” Authors: Unknown Abstract Predicting the S&P 500 index volatility is crucial for investors and financial analysts as it helps assess market risk and make informed investment decisions. Volatility represents the level of uncertainty or risk related to the size of changes in a security’s value, making it an essential indicator for financial planning. This study explores four methods to improve the accuracy of volatility forecasts for the S&P 500: the established GARCH model, known for capturing historical volatility patterns; an LSTM network that utilizes past volatility and log returns; a hybrid LSTM-GARCH model that combines the strengths of both approaches; and an advanced version of the hybrid model that also factors in the VIX index to gauge market sentiment. This analysis is based on a daily dataset that includes S&P 500 and VIX index data, covering the period from January 3, 2000, to December 21, 2023. Through rigorous testing and comparison, we found that machine learning approaches, particularly the hybrid LSTM models, significantly outperform the traditional GARCH model. Including the VIX index in the hybrid model further enhances its forecasting ability by incorporating real-time market sentiment. The results of this study offer valuable insights for achieving more accurate volatility predictions, enabling better risk management and strategic investment decisions in the volatile environment of the S&P 500. ...
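
As a reminder of the building blocks: the GARCH(1,1) leg models the conditional variance as a weighted combination of the previous squared shock and the previous variance, and the hybrid feeds that conditional volatility, together with recent returns and the VIX level, into an LSTM. A hedged sketch of the feature construction using the `arch` package on placeholder data (the LSTM itself and the paper's exact feature choices are not reproduced here):

```python
import numpy as np
from arch import arch_model

rng = np.random.default_rng(3)
returns = rng.normal(0, 1.0, 2000)                 # placeholder for daily S&P 500 log returns (%)

garch = arch_model(returns, mean="Zero", vol="GARCH", p=1, q=1)
res = garch.fit(disp="off")
sigma_t = res.conditional_volatility               # sigma_t from the GARCH(1,1) recursion

# Hypothetical LSTM input features: [lagged return, GARCH volatility, VIX level].
vix = np.full_like(returns, 15.0)                  # placeholder VIX series
features = np.column_stack([returns, sigma_t, vix])
print(features.shape)                              # (2000, 3), to be windowed into sequences
```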

July 23, 2024 · 2 min · Research Team

The Negative Drift of a Limit Order Fill

The Negative Drift of a Limit Order Fill ArXiv ID: 2407.16527 “View on arXiv” Authors: Unknown Abstract Market making refers to a form of trading in financial markets characterized by passive orders which add liquidity to limit order books. Market makers are important for the proper functioning of financial markets worldwide. Given the importance, financial mathematics has endeavored to derive optimal strategies for placing limit orders in this context. This paper identifies a key discrepancy between popular model assumptions and the realities of real markets, specifically regarding the dynamics around limit order fills. Traditionally, market making models rely on an assumption of low-cost random fills, when in reality we observe a high-cost non-random fill behavior. Namely, limit order fills are caused by and coincide with adverse price movements, which create a drag on the market maker’s profit and loss. We refer to this phenomenon as “the negative drift” associated with limit order fills. We describe a discrete market model and prove theoretically that the negative drift exists. We also provide a detailed empirical simulation using one of the most traded financial instruments in the world, the 10 Year US Treasury Bond futures, which also confirms its existence. To our knowledge, this is the first paper to describe and prove this phenomenon in such detail. ...
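
The phenomenon is easy to see even in a toy setting: on a symmetric random walk, a resting bid is only filled when the price trades down through it, so conditional on a fill the mid-price has moved against the market maker relative to the moment of placement, while the unconditional mid move is roughly zero. A minimal simulation, with a made-up tick grid and horizon rather than the paper's discrete market model:

```python
import numpy as np

rng = np.random.default_rng(4)
tick, horizon, n_paths = 1.0, 50, 20_000
move_given_fill, move_all = [], []

for _ in range(n_paths):
    mid = np.concatenate([[0.0], np.cumsum(rng.choice([-tick, tick], size=horizon))])
    move_all.append(mid[-1] - mid[0])                  # unconditional mid move
    hit = np.nonzero(mid <= -tick)[0]                  # bid resting one tick below the initial mid
    if hit.size:
        move_given_fill.append(mid[hit[0]] - mid[0])   # mid move from placement to the fill

print(np.mean(move_all))           # approximately zero
print(np.mean(move_given_fill))    # strictly negative: fills coincide with adverse moves
```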

July 23, 2024 · 2 min · Research Team

Calibrating the Heston model with deep differential networks

Calibrating the Heston model with deep differential networks ArXiv ID: 2407.15536 “View on arXiv” Authors: Unknown Abstract We propose a gradient-based deep learning framework to calibrate the Heston option pricing model (Heston, 1993). Our neural network, henceforth deep differential network (DDN), learns both the Heston pricing formula for plain-vanilla options and the partial derivatives with respect to the model parameters. The price sensitivities estimated by the DDN are not subject to the numerical issues that can be encountered in computing the gradient of the Heston pricing function. Thus, our network is an excellent pricing engine for fast gradient-based calibrations. Extensive tests on selected equity markets show that the DDN significantly outperforms non-differential feedforward neural networks in terms of calibration accuracy. In addition, it dramatically reduces the computational time with respect to global optimizers that do not use gradient information. ...
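
The "differential" part of the training can be sketched in a few lines: the network's gradient with respect to its inputs is obtained by automatic differentiation and penalised against target sensitivities alongside the price error (Sobolev-style training). The layer sizes and synthetic labels below are illustrative assumptions; in the paper the labels come from the Heston pricing formula and its parameter derivatives.

```python
import torch
import torch.nn as nn

n_inputs = 7                         # e.g. (kappa, theta, sigma, rho, v0, strike, maturity) - assumed
net = nn.Sequential(nn.Linear(n_inputs, 64), nn.Softplus(),
                    nn.Linear(64, 64), nn.Softplus(),
                    nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x = torch.randn(1024, n_inputs, requires_grad=True)      # placeholder parameter samples
price_lbl = torch.randn(1024, 1)                         # placeholder option prices
grad_lbl = torch.randn(1024, n_inputs)                   # placeholder price sensitivities

for step in range(5):
    opt.zero_grad()
    price = net(x)
    # network gradient w.r.t. its inputs, kept in the graph so it can be trained on
    grad = torch.autograd.grad(price.sum(), x, create_graph=True)[0]
    loss = nn.functional.mse_loss(price, price_lbl) + nn.functional.mse_loss(grad, grad_lbl)
    loss.backward()
    opt.step()
```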

July 22, 2024 · 2 min · Research Team