
HLOB – Information Persistence and Structure in Limit Order Books

HLOB – Information Persistence and Structure in Limit Order Books ArXiv ID: 2405.18938 “View on arXiv” Authors: Unknown Abstract We introduce a novel large-scale deep learning model for forecasting Limit Order Book mid-price changes, and we name it ‘HLOB’. This architecture (i) exploits the information encoded by an Information Filtering Network, namely the Triangulated Maximally Filtered Graph, to unveil deeper and non-trivial dependency structures among volume levels; and (ii) guarantees deterministic design choices to handle the complexity of the underlying system by drawing inspiration from the groundbreaking class of Homological Convolutional Neural Networks. We test our model against 9 state-of-the-art deep learning alternatives on 3 real-world Limit Order Book datasets, each including 15 stocks traded on the NASDAQ exchange, and we systematically characterize the scenarios where HLOB outperforms state-of-the-art architectures. Our approach sheds new light on the spatial distribution of information in Limit Order Books and on its degradation over increasing prediction horizons, narrowing the gap between microstructural modeling and deep learning-based forecasting in high-frequency financial markets. ...
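
The Information Filtering Network at the core of HLOB, the Triangulated Maximally Filtered Graph (TMFG), can be sketched with the standard greedy construction: seed with the 4-clique of maximal total similarity, then repeatedly insert the remaining vertex with the largest similarity gain into a triangular face. The toy implementation below illustrates that recipe on a similarity matrix over volume levels; it is a minimal sketch, not the authors' implementation:

```python
from itertools import combinations

def tmfg(sim):
    """Triangulated Maximally Filtered Graph from a symmetric
    similarity matrix (e.g. absolute correlations between volume
    levels). Returns the edge set of the planar filtered graph."""
    n = len(sim)
    # Seed with the 4-clique of maximal total similarity.
    seed = max(combinations(range(n), 4),
               key=lambda q: sum(sim[i][j] for i, j in combinations(q, 2)))
    edges = set(frozenset(e) for e in combinations(seed, 2))
    faces = [frozenset(f) for f in combinations(seed, 3)]
    remaining = set(range(n)) - set(seed)
    while remaining:
        # Insert the vertex/face pair with the largest similarity gain.
        v, face = max(((v, f) for v in remaining for f in faces),
                      key=lambda vf: sum(sim[vf[0]][j] for j in vf[1]))
        remaining.discard(v)
        faces.remove(face)
        for pair in combinations(face, 2):
            faces.append(frozenset(set(pair) | {v}))
        edges |= {frozenset((v, j)) for j in face}
    return edges
```

The result is a planar graph with exactly 3(n − 2) edges, a fixed structure that supports the deterministic design choices the abstract attributes to Homological Convolutional Neural Networks.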

May 29, 2024 · 2 min · Research Team

Optimizing Broker Performance Evaluation through Intraday Modeling of Execution Cost

Optimizing Broker Performance Evaluation through Intraday Modeling of Execution Cost ArXiv ID: 2405.18936 “View on arXiv” Authors: Unknown Abstract Minimizing execution costs for large orders is a fundamental challenge in finance. Firms often depend on brokers to manage their trades due to limited internal resources for optimizing trading strategies. This paper presents a methodology for evaluating the effectiveness of broker execution algorithms using trading data. We focus on two primary cost components: a linear cost that quantifies short-term execution quality and a quadratic cost associated with the price impact of trades. Using a model with transient price impact, we derive analytical formulas for estimating these costs. Furthermore, we enhance estimation accuracy by introducing novel methods such as weighting price changes based on their expected impact content. Our results demonstrate substantial improvements in estimating both linear and impact costs, providing a robust and efficient framework for selecting the most cost-effective brokers. ...
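
The two cost components can be made concrete with a minimal accounting sketch. The paper derives analytical estimators under a transient price impact model and weights price changes by their expected impact content; the function below shows only the basic decomposition those estimators refine (names and sign conventions are illustrative, stated for a buy order):

```python
def execution_costs(arrival_mid, fills, post_mid):
    """Split a buy order's cost into a linear (slippage) component and
    an impact (price-move) component. fills: list of (price, qty).
    Illustrative decomposition only, not the paper's estimators."""
    qty = sum(q for _, q in fills)
    vwap = sum(p * q for p, q in fills) / qty
    linear = vwap - arrival_mid      # short-term execution quality
    impact = post_mid - arrival_mid  # price moved during the execution
    return linear, impact
```

Comparing brokers then amounts to estimating both components per order and averaging, which is where the paper's impact-content weighting improves accuracy.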

May 29, 2024 · 2 min · Research Team

A Novel Approach to Queue-Reactive Models: The Importance of Order Sizes

A Novel Approach to Queue-Reactive Models: The Importance of Order Sizes ArXiv ID: 2405.18594 “View on arXiv” Authors: Unknown Abstract In this article, we delve into the applications and extensions of the queue-reactive model for the simulation of limit order books. Our approach emphasizes the importance of order sizes, in conjunction with their type and arrival rate, by integrating the current state of the order book to determine, not only the intensity of order arrivals and their type, but also their sizes. These extensions generate simulated markets that are in line with numerous stylized facts of the market. Our empirical calibration, using futures on German bonds, reveals that the extended queue-reactive model significantly improves the description of order flow properties and the shape of queue distributions. Moreover, our findings demonstrate that the extended model produces simulated markets with a volatility comparable to historical real data, utilizing only endogenous information from the limit order book. This research underscores the potential of the queue-reactive model and its extensions in accurately simulating market dynamics and providing valuable insights into the complex nature of limit order book modeling. ...
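
The core idea, making intensities and order sizes functions of the current queue state, can be sketched as a toy point-process simulation of a single queue. The functional forms passed in are illustrative placeholders, not the intensities calibrated on German bond futures:

```python
import random

def simulate_queue(T, lam_add, lam_cancel, size_dist, q0=10, seed=0):
    """Toy queue-reactive simulation of one limit order book queue.
    Arrival/cancellation intensities and order sizes all depend on the
    current queue length q (the extension the paper emphasizes)."""
    rng = random.Random(seed)
    q, t, path = q0, 0.0, []
    while t < T:
        la, lc = lam_add(q), lam_cancel(q)
        t += rng.expovariate(la + lc)       # time to next event
        if rng.random() < la / (la + lc):
            q += size_dist(q, rng)           # limit order arrival
        else:
            q = max(0, q - size_dist(q, rng))  # cancellation or trade
        path.append((t, q))
    return path
```

Calibration then reduces to estimating `lam_add`, `lam_cancel`, and `size_dist` per queue state from the order flow data.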

May 28, 2024 · 2 min · Research Team

Constrained monotone mean–variance investment-reinsurance under the Cramér–Lundberg model with random coefficients

Constrained monotone mean–variance investment-reinsurance under the Cramér–Lundberg model with random coefficients ArXiv ID: 2405.17841 “View on arXiv” Authors: Unknown Abstract This paper studies an optimal investment-reinsurance problem for an insurer (she) under the Cramér–Lundberg model with monotone mean–variance (MMV) criterion. At any time, the insurer can purchase reinsurance (or acquire new business) and invest in a security market consisting of a risk-free asset and multiple risky assets whose excess return rate and volatility rate are allowed to be random. The trading strategy is subject to a general convex cone constraint, encompassing no-shorting constraint as a special case. The optimal investment-reinsurance strategy and optimal value for the MMV problem are deduced by solving certain backward stochastic differential equations with jumps. In the literature, it is known that models with MMV criterion and mean–variance criterion lead to the same optimal strategy and optimal value when the wealth process is continuous. Our result shows that the conclusion remains true even if the wealth process has compensated Poisson jumps and the market coefficients are random. ...
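
For context, the monotone mean–variance (MMV) criterion of Maccheroni et al. is the monotone envelope of the classical mean–variance functional. A standard formulation (taken from that literature, not from this paper) is:

```latex
% Classical mean--variance criterion with risk aversion \theta > 0:
U_\theta(X) = \mathbb{E}[X] - \frac{\theta}{2}\,\mathrm{Var}(X)
% Monotone mean--variance criterion (its monotone envelope):
V_\theta(X) = \inf_{Q \in \Delta^2(P)}
  \left\{ \mathbb{E}^{Q}[X] + \frac{1}{2\theta}\, C(Q \,\|\, P) \right\},
\qquad
C(Q \,\|\, P) = \mathbb{E}\!\left[\Big(\tfrac{\mathrm{d}Q}{\mathrm{d}P}\Big)^{2}\right] - 1,
```

where $\Delta^2(P)$ denotes the probability measures absolutely continuous with respect to $P$ with square-integrable density. The equivalence the abstract refers to is that the optimizers of $U_\theta$ and $V_\theta$ coincide for continuous wealth processes; the paper's contribution is that this persists under compensated Poisson jumps and random coefficients.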

May 28, 2024 · 2 min · Research Team

Exploring Sectoral Profitability in the Indian Stock Market Using Deep Learning

Exploring Sectoral Profitability in the Indian Stock Market Using Deep Learning ArXiv ID: 2407.01572 “View on arXiv” Authors: Unknown Abstract This paper explores using a deep learning Long Short-Term Memory (LSTM) model for accurate stock price prediction and its implications for portfolio design. Despite the efficient market hypothesis suggesting that predicting stock prices is impossible, recent research has shown the potential of advanced algorithms and predictive models. The study builds upon existing literature on stock price prediction methods, emphasizing the shift toward machine learning and deep learning approaches. Using historical stock prices of 180 stocks across 18 sectors listed on the NSE, India, the LSTM model predicts future prices. These predictions guide buy/sell decisions for each stock and underpin an analysis of sector profitability. The study’s main contributions are threefold: introducing an optimized LSTM model for robust portfolio design, utilizing LSTM predictions for buy/sell transactions, and providing insights into sector profitability and volatility. Results demonstrate the efficacy of the LSTM model in accurately predicting stock prices and informing investment decisions. By comparing sector profitability and prediction accuracy, the work provides valuable insights into the dynamics of the current financial markets in India. ...
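
For readers unfamiliar with the gating mechanism that lets an LSTM retain long price histories, here is a single-unit forward step in pure Python. The weights are illustrative placeholders, not parameters trained on the NSE data:

```python
import math

def lstm_step(x, h, c, W):
    """One forward step of a single-unit LSTM cell on a scalar input.
    W maps each of the gates 'i' (input), 'f' (forget), 'o' (output)
    and the candidate 'g' to (input weight, recurrent weight, bias).
    Weights here are illustrative, not trained values."""
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    i = sig(W['i'][0] * x + W['i'][1] * h + W['i'][2])        # input gate
    f = sig(W['f'][0] * x + W['f'][1] * h + W['f'][2])        # forget gate
    o = sig(W['o'][0] * x + W['o'][1] * h + W['o'][2])        # output gate
    g = math.tanh(W['g'][0] * x + W['g'][1] * h + W['g'][2])  # candidate
    c = f * c + i * g          # cell state: forget old, admit new
    h = o * math.tanh(c)       # hidden state fed to the next step
    return h, c
```

Iterating this step over a window of past prices and regressing the final hidden state onto the next price is the basic shape of the forecasting pipeline the paper builds on.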

May 28, 2024 · 2 min · Research Team

Optimizing Sharpe Ratio: Risk-Adjusted Decision-Making in Multi-Armed Bandits

Optimizing Sharpe Ratio: Risk-Adjusted Decision-Making in Multi-Armed Bandits ArXiv ID: 2406.06552 “View on arXiv” Authors: Unknown Abstract The Sharpe Ratio (SR) is a critical parameter in characterizing financial time series as it jointly considers the reward and the volatility (through the variance) of any stock/portfolio. Deriving online algorithms for optimizing the SR is particularly challenging since even offline policies experience constant regret with respect to the best expert (Even-Dar et al., 2006). Thus, instead of optimizing the usual definition of the SR, we optimize the regularized square SR (RSSR). We consider two settings for the RSSR: Regret Minimization (RM) and Best Arm Identification (BAI). In this regard, we propose a novel multi-armed bandit (MAB) algorithm for RM called UCB-RSSR for RSSR maximization. We derive a path-dependent concentration bound for the estimate of the RSSR. Based on that, we derive the regret guarantees of UCB-RSSR and show that the regret evolves as O(log n) for the two-armed bandit case played over a horizon n. We also consider a fixed-budget setting for well-known BAI algorithms, i.e., sequential halving and successive rejects, and propose the SHVV, SHSR, and SuRSR algorithms. We derive upper bounds on the error probability of all proposed BAI algorithms. We demonstrate that UCB-RSSR outperforms the only other known SR-optimizing bandit algorithm, U-UCB (Cassel et al., 2023). We also establish its efficacy with respect to other benchmarks derived from the GRA-UCB and MVTS algorithms. We further demonstrate the performance of the proposed BAI algorithms for multiple different setups. Our research highlights that the proposed algorithms will find extensive applications in risk-aware portfolio management problems. ...
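
The RM setting can be illustrated with a generic UCB loop over a regularized squared Sharpe ratio estimate mu^2 / (sigma^2 + lam). Note the exploration bonus below is the textbook sqrt(log t / n) term, not the paper's path-dependent concentration bound, so treat this as a sketch of the idea rather than UCB-RSSR itself:

```python
import math
import random
import statistics

def rssr(xs, lam):
    """Regularized squared Sharpe ratio of a sample (illustrative form)."""
    mu = statistics.fmean(xs)
    return mu * mu / (statistics.pvariance(xs) + lam)

def ucb_rssr_sketch(arms, horizon, lam=1.0, seed=0):
    """Generic UCB loop over the RSSR statistic. arms: list of
    functions taking an RNG and returning one reward sample."""
    rng = random.Random(seed)
    hist = [[arm(rng), arm(rng)] for arm in arms]   # two warm-up pulls each
    for t in range(2 * len(arms), horizon):
        idx = max(range(len(arms)),
                  key=lambda a: rssr(hist[a], lam)
                      + math.sqrt(math.log(t + 1) / len(hist[a])))
        hist[idx].append(arms[idx](rng))
    # Recommend the arm with the best empirical RSSR.
    return max(range(len(arms)), key=lambda a: rssr(hist[a], lam))
```

The regularizer lam keeps the statistic well defined when the empirical variance is near zero, which is what makes an online treatment tractable where the raw SR is not.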

May 28, 2024 · 2 min · Research Team

Efficient mid-term forecasting of hourly electricity load using generalized additive models

Efficient mid-term forecasting of hourly electricity load using generalized additive models ArXiv ID: 2405.17070 “View on arXiv” Authors: Unknown Abstract Accurate mid-term (weeks to one year) hourly electricity load forecasts are essential for strategic decision-making in power plant operation, ensuring supply security and grid stability, planning and building energy storage systems, and energy trading. While numerous models effectively predict short-term (hours to a few days) hourly load, mid-term forecasting solutions remain scarce. In mid-term load forecasting, capturing the multifaceted characteristics of load, including daily, weekly and annual seasonal patterns, as well as autoregressive effects, weather and holiday impacts, and socio-economic non-stationarities, presents significant modeling challenges. To address these challenges, we propose a novel forecasting method using Generalized Additive Models (GAMs) built from interpretable P-splines that is enhanced with autoregressive post-processing. This model incorporates smoothed temperatures, Error-Trend-Seasonal (ETS) modeled and persistently forecasted non-stationary socio-economic states, a nuanced representation of effects from vacation periods, fixed date and weekday holidays, and seasonal information as inputs. The proposed model is evaluated using load data from 24 European countries over more than 9 years (2015-2024). This analysis demonstrates that the model not only has significantly enhanced forecasting accuracy compared to state-of-the-art methods but also offers valuable insights into the influence of individual components on predicted load, given its full interpretability. Achieving performance akin to day-ahead Transmission System Operator (TSO) forecasts, with computation times of just a few seconds for several years of hourly data, underscores the potential of the model for practical application in the power system industry. ...
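
Schematically, the model class is an additive decomposition with P-spline smoothers in the Eilers–Marx style; the structure below mirrors the abstract, but the notation is illustrative rather than the paper's:

```latex
% Additive model over daily/weekly/annual seasonality, smoothed
% temperature, holiday effects, and socio-economic state inputs:
y_t = \sum_{j} f_j(x_{j,t}) + \varepsilon_t,
\qquad f_j(x) = \sum_{k} \beta_{jk}\, B_k(x)
% P-spline fit: B-spline basis + second-order difference penalty,
% with \lambda_j controlling the smoothness of component j:
\min_{\beta}\ \sum_t \Big( y_t - \sum_j f_j(x_{j,t}) \Big)^{2}
  + \sum_j \lambda_j \sum_k \big( \Delta^{2} \beta_{jk} \big)^{2}
```

The autoregressive post-processing mentioned in the abstract is then presumably applied to the residuals $\varepsilon_t$, and the penalized least-squares structure is what keeps fitting several years of hourly data down to seconds.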

May 27, 2024 · 3 min · Research Team

DeTEcT: Dynamic and Probabilistic Parameters Extension

DeTEcT: Dynamic and Probabilistic Parameters Extension ArXiv ID: 2405.16688 “View on arXiv” Authors: Unknown Abstract This paper presents a theoretical extension of the DeTEcT framework proposed by Sadykhov et al., where a formal analysis framework was introduced for modelling wealth distribution in token economies. DeTEcT is a framework for analysing economic activity, simulating macroeconomic scenarios, and algorithmically setting policies in token economies. This paper proposes four ways of parametrizing the framework, considering dynamic vs static parametrization along with probabilistic vs non-probabilistic parametrization. Using these parametrization techniques, we demonstrate that by adding restrictions to the framework it is possible to derive the existing wealth distribution models from DeTEcT. In addition to exploring parametrization techniques, this paper studies how the money supply in the DeTEcT framework can be transformed to become dynamic, and how this change will affect the dynamics of wealth distribution. The motivation for studying dynamic money supply is that it enables DeTEcT to be applied to modelling token economies without a maximum supply (e.g., Ethereum), and it adds constraints to the framework in the form of symmetries. ...

May 26, 2024 · 2 min · Research Team

Gaussian Recombining Split Tree

Gaussian Recombining Split Tree ArXiv ID: 2405.16333 “View on arXiv” Authors: Unknown Abstract Binomial trees are widely used in the financial sector for valuing securities with early exercise characteristics, such as American stock options. However, while effective in many scenarios, pricing options with CRR binomial trees is limited. Major limitations include volatility estimation, the constant-volatility assumption, subjectivity in parameter choices, and the impracticality of instantaneous delta hedging. This paper presents a novel tree: the Gaussian Recombining Split Tree (GRST), which is recombining and does not require a log-normal or normal market assumption. GRST generates a discrete probability mass function of the market data distribution, which approximates a Gaussian distribution with known parameters at any chosen time interval. GRST Mixture builds upon the GRST concept and is flexible enough to fit a large class of market distributions: given 1-D time series data and the moments of the distribution at each time interval, it fits a Gaussian mixture with the same mixture component probabilities applied at each time interval. A Gaussian Recombining Split Tree Mixture comprises several GRSTs tied together using Gaussian mixture component probabilities at the first node. Our extensive empirical analysis shows that the option prices from the GRST align closely with the market. ...
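
For reference, the CRR baseline the abstract contrasts with is a short recombining-tree computation; its single volatility parameter applied to the whole tree is exactly the constant-volatility assumption GRST relaxes. A minimal sketch for an American put:

```python
import math

def crr_american_put(S0, K, r, sigma, T, n):
    """Standard Cox-Ross-Rubinstein recombining binomial tree for an
    American put: n steps, constant volatility sigma throughout."""
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))    # up factor
    d = 1 / u                              # down factor (recombining)
    p = (math.exp(r * dt) - d) / (u - d)   # risk-neutral up-probability
    disc = math.exp(-r * dt)
    # Payoffs at maturity, node j = number of up moves.
    vals = [max(K - S0 * u**j * d**(n - j), 0.0) for j in range(n + 1)]
    # Backward induction with an early-exercise check at each node.
    for i in range(n - 1, -1, -1):
        vals = [max(disc * (p * vals[j + 1] + (1 - p) * vals[j]),
                    K - S0 * u**j * d**(i - j))
                for j in range(i + 1)]
    return vals[0]
```

Every node reuses the same (u, d, p), so a volatility misestimate propagates through the entire tree; GRST instead matches a discrete distribution to known moments at each time interval.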

May 25, 2024 · 2 min · Research Team

Identifying Extreme Events in the Stock Market: A Topological Data Analysis

Identifying Extreme Events in the Stock Market: A Topological Data Analysis ArXiv ID: 2405.16052 “View on arXiv” Authors: Unknown Abstract This paper employs Topological Data Analysis (TDA) to detect extreme events (EEs) in the stock market at a continental level. Previous approaches, which analyzed stock indices separately, could not detect EEs for multiple time series in one go. TDA provides a robust framework for such analysis and identifies the EEs during the crashes for different indices. The TDA analysis shows that the $L^1$ and $L^2$ norms and the Wasserstein distance ($W_D$) of the world's leading indices rise abruptly during crashes, surpassing a threshold of $μ+4σ$, where $μ$ and $σ$ are the mean and the standard deviation of the norm or $W_D$, respectively. Our study identified the stock index crashes of the 2008 financial crisis and the COVID-19 pandemic across continents as EEs. Given that different sectors in an index behave differently, a sector-wise analysis was conducted for the Indian stock market during the COVID-19 pandemic. The sector-wise results show that, after the occurrence of an EE, the banking sector exhibited strong crashes surpassing $μ+2σ$ for an extended period, while no significant spikes were noted for the pharmaceutical sector. Hence, TDA also proves successful in identifying the duration of shocks after the occurrence of EEs, indicating that the banking sector continued to face stress and remained volatile even after the crash. This study demonstrates the applicability of TDA as a powerful analytical tool for studying EEs in various fields. ...
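
The detection rule itself is simple once the topological summaries have been computed (with persistent homology software such as Ripser or GUDHI): flag time points where the norm series exceeds μ + kσ, with k = 4 for EEs and k = 2 for the sector-level shocks. The sketch below assumes the norm series is already in hand:

```python
import statistics

def extreme_events(series, k=4.0):
    """Flag indices where a topological summary series (e.g. the L1
    norm or Wasserstein distance over sliding windows) exceeds
    mu + k*sigma, mirroring the paper's mu + 4*sigma rule."""
    mu = statistics.fmean(series)
    sd = statistics.pstdev(series)
    return [i for i, v in enumerate(series) if v > mu + k * sd]
```

Running the same rule with k = 2 on a sector's series gives the duration of post-crash stress the paper reports for the banking sector.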

May 25, 2024 · 3 min · Research Team