
On Sparse Grid Interpolation for American Option Pricing with Multiple Underlying Assets

On Sparse Grid Interpolation for American Option Pricing with Multiple Underlying Assets ArXiv ID: 2309.08287 View on arXiv Authors: Unknown Abstract In this work, we develop a novel efficient quadrature and sparse-grid-based polynomial interpolation method to price American options with multiple underlying assets. The approach is based on first formulating the pricing of American options using dynamic programming, and then employing static sparse grids to interpolate the continuation value function at each time step. To achieve high efficiency, we first transform the domain from $\mathbb{R}^d$ to $(-1,1)^d$ via a scaled tanh map, and then remove the boundary singularity of the resulting multivariate function over $(-1,1)^d$ with a bubble function, which simultaneously reduces the number of interpolation points significantly. We rigorously establish that with a proper choice of the bubble function, the resulting function has bounded mixed derivatives up to a certain order, which provides theoretical underpinnings for the use of sparse grids. Numerical experiments for American arithmetic and geometric basket put options with the number of underlying assets up to 16 are presented to validate the effectiveness of the approach. ...
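A rough sketch of the domain transform and bubble idea from the abstract; the scaling constant `L` and the bubble exponent `s` below are illustrative assumptions, not the paper's specific choices:

```python
import numpy as np

def to_cube(x, L=1.0):
    """Map a point in R^d to (-1,1)^d via the scaled tanh map y = tanh(x / L)."""
    return np.tanh(np.asarray(x, dtype=float) / L)

def from_cube(y, L=1.0):
    """Inverse map (-1,1)^d -> R^d: x = L * artanh(y)."""
    return L * np.arctanh(np.asarray(y, dtype=float))

def bubble(y, s=2):
    """Polynomial bubble function prod_i (1 - y_i^2)^s, vanishing on the boundary
    of (-1,1)^d; multiplying by it removes the boundary singularity."""
    y = np.asarray(y, dtype=float)
    return np.prod((1.0 - y**2) ** s, axis=-1)

# Round-trip check on a point in R^3
x = np.array([0.5, -2.0, 3.0])
y = to_cube(x, L=2.0)
assert np.allclose(from_cube(y, L=2.0), x)
```

The continuation value would then be interpolated on a sparse grid over $(-1,1)^d$ after this change of variables and bubble multiplication.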

September 15, 2023 · 2 min · Research Team

Analysis of frequent trading effects of various machine learning models

Analysis of frequent trading effects of various machine learning models ArXiv ID: 2311.10719 View on arXiv Authors: Unknown Abstract In recent years, high-frequency trading has emerged as a crucial strategy in stock trading. This study aims to develop an advanced high-frequency trading algorithm and compare the performance of three different mathematical models: the combination of the cross-entropy loss function and the quasi-Newton algorithm, the FCNN model, and the support vector machine. The proposed algorithm employs neural network predictions to generate trading signals and execute buy and sell operations based on specific conditions. By harnessing the power of neural networks, the algorithm enhances the accuracy and reliability of the trading strategy. To assess the effectiveness of the algorithm, the study evaluates the performance of the three mathematical models. The combination of the cross-entropy loss function and the quasi-Newton algorithm is a widely utilized logistic regression approach. The FCNN model, on the other hand, is a deep learning algorithm that can extract and classify features from stock data. Meanwhile, the support vector machine is a supervised learning algorithm recognized for achieving improved classification results by mapping data into high-dimensional spaces. By comparing the performance of these three models, the study aims to determine the most effective approach for high-frequency trading. This research makes a valuable contribution by introducing a novel methodology for high-frequency trading, thereby providing investors with a more accurate and reliable stock trading strategy. ...
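The first model in the abstract, cross-entropy loss minimized with a (quasi-)Newton method, is ordinary logistic regression. A minimal sketch on synthetic data, using full Newton/IRLS steps with a small ridge term as a stand-in for the paper's unspecified quasi-Newton variant:

```python
import numpy as np

def fit_logistic_newton(X, y, iters=30, ridge=1e-3):
    """Minimize ridge-regularized binary cross-entropy by Newton's method (IRLS).
    A quasi-Newton method such as BFGS would replace the exact Hessian H
    with a low-cost approximation built from gradient differences."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))          # sigmoid predictions
        grad = X.T @ (p - y) / n + ridge * w       # gradient of the loss
        H = (X.T * (p * (1 - p))) @ X / n + ridge * np.eye(d)
        w -= np.linalg.solve(H, grad)              # Newton step
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3))
labels = (X @ np.array([1.5, -2.0, 0.5]) + 0.5 * rng.normal(size=400) > 0).astype(float)
w_hat = fit_logistic_newton(X, labels)
accuracy = np.mean(((X @ w_hat) > 0) == (labels > 0.5))
```

In a trading setting, the fitted probabilities would be thresholded into buy/sell signals rather than class labels.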

September 14, 2023 · 2 min · Research Team

Applying Deep Learning to Calibrate Stochastic Volatility Models

Applying Deep Learning to Calibrate Stochastic Volatility Models ArXiv ID: 2309.07843 View on arXiv Authors: Unknown Abstract Stochastic volatility models, where the volatility is a stochastic process, can capture most of the essential stylized facts of implied volatility surfaces and give more realistic dynamics of the volatility smile/skew. However, they come with the significant issue that they take too long to calibrate. Alternative calibration methods based on Deep Learning (DL) techniques have been recently used to build fast and accurate solutions to the calibration problem. Huge and Savine developed a Differential Machine Learning (DML) approach, where Machine Learning models are trained on samples of not only features and labels but also differentials of labels to features. The present work aims to apply the DML technique to price vanilla European options (i.e. the calibration instruments), more specifically, puts when the underlying asset follows a Heston model and then calibrate the model on the trained network. DML allows for fast training and accurate pricing. The trained neural network dramatically reduces Heston calibration's computation time. In this work, we also introduce different regularisation techniques, and we apply them notably in the case of the DML. We compare their performance in reducing overfitting and improving the generalisation error. The DML performance is also compared to the classical DL (without differentiation) one in the case of Feed-Forward Neural Networks. We show that the DML outperforms the DL. The complete code for our experiments is provided in the GitHub repository: https://github.com/asridi/DML-Calibration-Heston-Model ...
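The core DML idea, fitting on both labels and label-to-feature differentials, can be sketched without neural networks by augmenting a least-squares regression with derivative observations. The cubic feature basis, the toy targets, and the weight `lam` below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def dml_fit(x, y, dydx, lam=1.0):
    """Least-squares fit of a cubic model w . phi(x) using both values y and
    differentials dy/dx, mimicking DML's augmented training set.
    lam weights the derivative residuals against the value residuals."""
    Phi  = np.stack([np.ones_like(x), x, x**2, x**3], axis=1)                 # value features
    dPhi = np.stack([np.zeros_like(x), np.ones_like(x), 2 * x, 3 * x**2], axis=1)  # their derivatives
    A = np.vstack([Phi, np.sqrt(lam) * dPhi])
    b = np.concatenate([y, np.sqrt(lam) * dydx])
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 200)
y = x**2 + 0.05 * rng.normal(size=200)        # noisy "prices"
dydx = 2 * x + 0.05 * rng.normal(size=200)    # noisy pathwise "deltas"
w = dml_fit(x, y, dydx)
```

In the actual DML approach the differentials are pathwise sensitivities obtained by automatic differentiation of the Monte Carlo payoff, and the model is a neural network trained on the same combined loss.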

September 14, 2023 · 3 min · Research Team

Market-GAN: Adding Control to Financial Market Data Generation with Semantic Context

Market-GAN: Adding Control to Financial Market Data Generation with Semantic Context ArXiv ID: 2309.07708 View on arXiv Authors: Unknown Abstract Financial simulators play an important role in enhancing forecasting accuracy, managing risks, and fostering strategic financial decision-making. Despite the development of financial market simulation methodologies, existing frameworks often struggle with adapting to specialized simulation context. We pinpoint the challenges as i) current financial datasets do not contain context labels; ii) current techniques are not designed to generate financial data with context as control, which demands greater precision compared to other modalities; iii) the inherent difficulties in generating context-aligned, high-fidelity data given the non-stationary, noisy nature of financial data. To address these challenges, our contributions are: i) we propose the Contextual Market Dataset with market dynamics, stock ticker, and history state as context, leveraging a market dynamics modeling method that combines linear regression and Dynamic Time Warping clustering to extract market dynamics; ii) we present Market-GAN, a novel architecture incorporating a Generative Adversarial Network (GAN) for controllable generation with context, an autoencoder for learning low-dimension features, and supervisors for knowledge transfer; iii) we introduce a two-stage training scheme to ensure that Market-GAN captures the intrinsic market distribution with multiple objectives. In the pretraining stage, with the use of the autoencoder and supervisors, we prepare the generator with a better initialization for the adversarial training stage. We propose a set of holistic evaluation metrics that consider alignment, fidelity, data usability on downstream tasks, and market facts.
We evaluate Market-GAN with the Dow Jones Industrial Average data from 2000 to 2023 and showcase superior performance in comparison to 4 state-of-the-art time-series generative models. ...

September 14, 2023 · 3 min · Research Team

Profit and loss attribution: An empirical study

Profit and loss attribution: An empirical study ArXiv ID: 2309.07667 View on arXiv Authors: Unknown Abstract The profit and loss (p&l) attribution for each business year into different risks or risk factors (e.g., interest rates, credit spreads, foreign exchange rates, etc.) is a regulatory requirement, e.g., under Solvency 2. Three different decomposition principles are prevalent: one-at-a-time (OAT), sequential updating (SU) and average sequential updating (ASU) decompositions. In this research, using financial market data from 2003 to 2022, we demonstrate that the OAT decomposition can generate significant unexplained p&l and that the SU decomposition depends significantly on the order or labeling of the risk factors. On the basis of an investment in a foreign stock, we further explain that the SU decomposition is not able to identify all relevant risk factors. This potentially affects the hedging strategy of the portfolio manager. In conclusion, we suggest using the ASU decomposition in practice. ...
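The three decomposition principles can be illustrated on the abstract's own foreign-stock example: a holding worth fx × stock in home currency, with both the FX rate and the stock price moving over the period. The numbers are made up for illustration:

```python
import numpy as np
from itertools import permutations

def value(state):
    """Toy portfolio: a foreign stock, worth fx * stock in home currency."""
    return state["fx"] * state["stock"]

base = {"fx": 1.10, "stock": 100.0}
new  = {"fx": 1.20, "stock": 110.0}
factors = ["fx", "stock"]
total_pnl = value(new) - value(base)

# One-at-a-time (OAT): move each factor alone; the cross term is left unexplained
oat = {f: value({**base, f: new[f]}) - value(base) for f in factors}
unexplained = total_pnl - sum(oat.values())

# Sequential updating (SU): update factors one after another; order matters
def su(order):
    state, out = dict(base), {}
    for f in order:
        before = value(state)
        state[f] = new[f]
        out[f] = value(state) - before
    return out

# Average sequential updating (ASU): average SU over all factor orderings
asu = {f: np.mean([su(order)[f] for order in permutations(factors)])
       for f in factors}
```

Here OAT leaves the fx-stock cross term unexplained, SU gives different attributions for the two orderings, and ASU sums exactly to the total p&l while being order-independent, which is the behaviour the study finds in market data.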

September 14, 2023 · 2 min · Research Team

From Deep Filtering to Deep Econometrics

From Deep Filtering to Deep Econometrics ArXiv ID: 2311.06256 View on arXiv Authors: Unknown Abstract Calculating true volatility is an essential task for option pricing and risk management. However, it is made difficult by market microstructure noise. Particle filtering has been proposed to solve this problem as it has favorable statistical properties, but it relies on assumptions about the underlying market dynamics. Machine learning methods have also been proposed but lack interpretability, and often lag in performance. In this paper we implement the SV-PF-RNN: a hybrid neural network and particle filter architecture. Our SV-PF-RNN is designed specifically with stochastic volatility estimation in mind. We then show that it can improve on the performance of a basic particle filter. ...
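The "basic particle filter" baseline can be sketched as a bootstrap filter for a standard log-volatility model; the parameter values below are illustrative, and the paper's SV-PF-RNN would replace parts of this hand-specified dynamic with learned components:

```python
import numpy as np

def sv_particle_filter(y, n_particles=2000, mu=-1.0, phi=0.95, sigma_eta=0.2, seed=0):
    """Bootstrap particle filter for the stochastic-volatility model
    h_t = mu + phi*(h_{t-1} - mu) + sigma_eta*eps_t,  y_t = exp(h_t/2)*xi_t.
    Returns the filtered mean of the log-variance h_t at each step."""
    rng = np.random.default_rng(seed)
    h = mu + sigma_eta / np.sqrt(1 - phi**2) * rng.normal(size=n_particles)  # stationary init
    est = []
    for obs in y:
        # Propagate particles through the state equation
        h = mu + phi * (h - mu) + sigma_eta * rng.normal(size=n_particles)
        # Weight by the observation likelihood N(0, exp(h))
        logw = -0.5 * (h + obs**2 * np.exp(-h))
        w = np.exp(logw - logw.max())
        w /= w.sum()
        est.append(np.sum(w * h))
        # Multinomial resampling
        h = h[rng.choice(n_particles, size=n_particles, p=w)]
    return np.array(est)

# Simulate a path and filter it
rng = np.random.default_rng(42)
T, mu, phi, sig = 300, -1.0, 0.95, 0.2
h_true = np.empty(T); h_true[0] = mu
for t in range(1, T):
    h_true[t] = mu + phi * (h_true[t - 1] - mu) + sig * rng.normal()
y = np.exp(h_true / 2) * rng.normal(size=T)
h_filt = sv_particle_filter(y)
rmse = np.sqrt(np.mean((h_filt - h_true)**2))
```

The filter's assumptions about the state dynamics are exactly what the hybrid architecture relaxes by letting a recurrent network adjust the proposal or weights.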

September 13, 2023 · 2 min · Research Team

Harnessing Deep Q-Learning for Enhanced Statistical Arbitrage in High-Frequency Trading: A Comprehensive Exploration

Harnessing Deep Q-Learning for Enhanced Statistical Arbitrage in High-Frequency Trading: A Comprehensive Exploration ArXiv ID: 2311.10718 View on arXiv Authors: Unknown Abstract The realm of High-Frequency Trading (HFT) is characterized by rapid decision-making processes that capitalize on fleeting market inefficiencies. As the financial markets become increasingly competitive, there is a pressing need for innovative strategies that can adapt and evolve with changing market dynamics. Enter Reinforcement Learning (RL), a branch of machine learning where agents learn by interacting with their environment, making it an intriguing candidate for HFT applications. This paper dives deep into the integration of RL in statistical arbitrage strategies tailored for HFT scenarios. By leveraging the adaptive learning capabilities of RL, we explore its potential to unearth patterns and devise trading strategies that traditional methods might overlook. We delve into the intricate exploration-exploitation trade-offs inherent in RL and how they manifest in the volatile world of HFT. Furthermore, we confront the challenges of applying RL in non-stationary environments, typical of financial markets, and investigate methodologies to mitigate associated risks. Through extensive simulations and backtests, our research reveals that RL not only enhances the adaptability of trading strategies but also shows promise in improving profitability metrics and risk-adjusted returns. This paper, therefore, positions RL as a pivotal tool for the next generation of HFT-based statistical arbitrage, offering insights for both researchers and practitioners in the field. ...

September 13, 2023 · 2 min · Research Team

Weak Markovian Approximations of Rough Heston

Weak Markovian Approximations of Rough Heston ArXiv ID: 2309.07023 View on arXiv Authors: Unknown Abstract The rough Heston model is a very popular recent model in mathematical finance; however, the lack of Markov and semimartingale properties poses significant challenges in both theory and practice. A way to resolve this problem is to use Markovian approximations of the model. Several previous works have shown that these approximations can be very accurate even when the number of additional factors is very low. Existing error analysis is largely based on the strong error, corresponding to the $L^2$ distance between the kernels. Extending earlier results by [Abi Jaber and El Euch, SIAM Journal on Financial Mathematics 10(2):309–349, 2019], we show that the weak error of the Markovian approximations can be bounded using the $L^1$-error in the kernel approximation for general classes of payoff functions for European style options. Moreover, we give specific Markovian approximations which converge super-polynomially in the number of dimensions, and illustrate their numerical superiority in option pricing compared to previously existing approximations. The new approximations also work for the hyper-rough case $H > -1/2$. ...
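Markovian approximations of rough Heston replace the fractional kernel with a sum of exponentials, each exponential corresponding to one extra Markovian factor. A generic sketch of this idea, using a plain geometric midpoint discretization of the kernel's Laplace representation (not the paper's super-polynomially convergent construction) and measuring the $L^1$ kernel error the abstract refers to:

```python
import numpy as np
from math import gamma

H = 0.1  # Hurst parameter (illustrative)

def K(t):
    """Fractional kernel K(t) = t^(H - 1/2) / Gamma(H + 1/2)."""
    return t**(H - 0.5) / gamma(H + 0.5)

def markovian_kernel(n=100, x_min=1e-5, x_max=1e5):
    """Sum-of-exponentials approximation K(t) ~ sum_j c_j exp(-x_j t), obtained by
    discretizing K(t) = int_0^inf e^{-x t} mu(x) dx with a geometric midpoint rule,
    where mu(x) = x^(-H - 1/2) / (Gamma(H + 1/2) Gamma(1/2 - H))."""
    edges = np.geomspace(x_min, x_max, n + 1)
    x = np.sqrt(edges[:-1] * edges[1:])   # geometric midpoints (mean reversions)
    mu = x**(-H - 0.5) / (gamma(H + 0.5) * gamma(0.5 - H))
    c = mu * np.diff(edges)               # positive weights
    return x, c

x, c = markovian_kernel()
t = np.linspace(0.01, 1.0, 1000)
K_approx = (c[None, :] * np.exp(-np.outer(t, x))).sum(axis=1)
l1_error = np.sum(np.abs(K_approx - K(t))) * (t[1] - t[0])
```

Each pair $(x_j, c_j)$ defines one mean-reverting factor of the approximating Markovian model; the paper's point is that the weak pricing error is controlled by exactly this $L^1$ kernel error.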

September 13, 2023 · 2 min · Research Team

A monotone numerical integration method for mean-variance portfolio optimization under jump-diffusion models

A monotone numerical integration method for mean-variance portfolio optimization under jump-diffusion models ArXiv ID: 2309.05977 View on arXiv Authors: Unknown Abstract We develop an efficient, easy-to-implement, and strictly monotone numerical integration method for Mean-Variance (MV) portfolio optimization in realistic contexts, which involve jump-diffusion dynamics of the underlying controlled processes, discrete rebalancing, and the application of investment constraints, namely no-bankruptcy and leverage. A crucial element of the MV portfolio optimization formulation over each rebalancing interval is a convolution integral, which involves a conditional density of the logarithm of the amount invested in the risky asset. Using a known closed-form expression for the Fourier transform of this density, we derive an infinite series representation for the conditional density where each term is strictly positive and explicitly computable. As a result, the convolution integral can be readily approximated through a monotone integration scheme, such as a composite quadrature rule typically available in most programming languages. The computational complexity of our method is an order of magnitude lower than that of existing monotone finite difference methods. To further enhance efficiency, we propose an implementation of the scheme via Fast Fourier Transforms, exploiting the Toeplitz matrix structure. The proposed monotone scheme is proven to be both $\ell_{\infty}$-stable and pointwise consistent, and we rigorously establish its pointwise convergence to the unique solution of the MV portfolio optimization problem. We also intuitively demonstrate that, as the rebalancing time interval approaches zero, the proposed scheme converges to a continuously observed impulse control formulation for MV optimization expressed as a Hamilton-Jacobi-Bellman Quasi-Variational Inequality.
Numerical results show remarkable agreement with benchmark solutions obtained through finite differences and Monte Carlo simulation, underscoring the effectiveness of our approach. ...
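Why positivity of the density terms yields a monotone scheme can be seen in a toy backward step. The Gaussian transition density below stands in for the paper's positive series density, and the grid sizes are arbitrary; note the quadrature matrix is Toeplitz (it depends only on $y_j - x_i$), which is what enables the FFT implementation:

```python
import numpy as np

def monotone_convolution_step(v_next, x_grid, density):
    """One backward step v(x_i) ~ h * sum_j g(y_j - x_i) * v_next(y_j).
    All quadrature weights and density values are nonnegative, so the step
    is monotone: v_next <= w_next pointwise implies v <= w pointwise.
    G is Toeplitz, so G @ v_next could also be done with an FFT."""
    h = x_grid[1] - x_grid[0]
    G = density(x_grid[None, :] - x_grid[:, None])  # g(y_j - x_i) >= 0
    return h * G @ v_next

# Gaussian density as a stand-in for the positive series density
density = lambda z: np.exp(-z**2 / (2 * 0.04)) / np.sqrt(2 * np.pi * 0.04)
x = np.linspace(-3, 3, 601)
v1 = np.maximum(1.0 - np.exp(x), 0.0)  # put-like value in log coordinates
v2 = v1 + 0.1                           # dominates v1 pointwise

out1 = monotone_convolution_step(v1, x, density)
out2 = monotone_convolution_step(v2, x, density)
monotone = np.all(out1 <= out2 + 1e-12)
```

Monotonicity of this kind is the key structural property behind the convergence guarantees for the scheme.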

September 12, 2023 · 3 min · Research Team

Arguably Adequate Aqueduct Algorithm: Crossing A Bridge-Less Block-Chain Chasm

Arguably Adequate Aqueduct Algorithm: Crossing A Bridge-Less Block-Chain Chasm ArXiv ID: 2311.10717 View on arXiv Authors: Unknown Abstract We consider the problem of being a cross-chain wealth management platform with deposits, redemptions and investment assets across multiple networks. We discuss the need for blockchain bridges to facilitate fund flows across platforms. We point out several issues with existing bridges. We develop an algorithm - tailored to overcome current constraints - that dynamically changes the utilization of bridge capacities and hence the amounts to be transferred across networks. We illustrate several scenarios using numerical simulations. ...

September 12, 2023 · 1 min · Research Team