
Mathematical Modeling of Option Pricing with an Extended Black-Scholes Framework

ArXiv ID: 2504.03175 · View on arXiv · Authors: Unknown
Abstract: This study investigates enhancing option pricing by extending the Black-Scholes model to include stochastic volatility and interest rate variability within the Partial Differential Equation (PDE). The PDE is solved using the finite difference method. The extended Black-Scholes model and a machine learning-based LSTM model are developed and evaluated for pricing Google stock options. Both models were backtested using historical market data. While the LSTM model exhibited higher predictive accuracy, the finite difference method demonstrated superior computational efficiency. This work provides insights into model performance under varying market conditions and emphasizes the potential of hybrid approaches for robust financial modeling. ...
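The paper's extended PDE adds stochastic volatility and rates; as a rough illustration of the finite-difference machinery it builds on, here is a minimal explicit scheme for the constant-coefficient Black-Scholes PDE (function name and grid parameters are our own choices, not from the paper):

```python
import numpy as np

def bs_call_fd(S0, K, T, r=0.05, sigma=0.2, M=200, N=2000):
    """Explicit finite differences for the constant-coefficient
    Black-Scholes PDE -- the baseline the extended model builds on."""
    S_max = 3.0 * K
    dS, dt = S_max / M, T / N
    S = np.linspace(0.0, S_max, M + 1)
    V = np.maximum(S - K, 0.0)                 # call payoff at maturity
    i = np.arange(1, M)
    for n in range(1, N + 1):                  # march backwards in time
        delta = (V[2:] - V[:-2]) / (2 * dS)    # central first derivative
        gamma = (V[2:] - 2 * V[1:-1] + V[:-2]) / dS**2
        V[1:-1] += dt * (0.5 * sigma**2 * S[i]**2 * gamma
                         + r * S[i] * delta - r * V[1:-1])
        V[0] = 0.0                             # boundary at S = 0
        V[-1] = S_max - K * np.exp(-r * n * dt)  # deep in-the-money boundary
    return float(np.interp(S0, S, V))
```

The explicit scheme is only conditionally stable, so the time step must be small relative to the squared spatial step; implicit schemes trade that restriction for a linear solve per step.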

April 4, 2025 · 2 min · Research Team

On the relative performance of some parametric and nonparametric estimators of option prices

ArXiv ID: 2412.00135 · View on arXiv · Authors: Unknown
Abstract: We examine the empirical performance of some parametric and nonparametric estimators of prices of options with a fixed time to maturity, focusing on variance-gamma and Heston models on one side, and on expansions in Hermite functions on the other. The latter class of estimators can be seen as perturbations of the classical Black-Scholes model. The comparison between parametric and Hermite-based models having the same “degrees of freedom” is emphasized. The main criterion is the out-of-sample relative pricing error on a dataset of historical option prices on the S&P 500 index. Prior to the main empirical study, the approximation of variance-gamma and Heston densities by series of Hermite functions is studied, providing explicit expressions for the coefficients of the expansion in the former case, and integral expressions involving the explicit characteristic function in the latter. Moreover, these approximations are investigated numerically on a few test cases, indicating that expansions in Hermite functions with few terms achieve competitive accuracy in the estimation of Heston densities and the pricing of (European) options, but perform less effectively with variance-gamma densities. On the other hand, the main large-scale empirical study shows that parsimonious Hermite estimators can even outperform the Heston model in terms of pricing errors. These results underscore the trade-offs inherent in model selection and calibration, and their empirical fit in practical applications. ...
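The Hermite-function expansions in this paper can be sketched on a toy target: expand a Gaussian density in orthonormal Hermite functions and check the truncation error (our own example with a simple density, not the paper's variance-gamma or Heston cases):

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.special import eval_hermite, factorial

def hermite_fn(k, x):
    """Orthonormal Hermite function h_k(x) = H_k(x) e^{-x^2/2} / norm."""
    norm = np.sqrt(2.0**k * factorial(k) * np.sqrt(np.pi))
    return eval_hermite(k, x) * np.exp(-x**2 / 2.0) / norm

x = np.linspace(-8.0, 8.0, 4001)
sigma = 0.8
f = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))  # toy density

# coefficients c_k = <f, h_k>, then the truncated reconstruction
coefs = [trapezoid(f * hermite_fn(k, x), x) for k in range(8)]
f_hat = sum(c * hermite_fn(k, x) for k, c in enumerate(coefs))
max_err = float(np.max(np.abs(f - f_hat)))
```

A Gaussian with variance near one is a best case for this basis; the abstract's observation that variance-gamma densities are harder reflects their slower-decaying coefficients.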

November 28, 2024 · 2 min · Research Team

Robust and Fast Bass local volatility

ArXiv ID: 2411.04321 · View on arXiv · Authors: Unknown
Abstract: The Bass Local Volatility Model (Bass-LV), as studied in [Conze and Henry-Labordère, 2021], stands out for its ability to eliminate the need for interpolation between maturities, a significant advantage over traditional LV models. However, its performance depends heavily on the accurate construction of state price densities and the corresponding marginal distributions, and on the efficient numerical convolutions needed to solve the associated fixed-point problems. In this paper, we propose a new approach combining local quadratic estimation and lognormal mixture tails for the construction of state price densities. We investigate the computational efficiency of trapezoidal-rule-based schemes for numerical convolutions and show that they outperform the commonly used Gauss-Hermite quadrature. We demonstrate the performance of the proposed method both in standard option pricing models and through a detailed market case study. ...
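The quadrature comparison can be illustrated on a toy Gaussian expectation with a kinked, payoff-like integrand, the kind of non-smooth function that appears in such convolution steps (our own example, not the paper's experiment):

```python
import numpy as np

# E[f(Z)] for Z ~ N(0,1) with a kinked, payoff-like integrand
f = lambda x: np.maximum(x - 0.5, 0.0)
exact = 0.197797   # phi(0.5) - 0.5*(1 - Phi(0.5)), in closed form

# Gauss-Hermite: E[f(Z)] = pi^{-1/2} * sum_i w_i f(sqrt(2) x_i)
xg, wg = np.polynomial.hermite.hermgauss(32)
gh = float(np.sum(wg * f(np.sqrt(2.0) * xg)) / np.sqrt(np.pi))

# trapezoidal rule on a truncated uniform grid
g = np.linspace(-8.0, 8.0, 2001)
y = f(g) * np.exp(-g**2 / 2) / np.sqrt(2 * np.pi)
tr = float(np.sum((y[1:] + y[:-1]) / 2 * np.diff(g)))
```

Gauss-Hermite quadrature is exact for polynomials but converges slowly through a kink, while the trapezoidal rule's error stays locally second order, which is consistent with the paper's finding.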

November 6, 2024 · 2 min · Research Team

Fast Deep Hedging with Second-Order Optimization

ArXiv ID: 2410.22568 · View on arXiv · Authors: Unknown
Abstract: Hedging exotic options in the presence of market frictions is an important risk management task. Deep hedging can solve such hedging problems by training neural network policies in realistic simulated markets. Training these neural networks may be delicate and suffer from slow convergence, particularly for options with long maturities and complex sensitivities to market parameters. To address this, we propose a second-order optimization scheme for deep hedging. We leverage pathwise differentiability to construct a curvature matrix, which we approximate as block-diagonal and Kronecker-factored to efficiently precondition gradients. We evaluate our method on a challenging and practically important problem: hedging a cliquet option on a stock with stochastic volatility by trading in the spot and vanilla options. We find that our second-order scheme can optimize the policy in a quarter of the number of steps that standard adaptive moment-based optimization takes. ...
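The paper's Kronecker-factored curvature approximation is considerably more elaborate, but the basic reason curvature preconditioning cuts step counts can be seen on an ill-conditioned quadratic (a toy of our own, not the paper's method):

```python
import numpy as np

# plain gradient descent vs. a curvature-preconditioned step on
# f(x) = 0.5 x'Hx with badly conditioned H
H = np.diag([100.0, 1.0])
grad = lambda x: H @ x

def steps_to_converge(P, lr, max_iters=5000):
    x = np.array([1.0, 1.0])
    for k in range(max_iters):
        x = x - lr * (P @ grad(x))         # preconditioned gradient step
        if np.linalg.norm(x) < 1e-8:
            return k + 1
    return max_iters

gd = steps_to_converge(np.eye(2), lr=1.0 / 100)   # lr capped by top curvature
newton = steps_to_converge(np.linalg.inv(H), lr=1.0)
```

Plain gradient descent must use a step size bounded by the largest curvature and then crawls along the flat direction; preconditioning by (an approximation of) the inverse curvature equalizes the directions.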

October 29, 2024 · 2 min · Research Team

Solving The Dynamic Volatility Fitting Problem: A Deep Reinforcement Learning Approach

ArXiv ID: 2410.11789 · View on arXiv · Authors: Unknown
Abstract: Volatility fitting is one of the core problems in the equity derivatives business. Through a set of deterministic rules, the degrees of freedom in the implied volatility surface encoding (parametrization, density, diffusion) are defined. While very effective, this approach, widespread in the industry, is not natively tailored to learn from shifts in market regimes and discover unsuspected optimal behaviors. In this paper, we change the classical paradigm and apply the latest advances in Deep Reinforcement Learning (DRL) to solve the fitting problem. In particular, we show that variants of Deep Deterministic Policy Gradient (DDPG) and Soft Actor-Critic (SAC) can perform at least as well as standard fitting algorithms. Furthermore, we explain why the reinforcement learning framework is appropriate for handling complex objective functions and is natively adapted for online learning. ...
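As a point of reference for the "standard fitting algorithms" the DRL agents are compared against, a common baseline is least-squares calibration of a parametric volatility slice; here is a sketch using the raw SVI parametrization on synthetic data (the parametrization choice and parameter values are our assumptions, not the paper's):

```python
import numpy as np
from scipy.optimize import least_squares

def svi(k, a, b, rho, m, s):
    """Raw SVI total implied variance as a function of log-moneyness k."""
    return a + b * (rho * (k - m) + np.sqrt((k - m)**2 + s**2))

true = (0.04, 0.1, -0.4, 0.0, 0.2)          # synthetic "market" slice
k = np.linspace(-0.5, 0.5, 21)
w_mkt = svi(k, *true)

# classical deterministic fit: nonlinear least squares on the residuals
res = least_squares(lambda p: svi(k, *p) - w_mkt,
                    x0=(0.05, 0.2, 0.0, 0.1, 0.3))
```

A reinforcement-learning fitter replaces this one-shot optimization with a policy that updates the parameters sequentially as quotes arrive, which is what makes it natively suited to online learning.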

October 15, 2024 · 2 min · Research Team

A deep primal-dual BSDE method for optimal stopping problems

ArXiv ID: 2409.06937 · View on arXiv · Authors: Unknown
Abstract: We present a new deep primal-dual backward stochastic differential equation framework based on stopping-time iteration to solve optimal stopping problems. A novel loss function is proposed to learn the conditional expectation, consisting of a subnetwork parameterization of the continuation value and spatial gradients from the present up to the stopping time. Notable features of the method include: (i) the martingale part in the loss function reduces the variance of stochastic gradients, which facilitates the training of the neural networks and alleviates the error propagation of the value-function approximation; (ii) this martingale approximates the martingale in the Doob-Meyer decomposition, and thus leads to a true upper bound for the optimal value in a non-nested Monte Carlo way. We test the proposed method on American option pricing problems, where the spatial gradient network yields the hedging ratio directly. ...
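The classical primal baseline for such American option problems is Longstaff-Schwartz regression Monte Carlo, which estimates the continuation value by polynomial regression instead of a network; a minimal sketch for an American put (our own baseline implementation, not the paper's method):

```python
import numpy as np

def american_put_lsm(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                     steps=50, paths=20_000, seed=0):
    """Longstaff-Schwartz regression Monte Carlo for an American put."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    disc = np.exp(-r * dt)
    z = rng.standard_normal((paths, steps))
    S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                              + sigma * np.sqrt(dt) * z, axis=1))
    payoff = np.maximum(K - S, 0.0)
    V = payoff[:, -1]                     # exercise value at maturity
    for t in range(steps - 2, -1, -1):    # backward induction
        V = V * disc
        itm = payoff[:, t] > 0            # regress only on in-the-money paths
        if itm.any():
            A = np.vander(S[itm, t] / K, 4)            # cubic basis
            beta, *_ = np.linalg.lstsq(A, V[itm], rcond=None)
            cont = A @ beta               # estimated continuation value
            V[itm] = np.where(payoff[itm, t] > cont, payoff[itm, t], V[itm])
    return float(disc * V.mean())
```

Regression-based estimates like this are low-biased; the paper's dual martingale construction is what supplies a genuine upper bound to bracket the true value.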

September 11, 2024 · 2 min · Research Team

Robust financial calibration: a Bayesian approach for neural SDEs

ArXiv ID: 2409.06551 · View on arXiv · Authors: Unknown
Abstract: The paper presents a Bayesian framework for the calibration of financial models using neural stochastic differential equations (neural SDEs), for which we also formulate a global universal approximation theorem based on Barron-type estimates. The method is based on the specification of a prior distribution on the neural network weights and an adequately chosen likelihood function. The resulting posterior distribution can be seen as a mixture of different classical neural SDE models, yielding robust bounds on the implied volatility surface. Both historical financial time-series data and option price data are taken into consideration, which necessitates a methodology to learn the change of measure between the risk-neutral and the historical measure. The key ingredient for a robust numerical optimization of the neural networks is a Langevin-type algorithm, commonly used in Bayesian approaches to draw posterior samples. ...
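The Langevin-type sampler at the core of the method can be sketched in one dimension: the unadjusted Langevin algorithm drifts along the gradient of the log-posterior and injects Gaussian noise, so its iterates approximately sample the posterior (a toy Gaussian target of our own, not the paper's neural SDE posterior):

```python
import numpy as np

# unadjusted Langevin algorithm:
# theta <- theta + (eps/2) * grad log p(theta) + sqrt(eps) * xi
mu, s = 1.0, 0.5                          # toy posterior N(mu, s^2)
grad_log_p = lambda t: -(t - mu) / s**2
rng = np.random.default_rng(0)
eps, theta = 0.01, 0.0
samples = []
for k in range(50_000):
    theta += 0.5 * eps * grad_log_p(theta) + np.sqrt(eps) * rng.standard_normal()
    if k >= 10_000:                       # discard burn-in
        samples.append(theta)
samples = np.asarray(samples)
```

For neural network weights the same update runs coordinate-wise over the whole parameter vector, with the log-likelihood gradient obtained by backpropagation.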

September 10, 2024 · 2 min · Research Team

MLP, XGBoost, KAN, TDNN, and LSTM-GRU Hybrid RNN with Attention for SPX and NDX European Call Option Pricing

ArXiv ID: 2409.06724 · View on arXiv · Authors: Unknown
Abstract: We explore the performance of various artificial neural network architectures, including a multilayer perceptron (MLP), Kolmogorov-Arnold network (KAN), LSTM-GRU hybrid recursive neural network (RNN) models, and a time-delay neural network (TDNN), for pricing European call options. In this study, we attempt to leverage the ability of supervised learning methods, such as ANNs, KANs, and gradient-boosted decision trees, to approximate complex multivariate functions in order to calibrate option prices based on past market data. The motivation for using ANNs and KANs is the Universal Approximation Theorem and the Kolmogorov-Arnold Representation Theorem, respectively. Specifically, we use S&P 500 (SPX) and NASDAQ 100 (NDX) index options traded during 2015-2023 with times to maturity ranging from 15 days to over 4 years (OptionMetrics IvyDB US dataset). The performance of the Black-Scholes (BS) PDE model \cite{Black1973} in pricing the same options against real data is used as a benchmark. This model relies on strong assumptions, and it has been observed and discussed in the literature that real data does not match its predictions. Supervised learning methods are widely used as an alternative for calibrating option prices due to some of the limitations of this model. In our experiments, the BS model underperforms all of the others. Also, the best TDNN model outperforms the best MLP model on all error metrics. We implement a simple self-attention mechanism to enhance the RNN models, significantly improving their performance. The best-performing model overall is the LSTM-GRU hybrid RNN model with attention. Also, the KAN model outperforms the TDNN and MLP models. We analyze the performance of all models by ticker, moneyness category, and over-/under-/correctly-priced percentage. ...
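The benchmark all the learned models are measured against is the closed-form Black-Scholes formula; for reference, a standard implementation of the European call price:

```python
import numpy as np
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """Closed-form Black-Scholes price of a European call."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)
```

The supervised models in the paper learn the mapping from the same inputs (spot, strike, maturity, rate, and volatility features) directly from quoted prices, so they are free of the constant-volatility and lognormality assumptions baked into this formula.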

August 26, 2024 · 3 min · Research Team

EX-DRL: Hedging Against Heavy Losses with EXtreme Distributional Reinforcement Learning

ArXiv ID: 2408.12446 · View on arXiv · Authors: Unknown
Abstract: Recent advancements in Distributional Reinforcement Learning (DRL) for modeling loss distributions have shown promise in developing hedging strategies in derivatives markets. A common approach in DRL involves learning the quantiles of loss distributions at specified levels using Quantile Regression (QR). This method is particularly effective in option hedging due to its direct quantile-based risk assessment, such as Value at Risk (VaR) and Conditional Value at Risk (CVaR). However, these risk measures depend on the accurate estimation of extreme quantiles in the loss distribution’s tail, which can be imprecise in QR-based DRL due to the rarity and extremity of tail data, as highlighted in the literature. To address this issue, we propose EXtreme DRL (EX-DRL), which enhances extreme quantile prediction by modeling the tail of the loss distribution with a Generalized Pareto Distribution (GPD). This method introduces supplementary data to mitigate the scarcity of extreme quantile observations, thereby improving estimation accuracy through QR. Comprehensive experiments on gamma hedging of options demonstrate that EX-DRL improves on existing QR-based models by providing more precise estimates of extreme quantiles, thereby improving the computation and reliability of risk metrics for complex financial risk management. ...
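The GPD tail-modeling step can be sketched with a classical peaks-over-threshold fit: fit a GPD to the exceedances over a high threshold and read off an extreme VaR from the fitted tail (synthetic heavy-tailed losses of our own, not the paper's hedging P&L):

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)
losses = rng.standard_t(df=4, size=50_000)       # synthetic heavy-tailed losses
u = np.quantile(losses, 0.95)                    # high tail threshold
exceed = losses[losses > u] - u
xi, _, beta = genpareto.fit(exceed, floc=0.0)    # GPD shape xi, scale beta

# peaks-over-threshold VaR at level q > 0.95
q = 0.99
p_u = np.mean(losses > u)                        # empirical exceedance prob.
var_q = u + (beta / xi) * (((1 - q) / p_u) ** (-xi) - 1)
```

Fitting a parametric tail lets the few observed extreme losses inform quantiles far beyond the data, which is exactly where plain quantile regression becomes unreliable.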

August 22, 2024 · 2 min · Research Team

Enhancing Black-Scholes Delta Hedging via Deep Learning

ArXiv ID: 2407.19367 · View on arXiv · Authors: Unknown
Abstract: This paper proposes a deep delta-hedging framework for options, utilizing neural networks to learn the residuals between the hedging function and the implied Black-Scholes delta. This approach leverages the smoother properties of these residuals, enhancing deep learning performance. Using ten years of daily S&P 500 index option data, our empirical analysis demonstrates that learning the residuals, with the mean squared one-step hedging error as the loss function, significantly improves hedging performance over directly learning the hedging function, often by more than 100%. Adding input features when learning the residuals enhances hedging performance more for puts than calls, with market sentiment being less crucial. Furthermore, learning the residuals with three years of data matches the hedging performance of directly learning with ten years of data, showing that our method requires less data. ...
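The residual setup and the one-step hedging error loss can be sketched on simulated data: the hedge ratio is the implied BS delta plus a residual (here fixed at zero; in the paper a network learns it), and the loss is the mean squared one-step hedging error (parameter values are our own illustrative choices):

```python
import numpy as np
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d1 - sigma * np.sqrt(T))

def bs_delta(S, K, T, r, sigma):
    """Implied Black-Scholes delta of a call."""
    return norm.cdf((np.log(S / K) + (r + 0.5 * sigma**2) * T)
                    / (sigma * np.sqrt(T)))

# one-step hedging error err = dC - h*dS for hedge ratio h = delta + residual
rng = np.random.default_rng(0)
S0, K, T, r, sigma, dt = 100.0, 100.0, 0.5, 0.0, 0.2, 1.0 / 252
S1 = S0 * np.exp((r - 0.5 * sigma**2) * dt
                 + sigma * np.sqrt(dt) * rng.standard_normal(10_000))
h = bs_delta(S0, K, T, r, sigma) + 0.0          # residual fixed at 0 here
err = (bs_call(S1, K, T - dt, r, sigma) - bs_call(S0, K, T, r, sigma)) - h * (S1 - S0)
mse = float(np.mean(err**2))
```

Training a network on the residual target rather than on the full hedge ratio works because the residual is a small, smooth correction around the analytically known delta.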

July 28, 2024 · 2 min · Research Team