Unsupervised learning-based calibration scheme for Rough Bergomi model
ArXiv ID: 2412.02135
Authors: Unknown
Abstract
Current deep learning-based calibration schemes for rough volatility models rely on the supervised learning framework, which can be costly because a large amount of training data must be generated. In this work, we propose a novel unsupervised learning-based scheme for the rough Bergomi (rBergomi) model which does not require access to training data. The main idea is to use the backward stochastic differential equation (BSDE) derived in [Bayer, Qiu and Yao, SIAM J. Financial Math., 2022] and to learn the BSDE solutions simultaneously with the model parameters. We establish that the mean squared error between the option prices under the learned model parameters and the historical data is bounded by the loss function. Moreover, the loss can be made arbitrarily small under suitable conditions on the fitting ability of the rBergomi model to the market and the universal approximation capability of neural networks. Numerical experiments on both simulated and historical data confirm the efficiency of the scheme.
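For reference, the rBergomi dynamics being calibrated can be written as follows (standard notation with a flat forward variance; the paper's normalization and parameterization may differ):

```latex
dS_t = S_t \sqrt{V_t}\, dB_t, \qquad
V_t = \xi_0 \exp\!\Big(\eta\, \widetilde{W}_t - \tfrac{\eta^2}{2}\, t^{2H}\Big),
```

where $\widetilde{W}_t = \sqrt{2H}\int_0^t (t-s)^{H-1/2}\, dW_s$ is a Riemann–Liouville fractional Brownian motion and $B = \rho W + \sqrt{1-\rho^2}\, W^\perp$. The calibrated parameters are the forward variance $\xi_0$, the vol-of-vol $\eta$, the Hurst index $H \in (0, 1/2)$, and the spot–vol correlation $\rho$.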
Keywords: Unsupervised Learning, Rough Bergomi Model, Backward Stochastic Differential Equation (BSDE), Volatility Calibration, Neural Networks, Options (Derivatives)
Complexity vs Empirical Score
- Math Complexity: 8.0/10
- Empirical Rigor: 4.0/10
- Quadrant: Lab Rats
- Why: The paper employs advanced stochastic calculus, backward stochastic differential equations (BSDEs), and rigorous convergence analysis, indicating high mathematical complexity. However, it focuses on a theoretical proof of concept with simulated and historical data validation, lacking the backtest-ready implementation details or datasets typical of high empirical rigor.
```mermaid
flowchart TD
A["Research Goal: Unsupervised calibration for Rough Bergomi model"] --> B["Input: Historical market option prices"]
B --> C["Key Methodology: Formulate BSDE from Bayer, Qiu, Yao 2022"]
C --> D["Compute: Loss function L = MSE of model vs. market prices"]
D --> E["Optimize: Simultaneously learn Neural Network parameters and model parameters"]
E --> F{"Check Convergence?"}
F -- No --> D
F -- Yes --> G["Output: Calibrated model parameters"]
G --> H["Key Findings: <br/>• Efficient scheme for synthetic/historical data<br/>• Loss bounded by fitting error<br/>• Loss approaches 0 under NN approximation"]
```
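The loop in the flowchart can be sketched as a minimal, hypothetical Python example: it fits $(\xi_0, \eta)$ by gradient descent on the MSE loss between model and "market" prices, using a Monte Carlo rBergomi pricer with frozen (common) Brownian increments so the loss is a smooth deterministic function. This sketch omits the paper's actual BSDE formulation and neural-network parameterization, and substitutes finite differences for backpropagation; all names and numerical values are illustrative, not the authors' implementation.

```python
import numpy as np

def rbergomi_call_prices(xi0, eta, H, rho, strikes, T, dW, dWp):
    """Monte Carlo call prices under rBergomi (S0 = 1, zero rates),
    with a left-point discretization of the Riemann-Liouville kernel."""
    n_paths, n_steps = dW.shape
    dt = T / n_steps
    K = np.zeros((n_steps, n_steps))
    for i in range(n_steps):  # K[i, j]: weight of dW_j in W~ at t_{i+1}
        K[i, : i + 1] = np.sqrt(2 * H) * ((i + 1 - np.arange(i + 1)) * dt) ** (H - 0.5)
    W_tilde = dW @ K.T                         # Volterra process at t_1..t_n
    t = np.arange(1, n_steps + 1) * dt
    V = xi0 * np.exp(eta * W_tilde - 0.5 * eta**2 * t ** (2 * H))
    V = np.hstack([np.full((n_paths, 1), xi0), V[:, :-1]])  # left-endpoint variance
    dB = rho * dW + np.sqrt(1 - rho**2) * dWp  # spot Brownian, correlated with W
    log_S = np.cumsum(np.sqrt(V) * dB - 0.5 * V * dt, axis=1)
    S_T = np.exp(log_S[:, -1])
    return np.array([np.mean(np.maximum(S_T - k, 0.0)) for k in strikes])

# Frozen random numbers: the same increments are reused for every parameter guess.
rng = np.random.default_rng(0)
n_paths, n_steps, T, H, rho = 4000, 32, 0.5, 0.1, -0.7
sq_dt = np.sqrt(T / n_steps)
dW = rng.standard_normal((n_paths, n_steps)) * sq_dt
dWp = rng.standard_normal((n_paths, n_steps)) * sq_dt
strikes = np.array([0.95, 1.0, 1.05])

# Synthetic "market" prices generated at known parameters (xi0, eta) = (0.04, 1.5).
market = rbergomi_call_prices(0.04, 1.5, H, rho, strikes, T, dW, dWp)

def loss(p):  # log-parameterization keeps xi0, eta > 0
    xi0, eta = np.exp(p)
    model = rbergomi_call_prices(xi0, eta, H, rho, strikes, T, dW, dWp)
    return np.mean((model - market) ** 2)

p = np.log([0.09, 0.8])                        # deliberately wrong initial guess
init_loss = loss(p)
eps, lr = 1e-5, 200.0
for _ in range(100):  # central finite differences stand in for autodiff
    grad = np.array([(loss(p + eps * e) - loss(p - eps * e)) / (2 * eps)
                     for e in np.eye(2)])
    p -= lr * grad
final_loss = loss(p)
xi0_hat, eta_hat = np.exp(p)
```

Because market and model prices share the same simulated increments, the loss vanishes at the true parameters, which is what makes plain gradient descent viable here; the paper's scheme instead bounds the pricing error by a BSDE-residual loss and learns the BSDE solution with a neural network alongside the model parameters.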