A Risk-Neutral Neural Operator for Arbitrage-Free SPX-VIX Term Structures

ArXiv ID: 2511.06451

Authors: Jian’an Zhang

Abstract

We propose ARBITER, a risk-neutral neural operator for learning joint SPX-VIX term structures under no-arbitrage constraints. ARBITER maps market states to an operator that outputs implied volatility and variance curves while enforcing static arbitrage (calendar, vertical, butterfly), Lipschitz bounds, and monotonicity. The model couples operator learning with constrained decoders and is trained with extragradient-style updates plus projection. We introduce evaluation metrics for derivatives term structures (NAS, CNAS, NI, Dual-Gap, Stability Rate) and show gains over Fourier Neural Operator, DeepONet, and state-space sequence models on historical SPX and VIX data. Ablation studies indicate that tying the SPX and VIX legs reduces Dual-Gap and improves NI, Lipschitz projection stabilizes calibration, and selective state updates improve long-horizon generalization. We provide identifiability and approximation results and describe practical recipes for arbitrage-free interpolation and extrapolation across maturities and strikes.
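The abstract's static no-arbitrage conditions (calendar, vertical, butterfly) can be checked numerically on a discrete grid of call prices. Below is a minimal sketch of such a checker; the condition set, tolerances, and function name are illustrative standard necessary conditions, not the paper's exact constraint projection.

```python
import numpy as np

def static_arbitrage_violations(calls, strikes):
    """Count static-arbitrage violations on a grid of call prices.

    calls: shape (n_maturities, n_strikes), prices with maturities increasing
    along axis 0 and strikes increasing along axis 1. Standard necessary
    conditions (an illustrative subset, not ARBITER's full constraint set):
      - calendar:  C(T2, K) >= C(T1, K) for T2 > T1
      - vertical:  -1 <= dC/dK <= 0
      - butterfly: C convex in K
    """
    tol = 1e-8
    # Calendar spreads: prices must be non-decreasing in maturity.
    calendar = int(np.sum(np.diff(calls, axis=0) < -tol))
    # Vertical spreads: slope in strike must lie in [-1, 0].
    slope = np.diff(calls, axis=1) / np.diff(strikes)
    vertical = int(np.sum((slope > tol) | (slope < -1.0 - tol)))
    # Butterfly spreads: discrete convexity in strike (slopes non-decreasing).
    butterfly = int(np.sum(np.diff(slope, axis=1) < -tol))
    return {"calendar": calendar, "vertical": vertical, "butterfly": butterfly}
```

On an arbitrage-free surface all three counts are zero; bumping a single grid point typically triggers butterfly (convexity) and neighboring vertical/calendar violations at once, which is why projection-based training enforces the constraints jointly rather than one at a time.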

Keywords: Neural Operator, No-Arbitrage Constraints, SPX-VIX Term Structure, Constrained Optimization, Option Pricing

Complexity vs Empirical Score

  • Math Complexity: 9.0/10
  • Empirical Rigor: 8.5/10
  • Quadrant: Holy Grail
  • Why: The paper presents dense advanced mathematics, including neural operators, spectral projections, and convex optimization theory, justifying a high math score. It also demonstrates strong empirical rigor with specific evaluation metrics (NAS, CNAS, NI, etc.), historical data experiments, ablation studies, and confidence intervals, making it highly backtest-ready.
```mermaid
flowchart TD
  A["Research Goal: Develop a risk-neutral neural operator\nfor arbitrage-free SPX-VIX term structures"] --> B["Methodology: ARBITER Framework"]
  B --> C["Data: Historical SPX & VIX Options\n(States, Volatility, Variance)"]
  C --> D["Computational Process:\nOperator Learning + Constrained Decoders\n(Enforcing no-arbitrage constraints)"]
  D --> E["Training: Extragradient + Projection\n(Static arbitrage, Lipschitz, Monotonicity)"]
  E --> F["Key Findings/Outcomes:\n1. Superior metrics (NAS, CNAS, NI, Dual-Gap)\n2. Reduced arbitrage violations\n3. Improved long-horizon generalization\n4. Practical recipes for interpolation/extrapolation"]
```
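The training step in the flow above ("Extragradient + Projection") can be sketched generically: an extragradient method takes a lookahead gradient step, then updates from the gradient at the lookahead point, projecting onto the feasible set after each step. This is a standard textbook template, not the paper's ARBITER update (which couples the SPX and VIX legs and projects onto the no-arbitrage set); the objective and projection here are deliberately toy.

```python
import numpy as np

def extragradient_projected(grad, project, x0, lr=0.1, steps=200):
    """Generic projected extragradient loop (illustrative sketch).

    grad:    gradient oracle of the objective at x
    project: Euclidean projection onto the feasible (constraint) set
    """
    x = project(np.asarray(x0, dtype=float))
    for _ in range(steps):
        x_half = project(x - lr * grad(x))    # predictor (lookahead) step
        x = project(x - lr * grad(x_half))    # corrector uses lookahead gradient
    return x
```

With a quadratic objective and a box constraint, the iterates converge to the projection of the unconstrained minimizer onto the box; in ARBITER's setting the box is replaced by the arbitrage-free set enforced by the constrained decoders.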