
Tail-Safe Stochastic-Control SPX-VIX Hedging: A White-Box Bridge Between AI Sensitivities and Arbitrage-Free Market Dynamics

ArXiv ID: 2510.15937 · View on arXiv
Authors: Jian’an Zhang

Abstract: We present a white-box, risk-sensitive framework for jointly hedging SPX and VIX exposures under transaction costs and regime shifts. The approach couples an arbitrage-free market teacher with a control layer that enforces safety as constraints. On the market side, we integrate an SSVI-based implied-volatility surface and a Cboe-compliant VIX computation (including wing pruning and 30-day interpolation), and connect prices to dynamics via a clipped, convexity-preserving Dupire local-volatility extractor. On the control side, we pose hedging as a small quadratic program with control-barrier-function (CBF) boxes for inventory, rate, and tail risk; a sufficient-descent execution gate that trades only when the risk drop justifies the cost; and three targeted tail-safety upgrades: a correlation/expiry-aware VIX weight, guarded no-trade bands, and expiry-aware micro-trade thresholds with cooldown. We prove existence/uniqueness and KKT regularity of the per-step QP, forward invariance of the safety sets, one-step risk descent when the gate opens, and no chattering with bounded trade rates. For the dynamics layer, we establish positivity and second-order consistency of the discrete Dupire estimator and give an index-coherence bound linking the teacher VIX to a CIR-style proxy with explicit quadrature and projection errors. In a reproducible synthetic environment mirroring exchange rules and execution frictions, the controller reduces expected shortfall while suppressing nuisance turnover, and the teacher-surface construction keeps index-level residuals small and stable. ...
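To make the dynamics layer concrete, here is a minimal sketch (ours, not the paper's code) of a clipped, convexity-preserving discrete Dupire local-volatility estimator of the kind the abstract describes. It assumes zero rates and dividends and a uniform strike grid, and the floor/cap levels and names (`dupire_local_vol`, `vol_floor`, `vol_cap`) are illustrative, not taken from the paper.

```python
import numpy as np

def dupire_local_vol(calls, strikes, maturities, vol_floor=0.05, vol_cap=2.0):
    """Clipped discrete Dupire local-volatility estimator on a call-price grid.

    calls      : (n_T, n_K) array of call prices C(T_i, K_j)
    strikes    : (n_K,) uniformly spaced strikes
    maturities : (n_T,) increasing maturities in years

    With zero rates and dividends, Dupire's formula reduces to
        sigma_loc^2(K, T) = dC/dT / (0.5 * K^2 * d2C/dK2),
    evaluated here at interior grid points with central differences.
    """
    dK = strikes[1] - strikes[0]
    span_T = (maturities[2:] - maturities[:-2])[:, None]   # T_{i+1} - T_{i-1}

    # Central difference in maturity; floor at zero since calendar spreads
    # of an arbitrage-free surface are non-negative.
    dC_dT = (calls[2:, 1:-1] - calls[:-2, 1:-1]) / span_T
    dC_dT = np.maximum(dC_dT, 0.0)

    # Central second difference in strike; floor at a small positive number
    # to preserve convexity of each call slice under numerical noise.
    d2C_dK2 = (calls[1:-1, 2:] - 2.0 * calls[1:-1, 1:-1] + calls[1:-1, :-2]) / dK**2
    d2C_dK2 = np.maximum(d2C_dK2, 1e-12)

    local_var = dC_dT / (0.5 * strikes[None, 1:-1] ** 2 * d2C_dK2)
    return np.sqrt(np.clip(local_var, vol_floor**2, vol_cap**2))
```

Flooring the calendar spread and the strike curvature keeps the extracted local variance positive and the call slices convex, which mirrors the positivity property the abstract highlights; the exact clipping scheme in the paper may differ.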

October 9, 2025 · 2 min · Research Team

Tail-Safe Hedging: Explainable Risk-Sensitive Reinforcement Learning with a White-Box CBF–QP Safety Layer in Arbitrage-Free Markets

ArXiv ID: 2510.04555 · View on arXiv
Authors: Jian’an Zhang

Abstract: We introduce Tail-Safe, a deployability-oriented framework for derivatives hedging that unifies distributional, risk-sensitive reinforcement learning with a white-box control-barrier-function (CBF) quadratic-program (QP) safety layer tailored to financial constraints. The learning component combines an IQN-based distributional critic with a CVaR objective (IQN–CVaR–PPO) and a Tail-Coverage Controller that regulates quantile sampling through temperature tilting and tail boosting to stabilize small-$\alpha$ estimation. The safety component enforces discrete-time CBF inequalities together with domain-specific constraints (ellipsoidal no-trade bands, box and rate limits, and a sign-consistency gate), solved as a convex QP whose telemetry (active sets, tightness, rate utilization, gate scores, slack, and solver status) forms an auditable trail for governance. We provide guarantees of robust forward invariance of the safe set under bounded model mismatch, a minimal-deviation projection interpretation of the QP, a KL-to-DRO upper bound linking per-state KL regularization to worst-case CVaR, concentration and sample-complexity results for the temperature-tilted CVaR estimator, and a CVaR trust-region improvement inequality under KL limits, together with feasibility persistence under expiry-aware tightening. Empirically, in arbitrage-free, microstructure-aware synthetic markets (SSVI $\to$ Dupire $\to$ VIX with ABIDES/MockLOB execution), Tail-Safe improves left-tail risk without degrading central performance and yields zero hard-constraint violations whenever the QP is feasible with zero slack. Telemetry is mapped to governance dashboards and incident workflows to support explainability and auditability. Limitations include reliance on synthetic data and simplified execution to isolate methodological contributions. ...
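The following is a minimal sketch, under our own simplifying assumptions, of the minimal-deviation CBF–QP projection the abstract describes: a scalar inventory state, two affine inventory barriers standing in for the full constraint set (the ellipsoidal no-trade band and sign-consistency gate are omitted), a per-step rate limit, and a nonnegative slack reported as telemetry. It uses cvxpy; the function name and parameter values are illustrative, not the paper's.

```python
import cvxpy as cp

def cbf_qp_project(u_rl, inventory, q_max=10.0, u_max=1.0, gamma=0.2, slack_penalty=1e4):
    """Minimal-deviation safety projection of a proposed trade u_rl.

    Discrete-time CBF conditions for two affine inventory barriers,
        h_plus(q) = q_max - q  and  h_minus(q) = q_max + q,
    require h(q + u) >= (1 - gamma) * h(q), which reduces to linear bounds on u.
    A nonnegative slack keeps the QP feasible and is logged as telemetry.
    """
    u = cp.Variable()
    s = cp.Variable(nonneg=True)                 # slack on the barrier constraints

    constraints = [
        u <= gamma * (q_max - inventory) + s,    # upper inventory barrier
        u >= -gamma * (q_max + inventory) - s,   # lower inventory barrier
        cp.abs(u) <= u_max,                      # per-step rate (trade-size) limit
    ]
    problem = cp.Problem(
        cp.Minimize(cp.square(u - u_rl) + slack_penalty * s), constraints
    )
    problem.solve()

    telemetry = {                                # auditable trail, in the spirit of the abstract
        "status": problem.status,
        "slack": float(s.value),
        "deviation": float(abs(u.value - u_rl)),
        "rate_utilization": float(abs(u.value) / u_max),
    }
    return float(u.value), telemetry

# Example: a large proposed buy near the inventory cap is scaled back to the barrier bound.
u_safe, info = cbf_qp_project(u_rl=0.9, inventory=9.5)
```

The quadratic objective makes the layer a projection of the learner's action onto the safe set, and the reported status, slack, and rate utilization illustrate the kind of telemetry fields the abstract says feed governance dashboards; this sketch only shows the projection mechanics, not the full constraint taxonomy.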

October 6, 2025 · 3 min · Research Team