Deep Neural Operator Learning for Probabilistic Models
ArXiv ID: 2511.07235
Authors: Erhan Bayraktar, Qi Feng, Zecheng Zhang, Zhaoyu Zhang
Abstract
We propose a deep neural-operator framework for a general class of probability models. Under global Lipschitz conditions on the operator over the entire Euclidean space, and for a broad class of probabilistic models, we establish a universal approximation theorem with explicit network-size bounds for the proposed architecture. The underlying stochastic processes are required only to satisfy integrability and general tail-probability conditions. We verify these assumptions for both European and American option-pricing problems within the forward-backward SDE (FBSDE) framework, which in turn covers a broad class of operators arising from parabolic PDEs, with or without free boundaries. Finally, we present a numerical example for a basket of American options, demonstrating that the learned model produces optimal stopping boundaries for new strike prices without retraining.
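For orientation, the FBSDE systems referenced above typically take the following decoupled form; this is a standard sketch under common assumptions, not a quotation of the paper's exact formulation, which may differ in detail.

```latex
% Standard decoupled FBSDE (sketch; assumed form, not quoted from the paper).
% X is the forward diffusion; (Y, Z) solve the backward equation with
% terminal payoff g, so Y_0 gives the price in option-pricing applications.
\begin{aligned}
dX_t &= b(t, X_t)\,dt + \sigma(t, X_t)\,dW_t, & X_0 &= x,\\
dY_t &= -f(t, X_t, Y_t, Z_t)\,dt + Z_t\,dW_t, & Y_T &= g(X_T).
\end{aligned}
```

For American options, the backward equation is typically replaced by a reflected BSDE that keeps Y_t above the payoff obstacle; the reflection is what produces the free (optimal stopping) boundary mentioned in the abstract.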
Keywords: Neural Operators, Universal Approximation, FBSDE, American Options, Stochastic Processes
Complexity vs Empirical Score
- Math Complexity: 8.5/10
- Empirical Rigor: 6.0/10
- Quadrant: Holy Grail
- Why: The paper presents advanced mathematical theory, including universal approximation theorems, Lipschitz conditions, and explicit network-size bounds, indicating high mathematical density. Its numerical example for American options provides empirical grounding, though the absence of full backtest-ready code or extensive datasets tempers the empirical score, placing it in the high-math/high-rigor quadrant.
```mermaid
flowchart TD
A["Research Goal"] -->|Develop universal approximator for probability models| B["Methodology: Neural Operator Framework"]
B --> C["Input: FBSDE System"]
C --> D["Process: Train with Lipschitz & Integrability Constraints"]
D --> E["Output: Learned Operator"]
E --> F["Findings: Universal Approximation & Generalization"]
F -->|Example: American Basket Options| G["Outcome: Optimal Stopping Boundaries for New Strikes"]
```
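The generalization claim in the final flowchart step (new strikes without retraining) is the hallmark of operator learning: the network takes the problem parameter as an input rather than being fit to a single instance. Below is a minimal DeepONet-style sketch of that idea in PyTorch; the names, architecture, and strike parameterization are illustrative assumptions, not the paper's exact network.

```python
# Minimal DeepONet-style sketch (assumed architecture, for illustration only):
# learn an operator mapping an option parameter (here, strike K) to a value /
# stopping-boundary field evaluated at query points (t, x).
import torch
import torch.nn as nn

class StrikeToBoundaryOperator(nn.Module):
    def __init__(self, hidden=64, p=32):
        super().__init__()
        # Branch net encodes the input "function", parameterized here by K.
        self.branch = nn.Sequential(
            nn.Linear(1, hidden), nn.Tanh(), nn.Linear(hidden, p))
        # Trunk net encodes the query points (t, x) where the output is evaluated.
        self.trunk = nn.Sequential(
            nn.Linear(2, hidden), nn.Tanh(), nn.Linear(hidden, p))

    def forward(self, K, tx):
        # K: (batch, 1) strikes; tx: (batch, 2) query points (t, x).
        b = self.branch(K)
        t = self.trunk(tx)
        # Inner product of branch and trunk features -> scalar field value.
        return (b * t).sum(dim=-1, keepdim=True)

# Usage sketch: after training on FBSDE-generated targets across many strikes,
# the same network is queried at an unseen strike without retraining.
op = StrikeToBoundaryOperator()
K_new = torch.full((5, 1), 95.0)   # hypothetical unseen strike
tx = torch.rand(5, 2)              # (t, x) query points
values = op(K_new, tx)             # approximate field value at (t, x; K)
```

The design point this sketch captures is that the strike enters as a network input, so evaluating at a new strike is a forward pass rather than a new optimization, which is how the paper's experiment can report stopping boundaries for new strike prices without retraining.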