
Deep Neural Operator Learning for Probabilistic Models

Deep Neural Operator Learning for Probabilistic Models ArXiv ID: 2511.07235 “View on arXiv” Authors: Erhan Bayraktar, Qi Feng, Zecheng Zhang, Zhaoyu Zhang Abstract We propose a deep neural-operator framework for a general class of probability models. Under global Lipschitz conditions on the operator over the entire Euclidean space, and for a broad class of probabilistic models, we establish a universal approximation theorem with explicit network-size bounds for the proposed architecture. The underlying stochastic processes are required only to satisfy integrability and general tail-probability conditions. We verify these assumptions for both European and American option-pricing problems within the forward-backward SDE (FBSDE) framework, which in turn covers a broad class of operators arising from parabolic PDEs, with or without free boundaries. Finally, we present a numerical example for a basket of American options, demonstrating that the learned model produces optimal stopping boundaries for new strike prices without retraining. ...
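
As a rough illustration of the kind of operator being learned (the paper's own architecture is not reproduced here), the following is a minimal DeepONet-style sketch: a branch net encodes a strike-dependent payoff sampled at fixed sensor points, a trunk net encodes query times, and their inner product yields the exercise boundary. The sensor grid, payoff form, and all sizes below are assumptions for illustration only.

```python
# Hypothetical sketch, not the paper's architecture: a branch/trunk operator
# mapping payoff samples (parameterized by the strike) to the stopping boundary.
import torch
import torch.nn as nn

class PayoffToBoundary(nn.Module):
    def __init__(self, n_sensors=64, width=128, p=64):
        super().__init__()
        self.branch = nn.Sequential(             # encodes the input function (payoff samples)
            nn.Linear(n_sensors, width), nn.ReLU(),
            nn.Linear(width, p))
        self.trunk = nn.Sequential(              # encodes the query location (time t)
            nn.Linear(1, width), nn.ReLU(),
            nn.Linear(width, p))

    def forward(self, payoff_samples, t_query):
        # payoff_samples: (batch, n_sensors); t_query: (batch, n_times, 1)
        b = self.branch(payoff_samples)           # (batch, p)
        tr = self.trunk(t_query)                  # (batch, n_times, p)
        return torch.einsum("bp,btp->bt", b, tr)  # boundary values at the query times

# Usage: evaluate a (trained) operator at a new strike without retraining.
sensors = torch.linspace(0.5, 1.5, 64)                         # assumed moneyness grid
K_new = 1.1
payoff = torch.clamp(K_new - sensors, min=0.0).unsqueeze(0)    # put payoff samples
times = torch.linspace(0.0, 1.0, 50).reshape(1, -1, 1)
boundary = PayoffToBoundary()(payoff, times)                   # (1, 50) stopping boundary
```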

November 10, 2025 · 2 min · Research Team

One model to solve them all: 2BSDE families via neural operators

One model to solve them all: 2BSDE families via neural operators ArXiv ID: 2511.01125 “View on arXiv” Authors: Takashi Furuya, Anastasis Kratsios, Dylan Possamaï, Bogdan Raonić Abstract We introduce a mild generative variant of the classical neural operator model, which leverages Kolmogorov–Arnold networks to solve infinite families of second-order backward stochastic differential equations (2BSDEs) on regular bounded Euclidean domains with random terminal time. Our first main result shows that the solution operator associated with a broad range of 2BSDE families is approximable by appropriate neural operator models. We then identify a structured subclass of (infinite) families of 2BSDEs whose neural operator approximation requires only a polynomial number of parameters in the reciprocal approximation rate, as opposed to the exponential requirement in general worst-case neural operator guarantees. ...
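
For readers unfamiliar with the Kolmogorov–Arnold building block mentioned above, here is a minimal KAN-style layer sketch (our own illustration, not the paper's generative neural operator): every edge carries a learnable univariate function, expanded here in a fixed radial-basis dictionary; edge outputs are summed into each coordinate. Basis choice and sizes are assumptions.

```python
# Hypothetical sketch of a Kolmogorov–Arnold-style layer (not the paper's model).
import torch
import torch.nn as nn

class KANLayer(nn.Module):
    def __init__(self, d_in, d_out, n_basis=8, x_min=-2.0, x_max=2.0):
        super().__init__()
        self.register_buffer("centers", torch.linspace(x_min, x_max, n_basis))
        self.gamma = (n_basis / (x_max - x_min)) ** 2
        # one coefficient per (input, output, basis function) triple
        self.coef = nn.Parameter(0.1 * torch.randn(d_in, d_out, n_basis))

    def forward(self, x):                        # x: (batch, d_in)
        # evaluate the radial basis on every scalar input coordinate
        phi = torch.exp(-self.gamma * (x.unsqueeze(-1) - self.centers) ** 2)  # (batch, d_in, n_basis)
        # sum the learned edge functions into each output coordinate
        return torch.einsum("bik,iok->bo", phi, self.coef)                    # (batch, d_out)

# A small two-layer stack, e.g. as a drop-in block inside a neural operator.
net = nn.Sequential(KANLayer(4, 16), KANLayer(16, 1))
y = net(torch.randn(32, 4))                      # (32, 1)
```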

November 3, 2025 · 2 min · Research Team

Neural Operators Can Play Dynamic Stackelberg Games

Neural Operators Can Play Dynamic Stackelberg Games ArXiv ID: 2411.09644 “View on arXiv” Authors: Unknown Abstract Dynamic Stackelberg games are a broad class of two-player games in which the leader acts first, and the follower chooses a response strategy to the leader’s strategy. Unfortunately, only stylized Stackelberg games are explicitly solvable since the follower’s best-response operator (as a function of the control of the leader) is typically analytically intractable. This paper addresses this issue by showing that the “follower’s best-response operator” can be approximately implemented by an “attention-based neural operator”, uniformly on compact subsets of adapted open-loop controls for the leader. We further show that the value of the Stackelberg game where the follower uses the approximate best-response operator approximates the value of the original Stackelberg game. Our main result is obtained using our universal approximation theorem for attention-based neural operators between spaces of square-integrable adapted stochastic processes, as well as stability results for a general class of Stackelberg games. ...
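
To make the "operator from leader control to follower response" idea concrete, here is a minimal sketch (our own, not the paper's construction): a causally masked attention block maps a discretized leader control path to a candidate follower response path, with the causal mask standing in for adaptedness. Path length, dimensions, and layer sizes are assumptions.

```python
# Hypothetical sketch of an attention-based best-response operator (not the paper's model).
import torch
import torch.nn as nn

class BestResponseOperator(nn.Module):
    def __init__(self, d_ctrl=2, d_resp=2, d_model=64, n_heads=4):
        super().__init__()
        self.embed = nn.Linear(d_ctrl, d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(),
                                nn.Linear(d_model, d_resp))

    def forward(self, leader_path):              # (batch, T, d_ctrl)
        T = leader_path.shape[1]
        h = self.embed(leader_path)
        mask = torch.triu(torch.ones(T, T), diagonal=1).bool()   # no peeking at future controls
        h, _ = self.attn(h, h, h, attn_mask=mask)
        return self.ff(h)                         # (batch, T, d_resp) follower response path

leader = torch.randn(8, 50, 2)                    # 8 sampled leader control paths
follower = BestResponseOperator()(leader)         # approximate best responses
```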

November 14, 2024 · 2 min · Research Team

Simultaneously Solving FBSDEs and their Associated Semilinear Elliptic PDEs with Small Neural Operators

Simultaneously Solving FBSDEs and their Associated Semilinear Elliptic PDEs with Small Neural Operators ArXiv ID: 2410.14788 “View on arXiv” Authors: Unknown Abstract Forward-backward stochastic differential equations (FBSDEs) play an important role in optimal control, game theory, economics, mathematical finance, and in reinforcement learning. Unfortunately, the available FBSDE solvers operate on “individual” FBSDEs, meaning that they cannot provide a computationally feasible strategy for solving large families of FBSDEs, as these solvers must be re-run several times. “Neural operators” (NOs) offer an alternative approach for “simultaneously solving” large families of decoupled FBSDEs by directly approximating the solution operator mapping inputs (terminal conditions and dynamics of the backward process) to outputs (solutions of the associated FBSDE). Though universal approximation theorems (UATs) guarantee the existence of such NOs, these NOs are unrealistically large. Upon making only a few simple theoretically-guided tweaks to the standard convolutional NO build, we confirm that “small” NOs can uniformly approximate the solution operator to structured families of FBSDEs with random terminal time, uniformly on suitable compact sets determined by Sobolev norms, using a logarithmic depth, a constant width, and a polynomial rank in the reciprocal approximation error. This result is rooted in our second result, and our main contribution to the NOs-for-PDEs literature: convolutional NOs of similar depth and width, growing only “quadratically” (at a dimension-free rate), can uniformly approximate the solution operator of the class of semilinear elliptic PDEs associated with these families of FBSDEs. A key insight we uncover into how NOs work is that the convolutional layers of our NO can approximately implement the fixed-point iteration used to prove the existence of a unique solution to these semilinear elliptic PDEs. ...
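
The fixed-point reading in the last sentence can be illustrated with a toy sketch (our own, not the paper's architecture): a weight-tied convolutional block applied repeatedly, mimicking a Picard-type iteration u_{k+1} = sigma(K u_k + f) on a periodic 1D grid. Kernel size, channel counts, and the nonlinearity are assumptions.

```python
# Hypothetical sketch of a weight-tied convolutional NO as a fixed-point iteration
# (not the paper's architecture or its approximation guarantees).
import torch
import torch.nn as nn

class FixedPointConvNO(nn.Module):
    def __init__(self, channels=16, n_iters=8):
        super().__init__()
        self.n_iters = n_iters
        self.lift = nn.Conv1d(1, channels, kernel_size=1)        # embed the source term f
        self.kernel = nn.Conv1d(channels, channels, kernel_size=5,
                                padding=2, padding_mode="circular")
        self.proj = nn.Conv1d(channels, 1, kernel_size=1)         # read out the solution u

    def forward(self, f):                         # f: (batch, 1, n_grid)
        f_emb = self.lift(f)
        u = torch.zeros_like(f_emb)               # start the iteration from zero
        for _ in range(self.n_iters):             # shared weights = one iteration map
            u = torch.relu(self.kernel(u) + f_emb)
        return self.proj(u)                       # (batch, 1, n_grid) approximate solution

f = torch.randn(4, 1, 128)                        # four sampled source terms on a 128-point grid
u = FixedPointConvNO()(f)
```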

October 18, 2024 · 3 min · Research Team

Operator Deep Smoothing for Implied Volatility

Operator Deep Smoothing for Implied Volatility ArXiv ID: 2406.11520 “View on arXiv” Authors: Unknown Abstract We devise a novel method for nowcasting implied volatility based on neural operators. Better known as implied volatility smoothing in the financial industry, nowcasting of implied volatility means constructing a smooth surface that is consistent with the prices presently observed on a given option market. Option price data arises highly dynamically in ever-changing spatial configurations, which poses a major limitation to foundational machine learning approaches using classical neural networks. While large models in language and image processing deliver breakthrough results on vast corpora of raw data, in financial engineering the generalization from big historical datasets has been hindered by the need for considerable data pre-processing. In particular, implied volatility smoothing has remained an instance-by-instance, hands-on process both for neural network-based and traditional parametric strategies. Our general operator deep smoothing approach, instead, directly maps observed data to smoothed surfaces. We adapt the graph neural operator architecture to do so with high accuracy on ten years of raw intraday S&P 500 options data, using a single model instance. The trained operator adheres to critical no-arbitrage constraints and is robust with respect to subsampling of inputs (occurring in practice in the context of outlier removal). We provide extensive historical benchmarks and showcase the generalization capability of our approach in a comparison with classical neural networks and SVI, an industry standard parametrization for implied volatility. The operator deep smoothing approach thus opens up the use of neural networks on large historical datasets in financial engineering. ...
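
As a rough picture of how a graph-neural-operator layer can map irregularly placed quotes to a smooth surface (our own sketch; the paper's model, training, and no-arbitrage constraints are not reproduced), the layer below aggregates kernel messages from observed (log-moneyness, maturity, implied vol) quotes into values at arbitrary query points. All names and sizes are assumptions.

```python
# Hypothetical sketch of a single kernel-integral / graph-neural-operator style layer
# for smoothing scattered implied-vol quotes (not the paper's architecture).
import torch
import torch.nn as nn

class QuoteSmoothingLayer(nn.Module):
    def __init__(self, width=64):
        super().__init__()
        # kernel takes (query coords, quote coords, quote value) -> scalar message
        self.kernel = nn.Sequential(nn.Linear(5, width), nn.ReLU(),
                                    nn.Linear(width, width), nn.ReLU(),
                                    nn.Linear(width, 1))

    def forward(self, quotes, query):
        # quotes: (n_obs, 3) rows of (k, tau, iv); query: (n_q, 2) rows of (k, tau)
        n_q, n_obs = query.shape[0], quotes.shape[0]
        q = query.unsqueeze(1).expand(n_q, n_obs, 2)
        o = quotes.unsqueeze(0).expand(n_q, n_obs, 3)
        messages = self.kernel(torch.cat([q, o], dim=-1))    # (n_q, n_obs, 1)
        return messages.mean(dim=1).squeeze(-1)              # (n_q,) smoothed surface values

quotes = torch.tensor([[-0.1, 0.25, 0.21], [0.0, 0.25, 0.19], [0.1, 0.5, 0.20]])
grid = torch.cartesian_prod(torch.linspace(-0.2, 0.2, 9), torch.tensor([0.25, 0.5]))
surface = QuoteSmoothingLayer()(quotes, grid)                # values on the query grid
```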

June 17, 2024 · 2 min · Research Team