
One model to solve them all: 2BSDE families via neural operators

ArXiv ID: 2511.01125
Authors: Takashi Furuya, Anastasis Kratsios, Dylan Possamaï, Bogdan Raonić

Abstract: We introduce a mild generative variant of the classical neural operator model, which leverages Kolmogorov–Arnold networks to solve infinite families of second-order backward stochastic differential equations ($2$BSDEs) on regular bounded Euclidean domains with random terminal time. Our first main result shows that the solution operator associated with a broad range of $2$BSDE families is approximable by appropriate neural operator models. We then identify a structured subclass of (infinite) families of $2$BSDEs whose neural operator approximation requires only a polynomial number of parameters in the reciprocal approximation rate, as opposed to the exponential requirement in general worst-case neural operator guarantees. …
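For orientation, the objects in question can be written down in a generic Markovian template. The display below is an illustrative sketch only, with assumed notation ($X$, $\tau$, $f$, $g$); the paper's actual formulation, on bounded domains with random terminal time, is more general.

```latex
% Illustrative 2BSDE template (notation assumed, not taken from the paper).
% Given forward dynamics X and a random terminal time \tau, one seeks a
% triple (Y, Z, \Gamma), where \Gamma carries the second-order information:
\begin{aligned}
  Y_t &= g(X_\tau)
        + \int_t^\tau f\bigl(s, X_s, Y_s, Z_s, \Gamma_s\bigr)\,\mathrm{d}s
        - \int_t^\tau Z_s \cdot \mathrm{d}X_s, \\
  \mathrm{d}Z_t &= A_t\,\mathrm{d}t + \Gamma_t\,\mathrm{d}X_t .
\end{aligned}
```

The solution operator the abstract refers to is then the map from problem data such as $(f, g)$ to the solution $(Y, Z, \Gamma)$; a single neural operator approximates this map across an entire family of such problems at once, rather than re-solving each instance separately.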

November 3, 2025 · 2 min · Research Team

Simultaneously Solving FBSDEs and their Associated Semilinear Elliptic PDEs with Small Neural Operators

ArXiv ID: 2410.14788
Authors: Unknown

Abstract: Forward-backward stochastic differential equations (FBSDEs) play an important role in optimal control, game theory, economics, mathematical finance, and reinforcement learning. Unfortunately, the available FBSDE solvers operate on *individual* FBSDEs, meaning that they cannot provide a computationally feasible strategy for solving large families of FBSDEs, since they must be re-run for each member of the family. *Neural operators* (NOs) offer an alternative approach for *simultaneously solving* large families of decoupled FBSDEs by directly approximating the solution operator mapping *inputs*, the terminal condition and the dynamics of the backward process, to *outputs*, solutions of the associated FBSDE. Though universal approximation theorems (UATs) guarantee the existence of such NOs, these NOs are unrealistically large. Upon making only a few simple, theoretically guided tweaks to the standard convolutional NO build, we confirm that "small" NOs can uniformly approximate the solution operator to structured families of FBSDEs with random terminal time, uniformly on suitable compact sets determined by Sobolev norms, using logarithmic depth, constant width, and polynomial rank in the reciprocal approximation error. This result is rooted in our second result, and our main contribution to the NOs-for-PDEs literature: convolutional NOs of similar depth and width, whose rank grows only *quadratically* (at a dimension-free rate), uniformly approximate the solution operator of the class of semilinear elliptic PDEs associated with these families of FBSDEs. A key insight we uncover into how NOs work is that the convolutional layers of our NO can approximately implement the fixed-point iteration used to prove the existence of a unique solution to these semilinear elliptic PDEs. …
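The closing insight lends itself to a toy illustration. The sketch below is not the paper's construction: it runs the classical Picard fixed-point iteration $u \mapsto G[f(\cdot, u)]$ for a one-dimensional semilinear elliptic problem $-u'' = f(x, u)$ on $(0,1)$ with zero boundary values, where $G$ is the Green's operator; the grid, the nonlinearity, and all names are assumptions made for the example.

```python
import numpy as np

# Toy semilinear elliptic problem: -u'' = f(x, u) on (0, 1), u(0) = u(1) = 0.
# Picard iteration u_{k+1} = G[f(., u_k)], with G the Green's operator of
# -d^2/dx^2 under Dirichlet boundary conditions.
n = 199                       # interior grid points
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)

# Green's kernel of -d^2/dx^2: G(x, y) = min(x, y) * (1 - max(x, y)).
G = np.minimum.outer(x, x) * (1.0 - np.maximum.outer(x, x))

def f(u):
    # Lipschitz nonlinearity with constant 1/2; the sup-norm of the Green's
    # operator is at most 1/8, so the iteration contracts geometrically.
    return np.sin(np.pi * x) + 0.5 * np.tanh(u)

u = np.zeros(n)
for k in range(100):
    u_next = h * (G @ f(u))   # one Picard step: u <- G[f(., u)]
    if np.max(np.abs(u_next - u)) < 1e-12:
        break
    u = u_next

print(f"converged after {k} iterations")
```

A plausible reading of the depth bound quoted above then falls out: each NO layer, a kernel integration followed by a pointwise nonlinearity, can emulate one Picard step, and a contraction reaches error $\varepsilon$ after $O(\log(1/\varepsilon))$ steps, matching the logarithmic depth in the reciprocal approximation error.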

October 18, 2024 · 3 min · Research Team

Low-dimensional approximations of the conditional law of Volterra processes: a non-positive curvature approach

ArXiv ID: 2405.20094
Authors: Unknown

Abstract: Predicting the conditional evolution of Volterra processes with stochastic volatility is a crucial challenge in mathematical finance. While deep neural network models offer promise in approximating the conditional law of such processes, their effectiveness is hindered by the curse of dimensionality caused by the infinite dimensionality and non-smooth nature of these problems. To address this, we propose a two-step solution. First, we develop a stable dimension-reduction technique, projecting the law of a reasonably broad class of Volterra processes onto a low-dimensional statistical manifold of non-positive sectional curvature. Next, we introduce a sequential deep learning model tailored to the manifold's geometry, which we show can approximate the projected conditional law of the Volterra process. Our model leverages an auxiliary hypernetwork to dynamically update its internal parameters, allowing it to encode the non-stationary dynamics of the Volterra process; it can be interpreted as a gating mechanism in a mixture-of-experts model in which each expert is specialized at a specific point in time. Our hypernetwork further allows us to achieve approximation rates that would seemingly only be possible with very large networks. …
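The hypernetwork mechanism can be made concrete with a minimal sketch. Everything below, the sizes, the random map, and names such as `expert_params`, is an assumption for illustration rather than the paper's architecture: a small hypernetwork maps the time index $t$ to the full weight vector of an expert network, so the same input is processed by time-varying parameters, which is the mixture-of-experts reading given in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_hid, d_out = 4, 8, 2                          # expert dimensions
n_w = d_hid * d_in + d_hid + d_out * d_hid + d_out    # expert weight count

# Hypernetwork: a fixed two-layer map from the scalar time t to the
# concatenated weight vector of the expert network.
W1, b1 = rng.normal(size=(16, 1)), rng.normal(size=16)
W2, b2 = rng.normal(size=(n_w, 16)) / 4.0, rng.normal(size=n_w) / 4.0

def expert_params(t):
    # Generate the expert's weights as a function of time.
    h = np.tanh(W1 @ np.array([t]) + b1)
    w = W2 @ h + b2
    i = 0
    A = w[i:i + d_hid * d_in].reshape(d_hid, d_in);   i += d_hid * d_in
    a = w[i:i + d_hid];                               i += d_hid
    B = w[i:i + d_out * d_hid].reshape(d_out, d_hid); i += d_out * d_hid
    b = w[i:]
    return A, a, B, b

def expert(t, x):
    # A two-layer expert whose parameters are supplied by the hypernetwork.
    A, a, B, b = expert_params(t)
    return B @ np.tanh(A @ x + a) + b

x = rng.normal(size=d_in)
for t in (0.0, 0.5, 1.0):
    print(t, expert(t, x))    # same input, time-indexed parameters
```

In training, only the hypernetwork's weights would be learned; the expert's parameters are never stored directly, which is one way such a model can encode non-stationary dynamics without its size growing with the time horizon.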

May 30, 2024 · 2 min · Research Team

The Boy's Guide to Pricing & Hedging

ArXiv ID: ssrn-364760
Authors: Unknown

Abstract: There is often an unfortunate strain of pedantry running through the teaching of quantitative finance, one involving an excess of abstraction, formality, rigor …

Keywords: quantitative finance education, mathematical finance, pedagogy, practical application, financial education

Complexity vs. Empirical Score
Math Complexity: 4.0/10
Empirical Rigor: 1.0/10
Quadrant: Philosophers
Why: The paper focuses on conceptual foundations like replication and the law of one price with minimal mathematical formalism, and it contains no backtesting, datasets, or implementation details.

```mermaid
flowchart TD
    A["Research Goal<br>Bridge gap between<br>abstract theory & practical application"] --> B{"Key Methodology"}
    B --> C["Analyze pedagogical<br>approaches"]
    B --> D["Develop practical<br>pricing examples"]
    B --> E["Simplify hedging<br>strategies"]
    C --> F["Computational Process<br>Mathematical modeling<br>+ Real-world scenarios"]
    D --> F
    E --> F
    F --> G["Key Findings/Outcomes"]
    G --> H["Enhanced understanding<br>through practical application"]
    G --> I["Reduced pedagogical<br>abstraction"]
    G --> J["Balanced rigorous<br>theory with practice"]
```

January 17, 2003 · 1 min · Research Team