Deep multi-step mixed algorithm for high dimensional non-linear PDEs and associated BSDEs

ArXiv ID: 2308.14487

Authors: Unknown

Abstract

We propose a new multistep deep learning-based algorithm for the solution of moderate- to high-dimensional nonlinear backward stochastic differential equations (BSDEs) and their corresponding parabolic partial differential equations (PDEs). Our algorithm relies on the iterated time discretisation of the BSDE and approximates its solution and gradient using deep neural networks and automatic differentiation at each time step. The approximations are obtained by sequential minimisation of local quadratic loss functions at each time step through stochastic gradient descent. We provide an analysis of the approximation error for a network architecture with weight constraints, requiring only low regularity conditions on the generator of the BSDE. The algorithm improves accuracy over its single-step parent model and has reduced complexity compared to similar models in the literature.
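
As a reading aid, here is a minimal sketch, under assumed notation not taken from the paper, of the BSDE being discretised and of the kind of local quadratic loss the abstract describes:

```latex
% Assumed notation: forward diffusion X, terminal condition g, generator f,
% and the gradient recovered by automatic differentiation as
% Z_t = \sigma(t, X_t)^{\top} \nabla_x u(t, X_t).
\[
Y_t = g(X_T) + \int_t^T f(s, X_s, Y_s, Z_s)\,\mathrm{d}s
            - \int_t^T Z_s\,\mathrm{d}W_s .
\]
% On a grid 0 = t_0 < \dots < t_N = T with increments \Delta t_i and
% \Delta W_i, step i fits a network u_\theta by minimising
\[
\mathcal{L}_i(\theta) = \mathbb{E}\Big[\big|\,\widehat{Y}_{i+1}
  + f\big(t_i, X_{t_i}, u_\theta(X_{t_i}),
          \sigma^{\top}\nabla_x u_\theta(X_{t_i})\big)\,\Delta t_i
  - \sigma^{\top}\nabla_x u_\theta(X_{t_i}) \cdot \Delta W_i
  - u_\theta(X_{t_i})\,\big|^2\Big].
\]
```

In a multistep scheme, the target term accumulates the terminal value g(X_T) together with the generator and martingale corrections of all previously trained later steps, rather than only the output of the network at the next time point.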

Keywords: backward stochastic differential equations, nonlinear PDE, deep learning, neural networks, automatic differentiation

Category: General / Methodological

Complexity vs Empirical Score

  • Math Complexity: 9.5/10
  • Empirical Rigor: 3.0/10
  • Quadrant: Lab Rats
  • Why: The paper presents advanced theoretical mathematics involving stochastic calculus, PDEs, and neural network approximation theory, with rigorous error analysis. However, it lacks direct empirical finance application, backtesting, or financial data, focusing instead on numerical algorithm validation for PDEs.

```mermaid
flowchart TD
  A["Research Goal: Solve high-dimensional non-linear PDEs and BSDEs"] --> B["Methodology: Iterated time discretization of BSDE"]
  B --> C["Data/Inputs: Generator, initial conditions, network architecture with weight constraints"]
  C --> D["Computational Process: Sequential minimization of local quadratic loss via Stochastic Gradient Descent"]
  D --> E["Computation: Deep neural networks & automatic differentiation for solution and gradient approximation"]
  E --> F["Outcome 1: Increased accuracy vs. single-step models"]
  E --> G["Outcome 2: Reduced computational complexity compared to similar models"]
  E --> H["Outcome 3: Theoretical error analysis under low regularity conditions"]
```