
Convergence of the generalization error for deep gradient flow methods for PDEs

ArXiv ID: 2512.25017 · Authors: Chenguang Liu, Antonis Papapantoleon, Jasper Rou

Abstract: The aim of this article is to provide a firm mathematical foundation for the application of deep gradient flow methods (DGFMs) to the solution of (high-dimensional) partial differential equations (PDEs). We decompose the generalization error of DGFMs into an approximation error and a training error. We first show that the solutions of PDEs satisfying reasonable and verifiable assumptions can be approximated by neural networks, so that the approximation error tends to zero as the number of neurons tends to infinity. Then, we derive the gradient flow that the training process follows in the "wide network limit" and analyze the limit of this flow as the training time tends to infinity. Combined, these results show that the generalization error of DGFMs tends to zero as the number of neurons and the training time tend to infinity. ...
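
To make the setup concrete, below is a minimal sketch of a deep gradient flow training loop for a Poisson-type model problem -Δu = f on [0,1]^d, where stochastic gradient descent acts as an explicit Euler discretization of the parameter gradient flow. The source term, network width, batch size, and step size are illustrative assumptions rather than choices from the paper, and boundary conditions are omitted for brevity.

```python
# Minimal DGFM sketch: follow the gradient flow of the Dirichlet energy
# J(u) = integral of (0.5*|grad u|^2 - f*u) dx, estimated by Monte Carlo.
# All constants below are illustrative assumptions, not from the paper.
import torch

d, width = 5, 64                                  # dimension, number of neurons
net = torch.nn.Sequential(
    torch.nn.Linear(d, width), torch.nn.Tanh(),
    torch.nn.Linear(width, 1),
)
f = lambda x: torch.ones(x.shape[0], 1)           # hypothetical source term

# SGD is an explicit Euler discretization of the parameter gradient flow.
opt = torch.optim.SGD(net.parameters(), lr=1e-3)
for step in range(2000):
    x = torch.rand(1024, d, requires_grad=True)   # Monte Carlo collocation points
    u = net(x)
    grad_u = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    # Energy functional whose gradient flow the training dynamics follow.
    energy = (0.5 * grad_u.pow(2).sum(dim=1, keepdim=True) - f(x) * u).mean()
    opt.zero_grad()
    energy.backward()
    opt.step()
```

In the paper's terms, widening the network drives the approximation error to zero, while running this flow longer drives the training error to zero.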

December 31, 2025 · 2 min · Research Team

Deep-MacroFin: Informed Equilibrium Neural Network for Continuous Time Economic Models

ArXiv ID: 2408.10368 · Authors: Unknown

Abstract: In this paper, we present Deep-MacroFin, a comprehensive framework designed to solve partial differential equations, with a particular focus on models in continuous-time economics. The framework leverages deep learning methodologies, including Multi-Layer Perceptrons and the newly developed Kolmogorov-Arnold Networks, and is optimized using economic information encapsulated by Hamilton-Jacobi-Bellman (HJB) equations and coupled algebraic equations. Neural networks hold the promise of accurately resolving high-dimensional problems with lower computational demands and fewer limitations than other numerical methods. This framework can be readily adapted to systems of partial differential equations in high dimensions. Importantly, it offers a more efficient (5× less CUDA memory and 40× fewer FLOPs in 100D problems) and user-friendly implementation than existing libraries. We also incorporate a time-stepping scheme to enhance training stability for nonlinear HJB equations, enabling the solution of 50D economic models. ...
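
The time-stepping idea mentioned in the abstract can be illustrated on a toy stationary HJB-type equation. The sketch below relaxes the network toward the stationary residual one pseudo-time step at a time (a false-transient iteration); the equation, coefficients, and step sizes are assumptions for illustration and do not reflect Deep-MacroFin's actual API.

```python
# Time-stepping sketch for a toy stationary equation
# rho*V = f(x) + mu*V' + 0.5*sigma^2*V''. Rather than fitting the stationary
# residual directly, each outer step relaxes toward it, which tends to
# stabilize training. All parameters are hypothetical.
import copy
import torch

rho, mu, sigma, dt = 0.05, 0.1, 0.2, 0.1          # hypothetical model parameters
f = lambda x: torch.sin(x)                         # hypothetical flow payoff

V = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                        torch.nn.Linear(32, 1))
V_prev = copy.deepcopy(V)                          # value at the previous pseudo-time

def residual(net, x):
    v = net(x)
    vx = torch.autograd.grad(v.sum(), x, create_graph=True)[0]
    vxx = torch.autograd.grad(vx.sum(), x, create_graph=True)[0]
    return rho * v - f(x) - mu * vx - 0.5 * sigma**2 * vxx

opt = torch.optim.Adam(V.parameters(), lr=1e-3)
for outer in range(50):                            # pseudo-time steps
    for inner in range(100):                       # optimizer steps per pseudo-time step
        x = torch.empty(256, 1).uniform_(-1, 1).requires_grad_(True)
        # Implicit-Euler-style relaxation toward the stationary solution.
        loss = ((V(x) - V_prev(x).detach()) / dt + residual(V, x)).pow(2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    V_prev.load_state_dict(V.state_dict())         # advance the pseudo-time step
```

Each outer iteration solves an easier, well-conditioned subproblem, which is why this kind of scheme helps with nonlinear HJB equations.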

August 19, 2024 · 2 min · Research Team

A forward differential deep learning-based algorithm for solving high-dimensional nonlinear backward stochastic differential equations

ArXiv ID: 2408.05620 · Authors: Unknown

Abstract: In this work, we present a novel forward differential deep learning-based algorithm for solving high-dimensional nonlinear backward stochastic differential equations (BSDEs). Motivated by the fact that differential deep learning can efficiently approximate labels and their derivatives with respect to the inputs, we transform the BSDE problem into a differential deep learning problem. This is done by leveraging Malliavin calculus, resulting in a system of BSDEs. The unknown solution of the BSDE system is a triple of processes $(Y, Z, \Gamma)$, representing the solution, its gradient, and the Hessian matrix. The main idea of our algorithm is to discretize the integrals using the Euler-Maruyama method and approximate the unknown discrete solution triple using three deep neural networks. The parameters of these networks are then optimized by globally minimizing a differential learning loss function, which is newly defined as a weighted sum of the dynamics of the discretized system of BSDEs. Through various high-dimensional examples, we demonstrate that our proposed scheme is more accurate and computationally efficient than other contemporary forward deep learning-based methodologies. ...
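
As a rough illustration of the loss construction, the sketch below writes the one-step Euler-Maruyama dynamics of the BSDE pair and penalizes a weighted sum of their residuals, with three networks for $(Y, Z, \Gamma)$. The generator f, its Malliavin-differentiated counterpart f_z, the weights w_y and w_z, and the scalar forward dynamics are all placeholders; terminal conditions and the full time grid are omitted.

```python
# One-step sketch of a forward differential deep learning loss for a BSDE,
# assuming scalar X with dX = dW and dY = -f dt + Z dW. The driver f and
# the Malliavin-differentiated driver f_z are placeholders, not the paper's.
import torch

dim, dt, sqrt_dt = 1, 0.01, 0.1
w_y, w_z = 1.0, 0.5                                # hypothetical loss weights

def mlp(out_dim):
    return torch.nn.Sequential(torch.nn.Linear(dim, 32), torch.nn.ReLU(),
                               torch.nn.Linear(32, out_dim))

Y_net, Z_net, G_net = mlp(1), mlp(1), mlp(1)       # solution, gradient, Hessian
f   = lambda x, y, z: -y * z                        # hypothetical generator
f_z = lambda x, y, z, g: -(z * z + y * g)           # placeholder Malliavin driver

params = (list(Y_net.parameters()) + list(Z_net.parameters())
          + list(G_net.parameters()))
opt = torch.optim.Adam(params, lr=1e-3)
for step in range(1000):
    x = torch.randn(512, dim)
    dW = sqrt_dt * torch.randn(512, dim)            # Brownian increment on [t, t+dt]
    x_next = x + dW                                  # Euler-Maruyama step of X
    y, z, g = Y_net(x), Z_net(x), G_net(x)
    # Discretized dynamics of the BSDE and of its Malliavin derivative.
    y_pred = y - f(x, y, z) * dt + z * dW
    z_pred = z - f_z(x, y, z, g) * dt + g * dW
    loss = (w_y * (Y_net(x_next) - y_pred).pow(2)
            + w_z * (Z_net(x_next) - z_pred).pow(2)).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```

The weighted sum over both residuals mirrors the abstract's "weighted sum of the dynamics of the discretized system of BSDEs"; a full implementation would chain such losses over the whole time grid and anchor them at the terminal condition.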

August 10, 2024 · 2 min · Research Team