Enforcing asymptotic behavior with DNNs for approximation and regression in finance

ArXiv ID: 2411.05257

Authors: Unknown

Abstract

We propose a simple methodology to approximate functions with given asymptotic behavior by specifically constructed terms and an unconstrained deep neural network (DNN). The methodology we describe extends to various asymptotic behaviors and multiple dimensions and is easy to implement. In this work we demonstrate it for linear asymptotic behavior in one-dimensional examples. We apply it to function approximation and regression problems where we measure the approximation of function values only ("Vanilla Machine Learning", VML) or of both function and derivative values ("Differential Machine Learning", DML) on several examples. We see that enforcing given asymptotic behavior leads to better approximation and faster convergence.
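
To make the construction concrete, here is a minimal sketch of one way such a model could be assembled in PyTorch. The sigmoid blending of two prescribed linear asymptotes, the 1/(1 + x^2) damping of the network term, and the layer sizes are illustrative assumptions, not the paper's exact construction.

```python
import torch
import torch.nn as nn


class AsymptoticNet(nn.Module):
    """Unconstrained MLP plus hand-constructed terms that pin the tails.

    Hypothetical construction (assumption, not taken from the paper):
        f(x) = s(x) * (aR*x + bR) + (1 - s(x)) * (aL*x + bL) + d(x) * MLP(x)
    where s(x) = sigmoid(x) blends the left/right linear asymptotes and
    d(x) = 1 / (1 + x^2) damps the network's contribution in both tails,
    so the prescribed linear behavior dominates as |x| -> infinity.
    """

    def __init__(self, aL, bL, aR, bR, width=64):
        super().__init__()
        self.aL, self.bL, self.aR, self.bR = aL, bL, aR, bR
        self.mlp = nn.Sequential(
            nn.Linear(1, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, 1),
        )

    def forward(self, x):
        # x has shape (batch, 1)
        s = torch.sigmoid(x)                     # ~0 in the left tail, ~1 in the right tail
        asym = s * (self.aR * x + self.bR) + (1 - s) * (self.aL * x + self.bL)
        damp = 1.0 / (1.0 + x ** 2)              # suppresses the MLP term in the tails
        return asym + damp * self.mlp(x)
```

Because the correction term vanishes for large |x|, the model reproduces the prescribed asymptotes by construction while the MLP remains free to fit the function in the bulk of the domain.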

Keywords: Deep Neural Networks (DNN), Asymptotic Behavior, Function Approximation, Differential Machine Learning, Regression Analysis, General Financial Modelling

Complexity vs Empirical Score

  • Math Complexity: 8.0/10
  • Empirical Rigor: 6.5/10
  • Quadrant: Holy Grail
  • Why: The paper involves advanced mathematics (deep neural networks, splines, asymptotic analysis), which earns a high math-complexity score, and it provides concrete implementations and experimental results on financial functions such as Black-Scholes pricing, which supports a high empirical-rigor score.

```mermaid
flowchart TD
  A["Research Goal:<br>Enforce Asymptotic Behavior<br>in DNNs for Finance"] --> B["Methodology: Construct<br>Specific Terms + Unconstrained DNN"]
  B --> C["Data: 1D Financial Examples<br>Linear Asymptotes"]
  C --> D{"Computational Process"}
  D --> E["Vanilla ML: Approximate<br>Function Values"]
  D --> F["Differential ML: Approximate<br>Function & Derivative Values"]
  E & F --> G["Key Findings:<br>Better Approximation &<br>Faster Convergence"]
```
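
For the Differential ML branch in the flowchart, the training objective penalizes errors in both function values and input derivatives. A minimal sketch of such a loss, assuming a differentiable PyTorch model and precomputed derivative labels `dydx` (the names and the default equal weighting are assumptions), could look like this:

```python
import torch


def dml_loss(model, x, y, dydx, w_deriv=1.0):
    """Differential-ML-style objective: fit y = f(x) and dy/dx simultaneously."""
    x = x.requires_grad_(True)
    pred = model(x)
    # Derivative of the prediction w.r.t. the input, obtained via autograd;
    # create_graph=True keeps it differentiable so it can be trained through.
    pred_dx, = torch.autograd.grad(pred.sum(), x, create_graph=True)
    value_err = torch.mean((pred - y) ** 2)
    deriv_err = torch.mean((pred_dx - dydx) ** 2)
    return value_err + w_deriv * deriv_err
```

Dropping the derivative term (w_deriv = 0) recovers the Vanilla ML setting, where only function values enter the loss.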