Large (and Deep) Factor Models
ArXiv ID: 2402.06635
Authors: Unknown
Abstract
We open up the black box behind Deep Learning for portfolio optimization and prove that a sufficiently wide and arbitrarily deep neural network (DNN) trained to maximize the Sharpe ratio of the Stochastic Discount Factor (SDF) is equivalent to a large factor model (LFM): a linear factor pricing model that uses many non-linear characteristics. The nature of these characteristics depends on the architecture of the DNN in an explicit, tractable fashion. This makes it possible to derive end-to-end trained DNN-based SDFs in closed form for the first time. We evaluate LFMs empirically and show how various architectural choices impact SDF performance. We document the virtue of depth complexity: with enough data, the out-of-sample performance of the DNN-SDF increases with network depth, saturating at very large depths of around 100 hidden layers.
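The LFM idea can be made concrete with a toy sketch: replace the wide DNN's features with one layer of random non-linear transformations of the characteristics (a random-features stand-in for the paper's NTK-derived features), build one managed factor per feature, and solve for the SDF weights in closed form as a ridge-regularized maximum-Sharpe (tangency) portfolio. All sizes and the shrinkage parameter below are hypothetical, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated panel: T periods, N assets, K characteristics, P random features
# (all sizes are illustrative placeholders).
T, N, K, P = 240, 50, 10, 500
X = rng.standard_normal((T, N, K))                  # asset characteristics
beta = rng.standard_normal(K) / np.sqrt(K)
R = 0.01 * (X @ beta) + 0.02 * rng.standard_normal((T, N))  # asset returns

# "Large factor model": many random non-linear features of the characteristics,
# a one-layer stand-in for the features induced by a wide DNN.
W = rng.standard_normal((K, P)) / np.sqrt(K)
F = np.tanh(X @ W)                                  # T x N x P non-linear characteristics

# Managed factors: one characteristic-weighted portfolio per feature.
factors = np.einsum('tnp,tn->tp', F, R) / N         # T x P factor returns

# Closed-form SDF weights: ridge-regularized maximum-Sharpe portfolio of factors.
mu = factors.mean(axis=0)
Sigma = np.cov(factors, rowvar=False)
z = 1e-3                                            # shrinkage level (assumption)
w = np.linalg.solve(Sigma + z * np.eye(P), mu)

sdf_ret = factors @ w                               # SDF (tangency) portfolio returns
sharpe = sdf_ret.mean() / sdf_ret.std()
print(round(float(sharpe), 3))
```

The closed form is the point: once the non-linear features are fixed, the Sharpe-maximizing SDF is just a regularized mean-variance solution over the factor returns, with no iterative training required.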
Keywords: Stochastic Discount Factor (SDF), Deep Neural Networks (DNN), factor pricing model, Sharpe ratio maximization, non-linear characteristics, Equities (Asset Pricing)
Complexity vs Empirical Score
- Math Complexity: 9.0/10
- Empirical Rigor: 8.0/10
- Quadrant: Holy Grail
- Why: The paper is mathematically dense, leveraging advanced deep learning theory (Neural Tangent Kernel) and deriving analytical closed-form solutions for DNN-SDFs. Empirically, it conducts extensive out-of-sample tests using large financial datasets, analyzes architectural choices like depth, and documents performance metrics such as Sharpe ratios and alphas.
```mermaid
flowchart TD
A["Research Goal: Open the Black Box<br>Is a Deep Neural Network SDF<br>equivalent to a Factor Model?"] --> B["Methodology: Theoretical Construction"]
B --> C["Data: Asset Returns<br>and Characteristics"]
C --> D["Computation: End-to-End Training<br>Maximizing Sharpe Ratio of SDF"]
D --> E["Key Finding 1: DNN SDF =<br>Large Factor Model"]
D --> F["Key Finding 2: Depth Matters<br>Performance increases with depth<br>up to ~100 hidden layers"]
```
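The "end-to-end training" step in the flowchart can be sketched in miniature: gradient ascent on the Sharpe ratio of a portfolio of factor returns, which is the objective the DNN-SDF is trained on (here with plain linear weights rather than a deep network; all sizes and the learning rate are hypothetical).

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical factor returns: T periods, P factors with a small common drift.
T, P = 240, 20
F = 0.002 + 0.01 * rng.standard_normal((T, P))

mu = F.mean(axis=0)
Sigma = np.cov(F, rowvar=False)

# Gradient ascent on S(w) = mu'w / sqrt(w' Sigma w), a toy stand-in for
# end-to-end training of the Sharpe-ratio objective.
w = np.ones(P) / P
for _ in range(500):
    sig = np.sqrt(w @ Sigma @ w)
    grad = mu / sig - (mu @ w) * (Sigma @ w) / sig**3
    w = w + 0.01 * grad
    w /= np.linalg.norm(w)   # the Sharpe ratio is scale-free; keep w normalized

sdf_ret = F @ w
sharpe = sdf_ret.mean() / sdf_ret.std(ddof=1)
print(round(float(sharpe), 3))
```

The paper's result is that for a sufficiently wide network this iterative training is unnecessary: the trained DNN-SDF coincides with the closed-form solution of a linear factor model over the network-induced characteristics.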