A Practitioner's Guide to AI+ML in Portfolio Investing

ArXiv ID: 2509.25456 · View on arXiv
Authors: Mehmet Caner, Qingliang Fan

Abstract: In this review we provide practical guidance on some of the main machine learning tools used in portfolio weight formation. The list is not exhaustive; we cover a fraction of the methods in use, focusing on those with statistical analysis behind them. All of this research is essentially tied to the precision matrix of excess asset returns. Our main point is that these techniques should be used in conjunction with the outlined objective functions: the Machine Learning (ML) technique should be analyzed jointly with the candidate portfolio objective functions in terms of test-period Sharpe ratio or returns, and the ML method paired with the best objective function should provide the weights for portfolio formation. Empirically, we analyze five out-of-sample time periods of interest and show the performance of several ML and Artificial Intelligence (AI) methods. We find that nodewise regression with Global Minimum Variance portfolio weights delivers very good Sharpe ratios and returns across the five periods of this century that we analyze, covering three downturns and two long-term investment spans. ...
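To make the precision-matrix connection concrete, here is a minimal sketch of Global Minimum Variance (GMV) weights computed from an estimated precision matrix. This is a toy illustration, not the paper's method: the authors use a nodewise-regression precision estimator, whereas `np.linalg.inv` on a sample covariance stands in for it here, and the return panel is simulated.

```python
import numpy as np

def gmv_weights(Theta: np.ndarray) -> np.ndarray:
    """GMV weights w = Theta @ 1 / (1' Theta 1).

    Theta is an estimate of the precision (inverse covariance) matrix of
    excess returns; w minimizes portfolio variance subject to sum(w) = 1.
    """
    ones = np.ones(Theta.shape[0])
    w = Theta @ ones
    return w / (ones @ w)

rng = np.random.default_rng(0)
returns = rng.normal(size=(500, 5))       # toy T x p excess-return panel
Sigma = np.cov(returns, rowvar=False)     # sample covariance
Theta = np.linalg.inv(Sigma)              # plug-in precision estimate
w = gmv_weights(Theta)
print(w)                                  # weights sum to 1 by construction
```

By construction the GMV portfolio's in-sample variance is no larger than that of any other fully invested portfolio, e.g. equal weights, which is the property the abstract's Sharpe-ratio comparisons build on.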

September 29, 2025 · 2 min · Research Team

Large (and Deep) Factor Models

ArXiv ID: 2402.06635 · View on arXiv
Authors: Unknown

Abstract: We open up the black box behind Deep Learning for portfolio optimization and prove that a sufficiently wide and arbitrarily deep neural network (DNN) trained to maximize the Sharpe ratio of the Stochastic Discount Factor (SDF) is equivalent to a large factor model (LFM): a linear factor pricing model that uses many non-linear characteristics. The nature of these characteristics depends on the architecture of the DNN in an explicit, tractable fashion. This makes it possible to derive end-to-end trained DNN-based SDFs in closed form for the first time. We evaluate LFMs empirically and show how various architectural choices impact SDF performance. We document the virtue of depth complexity: with enough data, the out-of-sample performance of the DNN-SDF is increasing in the network depth, saturating at huge depths of around 100 hidden layers. ...
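The linear-factor-model side of the equivalence can be sketched in a few lines: for a given panel of factor returns, the combination that maximizes in-sample Sharpe ratio is the mean-variance efficient one, b ∝ Cov(F)⁻¹ E[F]. This is a hypothetical toy with simulated factors; in the paper the "factors" are many non-linear characteristics produced by the DNN, which this sketch does not reproduce.

```python
import numpy as np

rng = np.random.default_rng(1)
F = rng.normal(loc=0.02, scale=0.1, size=(1000, 4))  # toy T x K factor returns
mu = F.mean(axis=0)                                  # mean factor returns
Sigma = np.cov(F, rowvar=False)                      # factor covariance
b = np.linalg.solve(Sigma, mu)                       # max-Sharpe direction
sdf_port = F @ b                                     # SDF-like factor portfolio
sharpe = sdf_port.mean() / sdf_port.std(ddof=1)      # in-sample Sharpe ratio
```

In-sample, `sharpe` is at least as large as the Sharpe ratio of any single factor, since each factor is itself one admissible linear combination.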

January 20, 2024 · 2 min · Research Team