false

'P' Versus 'Q': Differences and Commonalities between the Two Areas of QuantitativeFinance

‘P’ Versus ‘Q’: Differences and Commonalities between the Two Areas of QuantitativeFinance ArXiv ID: ssrn-1717163 “View on arXiv” Authors: Unknown Abstract There exist two separate branches of finance that require advanced quantitative techniques: the “Q” area of derivatives pricing, whose task is to &quo Keywords: Quantitative Finance, Derivatives Pricing, Stochastic Calculus, Fixed Income, Derivatives Complexity vs Empirical Score Math Complexity: 8.5/10 Empirical Rigor: 1.0/10 Quadrant: Lab Rats Why: The paper delves deep into stochastic calculus, PDEs, and advanced stochastic processes (e.g., Ornstein-Uhlenbeck, Heston model), indicating high mathematical complexity. However, it is purely theoretical/conceptual with no data, code, backtests, or implementation details, resulting in very low empirical rigor. flowchart TD A["Research Question<br>Differences & Commonalities<br>between P & Q Finance"] --> B["Methodology<br>Literature Review & Comparative Analysis"] B --> C["Key Inputs<br>Stochastic Calculus Models &<br>Derivatives Pricing Frameworks"] C --> D{"Computational Process<br>Analysis of Methodologies"} D --> E["P Area<br>Pricing & Risk Management<br>(Stochastic Control, Calibration)"] D --> F["Q Area<br>Derivatives Pricing & Hedging<br>(Risk-Neutral Valuation)"] E & F --> G["Outcomes<br>Unified Quantitative Framework<br>Distinct Methodologies &<br>Common Mathematical Foundations"]

January 25, 2026 · 1 min · Research Team

The 7 Reasons Most Machine Learning Funds Fail (Presentation Slides)

The 7 Reasons Most Machine Learning Funds Fail (Presentation Slides) ArXiv ID: ssrn-3031282 “View on arXiv” Authors: Unknown Abstract The rate of failure in quantitative finance is high, and particularly so in financial machine learning. The few managers who succeed amass a large amount of ass Keywords: Financial Machine Learning, Quantitative Finance, Asset Management, Model Validation, Equities Complexity vs Empirical Score Math Complexity: 2.5/10 Empirical Rigor: 3.0/10 Quadrant: Philosophers Why: The paper discusses high-level conceptual issues in financial ML (like stationarity vs. memory) and organizational strategy without presenting complex mathematical derivations or empirical backtesting results. flowchart TD G["Research Goal: Why do ML funds fail?"] --> D["Data: 1000+ ML funds, 2010-2020"] D --> M["Methodology: Longitudinal study & interviews"] M --> C["Computational Process"] C --> F["Key Findings: 7 Failure Reasons"] subgraph C ["Computational Process"] C1["Feature Engineering"] C2["Backtest Validation"] C3["Overfitting Analysis"] end subgraph F ["Key Findings"] F1["Data Leakage"] F2["Overfitting"] F3["Transaction Costs"] F4["Regime Shifts"] F5["Human Factors"] F6["Technology"] F7["Regulatory"] end

January 25, 2026 · 1 min · Research Team

Convergence of the generalization error for deep gradient flow methods for PDEs

Convergence of the generalization error for deep gradient flow methods for PDEs ArXiv ID: 2512.25017 “View on arXiv” Authors: Chenguang Liu, Antonis Papapantoleon, Jasper Rou Abstract The aim of this article is to provide a firm mathematical foundation for the application of deep gradient flow methods (DGFMs) for the solution of (high-dimensional) partial differential equations (PDEs). We decompose the generalization error of DGFMs into an approximation and a training error. We first show that the solution of PDEs that satisfy reasonable and verifiable assumptions can be approximated by neural networks, thus the approximation error tends to zero as the number of neurons tends to infinity. Then, we derive the gradient flow that the training process follows in the ``wide network limit’’ and analyze the limit of this flow as the training time tends to infinity. These results combined show that the generalization error of DGFMs tends to zero as the number of neurons and the training time tend to infinity. ...

December 31, 2025 · 2 min · Research Team

Generative AI-enhanced Sector-based Investment Portfolio Construction

Generative AI-enhanced Sector-based Investment Portfolio Construction ArXiv ID: 2512.24526 “View on arXiv” Authors: Alina Voronina, Oleksandr Romanko, Ruiwen Cao, Roy H. Kwon, Rafael Mendoza-Arriaga Abstract This paper investigates how Large Language Models (LLMs) from leading providers (OpenAI, Google, Anthropic, DeepSeek, and xAI) can be applied to quantitative sector-based portfolio construction. We use LLMs to identify investable universes of stocks within S&P 500 sector indices and evaluate how their selections perform when combined with classical portfolio optimization methods. Each model was prompted to select and weight 20 stocks per sector, and the resulting portfolios were compared with their respective sector indices across two distinct out-of-sample periods: a stable market phase (January-March 2025) and a volatile phase (April-June 2025). Our results reveal a strong temporal dependence in LLM portfolio performance. During stable market conditions, LLM-weighted portfolios frequently outperformed sector indices on both cumulative return and risk-adjusted (Sharpe ratio) measures. However, during the volatile period, many LLM portfolios underperformed, suggesting that current models may struggle to adapt to regime shifts or high-volatility environments underrepresented in their training data. Importantly, when LLM-based stock selection is combined with traditional optimization techniques, portfolio outcomes improve in both performance and consistency. This study contributes one of the first multi-model, cross-provider evaluations of generative AI algorithms in investment management. It highlights that while LLMs can effectively complement quantitative finance by enhancing stock selection and interpretability, their reliability remains market-dependent. The findings underscore the potential of hybrid AI-quantitative frameworks, integrating LLM reasoning with established optimization techniques, to produce more robust and adaptive investment strategies. ...

December 31, 2025 · 2 min · Research Team

Quantitative Financial Modeling for Sri Lankan Markets: Approach Combining NLP, Clustering and Time-Series Forecasting

Quantitative Financial Modeling for Sri Lankan Markets: Approach Combining NLP, Clustering and Time-Series Forecasting ArXiv ID: 2512.20216 “View on arXiv” Authors: Linuk Perera Abstract This research introduces a novel quantitative methodology tailored for quantitative finance applications, enabling banks, stockbrokers, and investors to predict economic regimes and market signals in emerging markets, specifically Sri Lankan stock indices (S&P SL20 and ASPI) by integrating Environmental, Social, and Governance (ESG) sentiment analysis with macroeconomic indicators and advanced time-series forecasting. Designed to leverage quantitative techniques for enhanced risk assessment, portfolio optimization, and trading strategies in volatile environments, the architecture employs FinBERT, a transformer-based NLP model, to extract sentiment from ESG texts, followed by unsupervised clustering (UMAP/HDBSCAN) to identify 5 latent ESG regimes, validated via PCA. These regimes are mapped to economic conditions using a dense neural network and gradient boosting classifier, achieving 84.04% training and 82.0% validation accuracy. Concurrently, time-series models (SRNN, MLP, LSTM, GRU) forecast daily closing prices, with GRU attaining an R-squared of 0.801 and LSTM delivering 52.78% directional accuracy on intraday data. A strong correlation between S&P SL20 and S&P 500, observed through moving average and volatility trend plots, further bolsters forecasting precision. A rule-based fusion logic merges ESG and time-series outputs for final market signals. By addressing literature gaps that overlook emerging markets and holistic integration, this quant-driven framework combines global correlations and local sentiment analysis to offer scalable, accurate tools for quantitative finance professionals navigating complex markets like Sri Lanka. ...

December 23, 2025 · 2 min · Research Team

A Hybrid Architecture for Options Wheel Strategy Decisions: LLM-Generated Bayesian Networks for Transparent Trading

A Hybrid Architecture for Options Wheel Strategy Decisions: LLM-Generated Bayesian Networks for Transparent Trading ArXiv ID: 2512.01123 “View on arXiv” Authors: Xiaoting Kuang, Boken Lin Abstract Large Language Models (LLMs) excel at understanding context and qualitative nuances but struggle with the rigorous and transparent reasoning required in high-stakes quantitative domains such as financial trading. We propose a model-first hybrid architecture for the options “wheel” strategy that combines the strengths of LLMs with the robustness of a Bayesian Network. Rather than using the LLM as a black-box decision-maker, we employ it as an intelligent model builder. For each trade decision, the LLM constructs a context-specific Bayesian network by interpreting current market conditions, including prices, volatility, trends, and news, and hypothesizing relationships among key variables. The LLM also selects relevant historical data from an 18.75-year, 8,919-trade dataset to populate the network’s conditional probability tables. This selection focuses on scenarios analogous to the present context. The instantiated Bayesian network then performs transparent probabilistic inference, producing explicit probability distributions and risk metrics to support decision-making. A feedback loop enables the LLM to analyze trade outcomes and iteratively refine subsequent network structures and data selection, learning from both successes and failures. Empirically, our hybrid system demonstrates effective performance on the wheel strategy. Over nearly 19 years of out-of-sample testing, it achieves a 15.3% annualized return with significantly superior risk-adjusted performance (Sharpe ratio 1.08 versus 0.62 for market benchmarks) and dramatically lower drawdown (-8.2% versus -60%) while maintaining a 0% assignment rate through strategic option rolling. Crucially, each trade decision is fully explainable, involving on average 27 recorded decision factors (e.g., volatility level, option premium, risk indicators, market context). ...

November 30, 2025 · 3 min · Research Team

Novel Risk Measures for Portfolio Optimization Using Equal-Correlation Portfolio Strategy

Novel Risk Measures for Portfolio Optimization Using Equal-Correlation Portfolio Strategy ArXiv ID: 2508.03704 “View on arXiv” Authors: Biswarup Chakraborty Abstract Portfolio optimization has long been dominated by covariance-based strategies, such as the Markowitz Mean-Variance framework. However, these approaches often fail to ensure a balanced risk structure across assets, leading to concentration in a few securities. In this paper, we introduce novel risk measures grounded in the equal-correlation portfolio strategy, aiming to construct portfolios where each asset maintains an equal correlation with the overall portfolio return. We formulate a mathematical optimization framework that explicitly controls portfolio-wide correlation while preserving desirable risk-return trade-offs. The proposed models are empirically validated using historical stock market data. Our findings show that portfolios constructed via this approach demonstrate superior risk diversification and more stable returns under diverse market conditions. This methodology offers a compelling alternative to conventional diversification techniques and holds practical relevance for institutional investors, asset managers, and quantitative trading strategies. ...

July 20, 2025 · 2 min · Research Team

A Novel Loss Function for Deep Learning Based Daily Stock Trading System

A Novel Loss Function for Deep Learning Based Daily Stock Trading System ArXiv ID: 2502.17493 “View on arXiv” Authors: Unknown Abstract Making consistently profitable financial decisions in a continuously evolving and volatile stock market has always been a difficult task. Professionals from different disciplines have developed foundational theories to anticipate price movement and evaluate securities such as the famed Capital Asset Pricing Model (CAPM). In recent years, the role of artificial intelligence (AI) in asset pricing has been growing. Although the black-box nature of deep learning models lacks interpretability, they have continued to solidify their position in the financial industry. We aim to further enhance AI’s potential and utility by introducing a return-weighted loss function that will drive top growth while providing the ML models a limited amount of information. Using only publicly accessible stock data (open/close/high/low, trading volume, sector information) and several technical indicators constructed from them, we propose an efficient daily trading system that detects top growth opportunities. Our best models achieve 61.73% annual return on daily rebalancing with an annualized Sharpe Ratio of 1.18 over 1340 testing days from 2019 to 2024, and 37.61% annual return with an annualized Sharpe Ratio of 0.97 over 1360 testing days from 2005 to 2010. The main drivers for success, especially independent of any domain knowledge, are the novel return-weighted loss function, the integration of categorical and continuous data, and the ML model architecture. We also demonstrate the superiority of our novel loss function over traditional loss functions via several performance metrics and statistical evidence. ...

February 20, 2025 · 2 min · Research Team

Extracting Alpha from Financial Analyst Networks

Extracting Alpha from Financial Analyst Networks ArXiv ID: 2410.20597 “View on arXiv” Authors: Unknown Abstract We investigate the effectiveness of a momentum trading signal based on the coverage network of financial analysts. This signal builds on the key information-brokerage role financial sell-side analysts play in modern stock markets. The baskets of stocks covered by each analyst can be used to construct a network between firms whose edge weights represent the number of analysts jointly covering both firms. Although the link between financial analysts coverage and co-movement of firms’ stock prices has been investigated in the literature, little effort has been made to systematically learn the most effective combination of signals from firms covered jointly by analysts in order to benefit from any spillover effect. To fill this gap, we build a trading strategy which leverages the analyst coverage network using a graph attention network. More specifically, our model learns to aggregate information from individual firm features and signals from neighbouring firms in a node-level forecasting task. We develop a portfolio based on those predictions which we demonstrate to exhibit an annualized returns of 29.44% and a Sharpe ratio of 4.06 substantially outperforming market baselines and existing graph machine learning based frameworks. We further investigate the performance and robustness of this strategy through extensive empirical analysis. Our paper represents one of the first attempts in using graph machine learning to extract actionable knowledge from the analyst coverage network for practical financial applications. ...

October 27, 2024 · 2 min · Research Team

A Financial Time Series Denoiser Based on Diffusion Model

A Financial Time Series Denoiser Based on Diffusion Model ArXiv ID: 2409.02138 “View on arXiv” Authors: Unknown Abstract Financial time series often exhibit low signal-to-noise ratio, posing significant challenges for accurate data interpretation and prediction and ultimately decision making. Generative models have gained attention as powerful tools for simulating and predicting intricate data patterns, with the diffusion model emerging as a particularly effective method. This paper introduces a novel approach utilizing the diffusion model as a denoiser for financial time series in order to improve data predictability and trading performance. By leveraging the forward and reverse processes of the conditional diffusion model to add and remove noise progressively, we reconstruct original data from noisy inputs. Our extensive experiments demonstrate that diffusion model-based denoised time series significantly enhance the performance on downstream future return classification tasks. Moreover, trading signals derived from the denoised data yield more profitable trades with fewer transactions, thereby minimizing transaction costs and increasing overall trading efficiency. Finally, we show that by using classifiers trained on denoised time series, we can recognize the noising state of the market and obtain excess return. ...

September 2, 2024 · 2 min · Research Team