
Portfolio optimization in incomplete markets and price constraints determined by maximum entropy in the mean

Portfolio optimization in incomplete markets and price constraints determined by maximum entropy in the mean ArXiv ID: 2507.07053 “View on arXiv” Authors: Argimiro Arratia, Henryk Gzyl Abstract A solution to a portfolio optimization problem is always conditioned by constraints on the initial capital and the prices of the available market assets. If a risk neutral measure is known, then the price of each asset is the discounted expected value of the asset’s price under this measure. But if the market is incomplete, the risk neutral measure is not unique, and there is a range of possible prices for each asset, which can be identified with bid-ask ranges. In this paper we present an effective method to determine the current prices of a collection of assets in incomplete markets, such that these prices comply with the cost constraints of a portfolio optimization problem. Our workhorse is the method of maximum entropy in the mean, used to adjust a distortion function from bid-ask market data. This distortion function plays the role of a risk neutral measure, which is used to price the assets, and the distorted probability that it determines reproduces bid-ask market values. We carry out numerical examples to study the effect on portfolio returns of computing the prices of the assets comprising the portfolio with the proposed methodology. ...
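The bid-ask range the abstract describes can be illustrated with a minimal sketch: in an incomplete market the risk-neutral measure is not pinned down, so discounting an asset's payoff under every admissible measure yields an interval of prices rather than a point. The three-state market, the one-parameter family `q(t)`, and all numbers below are hypothetical, and the sketch shows only the price ambiguity, not the paper's maximum-entropy selection of a distortion function.

```python
import numpy as np

# Hypothetical 3-state one-period market: the traded benchmark pins down
# part of the measure, leaving a one-parameter family q(t) of admissible
# risk-neutral measures (illustrative, not the paper's construction).
r = 0.01                                   # assumed one-period risk-free rate
payoff = np.array([90.0, 100.0, 115.0])    # payoff of the asset to price

def q(t):
    # Each q(t) sums to 1 and is non-negative for t in [0, 0.1].
    return np.array([0.3 - t, 0.4 + 2 * t, 0.3 - t])

# Discounted expectation under every admissible measure gives a price range,
# identifiable with a bid-ask interval.
ts = np.linspace(0.0, 0.1, 101)
prices = [(q(t) @ payoff) / (1 + r) for t in ts]
bid, ask = min(prices), max(prices)
```

Under these toy numbers the interval collapses only if the family of measures does; the paper's contribution is choosing one distortion inside such a range so that the resulting prices also satisfy the portfolio's cost constraints.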

July 3, 2025 · 2 min · Research Team

Arbitrage with bounded Liquidity

Arbitrage with bounded Liquidity ArXiv ID: 2507.02027 “View on arXiv” Authors: Christoph Schlegel, Quintus Kilbourn Abstract We derive the arbitrage gains or, equivalently, the Loss Versus Rebalancing (LVR) for arbitrage between two imperfectly liquid markets, extending prior work that assumes the existence of an infinitely liquid reference market. Our result highlights that the LVR depends on the relative liquidity and relative trading volume of the two markets between which arbitrage gains are extracted. Our model assumes that trading costs on at least one of the markets are quadratic. This assumption holds well in practice, with the exception of highly liquid major pairs on centralized exchanges, for which we discuss extensions to other cost functions. ...
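The quadratic-cost assumption has a clean consequence worth making explicit: if moving size x against a price gap delta costs (lam/2)·x², the arbitrageur's profit delta·x − (lam/2)·x² is maximized at x* = delta/lam, giving a gain of delta²/(2·lam). The sketch below checks that closed form on illustrative numbers; it is the textbook single-venue version, not the paper's two-market LVR formula.

```python
# Arbitrage gain against a quadratic trading cost (lam/2) * x**2:
# profit(x) = delta * x - (lam / 2) * x**2, maximized at x* = delta / lam,
# so the optimal gain is delta**2 / (2 * lam).  Numbers are illustrative.
def arb_gain(delta, lam):
    x_star = delta / lam
    return delta * x_star - 0.5 * lam * x_star ** 2

gain = arb_gain(delta=2.0, lam=0.5)   # price gap 2.0, cost curvature 0.5
```

Note how the gain scales inversely with the cost curvature lam, which is the single-market shadow of the paper's point that LVR depends on the relative liquidity of the two venues.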

July 2, 2025 · 2 min · Research Team

End-to-End Large Portfolio Optimization for Variance Minimization with Neural Networks through Covariance Cleaning

End-to-End Large Portfolio Optimization for Variance Minimization with Neural Networks through Covariance Cleaning ArXiv ID: 2507.01918 “View on arXiv” Authors: Christian Bongiorno, Efstratios Manolakis, Rosario Nunzio Mantegna Abstract We develop a rotation-invariant neural network that provides the global minimum-variance portfolio by jointly learning how to lag-transform historical returns and how to regularise both the eigenvalues and the marginal volatilities of large equity covariance matrices. This explicit mathematical mapping offers clear interpretability of each module’s role, so the model cannot be regarded as a pure black-box. The architecture mirrors the analytical form of the global minimum-variance solution yet remains agnostic to dimension, so a single model can be calibrated on panels of a few hundred stocks and applied, without retraining, to one thousand US equities, a cross-sectional jump that demonstrates robust out-of-sample generalisation. The loss function is the future realized minimum portfolio variance and is optimized end-to-end on real daily returns. In out-of-sample tests from January 2000 to December 2024 the estimator delivers systematically lower realised volatility, smaller maximum drawdowns, and higher Sharpe ratios than the best analytical competitors, including state-of-the-art non-linear shrinkage. Furthermore, although the model is trained end-to-end to produce an unconstrained (long-short) minimum-variance portfolio, we show that its learned covariance representation can be used in general optimizers under long-only constraints with virtually no loss in its performance advantage over competing estimators. These gains persist when the strategy is executed under a highly realistic implementation framework that models market orders at the auctions, empirical slippage, exchange fees, and financing charges for leverage, and they remain stable during episodes of acute market stress. ...
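The analytical form the architecture mirrors is the classical global minimum-variance solution w = Σ⁻¹1 / (1ᵀΣ⁻¹1). A minimal sketch on a toy 3×3 covariance matrix (hypothetical numbers, not the paper's cleaned estimator):

```python
import numpy as np

# Closed-form global minimum-variance weights: w = Σ⁻¹1 / (1ᵀΣ⁻¹1).
# The paper's network learns how to clean Σ before this mapping is applied.
def gmv_weights(cov):
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)   # Σ⁻¹1 without explicit inversion
    return w / w.sum()               # normalize so weights sum to 1

cov = np.array([[0.04, 0.01, 0.00],    # toy annualized covariance matrix
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])
w = gmv_weights(cov)
```

By construction these weights attain the lowest portfolio variance among all fully invested portfolios, so, for instance, they never do worse than equal weighting under the same covariance; in practice the quality of the result hinges entirely on how well Σ is estimated and regularised, which is the problem the paper attacks.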

July 2, 2025 · 2 min · Research Team

Machine Learning Based Stress Testing Framework for Indian Financial Market Portfolios

Machine Learning Based Stress Testing Framework for Indian Financial Market Portfolios ArXiv ID: 2507.02011 “View on arXiv” Authors: Vidya Sagar G, Shifat Ali, Siddhartha P. Chakrabarty Abstract This paper presents a machine learning driven framework for sectoral stress testing in the Indian financial market, focusing on financial services, information technology, energy, consumer goods, and pharmaceuticals. Initially, we address the limitations observed in conventional stress testing through dimensionality reduction and latent factor modeling via Principal Component Analysis and Autoencoders. Building on this, we extend the methodology using Variational Autoencoders, which introduce a probabilistic structure to the latent space. This enables Monte Carlo-based scenario generation, allowing for more nuanced, distribution-aware simulation of stressed market conditions. The proposed framework captures complex non-linear dependencies and supports risk estimation through Value-at-Risk and Expected Shortfall. Together, these pipelines demonstrate the potential of Machine Learning approaches to improve the flexibility, robustness, and realism of financial stress testing. ...
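Once stressed scenarios have been simulated, the risk-estimation step reduces to computing Value-at-Risk and Expected Shortfall on the scenario losses. A minimal sketch, with Gaussian losses standing in for VAE-generated scenario P&L (assumed numbers, not the paper's data):

```python
import numpy as np

# VaR and Expected Shortfall from Monte Carlo scenario losses.
# Losses are positive numbers; alpha = 0.95 gives the 95% measures.
def var_es(losses, alpha=0.95):
    var = np.quantile(losses, alpha)          # loss exceeded with prob 1-alpha
    es = losses[losses >= var].mean()         # average loss in the tail
    return var, es

rng = np.random.default_rng(0)
losses = rng.normal(0.0, 1.0, 100_000)        # stand-in for scenario losses
var95, es95 = var_es(losses)
```

ES is always at least as large as VaR at the same level because it averages the losses beyond the VaR threshold; for a standard normal the 95% values are roughly 1.645 and 2.06, which the simulation recovers.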

July 2, 2025 · 2 min · Research Team

NGAT: A Node-level Graph Attention Network for Long-term Stock Prediction

NGAT: A Node-level Graph Attention Network for Long-term Stock Prediction ArXiv ID: 2507.02018 “View on arXiv” Authors: Yingjie Niu, Mingchuan Zhao, Valerio Poti, Ruihai Dong Abstract Graph representation learning methods have been widely adopted in financial applications to enhance company representations by leveraging inter-firm relationships. However, current approaches face three key challenges: (1) The advantages of relational information are obscured by limitations in downstream task designs; (2) Existing graph models specifically designed for stock prediction often suffer from excessive complexity and poor generalization; (3) Experience-based construction of corporate relationship graphs lacks effective comparison of different graph structures. To address these limitations, we propose a long-term stock prediction task and develop a Node-level Graph Attention Network (NGAT) specifically tailored for corporate relationship graphs. Furthermore, we experimentally demonstrate the limitations of existing graph comparison methods based on model downstream task performance. Experimental results across two datasets consistently demonstrate the effectiveness of our proposed task and model. The project is publicly available on GitHub to encourage reproducibility and future research. ...
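The node-level attention the model builds on can be sketched in its generic single-head form: score each neighbour of a node, softmax the scores into attention weights, then aggregate neighbour features. This is the standard graph-attention mechanism, not the paper's exact NGAT layer; the scoring vector `a` and the toy features are assumptions.

```python
import numpy as np

# Generic single-head graph attention update for one node:
# score each (node, neighbour) pair, softmax, aggregate neighbour features.
def attend(node, neighbours, a):
    # a scores the concatenated (node, neighbour) feature vector
    scores = np.array([a @ np.concatenate([node, nb]) for nb in neighbours])
    scores = np.exp(scores - scores.max())     # numerically stable softmax
    alpha = scores / scores.sum()              # attention weights, sum to 1
    agg = sum(w * nb for w, nb in zip(alpha, neighbours))
    return alpha, agg

node = np.array([1.0, 0.0])                    # toy firm embedding
nbs = [np.array([0.5, 0.5]), np.array([0.0, 1.0])]   # related firms
alpha, h = attend(node, nbs, a=np.ones(4))
```

With the uniform scoring vector both neighbours receive equal weight; a learned `a` would instead emphasize the inter-firm relationships most predictive for the downstream task.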

July 2, 2025 · 2 min · Research Team

Multifractality in Bitcoin Realised Volatility: Implications for Rough Volatility Modelling

Multifractality in Bitcoin Realised Volatility: Implications for Rough Volatility Modelling ArXiv ID: 2507.00575 “View on arXiv” Authors: Milan Pontiggia Abstract We assess the applicability of rough volatility models to Bitcoin realized volatility using the normalised p-variation framework of Cont and Das (2024). Applying this model-free estimator to high-frequency Bitcoin data from 2017 to 2024 across multiple sampling resolutions, we find that the normalised statistic remains strictly negative, precluding the estimation of a valid roughness index. Stationarity tests and robustness checks reveal no significant evidence of non-stationarity or structural breaks as explanatory factors. Instead, convergent evidence from three complementary diagnostics, namely Multifractal Detrended Fluctuation Analysis, log-log moment scaling, and wavelet leaders, reveals a multifractal structure in Bitcoin volatility. This behaviour violates the homogeneity assumptions underlying rough volatility estimation and accounts for the estimator’s systematic failure. These findings suggest that while rough volatility models perform well in traditional markets, they are structurally misaligned with the empirical features of Bitcoin volatility. ...
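One of the diagnostics cited, log-log moment scaling, rests on the relation E|X(t+Δ) − X(t)|^q ~ Δ^{qH} for a self-similar path, so H can be read off from how increment moments grow with the lag. A minimal sketch, using Brownian motion (H = 0.5) as a sanity check rather than Bitcoin data, and estimating H from two lags instead of a full log-log regression:

```python
import numpy as np

# Moment scaling: for self-similar X, E|X(t+lag) - X(t)|^q ~ lag**(q*H).
# With q = 2 and two lags, H = log(m(L)/m(1)) / (2 * log(L)).
rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(0.0, 1.0, 200_000))   # Brownian path, H = 0.5

def moment(lag, q=2):
    inc = x[lag:] - x[:-lag]                   # overlapping lag-increments
    return np.mean(np.abs(inc) ** q)

h_est = np.log(moment(16) / moment(1)) / (2 * np.log(16))
```

For a monofractal path this estimate is stable across q and lag choices; the paper's finding is that Bitcoin volatility violates exactly that stability, with the scaling exponent varying across moments in the multifractal way that breaks rough-volatility estimators.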

July 1, 2025 · 2 min · Research Team

Optimization Method of Multi-factor Investment Model Driven by Deep Learning for Risk Control

Optimization Method of Multi-factor Investment Model Driven by Deep Learning for Risk Control ArXiv ID: 2507.00332 “View on arXiv” Authors: Ruisi Li, Xinhui Gu Abstract We propose a deep-learning-driven optimization method for multi-factor investment models aimed at risk control. By constructing a deep learning model based on Long Short-Term Memory (LSTM) networks and combining it with a multi-factor investment model, we optimize factor selection and weight determination to enhance the model’s adaptability and robustness to market changes. Empirical analysis shows that the LSTM model is significantly superior to the benchmark model on risk-control indicators such as maximum drawdown, Sharpe ratio, and value at risk (VaR), and shows strong adaptability and robustness in different market environments. Furthermore, the model is applied to an actual portfolio to optimize the asset allocation, which significantly improves the performance of the portfolio, provides investors with a more scientific and accurate basis for investment decision-making, and effectively balances returns and risks. ...
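The three risk-control indicators the paper compares on can be computed directly from a return series. A minimal sketch on a toy sequence of daily returns (the numbers are illustrative, not from the paper):

```python
import numpy as np

# Maximum drawdown: largest peak-to-trough decline of the wealth curve.
def max_drawdown(returns):
    wealth = np.cumprod(1 + returns)
    peak = np.maximum.accumulate(wealth)
    return np.max(1 - wealth / peak)

# Annualized Sharpe ratio (risk-free rate taken as zero for simplicity).
def sharpe(returns, periods=252):
    return np.sqrt(periods) * returns.mean() / returns.std()

# Historical VaR: loss threshold exceeded with probability 1 - alpha.
def hist_var(returns, alpha=0.95):
    return -np.quantile(returns, 1 - alpha)

rets = np.array([0.01, -0.02, 0.015, -0.005, 0.02, -0.01])
mdd = max_drawdown(rets)
```

On this toy series the worst drawdown is the single −2% day following the first peak; in the paper these same metrics are what the LSTM-optimized factor weights are judged against.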

July 1, 2025 · 2 min · Research Team

Finding good bets in the lottery, and why you shouldn't take them

Finding good bets in the lottery, and why you shouldn’t take them ArXiv ID: 2507.01993 “View on arXiv” Authors: Aaron Abrams, Skip Garibaldi Abstract We give a criterion under which the expected return on a ticket for certain large lotteries is positive. In this circumstance, we use elementary portfolio analysis to show that an optimal investment strategy includes a very small allocation for such tickets. Keywords: lottery ticket, portfolio analysis, expected return, investment strategy, risk allocation, lottery tickets ...
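The criterion for a "good bet" reduces, in its simplest form, to the expected return per ticket. A minimal sketch with a single jackpot and jackpot sharing, taxes, and smaller prizes ignored; all the odds and dollar figures below are illustrative, not the paper's:

```python
# Simplified expected return per lottery ticket:
# EV = p_win * jackpot / price - 1 (single prize, no sharing, no taxes).
def expected_return(jackpot, p_win, price):
    return p_win * jackpot / price - 1.0

# A $1 ticket at 1-in-100M odds breaks even at a $100M jackpot, so:
ev_small = expected_return(5e7, 1e-8, 1.0)    # $50M jackpot: negative EV
ev_large = expected_return(2e8, 1e-8, 1.0)    # $200M jackpot: positive EV
```

Even when the expected return is positive, the paper's portfolio-analysis point is that the ticket's enormous variance justifies only a very small allocation, which is why the title says you still shouldn't take the bet in any size that matters.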

June 30, 2025 · 1 min · Research Team

Overparametrized models with posterior drift

Overparametrized models with posterior drift ArXiv ID: 2506.23619 “View on arXiv” Authors: Guillaume Coqueret, Martial Laguerre Abstract This paper investigates the impact of posterior drift on out-of-sample forecasting accuracy in overparametrized machine learning models. We document the loss in performance when the loadings of the data generating process change between the training and testing samples. This matters crucially in settings in which regime changes are likely to occur, for instance, in financial markets. Applied to equity premium forecasting, our results underline the sensitivity of a market timing strategy to sub-periods and to the bandwidth parameters that control the complexity of the model. For the average investor, we find that focusing on holding periods of 15 years can generate very heterogeneous returns, especially for small bandwidths. Large bandwidths yield much more consistent outcomes, but are far less appealing from a risk-adjusted return standpoint. All in all, our findings tend to recommend caution when resorting to large linear models for stock market predictions. ...
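The core phenomenon, loadings of the data-generating process changing between training and testing, can be reproduced in a few lines: fit a ridge regression on one set of loadings and evaluate it on data generated with drifted loadings. This is a purely synthetic illustration of posterior drift, not the paper's random-features setup or its equity-premium application.

```python
import numpy as np

# Posterior drift demo: the model is fit under loadings beta_tr but tested
# on data generated by drifted loadings beta_te.
rng = np.random.default_rng(2)
X_tr = rng.normal(size=(500, 20))
X_te = rng.normal(size=(500, 20))
beta_tr = rng.normal(size=20)
beta_te = beta_tr + rng.normal(scale=1.0, size=20)   # drifted loadings
y_tr = X_tr @ beta_tr + rng.normal(size=500)

def ridge(X, y, lam=1.0):
    # Closed-form ridge solution (X'X + lam I)^{-1} X'y
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

b = ridge(X_tr, y_tr)
mse_no_drift = np.mean((X_te @ beta_tr + rng.normal(size=500) - X_te @ b) ** 2)
mse_drift = np.mean((X_te @ beta_te + rng.normal(size=500) - X_te @ b) ** 2)
```

The drifted test error is far larger than the drift-free one even though the model itself is unchanged, which is the mechanism behind the sub-period sensitivity the paper documents for market-timing strategies.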

June 30, 2025 · 2 min · Research Team

FinAI-BERT: A Transformer-Based Model for Sentence-Level Detection of AI Disclosures in Financial Reports

FinAI-BERT: A Transformer-Based Model for Sentence-Level Detection of AI Disclosures in Financial Reports ArXiv ID: 2507.01991 “View on arXiv” Authors: Muhammad Bilal Zafar Abstract The proliferation of artificial intelligence (AI) in financial services has prompted growing demand for tools that can systematically detect AI-related disclosures in corporate filings. While prior approaches often rely on keyword expansion or document-level classification, they fall short in granularity, interpretability, and robustness. This study introduces FinAI-BERT, a domain-adapted transformer-based language model designed to classify AI-related content at the sentence level within financial texts. The model was fine-tuned on a manually curated and balanced dataset of 1,586 sentences drawn from 669 annual reports of U.S. banks (2015 to 2023). FinAI-BERT achieved near-perfect classification performance (accuracy of 99.37 percent, F1 score of 0.993), outperforming traditional baselines such as Logistic Regression, Naive Bayes, Random Forest, and XGBoost. Interpretability was ensured through SHAP-based token attribution, while bias analysis and robustness checks confirmed the model’s stability across sentence lengths, adversarial inputs, and temporal samples. Theoretically, the study advances financial NLP by operationalizing fine-grained, theme-specific classification using transformer architectures. Practically, it offers a scalable, transparent solution for analysts, regulators, and scholars seeking to monitor the diffusion and framing of AI across financial institutions. ...
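A sense of the baselines FinAI-BERT is compared against can be had from a minimal bag-of-words logistic regression on toy AI-disclosure sentences. The sentences, labels, and training settings below are invented for illustration; the paper's dataset is 1,586 manually curated sentences from bank annual reports.

```python
import numpy as np

# Toy sentence-level AI-disclosure classifier: bag-of-words features plus
# logistic regression trained by plain gradient descent (a stand-in for
# the Logistic Regression baseline the paper benchmarks against).
sentences = [
    "we deploy machine learning models for credit scoring",
    "artificial intelligence supports our fraud detection",
    "the branch network expanded in the midwest",
    "dividends were paid quarterly to shareholders",
]
labels = np.array([1, 1, 0, 0])   # 1 = AI-related disclosure

vocab = sorted({w for s in sentences for w in s.split()})
X = np.array([[s.split().count(w) for w in vocab] for s in sentences], float)

w = np.zeros(X.shape[1])
for _ in range(500):                           # gradient descent on log-loss
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.5 * X.T @ (p - labels) / len(labels)

preds = (1.0 / (1.0 + np.exp(-X @ w)) > 0.5).astype(int)
```

A linear model like this memorizes surface keywords, which is exactly the granularity and robustness gap (paraphrase, adversarial inputs, temporal shift) that motivates the fine-tuned transformer in the paper.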

June 29, 2025 · 2 min · Research Team