
Enhanced fill probability estimates in institutional algorithmic bond trading using statistical learning algorithms with quantum computers

Enhanced fill probability estimates in institutional algorithmic bond trading using statistical learning algorithms with quantum computers ArXiv ID: 2509.17715 · View on arXiv · Authors: Axel Ciceri, Austin Cottrell, Joshua Freeland, Daniel Fry, Hirotoshi Hirai, Philip Intallura, Hwajung Kang, Chee-Kong Lee, Abhijit Mitra, Kentaro Ohno, Das Pemmaraju, Manuel Proissl, Brian Quanz, Del Rajan, Noriaki Shimada, Kavitha Yograj Abstract The estimation of fill probabilities for trade orders represents a key ingredient in the optimization of algorithmic trading strategies. It is constrained by the complex dynamics of financial markets with inherent uncertainties, and by the limitations of models aiming to learn from multivariate financial time series that often exhibit stochastic properties with hidden temporal patterns. In this paper, we focus on algorithmic responses to trade inquiries in the corporate bond market and investigate fill probability estimation errors of common machine learning models when given real production-scale intraday trade event data, transformed by a quantum algorithm running on IBM Heron processors, as well as on noiseless quantum simulators for comparison. We introduce a framework to embed these quantum-generated data transforms as a decoupled offline component that can be selectively queried by models in low-latency institutional trade optimization settings. A trade execution backtesting method is employed to evaluate the fill prediction performance of these models in relation to their input data. We observe a relative gain of up to ~34% in out-of-sample test scores for models with access to quantum hardware-transformed data over those using the original trading data or transforms from noiseless quantum simulation. These empirical results suggest that the inherent noise in current quantum hardware contributes to this effect, motivating further studies. Our work demonstrates the emerging potential of quantum computing as a complementary exploratory tool in quantitative finance and encourages applied industry research towards practical applications in trading. ...
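The decoupled offline component described in the abstract can be pictured as a precomputed feature cache queried at inference time. The sketch below is an assumed design, not the paper's implementation; the class name `TransformStore`, the key scheme, and the fallback behavior are all hypothetical.

```python
# Minimal sketch (assumed design): an offline-populated store of precomputed
# feature transforms that a low-latency fill-probability model can query,
# degrading gracefully to raw features on a cache miss.

class TransformStore:
    """Cache mapping (instrument, time_bucket) -> transformed feature vector."""

    def __init__(self):
        self._cache = {}

    def put(self, instrument, time_bucket, transformed):
        # Populated offline, e.g. by a batch job that runs the expensive
        # (here: quantum) transform ahead of trading hours.
        self._cache[(instrument, time_bucket)] = transformed

    def features_for(self, instrument, time_bucket, raw_features):
        # At inference time the model selectively queries the store; if no
        # transform was precomputed for this key, it falls back to raw inputs.
        return self._cache.get((instrument, time_bucket), raw_features)

store = TransformStore()
store.put("XS1234567890", "2025-09-22T10:05", [0.31, -0.44, 0.12])

hit = store.features_for("XS1234567890", "2025-09-22T10:05", [1.0, 2.0])
miss = store.features_for("XS0000000000", "2025-09-22T10:05", [1.0, 2.0])
```

The key point of the design is that the transform's latency never sits on the trading path: only a dictionary lookup does.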

September 22, 2025 · 3 min · Research Team

Supervised Similarity for High-Yield Corporate Bonds with Quantum Cognition Machine Learning

Supervised Similarity for High-Yield Corporate Bonds with Quantum Cognition Machine Learning ArXiv ID: 2502.01495 · View on arXiv · Authors: Unknown Abstract We investigate the application of quantum cognition machine learning (QCML), a novel paradigm for both supervised and unsupervised learning tasks rooted in the mathematical formalism of quantum theory, to distance metric learning in corporate bond markets. Compared to equities, corporate bonds are relatively illiquid and both trade and quote data in these securities are relatively sparse. Thus, a measure of distance/similarity among corporate bonds is particularly useful for a variety of practical applications in the trading of illiquid bonds, including the identification of similar tradable alternatives, pricing securities with relatively few recent quotes or trades, and explaining the predictions and performance of ML models based on their training data. Previous research has explored supervised similarity learning based on classical tree-based models in this context; here, we explore the application of the QCML paradigm for supervised distance metric learning in the same context, showing that it outperforms classical tree-based models in high-yield (HY) markets, while giving comparable or better performance (depending on the evaluation metric) in investment grade (IG) markets. ...
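The QCML formalism itself is not spelled out in this summary, but the classical tree-based baseline the abstract compares against can be sketched: two bonds are "similar" when trees route them to the same leaf. Below, a toy forest of depth-1 stumps stands in for real trained trees; the features and thresholds are made up for illustration.

```python
import numpy as np

# Illustrative sketch of tree-based supervised similarity (the classical
# baseline named in the abstract): proximity = fraction of trees placing two
# bonds in the same leaf. Depth-1 stumps are a stand-in for trained trees.

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))  # 6 bonds, 4 features (duration, spread, ... hypothetical)

# Each stump: (feature index, threshold); the leaf id is 0/1 per stump.
stumps = [(j, t) for j in range(4) for t in (-0.5, 0.0, 0.5)]

def leaf_ids(X, stumps):
    return np.stack([(X[:, j] > t).astype(int) for j, t in stumps], axis=1)

def proximity(X, stumps):
    L = leaf_ids(X, stumps)                         # (n_bonds, n_stumps)
    # Proximity matrix: share of stumps agreeing on the leaf assignment.
    return (L[:, None, :] == L[None, :, :]).mean(axis=2)

P = proximity(X, stumps)
# Most similar tradable alternative to bond 0 (diagonal masked out):
alt = int(np.argmax(np.where(np.eye(len(X), dtype=bool), -1.0, P)[0]))
```

The resulting proximity matrix is symmetric with ones on the diagonal, so it can be used directly for nearest-alternative lookups or for matrix-based pricing of sparsely quoted bonds.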

February 3, 2025 · 2 min · Research Team

Defaultable bond liquidity spread estimation: an option-based approach

Defaultable bond liquidity spread estimation: an option-based approach ArXiv ID: 2501.11427 · View on arXiv · Authors: Unknown Abstract This paper extends an option-theoretic approach to estimate liquidity spreads for corporate bonds. Inspired by Longstaff’s equity market framework and subsequent work by Koziol and Sauerbier on risk-free zero-coupon bonds, the model views liquidity as a look-back option. The model accounts for the interplay of risk-free rate volatility and credit risk. A numerical analysis highlights the impact of these factors on the liquidity spread, particularly for bonds with different maturities and credit ratings. The methodology is applied to estimate the liquidity spread for unquoted bonds, with a specific case study on the Republic of Italy’s debt, leveraging market data to calibrate model parameters and classify liquid versus illiquid emissions. This approach provides a robust tool for pricing illiquid bonds, emphasizing the importance of marketability in debt security valuation. ...
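The look-back-option view of liquidity originates in Longstaff's bound: an investor barred from selling over a restriction period forgoes at most the difference between the maximum price over the period and the final price. The paper extends this with stochastic rates and credit risk; the Monte Carlo sketch below shows only the underlying idea under plain GBM, with illustrative, uncalibrated parameters.

```python
import numpy as np

# Minimal sketch of Longstaff's look-back bound on the value of marketability
# (the paper's model adds rate volatility and default risk on top of this).
# Discount = e^{-r T} * E[max_t S_t - S_T], estimated by simulation.

def liquidity_discount(s0=100.0, r=0.02, sigma=0.2, horizon=0.5,
                       n_steps=126, n_paths=20_000, seed=0):
    rng = np.random.default_rng(seed)
    dt = horizon / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    # Log-Euler scheme for geometric Brownian motion.
    log_s = np.log(s0) + np.cumsum((r - 0.5 * sigma**2) * dt
                                   + sigma * np.sqrt(dt) * z, axis=1)
    s = np.exp(log_s)
    payoff = s.max(axis=1) - s[:, -1]       # look-back payoff per path
    return np.exp(-r * horizon) * payoff.mean()

low = liquidity_discount(sigma=0.10)
high = liquidity_discount(sigma=0.30)       # same seed: common random numbers
```

The discount is always non-negative and grows with volatility, which is the qualitative behavior the paper's numerical analysis examines across maturities and ratings.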

January 20, 2025 · 2 min · Research Team

The VIX as Stochastic Volatility for Corporate Bonds

The VIX as Stochastic Volatility for Corporate Bonds ArXiv ID: 2410.22498 · View on arXiv · Authors: Unknown Abstract Classic stochastic volatility models assume volatility is unobservable. We use the S&P 500 Volatility Index (VIX) to observe it directly, which makes the model easier to fit. We apply this to corporate bonds, fitting autoregressions for corporate rates and for risk spreads between these rates and Treasury rates. Next, we divide the residuals by the VIX. Our main idea is that this division makes the residuals closer to the ideal case of Gaussian white noise. This is remarkable, since these residuals and the VIX come from separate market segments. Similarly, we model corporate bond returns as a linear function of rates and rate changes. Our article has two main parts: Moody’s AAA and BAA spreads; and Bank of America investment-grade and high-yield rates, spreads, and returns. We analyze the long-term stability of these models. ...
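The core step, fit an autoregression and rescale its residuals by the VIX, is easy to reproduce. The sketch below uses synthetic data by assumption (a spread series whose shocks scale with a VIX-like factor, since the Moody's/BofA series are not bundled here) and checks that the rescaled residuals look closer to Gaussian via excess kurtosis.

```python
import numpy as np

# Sketch of the paper's main idea on synthetic data: fit AR(1) to a spread
# series, then divide the residuals by a VIX-like factor and check that the
# result is closer to Gaussian white noise (excess kurtosis nearer zero).

rng = np.random.default_rng(1)
n = 2000
vix = np.exp(rng.normal(2.8, 0.4, size=n))       # positive, VIX-like factor
spread = np.empty(n)
spread[0] = 1.5
for t in range(1, n):
    # Shocks scale with the volatility factor (the effect the paper exploits).
    spread[t] = 0.2 + 0.85 * spread[t - 1] + 0.01 * vix[t] * rng.standard_normal()

# AR(1) fit by OLS: spread_t = c + phi * spread_{t-1} + residual_t
A = np.column_stack([np.ones(n - 1), spread[:-1]])
coef, *_ = np.linalg.lstsq(A, spread[1:], rcond=None)
resid = spread[1:] - A @ coef
scaled = resid / vix[1:]                         # the paper's division step

def excess_kurtosis(x):
    x = x - x.mean()
    return (x**4).mean() / (x**2).mean() ** 2 - 3.0

raw_k, scaled_k = excess_kurtosis(resid), excess_kurtosis(scaled)
```

On this synthetic series the raw residuals are heavy-tailed (a scale mixture of normals) while the VIX-scaled residuals are approximately Gaussian, mirroring the improvement the paper reports on real rate and spread data.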

October 29, 2024 · 2 min · Research Team

Quantile Regression using Random Forest Proximities

Quantile Regression using Random Forest Proximities ArXiv ID: 2408.02355 · View on arXiv · Authors: Unknown Abstract Due to the dynamic nature of financial markets, maintaining models that produce precise predictions over time is difficult. Often the goal isn’t just point prediction but determining uncertainty. Quantifying uncertainty, especially the aleatoric uncertainty due to the unpredictable nature of market drivers, helps investors understand varying risk levels. Recently, quantile regression forests (QRF) have emerged as a promising solution: Unlike most basic quantile regression methods that need separate models for each quantile, quantile regression forests estimate the entire conditional distribution of the target variable with a single model, while retaining all the salient features of a typical random forest. We introduce a novel approach to compute quantile regressions from random forests that leverages the proximity (i.e., distance metric) learned by the model and infers the conditional distribution of the target variable. We evaluate the proposed methodology using publicly available datasets and then apply it towards the problem of forecasting the average daily volume of corporate bonds. We show that quantile regression using Random Forest proximities demonstrates superior performance in approximating conditional target distributions and prediction intervals compared to the original version of QRF. We also demonstrate that the proposed framework is significantly more computationally efficient than traditional approaches to quantile regression. ...
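The mechanics can be sketched in a few lines: use leaf co-occurrence between a query point and the training set as proximity weights, then read quantiles off the resulting weighted empirical distribution of training targets. This is a simplified sketch of the idea on synthetic data, not the paper's exact estimator; the "daily volume" target is hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Sketch: RF proximities as weights over training targets, quantiles read
# from the weighted empirical CDF. Synthetic data; details simplified.

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(500, 3))
y = X[:, 0] + 0.5 * rng.standard_normal(500)     # stand-in "volume" target

rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
train_leaves = rf.apply(X)                       # (n_train, n_trees) leaf ids

def proximity_weights(x):
    leaves = rf.apply(x.reshape(1, -1))[0]       # query point's leaf per tree
    same = (train_leaves == leaves)              # leaf co-occurrence indicator
    # Normalize within each tree's leaf, then average across trees.
    return (same / same.sum(axis=0)).mean(axis=1)

def weighted_quantile(values, weights, q):
    order = np.argsort(values)
    cdf = np.cumsum(weights[order])
    return values[order][np.searchsorted(cdf, q * cdf[-1])]

w = proximity_weights(np.array([1.0, 0.0, 0.0]))
q10, q50, q90 = (weighted_quantile(y, w, q) for q in (0.1, 0.5, 0.9))
```

One model yields the whole conditional distribution: changing `q` requires no retraining, which is the efficiency advantage the abstract highlights over per-quantile regression models.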

August 5, 2024 · 2 min · Research Team

Enhanced Local Explainability and Trust Scores with Random Forest Proximities

Enhanced Local Explainability and Trust Scores with Random Forest Proximities ArXiv ID: 2310.12428 · View on arXiv · Authors: Unknown Abstract We introduce a novel approach to explain the predictions and out-of-sample performance of random forest (RF) regression and classification models by exploiting the fact that any RF can be mathematically formulated as an adaptive weighted K nearest-neighbors model. Specifically, we employ a recent result that, for both regression and classification tasks, any RF prediction can be rewritten exactly as a weighted sum of the training targets, where the weights are RF proximities between the corresponding pairs of data points. We show that this linearity facilitates a local notion of explainability of RF predictions that generates attributions for any model prediction across observations in the training set, and thereby complements established feature-based methods like SHAP, which generate attributions for a model prediction across input features. We show how this proximity-based approach to explainability can be used in conjunction with SHAP to explain not just the model predictions, but also out-of-sample performance, in the sense that proximities furnish a novel means of assessing when a given model prediction is more or less likely to be correct. We demonstrate this approach in the modeling of US corporate bond prices and returns in both regression and classification cases. ...
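The identity the abstract relies on can be verified directly: an RF regression prediction equals a proximity-weighted sum of training targets. The sketch below uses `bootstrap=False` so each tree sees the full training set and the identity holds up to floating point; the synthetic data stands in for bond prices/returns.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Verify the weighted-kNN identity: rf.predict(x) == sum_j w_j(x) * y_j,
# where w_j(x) are leaf-co-occurrence (proximity) weights. With
# bootstrap=False each leaf value is the mean of all in-leaf training
# targets, so the decomposition is exact up to float error.

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = X @ np.array([1.0, -0.5, 0.2, 0.0]) + 0.1 * rng.standard_normal(300)

rf = RandomForestRegressor(n_estimators=50, bootstrap=False,
                           random_state=0).fit(X, y)
train_leaves = rf.apply(X)                      # (n_train, n_trees)

def proximity_prediction(x):
    leaves = rf.apply(x.reshape(1, -1))[0]
    same = (train_leaves == leaves)             # co-occurrence per tree
    w = (same / same.sum(axis=0)).mean(axis=1)  # proximity weights, sum to 1
    # w doubles as a per-training-observation attribution for this prediction,
    # the "local explainability" the abstract describes.
    return float(w @ y)

x_new = rng.normal(size=4)
direct = float(rf.predict(x_new.reshape(1, -1))[0])
via_proximity = proximity_prediction(x_new)
```

Because the weights sum to one and attach to individual training observations, ranking them answers "which past bonds drove this prediction?", complementing SHAP's per-feature attributions.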

October 19, 2023 · 2 min · Research Team