
Machine Learning-based Relative Valuation of Municipal Bonds

Machine Learning-based Relative Valuation of Municipal Bonds ArXiv ID: 2408.02273 “View on arXiv” Authors: Unknown Abstract The trading ecosystem of the Municipal (muni) bond market is complex and unique. With nearly 2% of the more than one million securities outstanding trading daily, determining the value or relative value of a bond among its peers is challenging. Traditionally, relative value calculation has been done using rule-based or heuristics-driven approaches, which may introduce human biases and often fail to account for complex relationships between bond characteristics. We propose a data-driven model to develop a supervised similarity framework for the muni bond market based on the CatBoost algorithm. This algorithm learns from a large-scale dataset to identify bonds that are similar to each other based on their risk profiles. This allows us to evaluate the price of a muni bond relative to a cohort of bonds with a similar risk profile. We propose and deploy a back-testing methodology to compare various benchmarks and the proposed method, and show that the similarity-based method outperforms both rule-based and heuristic-based methods. ...
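
The abstract does not spell out the similarity construction, so the following is a minimal, hypothetical sketch of one common way to extract a supervised similarity from a fitted gradient-boosting model: bonds count as similar when they land in the same leaves across trees, and a bond's relative value is its observed level versus the average of its most similar peers. The features, the leaf-co-occurrence proximity, and the 25-bond cohort size are illustrative assumptions, not the paper's exact design.

```python
# Hypothetical sketch: leaf-co-occurrence similarity from a CatBoost model,
# used to value a muni bond against its most similar peers.
# Features, proximity definition, and cohort size are illustrative assumptions.
import numpy as np
from catboost import CatBoostRegressor, Pool

# X: bond characteristics (coupon, maturity, rating, ...); y: observed yield/price level
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
y = X @ rng.normal(size=6) + 0.1 * rng.normal(size=500)

model = CatBoostRegressor(depth=6, iterations=200, verbose=False)
model.fit(X, y)

# Leaf index of every bond in every tree: shape (n_bonds, n_trees)
leaves = model.calc_leaf_indexes(Pool(X))

def proximity(i, leaves):
    """Fraction of trees in which bond i lands in the same leaf as each other bond."""
    return (leaves == leaves[i]).mean(axis=1)

# Relative value of bond 0: observed level minus its peer-cohort average
prox = proximity(0, leaves)
peers = np.argsort(prox)[::-1][1:26]          # 25 most similar bonds, excluding itself
relative_value = y[0] - y[peers].mean()
print(f"relative value vs. peer cohort: {relative_value:.4f}")
```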

August 5, 2024 · 2 min · Research Team

Peer-induced Fairness: A Causal Approach for Algorithmic Fairness Auditing

Peer-induced Fairness: A Causal Approach for Algorithmic Fairness Auditing ArXiv ID: 2408.02558 “View on arXiv” Authors: Unknown Abstract With the European Union's Artificial Intelligence Act taking effect on 1 August 2024, high-risk AI applications must adhere to stringent transparency and fairness standards. This paper addresses a crucial question: how can we scientifically audit algorithmic fairness? Current methods typically remain at the basic detection stage of auditing, without accounting for more complex scenarios. We propose a novel framework, "peer-induced fairness", which combines the strengths of counterfactual fairness and a peer comparison strategy, creating a reliable and robust tool for auditing algorithmic fairness. Our framework is universal, adaptable to various domains, and capable of handling different levels of data quality, including skewed distributions. Moreover, it can distinguish whether adverse decisions result from algorithmic discrimination or inherent limitations of the subjects, thereby enhancing transparency. This framework can serve as both a self-assessment tool for AI developers and an external assessment tool for auditors to ensure compliance with the EU AI Act. We demonstrate its utility in small and medium-sized enterprises' access to finance, uncovering significant unfairness: 41.51% of micro-firms face discrimination compared to non-micro firms. These findings highlight the framework's potential for broader applications in ensuring equitable AI-driven decision-making. ...
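
As a rough, hypothetical illustration of the peer-comparison ingredient only (not the paper's full counterfactual/causal procedure), one can match each protected firm to its nearest non-protected peers on non-sensitive features and compare approval rates; the feature set, matching rule, and flagging threshold below are assumptions.

```python
# Hypothetical sketch of a peer-comparison fairness check:
# match each micro-firm to similar non-micro peers on non-sensitive features
# and compare loan-approval rates. Not the paper's full causal procedure.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
n = 1000
features = rng.normal(size=(n, 5))          # non-sensitive firm characteristics
is_micro = rng.random(n) < 0.3              # protected attribute (illustrative)
approved = (features[:, 0] + 0.5 * rng.normal(size=n) - 0.4 * is_micro) > 0

peers_model = NearestNeighbors(n_neighbors=10).fit(features[~is_micro])
_, idx = peers_model.kneighbors(features[is_micro])

peer_rate = approved[~is_micro][idx].mean(axis=1)    # approval rate among matched peers
own_rate = approved[is_micro].astype(float)
gap = peer_rate - own_rate                            # > 0 when similar peers fare better

share_disadvantaged = (gap > 0.5).mean()              # crude illustrative threshold
print(f"share of micro-firms flagged as disadvantaged: {share_disadvantaged:.2%}")
```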

August 5, 2024 · 2 min · Research Team

Quantile Regression using Random Forest Proximities

Quantile Regression using Random Forest Proximities ArXiv ID: 2408.02355 “View on arXiv” Authors: Unknown Abstract Due to the dynamic nature of financial markets, maintaining models that produce precise predictions over time is difficult. Often the goal isn’t just point prediction but determining uncertainty. Quantifying uncertainty, especially the aleatoric uncertainty due to the unpredictable nature of market drivers, helps investors understand varying risk levels. Recently, quantile regression forests (QRF) have emerged as a promising solution: Unlike most basic quantile regression methods that need separate models for each quantile, quantile regression forests estimate the entire conditional distribution of the target variable with a single model, while retaining all the salient features of a typical random forest. We introduce a novel approach to compute quantile regressions from random forests that leverages the proximity (i.e., distance metric) learned by the model and infers the conditional distribution of the target variable. We evaluate the proposed methodology using publicly available datasets and then apply it towards the problem of forecasting the average daily volume of corporate bonds. We show that quantile regression using Random Forest proximities demonstrates superior performance in approximating conditional target distributions and prediction intervals compared to the original version of QRF. We also demonstrate that the proposed framework is significantly more computationally efficient than traditional approaches to quantile regression. ...
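
A minimal sketch of the general proximity-weighted quantile idea, assuming proximity means the fraction of trees in which two samples share a leaf (the paper's exact proximity definition may differ):

```python
# Minimal sketch: conditional quantiles from random-forest proximities.
# Proximity is taken as the fraction of trees in which two samples share a leaf;
# the conditional quantile is a proximity-weighted quantile of training targets.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
X_train = rng.uniform(-2, 2, size=(400, 3))
y_train = X_train[:, 0] ** 2 + rng.normal(scale=0.3, size=400)
x_new = np.array([[1.0, 0.0, 0.0]])

rf = RandomForestRegressor(n_estimators=200, min_samples_leaf=5, random_state=0)
rf.fit(X_train, y_train)

train_leaves = rf.apply(X_train)                    # (n_train, n_trees)
new_leaves = rf.apply(x_new)                        # (1, n_trees)
prox = (train_leaves == new_leaves).mean(axis=1)    # proximity of x_new to each training point
weights = prox / prox.sum()

def weighted_quantile(values, weights, q):
    order = np.argsort(values)
    cum = np.cumsum(weights[order])
    return np.interp(q, cum, values[order])

for q in (0.1, 0.5, 0.9):
    print(f"q={q}: {weighted_quantile(y_train, weights, q):.3f}")
```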

August 5, 2024 · 2 min · Research Team

A Path Integral Approach for Time-Dependent Hamiltonians with Applications to Derivatives Pricing

A Path Integral Approach for Time-Dependent Hamiltonians with Applications to Derivatives Pricing ArXiv ID: 2408.02064 “View on arXiv” Authors: Unknown Abstract We generalize a semi-classical path integral approach originally introduced by Giachetti and Tognetti [Phys. Rev. Lett. 55, 912 (1985)] and Feynman and Kleinert [Phys. Rev. A 34, 5080 (1986)] to time-dependent Hamiltonians, thus extending the scope of the method to the pricing of financial derivatives. We illustrate the accuracy of the approach by presenting results for the well-known, but analytically intractable, Black-Karasinski model for the dynamics of interest rates. The accuracy and computational efficiency of this path integral approach makes it a viable alternative to fully-numerical schemes for a variety of applications in derivatives pricing. ...
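
For context, the Black-Karasinski short rate follows d ln r_t = κ(θ(t) − ln r_t) dt + σ dW_t. The sketch below is a plain Euler Monte Carlo zero-coupon-bond pricer under those dynamics, i.e. the kind of fully-numerical baseline the path-integral approach is positioned against, not the paper's semi-classical method; parameter values are illustrative.

```python
# Illustrative Euler Monte Carlo pricing of a zero-coupon bond under
# Black-Karasinski dynamics: d ln r = kappa * (theta(t) - ln r) dt + sigma dW.
# A fully-numerical baseline, not the paper's path-integral scheme.
import numpy as np

kappa, sigma, r0, T = 0.1, 0.2, 0.03, 5.0
theta = lambda t: np.log(0.03)            # flat mean-reversion level (assumption)

rng = np.random.default_rng(3)
n_paths, n_steps = 50_000, 500
dt = T / n_steps

log_r = np.full(n_paths, np.log(r0))
integral_r = np.zeros(n_paths)            # accumulates the integral of r_t over [0, T] per path
for k in range(n_steps):
    t = k * dt
    integral_r += np.exp(log_r) * dt
    dw = rng.normal(scale=np.sqrt(dt), size=n_paths)
    log_r += kappa * (theta(t) - log_r) * dt + sigma * dw

zcb_price = np.exp(-integral_r).mean()    # P(0, T) = E[exp(-integral of r_t dt)]
print(f"zero-coupon bond price P(0, {T:g}) ~ {zcb_price:.4f}")
```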

August 4, 2024 · 2 min · Research Team

Efficient and accurate simulation of the stochastic-alpha-beta-rho model

Efficient and accurate simulation of the stochastic-alpha-beta-rho model ArXiv ID: 2408.01898 “View on arXiv” Authors: Unknown Abstract We propose an efficient, accurate and reliable simulation scheme for the stochastic-alpha-beta-rho (SABR) model. The two challenges of the SABR simulation lie in sampling (i) integrated variance conditional on terminal volatility and (ii) terminal forward price conditional on terminal volatility and integrated variance. For the first sampling procedure, we sample the conditional integrated variance using the moment-matched shifted lognormal approximation. For the second sampling procedure, we approximate the conditional terminal forward price as a constant-elasticity-of-variance (CEV) distribution. Our CEV approximation preserves the martingale condition and precludes arbitrage, which is a key advantage over Islah’s approximation used in most SABR simulation schemes in the literature. We then adopt the exact sampling method of the CEV distribution based on the shifted-Poisson mixture Gamma random variable. Our enhanced procedures avoid the tedious Laplace inversion algorithm for sampling integrated variance and inefficient inverse transform sampling of the forward price in some of the earlier simulation schemes. Numerical results demonstrate our simulation scheme to be highly efficient, accurate, and reliable. ...
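
For orientation, the SABR dynamics are dF = σF^β dW₁ and dσ = ασ dW₂ with corr(dW₁, dW₂) = ρ. The paper's conditional scheme (moment-matched shifted lognormal plus a CEV approximation) is involved, so the sketch below is only a standard small-step Euler simulator of the same dynamics, the kind of brute-force baseline such conditional schemes improve on; all parameter values are illustrative.

```python
# Plain Euler simulation of the SABR dynamics
#   dF = sigma * F**beta * dW1,   dsigma = alpha * sigma * dW2,   corr(dW1, dW2) = rho.
# A brute-force baseline, not the conditional (shifted-lognormal + CEV) scheme of the paper.
import numpy as np

F0, sigma0, alpha, beta, rho, T = 100.0, 0.2, 0.4, 0.7, -0.3, 1.0
n_paths, n_steps = 100_000, 250
dt = T / n_steps

rng = np.random.default_rng(4)
F = np.full(n_paths, F0)
sig = np.full(n_paths, sigma0)
for _ in range(n_steps):
    z1 = rng.normal(size=n_paths)
    z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.normal(size=n_paths)
    F = np.maximum(F + sig * np.maximum(F, 0.0) ** beta * np.sqrt(dt) * z1, 0.0)  # absorb at zero
    sig = sig * np.exp(alpha * np.sqrt(dt) * z2 - 0.5 * alpha**2 * dt)            # exact lognormal step

strike = 100.0
call = np.maximum(F - strike, 0.0).mean()   # undiscounted at-the-money call estimate
print(f"ATM call estimate: {call:.4f}")
```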

August 4, 2024 · 2 min · Research Team

KAN based Autoencoders for Factor Models

KAN based Autoencoders for Factor Models ArXiv ID: 2408.02694 “View on arXiv” Authors: Unknown Abstract Inspired by recent advances in Kolmogorov-Arnold Networks (KANs), we introduce a novel approach to latent factor conditional asset pricing models. While previous machine learning applications in asset pricing have predominantly used Multilayer Perceptrons with ReLU activation functions to model latent factor exposures, our method introduces a KAN-based autoencoder which surpasses MLP models in both accuracy and interpretability. Our model offers enhanced flexibility in approximating exposures as nonlinear functions of asset characteristics, while simultaneously providing users with an intuitive framework for interpreting latent factors. Empirical backtesting demonstrates our model’s superior ability to explain cross-sectional risk exposures. Moreover, long-short portfolios constructed using our model’s predictions achieve higher Sharpe ratios, highlighting its practical value in investment management. ...
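
The sketch below is a deliberately simplified, hypothetical KAN-style layer (univariate edge functions expressed in a small fixed sine basis rather than B-splines) wired into a beta network that maps asset characteristics to factor exposures; the factor-extraction branch of the autoencoder and all dimensions are assumed or omitted for brevity, so this is not the paper's architecture.

```python
# Hypothetical, simplified sketch of a KAN-style layer inside a factor model:
# each edge applies a learnable univariate function expressed in a small fixed
# sine basis (a simplification of the B-spline parameterization used in KANs).
import torch
import torch.nn as nn

class TinyKANLayer(nn.Module):
    def __init__(self, d_in, d_out, n_basis=8):
        super().__init__()
        self.freqs = torch.arange(1, n_basis + 1).float()           # fixed basis frequencies
        self.coef = nn.Parameter(0.01 * torch.randn(d_out, d_in, n_basis))

    def forward(self, x):                                           # x: (batch, d_in)
        basis = torch.sin(x.unsqueeze(-1) * self.freqs)             # (batch, d_in, n_basis)
        return torch.einsum("bik,oik->bo", basis, self.coef)        # sum of univariate functions

class FactorBetaModel(nn.Module):
    """Beta net maps characteristics -> factor exposures; returns are exposures @ factors."""
    def __init__(self, n_chars, n_factors):
        super().__init__()
        self.beta_net = nn.Sequential(TinyKANLayer(n_chars, 32), TinyKANLayer(32, n_factors))

    def forward(self, chars, factors):                              # chars: (n_assets, n_chars)
        betas = self.beta_net(chars)                                # (n_assets, n_factors)
        return betas @ factors                                      # predicted asset returns

model = FactorBetaModel(n_chars=10, n_factors=3)
chars = torch.randn(50, 10)
factors = torch.randn(3)                                            # factor realizations (given here)
print(model(chars, factors).shape)                                  # torch.Size([50])
```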

August 4, 2024 · 2 min · Research Team

Investment strategies based on forecasts are (almost) useless

Investment strategies based on forecasts are (almost) useless ArXiv ID: 2408.01772 “View on arXiv” Authors: Unknown Abstract Several studies on portfolio construction reveal that sensible strategies essentially yield the same results as their nonsensical inverted counterparts; moreover, random portfolios managed by Malkiel’s dart-throwing monkey would outperform the cap-weighted benchmark index. Forecasting the future development of stock returns is an important aspect of portfolio assessment. Similar to the ostensible arbitrariness of portfolio selection methods, it is shown that there is no substantial difference between the performances of "best" and "trivial" forecasts - even under euphemistic model assumptions on the underlying price dynamics. A certain significance of a predictor is found only in the following special case: the best linear unbiased forecast is used, the planning horizon is small, and a critical relation is not satisfied. ...
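
A toy simulation (entirely illustrative, not the paper's analysis) makes the comparison concrete: generate weakly autocorrelated AR(1) returns, trade on the sign of the best linear one-step forecast, and compare with trading on a trivial constant forecast.

```python
# Toy illustration (not the paper's experiment): trade on the best linear
# forecast of an AR(1) return series vs. a trivial constant forecast.
import numpy as np

rng = np.random.default_rng(5)
phi, n = 0.05, 250 * 20                       # weak autocorrelation, ~20 years of daily returns
eps = rng.normal(scale=0.01, size=n)
r = np.zeros(n)
for t in range(1, n):
    r[t] = phi * r[t - 1] + eps[t]

best_forecast = phi * r[:-1]                  # best linear unbiased one-step forecast
trivial_forecast = np.full(n - 1, r.mean())   # "always expect the average"

pnl_best = np.sign(best_forecast) * r[1:]
pnl_trivial = np.sign(trivial_forecast) * r[1:]

ann = np.sqrt(250)
print(f"Sharpe (best forecast):    {ann * pnl_best.mean() / pnl_best.std():.2f}")
print(f"Sharpe (trivial forecast): {ann * pnl_trivial.mean() / pnl_trivial.std():.2f}")
```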

August 3, 2024 · 2 min · Research Team

Neural Term Structure of Additive Process for Option Pricing

Neural Term Structure of Additive Process for Option Pricing ArXiv ID: 2408.01642 “View on arXiv” Authors: Unknown Abstract The additive process generalizes the Lévy process by relaxing its assumption of time-homogeneous increments and hence covers a larger family of stochastic processes. Recent research in option pricing shows that modeling the underlying log price with an additive process has advantages in easier construction of the risk-neutral measure, an explicit option pricing formula and characteristic function, and more flexibility to fit the implied volatility surface. Still, the challenge of calibrating an additive model arises from its time-dependent parameterization, for which one has to prescribe parametric functions for the term structure. For this, we propose the neural term structure model to utilize feedforward neural networks to represent the term structure, which alleviates the difficulty of designing parametric functions and thus attenuates the misspecification risk. Numerical studies with S&P 500 option data are conducted to evaluate the performance of the neural term structure. ...
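
As a rough sketch of the core idea, a feedforward network can stand in for the parametric term-structure functions, mapping a maturity t to time-dependent parameters; the specific outputs (a variance rate and a bounded skew parameter) and their constraints are assumptions for illustration, not the paper's parameterization.

```python
# Hypothetical sketch: a feedforward net as the term structure of an additive process,
# mapping maturity t to time-dependent parameters (here a variance rate and a skew).
# The parameter set and output constraints are illustrative assumptions.
import torch
import torch.nn as nn

class NeuralTermStructure(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 2),
        )

    def forward(self, t):                                     # t: (batch, 1) maturities in years
        out = self.net(t)
        variance_rate = nn.functional.softplus(out[:, :1])    # keep variance positive
        skew = torch.tanh(out[:, 1:])                          # bounded skew parameter
        return variance_rate, skew

ts = NeuralTermStructure()
maturities = torch.tensor([[0.25], [0.5], [1.0], [2.0]])
var_rate, skew = ts(maturities)
# In calibration, these parameters would feed the additive model's characteristic
# function, and the net's weights would be fit to option quotes.
print(var_rate.squeeze(-1), skew.squeeze(-1))
```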

August 3, 2024 · 2 min · Research Team

Lower Bounds of Uncertainty of Observations of Macroeconomic Variables and Upper Limits on the Accuracy of Their Forecasts

Lower Bounds of Uncertainty of Observations of Macroeconomic Variables and Upper Limits on the Accuracy of Their Forecasts ArXiv ID: 2408.04644 “View on arXiv” Authors: Unknown Abstract This paper defines theoretical lower bounds of uncertainty of observations of macroeconomic variables that depend on statistical moments and correlations of random values and volumes of market trades. Any econometric assessments of macroeconomic variables have greater uncertainty. We consider macroeconomic variables as random variables that depend on random values and volumes of trades. To predict random macroeconomic variables, one should forecast their probabilities. Upper limits on the accuracy of the forecasts of probabilities of macroeconomic variables, prices, and returns depend on the number of predicted statistical moments. We consider economic obstacles that limit the number of predicted statistical moments to the first two. The accuracy of any forecasts of probabilities of random macroeconomic variables, prices, returns, and market trades doesn’t exceed the accuracy of Gaussian approximations. Any forecasts of macroeconomic variables have uncertainty higher than that determined by predictions of the coefficients of variation of random values and volumes of trades. ...
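
A toy numerical reading of the central point (illustrative only, not the paper's derivation): if a macroeconomic aggregate is a sum of random trade values, the dispersion of those values already implies an irreducible uncertainty of any single observation of the aggregate under a two-moment (Gaussian) approximation.

```python
# Toy illustration: a macro aggregate as a sum of random trade values.
# The dispersion of trade values implies an irreducible uncertainty of the
# aggregate itself; this is a reading of the paper's point, not its derivation.
import numpy as np

rng = np.random.default_rng(6)
n_trades = 10_000
trade_values = rng.lognormal(mean=10.0, sigma=1.2, size=n_trades)   # skewed trade sizes

aggregate = trade_values.sum()
# Two-moment (Gaussian) approximation of the aggregate's relative uncertainty,
# assuming independent trades: std(sum) / sum = cv(single trade) / sqrt(n_trades).
rel_uncertainty = trade_values.std() * np.sqrt(n_trades) / aggregate
print(f"aggregate: {aggregate:,.0f}")
print(f"relative uncertainty floor (two-moment approx.): {rel_uncertainty:.3%}")
```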

August 2, 2024 · 2 min · Research Team

NeuralBeta: Estimating Beta Using Deep Learning

NeuralBeta: Estimating Beta Using Deep Learning ArXiv ID: 2408.01387 “View on arXiv” Authors: Unknown Abstract Traditional approaches to estimating beta in finance often involve rigid assumptions and fail to adequately capture beta dynamics, limiting their effectiveness in use cases like hedging. To address these limitations, we have developed a novel method using neural networks called NeuralBeta, which is capable of handling both univariate and multivariate scenarios and tracking the dynamic behavior of beta. To address the issue of interpretability, we introduce a new output layer inspired by regularized weighted linear regression, which provides transparency into the model’s decision-making process. We conducted extensive experiments on both synthetic and market data, demonstrating NeuralBeta’s superior performance compared to benchmark methods across various scenarios, especially instances where beta is highly time-varying, e.g., during regime shifts in the market. This model not only represents an advancement in the field of beta estimation, but also shows potential for applications in other financial contexts that assume linear relationships. ...
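
A rough sketch of the interpretable output layer as described (a network assigns weights across the lookback window and beta is then a regularized weighted least-squares slope of asset returns on benchmark returns); the network shape, the ridge term, and the no-intercept form are assumptions, not the paper's exact specification.

```python
# Hypothetical sketch of a NeuralBeta-style output layer: a network assigns
# weights to the lookback window, and beta is the regularized weighted
# least-squares slope of y (asset returns) on x (benchmark returns), no intercept.
import torch
import torch.nn as nn

class WeightedBetaHead(nn.Module):
    def __init__(self, lookback, hidden=32, ridge=1e-4):
        super().__init__()
        self.ridge = ridge
        self.weight_net = nn.Sequential(
            nn.Linear(2 * lookback, hidden), nn.ReLU(),
            nn.Linear(hidden, lookback),
        )

    def forward(self, x, y):                      # x, y: (batch, lookback)
        w = torch.softmax(self.weight_net(torch.cat([x, y], dim=-1)), dim=-1)
        # Regularized weighted least squares: beta = sum(w*x*y) / (sum(w*x*x) + ridge)
        beta = (w * x * y).sum(-1) / ((w * x * x).sum(-1) + self.ridge)
        return beta                               # (batch,)

head = WeightedBetaHead(lookback=60)
x = torch.randn(8, 60) * 0.01                     # benchmark returns
y = 1.3 * x + torch.randn(8, 60) * 0.005          # asset returns with true beta ~1.3
print(head(x, y))                                 # untrained weights already give crude per-window betas
```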

August 2, 2024 · 2 min · Research Team