Can Machine Learning Algorithms Outperform Traditional Models for Option Pricing?

ArXiv ID: 2510.01446 · View on arXiv · Authors: Georgy Milyushkov

Abstract: This study investigates the application of machine learning techniques, specifically Neural Networks, Random Forests, and CatBoost, to option pricing, in comparison with traditional models such as Black-Scholes and the Heston model. Using both synthetically generated data and real market option data, each model is evaluated on its accuracy in predicting option prices. The results show that machine learning models can capture complex, non-linear relationships in option prices and, in several cases, outperform both the Black-Scholes and Heston models. These findings highlight the potential of data-driven methods to improve pricing accuracy and better reflect market dynamics. ...
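As a concrete illustration of the comparison the abstract describes, here is a minimal sketch: a Random Forest is trained on synthetically generated, noisy Black-Scholes call prices and its out-of-sample error is compared against the closed-form formula. The feature ranges, noise level, and model settings are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch: train an ML regressor on synthetic Black-Scholes prices
# and compare its out-of-sample error to the closed-form model itself.
# Feature ranges, noise level, and model settings are illustrative
# assumptions, not the paper's exact setup.
import numpy as np
from scipy.stats import norm
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

rng = np.random.default_rng(0)
n = 20_000
S = rng.uniform(50, 150, n)        # spot price
K = rng.uniform(50, 150, n)        # strike
T = rng.uniform(0.05, 2.0, n)      # maturity in years
r = rng.uniform(0.0, 0.05, n)      # risk-free rate
sigma = rng.uniform(0.1, 0.5, n)   # volatility

X = np.column_stack([S, K, T, r, sigma])
y = bs_call(S, K, T, r, sigma) + rng.normal(0, 0.05, n)  # market-like noise

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)

rmse_rf = np.sqrt(np.mean((rf.predict(X_te) - y_te) ** 2))
rmse_bs = np.sqrt(np.mean((bs_call(*X_te.T) - y_te) ** 2))
print(f"Random Forest RMSE: {rmse_rf:.4f}  vs  Black-Scholes RMSE: {rmse_bs:.4f}")
```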

October 1, 2025 · 2 min · Research Team

Combining supervised and unsupervised learning methods to predict financial market movements

ArXiv ID: 2409.03762 · View on arXiv · Authors: Unknown

Abstract: The decisions traders make to buy or sell an asset depend on various analyses, and expertise is required to identify patterns that can be exploited for profit. In this paper we identify novel features, extracted from emergent and well-established financial markets using linear models and Gaussian Mixture Models (GMM), with the aim of finding profitable opportunities. We used approximately six months of data, consisting of minute candles from the Bitcoin, Pepecoin, and Nasdaq markets, to derive the proposed novel features and compare them with commonly used ones. For each market, the features were extracted from the previous 59 minutes and used to generate predictions for the hour ahead. We explored the performance of various machine learning strategies, such as Random Forests (RF) and K-Nearest Neighbours (KNN), to classify market movements. A naive random approach to selecting trading decisions was used as a benchmark, with outcomes assumed to be equally likely. We used a temporal cross-validation approach with test sets of 40%, 30%, and 20% of the total hours to evaluate the learning algorithms' performance. Our results showed that filtering the time series facilitates the algorithms' generalisation. The GMM filtering approach revealed that the KNN and RF algorithms produced higher average returns than the random algorithm. ...
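A minimal sketch of the pipeline the abstract outlines, assuming a synthetic minute-level price series: features are built from the previous 59 minutes, a GMM filters the training observations, and KNN and RF classifiers of next-hour direction are evaluated on a temporal hold-out. The feature set and the "keep the densest GMM component" filtering rule are illustrative assumptions, not the paper's method.

```python
# Minimal sketch: 59-minute features, GMM-based filtering of the training
# set, and KNN/RF classification of next-hour direction with a temporal
# split. The synthetic price series and the filtering rule are assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
prices = np.cumsum(rng.normal(0, 1, 50_000)) + 1_000  # stand-in for minute closes

window, horizon = 59, 60
X, y = [], []
for t in range(window, len(prices) - horizon, horizon):
    past = prices[t - window:t]
    rets = np.diff(past) / past[:-1]
    X.append([rets.mean(), rets.std(), rets[-1]])      # simple candle features
    y.append(int(prices[t + horizon] > prices[t]))     # up/down one hour ahead
X, y = np.array(X), np.array(y)

split = int(len(X) * 0.8)                              # temporal split: last 20% is test
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

# Filter training data to the densest GMM component (illustrative rule).
gmm = GaussianMixture(n_components=3, random_state=1).fit(X_tr)
labels = gmm.predict(X_tr)
keep = labels == np.bincount(labels).argmax()

for clf in (KNeighborsClassifier(15), RandomForestClassifier(200, random_state=1)):
    clf.fit(X_tr[keep], y_tr[keep])
    print(type(clf).__name__, "accuracy:", round(clf.score(X_te, y_te), 3))
```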

August 19, 2024 · 2 min · Research Team

Case-based Explainability for Random Forest: Prototypes, Critics, Counter-factuals and Semi-factuals

ArXiv ID: 2408.06679 · View on arXiv · Authors: Unknown

Abstract: The explainability of black-box machine learning algorithms, commonly known as Explainable Artificial Intelligence (XAI), has become crucial for financial and other regulated industrial applications due to regulatory requirements and the need for transparency in business practices. Among the various paradigms of XAI, Explainable Case-Based Reasoning (XCBR) stands out as a pragmatic approach that elucidates a model's output by referencing actual examples from the data used to train or test it. Despite its potential, XCBR remained relatively underexplored for many algorithms, such as tree-based models, until recently. We start by observing that most XCBR methods are defined in terms of the distance metric learned by the algorithm. Using a recently proposed technique to extract the distance metric learned by Random Forests (RFs), which is both geometry- and accuracy-preserving, we investigate various XCBR methods. These methods amount to identifying special points in the training dataset, such as prototypes, critics, counter-factuals, and semi-factuals, that explain the RF's prediction for a given query. We evaluate these special points with various evaluation metrics to assess their explanatory power and effectiveness. ...
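A minimal sketch of case-based retrieval for a Random Forest, using plain leaf co-occurrence as the forest's learned similarity; this is a simplification of the geometry- and accuracy-preserving metric the abstract refers to. It retrieves the closest same-class training case (a prototype-style explanation) and the closest other-class case (a counter-factual-style one); critics and semi-factuals would use analogous distance queries.

```python
# Minimal sketch of proximity-based case retrieval for a Random Forest.
# Leaf co-occurrence is used as a stand-in for the learned distance metric
# the abstract describes (an illustrative simplification).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

leaves = rf.apply(X)  # (n_samples, n_trees) leaf indices for the training set

def proximity(query_leaves):
    """Fraction of trees in which each training point shares a leaf with the query."""
    return (leaves == query_leaves).mean(axis=1)

q = 0  # explain the prediction for training point 0
prox = proximity(rf.apply(X[q:q + 1])[0])
pred = rf.predict(X[q:q + 1])[0]

same = np.where(y == pred)[0]
other = np.where(y != pred)[0]
prototype = same[np.argsort(prox[same])[-2]]      # closest same-class case (skip the query)
counter_factual = other[prox[other].argmax()]     # closest case with the opposite label
print("prototype-style case:", prototype, " counter-factual-style case:", counter_factual)
```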

August 13, 2024 · 2 min · Research Team

Enhanced Local Explainability and Trust Scores with Random Forest Proximities

ArXiv ID: 2310.12428 · View on arXiv · Authors: Unknown

Abstract: We initiate a novel approach to explaining the predictions and out-of-sample performance of random forest (RF) regression and classification models by exploiting the fact that any RF can be mathematically formulated as an adaptive weighted K-nearest-neighbors model. Specifically, we employ a recent result showing that, for both regression and classification tasks, any RF prediction can be rewritten exactly as a weighted sum of the training targets, where the weights are RF proximities between the corresponding pairs of data points. We show that this linearity facilitates a local notion of explainability of RF predictions that generates attributions for any model prediction across observations in the training set, thereby complementing established feature-based methods like SHAP, which generate attributions for a model prediction across input features. We show how this proximity-based approach to explainability can be used in conjunction with SHAP to explain not just model predictions but also out-of-sample performance, in the sense that proximities furnish a novel means of assessing when a given model prediction is more or less likely to be correct. We demonstrate this approach in the modeling of US corporate bond prices and returns, in both regression and classification settings. ...
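The linearity the abstract exploits can be reproduced directly. In the sketch below, bagging is disabled (an illustrative simplification; the paper's proximities also account for bootstrap sampling) so that each tree's leaf value is the mean of all training targets in that leaf; the forest's prediction then equals a proximity-weighted sum of the training targets.

```python
# Minimal sketch: an RF regression prediction rewritten exactly as a
# proximity-weighted sum of training targets. bootstrap=False is an
# illustrative simplification so each tree's leaf value is the mean of
# all training targets in that leaf.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=500, n_features=8, noise=5.0, random_state=0)
rf = RandomForestRegressor(n_estimators=100, bootstrap=False, random_state=0).fit(X, y)

x_new = X[:1] + 0.1                      # a query point
train_leaves = rf.apply(X)               # (n_train, n_trees) leaf indices
query_leaves = rf.apply(x_new)[0]        # (n_trees,) leaf indices for the query

# Weight of training point j: average over trees of
# 1{j shares the query's leaf} / (size of that leaf).
same_leaf = train_leaves == query_leaves            # (n_train, n_trees)
leaf_sizes = same_leaf.sum(axis=0)                  # points in the query's leaf, per tree
weights = (same_leaf / leaf_sizes).mean(axis=1)     # proximity weights, sum to 1

print("weighted sum of targets:", float(weights @ y))
print("rf.predict:             ", float(rf.predict(x_new)[0]))
```

The two printed numbers agree to floating-point precision, which is the exact rewriting the abstract describes; the weights themselves are the per-point attributions that complement feature-based SHAP values.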

October 19, 2023 · 2 min · Research Team