Classifying and Clustering Trading Agents

Classifying and Clustering Trading Agents ArXiv ID: 2505.21662 “View on arXiv” Authors: Mateusz Wilinski, Anubha Goel, Alexandros Iosifidis, Juho Kanniainen Abstract The rapid development of sophisticated machine learning methods, together with the increased availability of financial data, has the potential to transform financial research, but also poses a challenge in terms of validation and interpretation. A good case study is the task of classifying financial investors based on their behavioral patterns. Not only do we have access to classification and clustering tools for high-dimensional data, but data identifying individual investors is also finally available. The problem, however, is that we do not have access to ground truth when working with real-world data. This, together with the often limited interpretability of modern machine learning methods, makes it difficult to fully utilize the available research potential. To deal with this challenge, we propose to use a realistic agent-based model as a way to generate synthetic data. This way, one has access to ground truth, large and replicable data, and limitless research scenarios. Using this approach, we show that even when classifying trading agents in a supervised manner is relatively easy, the more realistic task of unsupervised clustering may give incorrect or even misleading results. We complement these results by investigating in detail how supervised techniques were able to distinguish between different trading behaviors. ...
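
As a rough illustration of the supervised-versus-unsupervised gap the abstract describes, the sketch below builds synthetic features for two agent types and compares a labelled classifier with k-means clustering. The Gaussian features, the nuisance dimension, and the scikit-learn models are assumptions for illustration, not the paper's agent-based model or experimental setup.

```python
# Minimal sketch (assumed scikit-learn stack); the synthetic features stand in
# for behavioural statistics of simulated agents, not the paper's ABM output.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import adjusted_rand_score
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000
labels = rng.integers(0, 2, n)                        # ground-truth agent type
signal = rng.normal(loc=3.0 * labels, scale=1.0)      # informative behavioural feature
nuisance = rng.normal(loc=0.0, scale=10.0, size=n)    # high-variance, label-irrelevant feature
features = np.column_stack([signal, nuisance])

# Supervised classification: with labels, the informative feature is found easily.
accuracy = cross_val_score(RandomForestClassifier(), features, labels, cv=5).mean()

# Unsupervised clustering: distance-based clustering latches onto the nuisance
# dimension and agrees poorly with the true agent types (low adjusted Rand index).
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(features)
ari = adjusted_rand_score(labels, clusters)
print(f"classification accuracy: {accuracy:.2f}, clustering ARI: {ari:.2f}")
```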

May 27, 2025 · 2 min · Research Team

Unsupervised learning-based calibration scheme for Rough Bergomi model

Unsupervised learning-based calibration scheme for Rough Bergomi model ArXiv ID: 2412.02135 “View on arXiv” Authors: Unknown Abstract Current deep learning-based calibration schemes for rough volatility models are based on the supervised learning framework, which can be costly due to the large amount of training data that must be generated. In this work, we propose a novel unsupervised learning-based scheme for the rough Bergomi (rBergomi) model that does not require access to training data. The main idea is to use the backward stochastic differential equation (BSDE) derived in [Bayer, Qiu and Yao, SIAM J. Financial Math., 2022] and simultaneously learn the BSDE solutions and the model parameters. We establish that the mean squared error between the option prices under the learned model parameters and the historical data is bounded by the loss function. Moreover, the loss can be made arbitrarily small under suitable conditions on the fitting ability of the rBergomi model to the market and the universal approximation capability of neural networks. Numerical experiments for both simulated and historical data confirm the efficiency of the scheme. ...
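
The general training pattern, jointly fitting network weights and model parameters against observed prices, can be sketched as below. The pricing map is a toy placeholder rather than the rBergomi BSDE of Bayer, Qiu and Yao, and PyTorch, the parameter layout, and the synthetic price data are all assumptions.

```python
# Schematic of joint "learn the BSDE solution and the model parameters" training.
# The pricing map is a toy stand-in, NOT the rBergomi BSDE from the paper.
import torch

strikes = torch.tensor([[0.9], [1.0], [1.1]])
market_prices = torch.tensor([0.14, 0.10, 0.07])        # hypothetical observed option prices

params = torch.nn.Parameter(torch.tensor([0.1, 0.0]))   # placeholders for model parameters
net = torch.nn.Sequential(                               # stands in for the learned BSDE solution
    torch.nn.Linear(1, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1)
)

opt = torch.optim.Adam(list(net.parameters()) + [params], lr=1e-2)
for step in range(500):
    opt.zero_grad()
    model_prices = params[0] * net(strikes).squeeze(1) + params[1]
    # The training loss bounds the mean squared error between model and historical prices.
    loss = torch.mean((model_prices - market_prices) ** 2)
    loss.backward()
    opt.step()
print(loss.item(), params.detach())
```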

December 3, 2024 · 2 min · Research Team

Dynamic Asset Allocation with Asset-Specific Regime Forecasts

Dynamic Asset Allocation with Asset-Specific Regime Forecasts ArXiv ID: 2406.09578 “View on arXiv” Authors: Unknown Abstract This article introduces a novel hybrid regime identification-forecasting framework designed to enhance multi-asset portfolio construction by integrating asset-specific regime forecasts. Unlike traditional approaches that focus on broad economic regimes affecting the entire asset universe, our framework leverages both unsupervised and supervised learning to generate tailored regime forecasts for individual assets. Initially, we use the statistical jump model, a robust unsupervised regime identification model, to derive regime labels for historical periods, classifying them into bullish or bearish states based on features extracted from an asset return series. Following this, a supervised gradient-boosted decision tree classifier is trained to predict these regimes using a combination of asset-specific return features and cross-asset macro-features. We apply this framework individually to each asset in our universe. Subsequently, return and risk forecasts which incorporate these regime predictions are input into Markowitz mean-variance optimization to determine optimal asset allocation weights. We demonstrate the efficacy of our approach through an empirical study on a multi-asset portfolio comprising twelve risky assets, including global equity, bond, real estate, and commodity indexes spanning from 1991 to 2023. The results consistently show outperformance across various portfolio models, including minimum-variance, mean-variance, and naive-diversified portfolios, highlighting the advantages of integrating asset-specific regime forecasts into dynamic asset allocation. ...
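
One way to read the pipeline is: label historical regimes per asset, train a gradient-boosted classifier to predict the next regime, then feed regime-conditional return forecasts into mean-variance weights. In the hedged sketch below the statistical jump model is replaced by a simple trailing-return sign rule, and the data, features, and optimization step are placeholders rather than the paper's setup.

```python
# Illustrative per-asset regime pipeline (assumed scikit-learn / NumPy stack).
# The jump-model labelling step is replaced by a trailing-return sign rule.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
T, n_assets = 500, 3
returns = rng.normal(0.0002, 0.01, size=(T, n_assets))   # hypothetical asset returns

mu = np.zeros(n_assets)
for a in range(n_assets):
    # Placeholder regime labels: bullish if the trailing 20-period mean return is positive.
    trailing = np.convolve(returns[:, a], np.ones(20) / 20, mode="same")
    regime = (trailing > 0).astype(int)
    # Asset-specific features: lagged returns stand in for return and macro features.
    X = np.column_stack([np.roll(returns[:, a], k) for k in (1, 2, 3)])[20:]
    y = regime[20:]
    clf = GradientBoostingClassifier().fit(X[:-1], y[1:])    # forecast next-period regime
    p_bull = clf.predict_proba(X[-1:])[0, 1]
    # Regime-conditional expected return for the mean-variance step.
    mu[a] = p_bull * returns[regime == 1, a].mean() + (1 - p_bull) * returns[regime == 0, a].mean()

# Unconstrained Markowitz-style weights, normalised by the sum of absolute values.
cov = np.cov(returns.T)
weights = np.linalg.solve(cov, mu)
weights /= np.abs(weights).sum()
print(weights)
```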

June 13, 2024 · 2 min · Research Team

Combating Financial Crimes with Unsupervised Learning Techniques: Clustering and Dimensionality Reduction for Anti-Money Laundering

Combating Financial Crimes with Unsupervised Learning Techniques: Clustering and Dimensionality Reduction for Anti-Money Laundering ArXiv ID: 2403.00777 “View on arXiv” Authors: Unknown Abstract Anti-Money Laundering (AML) is a crucial task in ensuring the integrity of financial systems. One key challenge in AML is identifying high-risk groups based on their behavior. Unsupervised learning, particularly clustering, is a promising solution for this task. However, the use of hundreds of features to describe behavior results in a high-dimensional dataset that negatively impacts clustering performance. In this paper, we investigate the effectiveness of combining agglomerative hierarchical clustering with four dimensionality reduction techniques - Independent Component Analysis (ICA), Kernel Principal Component Analysis (KPCA), Singular Value Decomposition (SVD), and Locality Preserving Projections (LPP) - to overcome the issue of high dimensionality in AML data and improve clustering results. This study aims to provide insights into the most effective way of reducing the dimensionality of AML data and enhancing the accuracy of clustering-based AML systems. The experimental results demonstrate that KPCA outperforms the other dimensionality reduction techniques when combined with agglomerative hierarchical clustering. This superiority is observed in the majority of situations, as confirmed by three distinct validation indices. ...
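
A minimal sketch of the combination the paper evaluates, assuming a scikit-learn stack: project the high-dimensional behavioural features with Kernel PCA, cluster the projection with agglomerative hierarchical clustering, and score the result with an internal validation index. The random features, component count, and cluster count are placeholders.

```python
# Hedged sketch: Kernel PCA + agglomerative hierarchical clustering on stand-in data.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.decomposition import KernelPCA
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(7)
X = rng.normal(size=(300, 100))      # placeholder for hundreds of behavioural AML features

X_low = KernelPCA(n_components=10, kernel="rbf").fit_transform(X)   # dimensionality reduction
labels = AgglomerativeClustering(n_clusters=5).fit_predict(X_low)   # hierarchical clustering

# One of several internal validation indices that could be used to compare reductions.
print("silhouette:", silhouette_score(X_low, labels))
```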

February 14, 2024 · 2 min · Research Team

Natural Language Processing for Financial Regulation

Natural Language Processing for Financial Regulation ArXiv ID: 2311.08533 “View on arXiv” Authors: Unknown Abstract This article provides an understanding of Natural Language Processing techniques in the framework of financial regulation, more specifically for performing semantic matching between rules and policies when no dataset is available for supervised learning. We outline how to outperform simple pre-trained sentence-transformer models using freely available resources and explain the mathematical concepts behind the key building blocks of Natural Language Processing. ...
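
The baseline the article starts from, semantic matching with a pre-trained sentence-transformer, can be sketched roughly as below. The model checkpoint, example rule, and policy snippets are assumptions, and the article's improvements on top of this baseline are not reproduced here.

```python
# Hedged baseline sketch: embed rules and policies, rank policies by cosine similarity.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")      # a commonly used free checkpoint (assumption)
rules = ["Client funds must be held in segregated accounts."]
policies = [
    "Customer money is kept separate from the firm's own accounts.",
    "Staff must complete annual security awareness training.",
]

rule_emb = model.encode(rules, convert_to_tensor=True)
policy_emb = model.encode(policies, convert_to_tensor=True)
scores = util.cos_sim(rule_emb, policy_emb)          # rules x policies similarity matrix
print(scores, scores.argmax(dim=1))                  # best-matching policy per rule
```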

November 14, 2023 · 1 min · Research Team

Microstructure-Empowered Stock Factor Extraction and Utilization

Microstructure-Empowered Stock Factor Extraction and Utilization ArXiv ID: 2308.08135 “View on arXiv” Authors: Unknown Abstract High-frequency quantitative investment is a crucial aspect of stock investment. Notably, order flow data plays a critical role as it provides the most detailed level of information among high-frequency trading data, including comprehensive data from the order book and transaction records at the tick level. Order flow data is extremely valuable for market analysis, as it equips traders with essential insights for making informed decisions. However, extracting and effectively utilizing order flow data present challenges due to the large volume of data involved and the limitations of traditional factor mining techniques, which are primarily designed for coarser-level stock data. To address these challenges, we propose a novel framework that aims to effectively extract essential factors from order flow data for diverse downstream tasks across different granularities and scenarios. Our method consists of a Context Encoder and a Factor Extractor. The Context Encoder learns an embedding for the current order flow data segment’s context by considering both the expected and actual market state. The Factor Extractor then uses unsupervised learning methods to select the important signals that are most distinct from the majority within the given context. The extracted factors are then utilized for downstream tasks. In empirical studies, our proposed framework efficiently handles an entire year of stock order flow data across diverse scenarios, offering a broader range of applications compared to existing tick-level approaches that are limited to only a few days of stock data. We demonstrate that our method extracts superior factors from order flow data, enabling significant improvements in stock trend prediction and order execution tasks at the second and minute levels. ...
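
The two-stage idea can be caricatured as: embed each order-flow segment's context, then keep the signals that deviate most from the typical pattern within that context. The sketch below uses PCA reconstruction error as a stand-in for both the Context Encoder and the Factor Extractor, and the random segments are placeholders for tick-level order-flow features.

```python
# Hedged sketch: a low-dimensional "context" embedding plus a simple
# distinct-from-the-majority selection rule (not the paper's architecture).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
segments = rng.normal(size=(2000, 40))      # placeholder tick-level order-flow segment features

pca = PCA(n_components=5).fit(segments)     # stands in for the Context Encoder
context = pca.transform(segments)

# Stand-in Factor Extractor: segments that reconstruct worst from the context
# embedding are the most distinct from the majority and are kept as candidate factors.
recon_error = np.linalg.norm(segments - pca.inverse_transform(context), axis=1)
factor_idx = np.argsort(recon_error)[-50:]
factors = context[factor_idx]
print(factors.shape)
```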

August 16, 2023 · 2 min · Research Team