Multi-Agent Stock Prediction Systems: Machine Learning Models, Simulations, and Real-Time Trading Strategies

Multi-Agent Stock Prediction Systems: Machine Learning Models, Simulations, and Real-Time Trading Strategies ArXiv ID: 2502.15853 “View on arXiv” Authors: Unknown Abstract This paper presents a comprehensive study on stock price prediction, leveraging advanced machine learning (ML) and deep learning (DL) techniques to improve financial forecasting accuracy. The research evaluates the performance of various recurrent neural network (RNN) architectures, including Long Short-Term Memory (LSTM) networks, Gated Recurrent Units (GRU), and attention-based models. These models are assessed for their ability to capture complex temporal dependencies inherent in stock market data. Our findings show that attention-based models outperform other architectures, achieving the highest accuracy by capturing both short- and long-term dependencies. This study contributes valuable insights into AI-driven financial forecasting, offering practical guidance for developing more accurate and efficient trading systems. ...
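
Since the abstract stops short of architectural detail, here is a minimal sketch, assuming PyTorch, of what an attention-based recurrent model for next-step price prediction can look like: an LSTM encoder with learned attention pooling over its hidden states. The layer sizes and window length are placeholders, not the paper's configuration.

```python
# Minimal sketch (not the paper's exact architecture): an LSTM encoder with a
# learned attention pooling over its hidden states, predicting the next-step value.
import torch
import torch.nn as nn

class AttnLSTM(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)   # scores each time step
        self.head = nn.Linear(hidden, 1)   # regression head

    def forward(self, x):                  # x: (batch, time, features)
        h, _ = self.lstm(x)                # h: (batch, time, hidden)
        w = torch.softmax(self.attn(h), dim=1)   # attention weights over time
        ctx = (w * h).sum(dim=1)           # weighted context vector
        return self.head(ctx).squeeze(-1)  # predicted next-step value

# Toy usage on random data; window length 30, 5 features (OHLCV-style).
x = torch.randn(8, 30, 5)
model = AttnLSTM(n_features=5)
print(model(x).shape)                      # torch.Size([8])
```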

February 21, 2025 · 2 min · Research Team

Network topology of the Euro Area interbank market

Network topology of the Euro Area interbank market ArXiv ID: 2502.15611 “View on arXiv” Authors: Unknown Abstract The rapidly increasing availability of large amounts of granular financial data, paired with the advances of big data related technologies, induces the need for suitable analytics that can represent and extract meaningful information from such data. In this paper we propose a multi-layer network approach to distill the Euro Area (EA) banking system into distinct layers. Each layer of the network represents a specific type of financial relationship between banks, based on various sources of EA granular data collections. The resulting multi-layer network allows one to describe, analyze and compare the topology and structure of EA banks from different perspectives, eventually yielding a more complete picture of the financial market. This granular information representation has the potential to enable researchers and practitioners to better apprehend financial system dynamics as well as to support financial policies to manage and monitor financial risk from a more holistic point of view. ...
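
As an illustration of the data structure only (the layer names and exposures below are made up), one simple way to hold such a multi-layer network in Python is a dictionary of per-layer directed graphs, so each layer's topology can be summarized and compared:

```python
# Illustrative sketch only: a dictionary maps each layer (exposure type) to its own
# directed graph, so the topology of each layer can be compared separately.
import networkx as nx

layers = {"securities": nx.DiGraph(), "derivatives": nx.DiGraph(), "loans": nx.DiGraph()}

# Hypothetical bilateral exposures: (lender, borrower, amount in EUR millions).
layers["loans"].add_weighted_edges_from([("BankA", "BankB", 120.0), ("BankB", "BankC", 45.0)])
layers["derivatives"].add_weighted_edges_from([("BankA", "BankC", 10.0)])
layers["securities"].add_weighted_edges_from([("BankC", "BankA", 80.0)])

# Compare simple topological summaries across layers.
for name, g in layers.items():
    print(name, "nodes:", g.number_of_nodes(), "edges:", g.number_of_edges(),
          "density:", round(nx.density(g), 3))
```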

February 21, 2025 · 2 min · Research Team

A Novel Loss Function for Deep Learning Based Daily Stock Trading System

A Novel Loss Function for Deep Learning Based Daily Stock Trading System ArXiv ID: 2502.17493 “View on arXiv” Authors: Unknown Abstract Making consistently profitable financial decisions in a continuously evolving and volatile stock market has always been a difficult task. Professionals from different disciplines have developed foundational theories to anticipate price movement and evaluate securities such as the famed Capital Asset Pricing Model (CAPM). In recent years, the role of artificial intelligence (AI) in asset pricing has been growing. Although the black-box nature of deep learning models lacks interpretability, they have continued to solidify their position in the financial industry. We aim to further enhance AI’s potential and utility by introducing a return-weighted loss function that will drive top growth while providing the ML models with a limited amount of information. Using only publicly accessible stock data (open/close/high/low, trading volume, sector information) and several technical indicators constructed from them, we propose an efficient daily trading system that detects top growth opportunities. Our best models achieve 61.73% annual return on daily rebalancing with an annualized Sharpe Ratio of 1.18 over 1340 testing days from 2019 to 2024, and 37.61% annual return with an annualized Sharpe Ratio of 0.97 over 1360 testing days from 2005 to 2010. The main drivers for success, especially independent of any domain knowledge, are the novel return-weighted loss function, the integration of categorical and continuous data, and the ML model architecture. We also demonstrate the superiority of our novel loss function over traditional loss functions via several performance metrics and statistical evidence. ...
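
The paper's exact loss is not reproduced here, but a hedged sketch of the general idea of a return-weighted loss, where samples with larger realized returns receive larger weight so the model concentrates on top growth opportunities, might look like this in PyTorch:

```python
# Hedged sketch of a return-weighted loss: each sample's squared error is weighted by
# its realized return, emphasizing names with the largest upside. The exact functional
# form used in the paper is not reproduced here.
import torch

def return_weighted_loss(pred, target, returns, eps=1e-8):
    # pred, target, returns: 1-D tensors of equal length.
    weights = returns.clamp(min=0.0) + eps      # emphasize positive-return names
    weights = weights / weights.sum()
    return (weights * (pred - target) ** 2).sum()

pred = torch.tensor([0.02, -0.01, 0.05], requires_grad=True)
target = torch.tensor([0.03, -0.02, 0.07])
rets = torch.tensor([0.03, -0.02, 0.07])
loss = return_weighted_loss(pred, target, rets)
loss.backward()
print(float(loss))
```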

February 20, 2025 · 2 min · Research Team

Causality Analysis of COVID-19 Induced Crashes in Stock and Commodity Markets: A Topological Perspective

Causality Analysis of COVID-19 Induced Crashes in Stock and Commodity Markets: A Topological Perspective ArXiv ID: 2502.14431 “View on arXiv” Authors: Unknown Abstract The paper presents a comprehensive causality analysis of the US stock and commodity markets during the COVID-19 crash. The dynamics of different sectors are also compared. We use Topological Data Analysis (TDA) on multidimensional time-series to identify crashes in stock and commodity markets. The Wasserstein distance (WD) shows distinct spikes signaling the crash for both stock and commodity markets. We then compare the persistence diagrams of stock and commodity markets using the WD metric. A significant spike in the WD between stock and commodity markets is observed during the crisis, suggesting significant topological differences between the markets. Similar spikes are observed between the sectors of the US market as well. These spikes may be due either to a difference in the magnitude of the crashes in the two markets (or sectors), or to a temporal lag between the two markets, suggesting information flow. We study the Granger-causality between stock and commodity markets and also between different sectors. The results show a bidirectional Granger-causality between commodity and stock markets during the crash period, demonstrating the greater interdependence of financial markets during the crash. However, the overall analysis shows that the causal direction is from stock to commodity. A pairwise Granger-causal analysis between US sectors is also conducted. There is a significant increase in the interdependence between the sectors during the crash period. TDA combined with Granger-causality effectively analyzes the interdependence and sensitivity of different markets and sectors. ...
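
A rough sketch of the two ingredients, using persim for the Wasserstein distance between persistence diagrams and statsmodels for a pairwise Granger-causality test; these libraries and the toy data are my choices, not necessarily the authors':

```python
# Sketch only: Wasserstein distance between two toy persistence diagrams, plus a
# Granger-causality test on synthetic return series with a known lagged dependence.
import numpy as np
from persim import wasserstein
from statsmodels.tsa.stattools import grangercausalitytests

# Two toy persistence diagrams (birth, death pairs), e.g. from sliding-window point clouds.
dgm_stock = np.array([[0.0, 1.2], [0.3, 0.9]])
dgm_commodity = np.array([[0.0, 0.8], [0.4, 0.6]])
print("Wasserstein distance:", wasserstein(dgm_stock, dgm_commodity))

# Granger causality: does the second column help predict the first?
rng = np.random.default_rng(0)
stock = rng.normal(size=300)
commodity = 0.5 * np.roll(stock, 1) + rng.normal(scale=0.5, size=300)  # lagged dependence
grangercausalitytests(np.column_stack([commodity, stock]), maxlag=2)
```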

February 20, 2025 · 2 min · Research Team

Financial fraud detection system based on improved random forest and gradient boosting machine (GBM)

Financial fraud detection system based on improved random forest and gradient boosting machine (GBM) ArXiv ID: 2502.15822 “View on arXiv” Authors: Unknown Abstract This paper proposes a financial fraud detection system based on improved Random Forest (RF) and Gradient Boosting Machine (GBM). Specifically, the system introduces a novel model architecture called GBM-SSRF (Gradient Boosting Machine with Simplified and Strengthened Random Forest), which combines the powerful optimization capabilities of the gradient boosting machine (GBM) with the computational efficiency and improved feature extraction of a simplified and strengthened random forest (SSRF), significantly improving the performance of financial fraud detection. Although the traditional random forest model has good classification capabilities, it has high computational complexity when faced with large-scale data and has certain limitations in feature selection. As a commonly used ensemble learning method, the GBM model has significant advantages in optimizing performance and handling nonlinear problems. However, GBM takes a long time to train and is prone to overfitting when data samples are unbalanced. In response to these limitations, this paper structurally optimizes the random forest, reducing computational complexity and improving feature selection through simplification and strengthening. In addition, the optimized random forest is embedded into the GBM framework, so the model can maintain efficiency and stability with the help of GBM’s gradient optimization capability. Experiments show that the GBM-SSRF model not only performs well but also exhibits good robustness and generalization capabilities, providing an efficient and reliable solution for financial fraud detection. ...
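
A loose sketch of the general pattern, not the paper's GBM-SSRF architecture: a deliberately small, shallow random forest acts as a feature extractor whose out-of-fold probabilities are appended to the inputs of a gradient boosting classifier (scikit-learn, synthetic imbalanced data):

```python
# Loose sketch of "forest features feeding a GBM"; hyperparameters and data are toy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_predict, train_test_split
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=2000, n_features=20, weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# "Simplified" forest: few, shallow trees keep computational cost low.
rf = RandomForestClassifier(n_estimators=50, max_depth=5, random_state=0)
rf_feat_tr = cross_val_predict(rf, X_tr, y_tr, method="predict_proba", cv=5)[:, [1]]
rf.fit(X_tr, y_tr)
rf_feat_te = rf.predict_proba(X_te)[:, [1]]

# Gradient boosting consumes the raw features plus the forest's probability feature.
gbm = GradientBoostingClassifier(random_state=0)
gbm.fit(np.hstack([X_tr, rf_feat_tr]), y_tr)
print("AUC:", roc_auc_score(y_te, gbm.predict_proba(np.hstack([X_te, rf_feat_te]))[:, 1]))
```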

February 20, 2025 · 2 min · Research Team

Modelling the term-structure of default risk under IFRS 9 within a multistate regression framework

Modelling the term-structure of default risk under IFRS 9 within a multistate regression framework ArXiv ID: 2502.14479 “View on arXiv” Authors: Unknown Abstract The lifetime behaviour of loans is notoriously difficult to model, which can compromise a bank’s financial reserves against future losses, if modelled poorly. Therefore, we present a data-driven comparative study amongst three techniques in modelling a series of default risk estimates over the lifetime of each loan, i.e., its term-structure. The behaviour of loans can be described using a nonstationary and time-dependent semi-Markov model, though we model its elements using a multistate regression-based approach. As such, the transition probabilities are explicitly modelled as a function of a rich set of input variables, including macroeconomic and loan-level inputs. Our modelling techniques are deliberately chosen in ascending order of complexity: 1) a Markov chain; 2) beta regression; and 3) multinomial logistic regression. Using residential mortgage data, our results show that each successive model outperforms the previous, likely as a result of greater sophistication. This finding required devising a novel suite of simple model diagnostics, which can itself be reused in assessing sampling representativeness and the performance of other modelling techniques. These contributions surely advance the current practice within banking when conducting multistate modelling. Consequently, we believe that the estimation of loss reserves will be more timeous and accurate under IFRS 9. ...
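
To make the third and most complex technique concrete, here is a bare-bones sketch, assuming scikit-learn and entirely illustrative feature names and data, of a multinomial logistic regression mapping loan-level and macroeconomic inputs to transition probabilities across states (performing, in arrears, default):

```python
# Illustrative sketch only: multinomial logistic regression for state-transition
# probabilities. Feature names, states, and data are hypothetical, not the paper's.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
X = pd.DataFrame({
    "loan_age_months": rng.integers(1, 240, n),
    "interest_rate": rng.normal(0.09, 0.02, n),
    "gdp_growth": rng.normal(0.01, 0.02, n),     # macroeconomic input
})
# Target state at the next period: 0 = performing, 1 = in arrears, 2 = default.
y = rng.choice([0, 1, 2], size=n, p=[0.9, 0.07, 0.03])

clf = LogisticRegression(max_iter=1000).fit(X, y)
# Estimated transition probabilities for one loan at a given point on its term-structure.
print(clf.predict_proba(X.iloc[[0]]))
```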

February 20, 2025 · 2 min · Research Team

Multi-Layer Deep xVA: Structural Credit Models, Measure Changes and Convergence Analysis

Multi-Layer Deep xVA: Structural Credit Models, Measure Changes and Convergence Analysis ArXiv ID: 2502.14766 “View on arXiv” Authors: Unknown Abstract We propose a structural default model for portfolio-wide valuation adjustments (xVAs) and represent it as a system of coupled backward stochastic differential equations. The framework is divided into four layers, each capturing a key component: (i) clean values, (ii) initial margin and Collateral Valuation Adjustment (ColVA), (iii) Credit/Debit Valuation Adjustments (CVA/DVA) together with Margin Valuation Adjustment (MVA), and (iv) Funding Valuation Adjustment (FVA). Because these layers depend on one another through collateral and default effects, a naive Monte Carlo approach would require deeply nested simulations, making the problem computationally intractable. To address this challenge, we use an iterative deep BSDE approach, handling each layer sequentially so that earlier outputs serve as inputs to the subsequent layers. Initial margin is computed via deep quantile regression to reflect margin requirements over the Margin Period of Risk. We also adopt a change-of-measure method that highlights rare but significant defaults of the bank or counterparty, ensuring that these events are accurately captured in the training process. We further extend Han and Long’s (2020) a posteriori error analysis to BSDEs on bounded domains. Due to the random exit from the domain, we obtain an order of convergence of $\mathcal{O}(h^{1/4-\varepsilon})$ rather than the usual $\mathcal{O}(h^{1/2})$. Numerical experiments illustrate that this method drastically reduces computational demands and successfully scales to high-dimensional, non-symmetric portfolios. The results confirm its effectiveness and accuracy, offering a practical alternative to nested Monte Carlo simulations in multi-counterparty xVA analyses. ...
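
One ingredient mentioned above, deep quantile regression for initial margin, can be sketched with the pinball (quantile) loss; the network, data, and quantile level below are placeholders rather than the paper's setup:

```python
# Sketch of deep quantile regression via the pinball loss; everything here is a toy
# stand-in for the paper's initial-margin estimation over the Margin Period of Risk.
import torch
import torch.nn as nn

def pinball_loss(pred, target, q=0.99):
    # Penalizes under-prediction of the q-quantile more heavily than over-prediction.
    diff = target - pred
    return torch.mean(torch.maximum(q * diff, (q - 1.0) * diff))

net = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x = torch.randn(512, 10)                                       # toy risk factors
y = (x.sum(dim=1, keepdim=True) + torch.randn(512, 1)).abs()   # toy exposure change

for _ in range(200):
    opt.zero_grad()
    loss = pinball_loss(net(x), y, q=0.99)
    loss.backward()
    opt.step()
print(float(loss))
```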

February 20, 2025 · 2 min · Research Team

Deep Learning for VWAP Execution in Crypto Markets: Beyond the Volume Curve

Deep Learning for VWAP Execution in Crypto Markets: Beyond the Volume Curve ArXiv ID: 2502.13722 “View on arXiv” Authors: Unknown Abstract Volume-Weighted Average Price (VWAP) is arguably the most prevalent benchmark for trade execution as it provides an unbiased standard for comparing performance across market participants. However, achieving VWAP is inherently challenging due to its dependence on two dynamic factors, volumes and prices. Traditional approaches typically focus on forecasting the market’s volume curve, an approach that may hold up under steady conditions but becomes suboptimal in more volatile environments or markets such as cryptocurrency where prediction error margins are higher. In this study, I propose a deep learning framework that directly optimizes the VWAP execution objective by bypassing the intermediate step of volume curve prediction. Leveraging automatic differentiation and custom loss functions, my method calibrates order allocation to minimize VWAP slippage, thereby fully addressing the complexities of the execution problem. My results demonstrate that this direct optimization approach consistently achieves lower VWAP slippage compared to conventional methods, even when utilizing a naive linear model presented in arXiv:2410.21448. They validate the observation that strategies optimized for VWAP performance tend to diverge from accurate volume curve predictions and thus underscore the advantage of directly modeling the execution objective. This research contributes a more efficient and robust framework for VWAP execution in volatile markets, illustrating the potential of deep learning in complex financial systems where direct objective optimization is crucial. Although my empirical analysis focuses on cryptocurrency markets, the underlying principles of the framework are readily applicable to other asset classes such as equities. ...
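
A minimal sketch, under stated assumptions, of what direct optimization of the execution objective can look like: allocation weights over intraday bins are parameterized, and the squared slippage of the achieved price against the market VWAP is minimized end to end with automatic differentiation. The binning, data, and optimizer are illustrative, not the paper's framework:

```python
# Minimal sketch of directly optimizing VWAP slippage with autodiff; toy data throughout.
import torch

def vwap_slippage(alloc, prices, volumes):
    # alloc: (bins,) order allocation summing to 1; prices/volumes: market data per bin.
    achieved = (alloc * prices).sum()
    market_vwap = (prices * volumes).sum() / volumes.sum()
    return (achieved / market_vwap - 1.0) ** 2

logits = torch.zeros(24, requires_grad=True)          # 24 intraday bins
opt = torch.optim.Adam([logits], lr=0.05)
prices = torch.linspace(100.0, 101.0, 24)             # toy price path
volumes = torch.rand(24) + 0.1                        # toy volume curve

for _ in range(300):
    opt.zero_grad()
    loss = vwap_slippage(torch.softmax(logits, dim=0), prices, volumes)
    loss.backward()
    opt.step()
print("slippage:", float(loss))
```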

February 19, 2025 · 2 min · Research Team

Gaining efficiency in deep policy gradient method for continuous-time optimal control problems

Gaining efficiency in deep policy gradient method for continuous-time optimal control problems ArXiv ID: 2502.14141 “View on arXiv” Authors: Unknown Abstract In this paper, we propose an efficient implementation of deep policy gradient method (PGM) for optimal control problems in continuous time. The proposed method has the ability to manage the allocation of computational resources, number of trajectories, and complexity of architecture of the neural network. This is particularly important for continuous-time problems that require a fine time discretization. Each step of this method focuses on a different time scale and learns a policy, modeled by a neural network, for a discretized optimal control problem. The first step has the coarsest time discretization. As we proceed to other steps, the time discretization becomes finer. The optimal trained policy in each step is also used to provide data for the next step. We accompany the multi-scale deep PGM with a theoretical result on allocation of computational resources to obtain a targeted efficiency and test our method on the linear-quadratic stochastic optimal control problem. ...
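
A crude sketch of the coarse-to-fine idea on a toy linear-quadratic problem: a policy network is trained on a coarse time grid and then reused to warm-start training on progressively finer grids. The pathwise (reparameterized) gradient of the simulated cost stands in for the paper's policy gradient estimator:

```python
# Crude multi-scale sketch: train the same policy network over an increasingly fine
# time discretization of a toy linear-quadratic stochastic control problem.
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))

def simulated_cost(n_steps, n_paths=256, T=1.0, sigma=0.3):
    dt = T / n_steps
    x = torch.ones(n_paths, 1)
    cost = torch.zeros(n_paths, 1)
    for k in range(n_steps):
        t = torch.full_like(x, k * dt)
        u = policy(torch.cat([t, x], dim=1))            # control from the policy network
        cost = cost + (x ** 2 + u ** 2) * dt             # running quadratic cost
        x = x + u * dt + sigma * torch.sqrt(torch.tensor(dt)) * torch.randn_like(x)
    return (cost + x ** 2).mean()                         # add terminal cost

for n_steps in (10, 40, 160):                             # coarse-to-fine schedule
    opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
    for _ in range(100):
        opt.zero_grad()
        loss = simulated_cost(n_steps)
        loss.backward()
        opt.step()
    print(n_steps, "steps, cost:", float(loss))
```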

February 19, 2025 · 2 min · Research Team

Stock Price Prediction Using a Hybrid LSTM-GNN Model: Integrating Time-Series and Graph-Based Analysis

Stock Price Prediction Using a Hybrid LSTM-GNN Model: Integrating Time-Series and Graph-Based Analysis ArXiv ID: 2502.15813 “View on arXiv” Authors: Unknown Abstract This paper presents a novel hybrid model that integrates Long Short-Term Memory (LSTM) networks and Graph Neural Networks (GNNs) to significantly enhance the accuracy of stock market predictions. The LSTM component adeptly captures temporal patterns in stock price data, effectively modeling the time series dynamics of financial markets. Concurrently, the GNN component leverages Pearson correlation and association analysis to model inter-stock relational data, capturing complex nonlinear polyadic dependencies influencing stock prices. The model is trained and evaluated using an expanding window validation approach, enabling continuous learning from increasing amounts of data and adaptation to evolving market conditions. Extensive experiments conducted on historical stock data demonstrate that our hybrid LSTM-GNN model achieves a mean squared error (MSE) of 0.00144, a 10.6% reduction relative to the standalone LSTM model’s MSE of 0.00161. Furthermore, the hybrid model outperforms traditional and advanced benchmarks, including linear regression, convolutional neural networks (CNN), and dense networks. These compelling results underscore the significant potential of combining temporal and relational data through a hybrid approach, offering a powerful tool for real-time trading and financial analysis. ...
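
An illustrative sketch only, mirroring the idea of the hybrid rather than its exact design: per-stock LSTM embeddings are mixed through a single graph-convolution step whose adjacency comes from thresholded Pearson correlations of historical returns (PyTorch, toy data):

```python
# Illustrative hybrid sketch: LSTM embeddings per stock, one message-passing step over
# a Pearson-correlation graph, then a per-stock prediction head. Data are random.
import torch
import torch.nn as nn

n_stocks, window, n_feat = 10, 30, 5
returns = torch.randn(n_stocks, 250)                       # toy return history

# Build the relational graph: keep strongly correlated pairs, row-normalize.
corr = torch.corrcoef(returns)
adj = (corr.abs() > 0.3).float()
adj = adj / adj.sum(dim=1, keepdim=True)

class LSTMGNN(nn.Module):
    def __init__(self, n_feat, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_feat, hidden, batch_first=True)
        self.gnn = nn.Linear(hidden, hidden)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x, adj):                              # x: (stocks, time, features)
        h, _ = self.lstm(x)
        h = h[:, -1, :]                                     # last hidden state per stock
        h = torch.relu(self.gnn(adj @ h))                   # one graph-convolution step
        return self.head(h).squeeze(-1)                     # next-step prediction per stock

model = LSTMGNN(n_feat)
print(model(torch.randn(n_stocks, window, n_feat), adj).shape)  # torch.Size([10])
```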

February 19, 2025 · 2 min · Research Team