
Artificial Intelligence-based Analysis of Change in Public Finance between US and International Markets

Artificial Intelligence-based Analysis of Change in Public Finance between US and International Markets ArXiv ID: 2403.18823 “View on arXiv” Authors: Unknown Abstract Public finances are one of the fundamental mechanisms of economic governance, referring to the financial activities and decisions made by government entities to fund public services, projects, and operations through assets. In today’s globalized landscape, even subtle shifts in one nation’s public debt landscape can have significant impacts on international finances, necessitating a nuanced understanding of the correlations between international and national markets to help investors make informed investment decisions. Therefore, by leveraging the capabilities of artificial intelligence, this study utilizes neural networks to depict the correlations between US and international public finances and to predict changes in international public finances based on changes in US public finances. With the neural network model achieving a commendable Mean Squared Error (MSE) of 2.79, it affirms a discernible correlation and also plots the effect of US market volatility on international markets. To further test the accuracy and significance of the model, an economic analysis was conducted to correlate the model’s results with historical stock market changes. This model demonstrates significant potential for investors to predict changes in international public finances based on signals from US markets, marking a significant stride in comprehending the intricacies of global public finance and the role of artificial intelligence in decoding its multifaceted patterns for practical forecasting. ...
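
The paper does not publish its code; as a rough illustration of the setup the abstract describes, the sketch below fits a single linear neuron by gradient descent on synthetic US/international change series (all data and parameters are hypothetical) and scores it with the MSE metric the abstract reports.

```python
import random

# Hypothetical illustration: learn a linear map from changes in a US
# public-finance series to changes in an international series, and score
# it with mean squared error (MSE), the metric reported in the paper.
random.seed(0)

# Synthetic data: international changes respond to US changes with noise.
us = [random.gauss(0.0, 1.0) for _ in range(200)]
intl = [0.6 * x + random.gauss(0.0, 0.3) for x in us]

# One linear neuron trained by gradient descent on the MSE loss.
w, b, lr = 0.0, 0.0, 0.05
for _ in range(500):
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(us, intl)) / len(us)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(us, intl)) / len(us)
    w -= lr * grad_w
    b -= lr * grad_b

mse = sum((w * x + b - y) ** 2 for x, y in zip(us, intl)) / len(us)
print(round(w, 2), round(mse, 3))
```

A real version would use a multi-layer network and actual public-finance series; the evaluation step (held-out MSE) is the same.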

December 10, 2023 · 2 min · Research Team

Machine-learning regression methods for American-style path-dependent contracts

Machine-learning regression methods for American-style path-dependent contracts ArXiv ID: 2311.16762 “View on arXiv” Authors: Unknown Abstract Evaluating financial products with early-termination clauses, in particular those with path-dependent structures, is challenging. This paper focuses on Asian options, look-back options, and callable certificates. We will compare regression methods for pricing and computing sensitivities, highlighting modern machine learning techniques against traditional polynomial basis functions. Specifically, we will analyze randomized recurrent and feed-forward neural networks, along with a novel approach using signatures of the underlying price process. For option sensitivities like Delta and Gamma, we will incorporate Chebyshev interpolation. Our findings show that machine learning algorithms often match the accuracy and efficiency of traditional methods for Asian and look-back options, while randomized neural networks are best for callable certificates. Furthermore, we apply Chebyshev interpolation for Delta and Gamma calculations for the first time in Asian options and callable certificates. ...

November 28, 2023 · 2 min · Research Team

Earnings Prediction Using Recurrent Neural Networks

Earnings Prediction Using Recurrent Neural Networks ArXiv ID: 2311.10756 “View on arXiv” Authors: Unknown Abstract Firm disclosures about future prospects are crucial for corporate valuation and compliance with global regulations, such as the EU’s MAR and the US’s SEC Rule 10b-5 and RegFD. To comply with disclosure obligations, issuers must identify nonpublic information with potential material impact on security prices as only new, relevant and unexpected information materially affects prices in efficient markets. Financial analysts, assumed to represent public knowledge on firms’ earnings prospects, face limitations in offering comprehensive coverage and unbiased estimates. This study develops a neural network to forecast future firm earnings, using four decades of financial data, addressing analysts’ coverage gaps and potentially revealing hidden insights. The model avoids selectivity and survivorship biases as it allows for missing data. Furthermore, the model is able to produce both fiscal-year-end and quarterly earnings predictions. Its performance surpasses benchmark models from the academic literature by a wide margin and outperforms analysts’ forecasts for fiscal-year-end earnings predictions. ...
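
The abstract's claim that the model "allows for missing data" can be made concrete with a standard masking device (an illustration only, not necessarily the authors' exact scheme): impute a neutral value and append a per-feature missingness indicator, so the network can learn to discount imputed entries.

```python
# Standard missing-data device for neural inputs: replace None with a
# neutral value and append 0/1 missing flags, one per original feature.
def mask_features(rows, neutral=0.0):
    """Return rows with None imputed and missing-indicator flags appended."""
    out = []
    for row in rows:
        flags = [1.0 if v is None else 0.0 for v in row]
        filled = [neutral if v is None else v for v in row]
        out.append(filled + flags)
    return out

quarters = [[1.2, None, 0.4], [None, 0.9, 0.1]]
print(mask_features(quarters))
# [[1.2, 0.0, 0.4, 0.0, 1.0, 0.0], [0.0, 0.9, 0.1, 1.0, 0.0, 0.0]]
```

Because no firm-quarter has to be dropped for incomplete fundamentals, this kind of handling is what lets a model avoid the selectivity and survivorship biases the abstract mentions.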

November 10, 2023 · 2 min · Research Team

Blending gradient boosted trees and neural networks for point and probabilistic forecasting of hierarchical time series

Blending gradient boosted trees and neural networks for point and probabilistic forecasting of hierarchical time series ArXiv ID: 2310.13029 “View on arXiv” Authors: Unknown Abstract In this paper we tackle the problem of point and probabilistic forecasting by describing a blending methodology for machine learning models from the gradient boosted trees and neural network families. These principles were successfully applied in the recent M5 Competition on both the Accuracy and Uncertainty tracks. The key points of our methodology are: a) transform the task to regression on sales for a single day, b) information-rich feature engineering, c) create a diverse set of state-of-the-art machine learning models, and d) carefully construct validation sets for model tuning. We argue that the diversity of the machine learning models, along with the careful selection of validation examples, were the most important ingredients for the effectiveness of our approach. Although the forecasting data had an inherent hierarchical structure (12 levels), none of our proposed solutions exploited that hierarchical scheme. Using the proposed methodology, our team was ranked within the gold medal range on both the Accuracy and Uncertainty tracks. Inference code along with already trained models is available at https://github.com/IoannisNasios/M5_Uncertainty_3rd_place ...
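
The blending step at the heart of the methodology can be sketched in a few lines: combine two point forecasts with a convex weight chosen on a validation set. The numbers below are synthetic stand-ins; the real models are gradient boosted trees and neural networks.

```python
# Sketch of forecast blending: pick the convex weight between a
# "tree-like" and a "network-like" forecast that minimizes validation
# MSE, mirroring the paper's emphasis on careful validation-set design.
def blend(w, a, b):
    return [w * x + (1 - w) * y for x, y in zip(a, b)]

def mse(pred, truth):
    return sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(truth)

truth = [10.0, 12.0, 11.0, 13.0]
model_a = [9.0, 13.0, 10.0, 14.0]   # stand-in for the GBT forecast
model_b = [11.0, 11.0, 12.0, 12.0]  # stand-in for the NN forecast

# Grid-search the blend weight on the validation set.
best_w = min((w / 20 for w in range(21)),
             key=lambda w: mse(blend(w, model_a, model_b), truth))
print(best_w, round(mse(blend(best_w, model_a, model_b), truth), 3))
```

Here the two models' errors are perfectly anti-correlated, so the equal-weight blend is exact; in practice the gain is smaller but the mechanism, error diversity plus a trustworthy validation set, is the same one the abstract credits.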

October 19, 2023 · 2 min · Research Team

Neural Network for valuing Bitcoin options under jump-diffusion and market sentiment model

Neural Network for valuing Bitcoin options under jump-diffusion and market sentiment model ArXiv ID: 2310.09622 “View on arXiv” Authors: Unknown Abstract Cryptocurrencies, and Bitcoin in particular, are prone to wild swings resulting in frequent jumps in prices, making them historically popular for traders to speculate on. A better understanding of these fluctuations can greatly benefit crypto investors by allowing them to make informed decisions. Recent literature claims that the Bitcoin price is influenced by sentiment about the Bitcoin system. Transactions, as well as popularity, have shown positive evidence as potential drivers of the Bitcoin price. This study considers a bivariate jump-diffusion model to describe the Bitcoin price dynamics together with the number of Google searches affecting the price, representing a sentiment indicator. We obtain a closed formula for the Bitcoin price and derive the Black-Scholes equation for Bitcoin options. We first solve the corresponding Bitcoin option partial differential equation for the pricing process by introducing artificial neural networks and incorporating multi-layer perceptron techniques. The prediction performance and the model validation using various highly volatile stocks were assessed. ...
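
To make the assumed price dynamics concrete, the sketch below simulates a univariate jump-diffusion path, using a Bernoulli approximation to the jump arrivals over each small step. Parameters are invented, and the paper's actual model is bivariate, coupling the price with a Google-searches sentiment factor.

```python
import math
import random

# Hypothetical sketch: one path of a jump-diffusion, the kind of dynamics
# the paper posits for Bitcoin. Over a small step dt, a jump arrives with
# probability lam * dt (Bernoulli approximation of a Poisson arrival).
random.seed(2)
s, mu, sigma = 100.0, 0.1, 0.8      # start price, drift, diffusion vol
lam, jump_sigma = 2.0, 0.3          # jump intensity, jump-size vol
dt, n = 1 / 365, 365                # daily steps over one year

path = [s]
for _ in range(n):
    z = random.gauss(0.0, 1.0)
    jump = random.gauss(0.0, jump_sigma) if random.random() < lam * dt else 0.0
    s *= math.exp((mu - 0.5 * sigma ** 2) * dt
                  + sigma * math.sqrt(dt) * z + jump)
    path.append(s)

print(len(path), path[-1] > 0)
```

The exponential update keeps the price strictly positive while allowing the abrupt moves that motivate the jump component; the option-pricing PDE the paper solves with a multi-layer perceptron is derived from dynamics of this type.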

October 14, 2023 · 2 min · Research Team

Integration of Fractional Order Black-Scholes Merton with Neural Network

Integration of Fractional Order Black-Scholes Merton with Neural Network ArXiv ID: 2310.04464 “View on arXiv” Authors: Unknown Abstract This study enhances option pricing by presenting a unique pricing model, fractional-order Black-Scholes-Merton (FOBSM), which is based on the Black-Scholes-Merton (BSM) model. The main goal is to improve the precision and authenticity of option pricing, matching it more closely with the financial landscape. The approach integrates the strengths of both BSM and neural networks (NN) with complex diffusion dynamics. This study emphasizes the need to take fractional derivatives into account when analyzing financial market dynamics. Since FOBSM captures memory characteristics in sequential data, it is better at simulating real-world systems than integer-order models. Findings reveal that in complex diffusion dynamics, this hybridization approach improves the accuracy of option price predictions. The key contribution of this work lies in the development of a novel option pricing model (FOBSM) that leverages fractional calculus and neural networks to enhance accuracy in capturing complex diffusion dynamics and memory effects in financial data. ...
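
The memory property that motivates FOBSM shows up directly in a Grünwald-Letnikov discretisation of the fractional derivative, whose weights span the entire history of the signal rather than just the last sample. This is a minimal illustration, not the paper's implementation.

```python
# Grünwald-Letnikov estimate of the fractional derivative D^alpha f at
# the last grid point. The weights (-1)^k * C(alpha, k) touch every past
# sample, which is exactly the "memory" that integer-order models lack.
def gl_fractional_derivative(f, alpha, h):
    """f: samples on a uniform grid of spacing h; alpha: derivative order."""
    weights = [1.0]
    for k in range(1, len(f)):
        # Recurrence for (-1)^k * binomial(alpha, k).
        weights.append(weights[-1] * (k - 1 - alpha) / k)
    n = len(f) - 1
    return sum(w * f[n - k] for k, w in enumerate(weights)) / h ** alpha

# Sanity check: at alpha = 1 the weights collapse to a first difference.
f = [0.0, 1.0, 4.0, 9.0]   # f(t) = t^2 sampled with h = 1
print(gl_fractional_derivative(f, 1.0, 1.0))
```

For non-integer alpha the weights decay slowly instead of vanishing after one term, so every historical price contributes to the current dynamics, which is what the abstract means by memory characteristics in sequential data.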

October 5, 2023 · 2 min · Research Team

Analysis of frequent trading effects of various machine learning models

Analysis of frequent trading effects of various machine learning models ArXiv ID: 2311.10719 “View on arXiv” Authors: Unknown Abstract In recent years, high-frequency trading has emerged as a crucial strategy in stock trading. This study aims to develop an advanced high-frequency trading algorithm and compare the performance of three different mathematical models: the combination of the cross-entropy loss function and the quasi-Newton algorithm, the FCNN model, and the support vector machine (SVM). The proposed algorithm employs neural network predictions to generate trading signals and execute buy and sell operations based on specific conditions. By harnessing the power of neural networks, the algorithm enhances the accuracy and reliability of the trading strategy. To assess the effectiveness of the algorithm, the study evaluates the performance of the three mathematical models. The combination of the cross-entropy loss function and the quasi-Newton algorithm is a widely utilized logistic regression approach. The FCNN model, on the other hand, is a deep learning algorithm that can extract and classify features from stock data. Meanwhile, the SVM is a supervised learning algorithm recognized for achieving improved classification results by mapping data into high-dimensional spaces. By comparing the performance of these three models, the study aims to determine the most effective approach for high-frequency trading. This research makes a valuable contribution by introducing a novel methodology for high-frequency trading, thereby providing investors with a more accurate and reliable stock trading strategy. ...
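
The signal-generation step described above, executing buy and sell operations when model outputs meet specific conditions, can be sketched as a thresholding rule on predicted up-move probabilities. The probabilities, thresholds, and returns below are made up for illustration.

```python
# Turn classifier probabilities of an up-move into buy/sell/hold signals
# under threshold conditions, then tally a toy PnL against realized
# next-tick returns. All numbers are hypothetical.
def signals(probs, buy=0.6, sell=0.4):
    return ["buy" if p > buy else "sell" if p < sell else "hold"
            for p in probs]

probs = [0.72, 0.55, 0.31, 0.65]        # model outputs per tick
rets = [0.004, -0.001, -0.006, 0.002]   # realized next-tick returns
acts = signals(probs)
pnl = sum(r if a == "buy" else -r if a == "sell" else 0.0
          for a, r in zip(acts, rets))
print(acts, round(pnl, 4))
```

Any of the three compared models (logistic regression, FCNN, SVM) plugs into this rule by supplying `probs`; the comparison in the paper is over how well each model's probabilities rank the subsequent moves.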

September 14, 2023 · 2 min · Research Team

Applying Deep Learning to Calibrate Stochastic Volatility Models

Applying Deep Learning to Calibrate Stochastic Volatility Models ArXiv ID: 2309.07843 “View on arXiv” Authors: Unknown Abstract Stochastic volatility models, where the volatility is a stochastic process, can capture most of the essential stylized facts of implied volatility surfaces and give more realistic dynamics of the volatility smile/skew. However, they come with the significant issue that they take too long to calibrate. Alternative calibration methods based on Deep Learning (DL) techniques have been recently used to build fast and accurate solutions to the calibration problem. Huge and Savine developed a Differential Machine Learning (DML) approach, where Machine Learning models are trained on samples of not only features and labels but also differentials of labels to features. The present work aims to apply the DML technique to price vanilla European options (i.e. the calibration instruments), more specifically, puts when the underlying asset follows a Heston model and then calibrate the model on the trained network. DML allows for fast training and accurate pricing. The trained neural network dramatically reduces Heston calibration’s computation time. In this work, we also introduce different regularisation techniques, and we apply them notably in the case of the DML. We compare their performance in reducing overfitting and improving the generalisation error. The DML performance is also compared to the classical DL (without differentiation) one in the case of Feed-Forward Neural Networks. We show that the DML outperforms the DL. The complete code for our experiments is provided in the GitHub repository: https://github.com/asridi/DML-Calibration-Heston-Model ...
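
The core of the DML idea, training on labels and on differentials of labels with respect to features, fits in a toy example: fit a quadratic model to a payoff x² whose known "delta" is 2x, under a combined value-plus-derivative loss. This is a sketch of the principle only, not the paper's Heston calibration pipeline.

```python
# Differential Machine Learning in miniature: the loss penalises both
# value errors and derivative (delta) errors, so the fitted function is
# pinned down by far fewer samples than a value-only fit would need.
xs = [-1.0, -0.5, 0.0, 0.5, 1.0]
ys = [x * x for x in xs]     # "prices"
dys = [2 * x for x in xs]    # "deltas" (differentials of labels)

a, b, c, lam, lr = 0.0, 0.0, 0.0, 1.0, 0.1   # model: a + b*x + c*x^2
for _ in range(3000):
    ga = gb = gc = 0.0
    for x, y, dy in zip(xs, ys, dys):
        ev = (a + b * x + c * x * x) - y     # value error
        ed = (b + 2 * c * x) - dy            # derivative error
        ga += 2 * ev
        gb += 2 * ev * x + lam * 2 * ed
        gc += 2 * ev * x * x + lam * 2 * ed * 2 * x
    n = len(xs)
    a, b, c = a - lr * ga / n, b - lr * gb / n, c - lr * gc / n

# Expect roughly a = 0, b = 0, c = 1 (the exact target function).
print(round(a, 3), round(b, 3), round(c, 3))
```

In the paper the model is a neural network, the derivatives come from automatic differentiation of Monte Carlo payoffs, and `lam` balances the two loss terms; the structure of the objective is the same.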

September 14, 2023 · 3 min · Research Team

Global Neural Networks and The Data Scaling Effect in Financial Time Series Forecasting

Global Neural Networks and The Data Scaling Effect in Financial Time Series Forecasting ArXiv ID: 2309.02072 “View on arXiv” Authors: Unknown Abstract Neural networks have revolutionized many empirical fields, yet their application to financial time series forecasting remains controversial. In this study, we demonstrate that the conventional practice of estimating models locally in data-scarce environments may underlie the mixed empirical performance observed in prior work. By focusing on volatility forecasting, we employ a dataset comprising over 10,000 global stocks and implement a global estimation strategy that pools information across cross-sections. Our econometric analysis reveals that forecasting accuracy improves markedly as the training dataset becomes larger and more heterogeneous. Notably, even with as little as 12 months of data, globally trained networks deliver robust predictions for individual stocks and portfolios that are not even in the training dataset. Furthermore, our interpretation of the model dynamics shows that these networks not only capture key stylized facts of volatility but also exhibit resilience to outliers and rapid adaptation to market regime changes. These findings underscore the importance of leveraging extensive and diverse datasets in financial forecasting and advocate for a shift from traditional local training approaches to integrated global estimation methods. ...

September 5, 2023 · 2 min · Research Team

Deep multi-step mixed algorithm for high dimensional non-linear PDEs and associated BSDEs

Deep multi-step mixed algorithm for high dimensional non-linear PDEs and associated BSDEs ArXiv ID: 2308.14487 “View on arXiv” Authors: Unknown Abstract We propose a new multistep deep learning-based algorithm for the resolution of moderate to high dimensional nonlinear backward stochastic differential equations (BSDEs) and their corresponding parabolic partial differential equations (PDEs). Our algorithm relies on the iterated time discretisation of the BSDE and approximates its solution and gradient using deep neural networks and automatic differentiation at each time step. The approximations are obtained by sequential minimisation of local quadratic loss functions at each time step through stochastic gradient descent. We provide an analysis of the approximation error in the case of a network architecture with weight constraints, requiring only low regularity conditions on the generator of the BSDE. The algorithm increases accuracy over its single-step parent model and has reduced complexity compared to similar models in the literature. ...

August 28, 2023 · 2 min · Research Team