
Robust Graph Neural Networks for Stability Analysis in Dynamic Networks

Robust Graph Neural Networks for Stability Analysis in Dynamic Networks ArXiv ID: 2411.11848 “View on arXiv” Authors: Unknown Abstract In the current context of accelerated globalization and digitalization, the complexity and uncertainty of financial markets are increasing, and the identification and prevention of economic risks have become a key link in maintaining the stability of the financial system. Traditional risk identification methods often have limitations because they are difficult to cope with the multi-level and dynamically changing complex relationships in financial networks. With the rapid development of financial technology, graph neural network (GNN) technology, as an emerging deep learning method, has gradually shown great potential in the field of financial risk management. GNN can map transaction behaviors, financial institutions, individuals, and their interactive relationships in financial networks into graph structures, and effectively capture potential patterns and abnormal signals in financial data through embedded representation learning. Using this technology, financial institutions can extract valuable information from complex transaction networks, identify hidden dangers or abnormal behaviors that may cause systemic risks in a timely manner, optimize decision-making processes, and improve the accuracy of risk warnings. This paper explores the economic risk identification algorithm based on the GNN algorithm, aiming to provide financial institutions and regulators with more intelligent technical tools to help maintain the security and stability of the financial market. Improving the efficiency of economic risk identification through innovative technical means is expected to further enhance the risk resistance of the financial system and lay the foundation for building a robust global financial system. ...
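
The graph formulation described above (entities as nodes, transactions as edges, risk scores learned from neighborhood structure) can be made concrete with a small sketch. The two-layer graph convolution below is only an illustration of the general idea; the layer sizes, the toy adjacency matrix, and the sigmoid risk score are assumptions, not the architecture studied in the paper.

```python
# Minimal sketch (not the paper's model): a two-layer graph convolution that
# scores nodes of a transaction graph for risk. Shapes and the toy graph are
# illustrative assumptions.
import torch
import torch.nn as nn

def normalized_adjacency(adj: torch.Tensor) -> torch.Tensor:
    """Symmetrically normalize A + I, as in a standard GCN."""
    a_hat = adj + torch.eye(adj.size(0))
    deg = a_hat.sum(dim=1)
    d_inv_sqrt = torch.diag(deg.pow(-0.5))
    return d_inv_sqrt @ a_hat @ d_inv_sqrt

class RiskGCN(nn.Module):
    def __init__(self, in_dim: int, hidden_dim: int = 16):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hidden_dim, bias=False)
        self.w2 = nn.Linear(hidden_dim, 1, bias=False)

    def forward(self, x: torch.Tensor, a_norm: torch.Tensor) -> torch.Tensor:
        h = torch.relu(a_norm @ self.w1(x))         # aggregate neighbor features
        return torch.sigmoid(a_norm @ self.w2(h))   # per-node risk score in (0, 1)

# Toy example: 4 entities (accounts/institutions), 3 features each.
adj = torch.tensor([[0., 1., 0., 0.],
                    [1., 0., 1., 1.],
                    [0., 1., 0., 0.],
                    [0., 1., 0., 0.]])
features = torch.randn(4, 3)
model = RiskGCN(in_dim=3)
scores = model(features, normalized_adjacency(adj))
print(scores.squeeze())  # one risk score per node
```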

October 29, 2024 · 2 min · Research Team

Forecasting Company Fundamentals

Forecasting Company Fundamentals ArXiv ID: 2411.05791 “View on arXiv” Authors: Unknown Abstract Company fundamentals are key to assessing companies’ financial and overall success and stability. Forecasting them is important in multiple fields, including investing and econometrics. While statistical and contemporary machine learning methods have been applied to many time series tasks, there is a lack of comparison of these approaches on this particularly challenging data regime. To this end, we try to bridge this gap and thoroughly evaluate the theoretical properties and practical performance of 24 deterministic and probabilistic company fundamentals forecasting models on real company data. We observe that deep learning models provide superior forecasting performance to classical models, in particular when considering uncertainty estimation. To validate the findings, we compare them to human analyst expectations and find that their accuracy is comparable to the automatic forecasts. We further show how these high-quality forecasts can benefit automated stock allocation. We close by presenting possible ways of integrating domain experts to further improve performance and increase reliability. ...
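
One common way to obtain the uncertainty estimates mentioned above is to train a forecaster on quantile (pinball) losses so that it outputs a predictive interval rather than a point estimate. The sketch below shows that loss in isolation; it is a generic illustration, not one of the 24 models benchmarked in the paper.

```python
# Illustrative only: the pinball (quantile) loss is one standard route to the
# kind of uncertainty estimation discussed above; the paper's exact models
# and losses may differ.
import torch

def pinball_loss(pred: torch.Tensor, target: torch.Tensor, q: float) -> torch.Tensor:
    """Penalize under-prediction with weight q and over-prediction with 1 - q."""
    err = target - pred
    return torch.mean(torch.maximum(q * err, (q - 1.0) * err))

# Three heads (10th/50th/90th percentile) bracket a fundamental such as
# next-quarter revenue; the values here are random placeholders.
target = torch.randn(32)
preds = {0.1: torch.randn(32), 0.5: torch.randn(32), 0.9: torch.randn(32)}
loss = sum(pinball_loss(p, target, q) for q, p in preds.items())
print(float(loss))
```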

October 21, 2024 · 2 min · Research Team

Simultaneously Solving FBSDEs and their Associated Semilinear Elliptic PDEs with Small Neural Operators

Simultaneously Solving FBSDEs and their Associated Semilinear Elliptic PDEs with Small Neural Operators ArXiv ID: 2410.14788 “View on arXiv” Authors: Unknown Abstract Forward-backward stochastic differential equations (FBSDEs) play an important role in optimal control, game theory, economics, mathematical finance, and reinforcement learning. Unfortunately, the available FBSDE solvers operate on individual FBSDEs, meaning that they cannot provide a computationally feasible strategy for solving large families of FBSDEs, as these solvers must be re-run several times. Neural operators (NOs) offer an alternative approach for simultaneously solving large families of decoupled FBSDEs by directly approximating the solution operator mapping inputs (terminal conditions and dynamics of the backward process) to outputs (solutions of the associated FBSDE). Though universal approximation theorems (UATs) guarantee the existence of such NOs, these NOs are unrealistically large. Upon making only a few simple, theoretically guided tweaks to the standard convolutional NO architecture, we confirm that “small” NOs can uniformly approximate the solution operator for structured families of FBSDEs with random terminal time, uniformly on suitable compact sets determined by Sobolev norms, using a logarithmic depth, a constant width, and a rank that is polynomial in the reciprocal approximation error. This result is rooted in our second result, and main contribution to the NOs-for-PDEs literature, showing that convolutional NOs of similar depth and width, with a rank that grows only quadratically (at a dimension-free rate), can uniformly approximate the solution operator of the class of semilinear elliptic PDEs associated with these families of FBSDEs. A key insight we uncover into how NOs work is that the convolutional layers of our NO can approximately implement the fixed-point iteration used to prove the existence of a unique solution to these semilinear elliptic PDEs. ...
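
The fixed-point iteration mentioned in the last sentence can be illustrated on a toy problem. The sketch below runs a plain Picard iteration for a 1D semilinear elliptic equation on a finite-difference grid; it is the classical iteration the abstract refers to, not the paper's neural operator, and the nonlinearity and grid size are arbitrary choices.

```python
# Toy 1D semilinear elliptic problem -u''(x) = f(u(x)) on (0, 1), u(0) = u(1) = 0,
# solved by Picard (fixed-point) iteration. This is the kind of iteration a
# convolutional layer could emulate; it is not the paper's neural operator.
import numpy as np

n = 99                      # interior grid points
h = 1.0 / (n + 1)
# Tridiagonal finite-difference matrix for -u'' with Dirichlet boundaries.
L = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

def f(u):
    return np.sin(u) + 1.0   # mildly nonlinear right-hand side (assumed)

u = np.zeros(n)
for _ in range(50):          # Picard iteration: solve the linearized problem repeatedly
    u_new = np.linalg.solve(L, f(u))
    if np.max(np.abs(u_new - u)) < 1e-10:
        break
    u = u_new

print(u.max())               # peak of the converged solution
```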

October 18, 2024 · 3 min · Research Team

Can GANs Learn the Stylized Facts of Financial Time Series?

Can GANs Learn the Stylized Facts of Financial Time Series? ArXiv ID: 2410.09850 “View on arXiv” Authors: Unknown Abstract In the financial sector, a sophisticated financial time series simulator is essential for evaluating financial products and investment strategies. Traditional back-testing methods have mainly relied on historical data-driven approaches or mathematical model-driven approaches, such as various stochastic processes. However, in the current era of AI, data-driven approaches, where models learn the intrinsic characteristics of data directly, have emerged as promising techniques. Generative Adversarial Networks (GANs) have surfaced as promising generative models, capturing data distributions through adversarial learning. Financial time series, characterized by ‘stylized facts’ such as random walks, mean-reverting patterns, unexpected jumps, and time-varying volatility, make it challenging for deep neural networks to learn their intrinsic characteristics. This study examines the ability of GANs to learn diverse and complex temporal patterns (i.e., stylized facts) of both univariate and multivariate financial time series. Our extensive experiments revealed that GANs can capture various stylized facts of financial time series, but their performance varies significantly depending on the choice of generator architecture. This suggests that naively applying GANs might not effectively capture the intricate characteristics inherent in financial time series, highlighting the importance of carefully considering and validating the modeling choices. ...
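
To make the setup concrete, the sketch below wires up a minimal GAN for return sequences: a recurrent generator, an MLP discriminator, and one adversarial update. The architectures, sizes, and placeholder "real" data are illustrative assumptions rather than the configurations evaluated in the paper.

```python
# Minimal GAN sketch for return series; not a benchmarked architecture.
import torch
import torch.nn as nn

seq_len, noise_dim, batch = 64, 8, 32

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(noise_dim, 32, batch_first=True)
        self.out = nn.Linear(32, 1)
    def forward(self, z):                        # z: (batch, seq_len, noise_dim)
        h, _ = self.rnn(z)
        return self.out(h)                       # fake returns: (batch, seq_len, 1)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(seq_len, 64),
                                 nn.ReLU(), nn.Linear(64, 1))
    def forward(self, x):                        # x: (batch, seq_len, 1)
        return self.net(x)                       # real/fake logit

gen, disc = Generator(), Discriminator()
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

real = 0.01 * torch.randn(batch, seq_len, 1)     # placeholder "real" returns
z = torch.randn(batch, seq_len, noise_dim)

# One adversarial step: update the discriminator, then the generator.
fake = gen(z)
loss_d = (bce(disc(real), torch.ones(batch, 1))
          + bce(disc(fake.detach()), torch.zeros(batch, 1)))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

loss_g = bce(disc(fake), torch.ones(batch, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
print(float(loss_d), float(loss_g))
```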

October 13, 2024 · 2 min · Research Team

Deep Learning Methods for S Shaped Utility Maximisation with a Random Reference Point

Deep Learning Methods for S Shaped Utility Maximisation with a Random Reference Point ArXiv ID: 2410.05524 “View on arXiv” Authors: Unknown Abstract We consider the portfolio optimisation problem where the terminal function is an S-shaped utility applied at the difference between the wealth and a random benchmark process. We develop several numerical methods for solving the problem using deep learning and duality methods. We use deep learning methods to solve the associated Hamilton-Jacobi-Bellman equation for both the primal and dual problems, and the adjoint equation arising from the stochastic maximum principle. We compare the solution of this non-concave problem to that of concavified utility, a random function depending on the benchmark, in both complete and incomplete markets. We give some numerical results for power and log utilities to show the accuracy of the suggested algorithms. ...
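
The objective being optimized is an expected S-shaped utility of the gap between terminal wealth and a random benchmark. The toy Monte Carlo evaluation below shows that objective for one assumed parameterization (power-type gains and losses with a loss-aversion factor); the exponents and the lognormal benchmark are placeholders, not the paper's specification.

```python
# Toy illustration of E[U(X_T - B_T)] with an S-shaped utility U.
# Exponents, loss-aversion factor, and the lognormal benchmark are assumed.
import numpy as np

def s_shaped_utility(x, p=0.5, q=0.5, lam=2.0):
    """Concave for gains, convex and loss-averse for shortfalls."""
    return np.where(x >= 0, np.abs(x) ** p, -lam * np.abs(x) ** q)

rng = np.random.default_rng(0)
wealth = rng.lognormal(mean=0.05, sigma=0.2, size=100_000)      # terminal wealth X_T
benchmark = rng.lognormal(mean=0.03, sigma=0.1, size=100_000)   # random reference B_T
objective = s_shaped_utility(wealth - benchmark).mean()          # E[U(X_T - B_T)]
print(objective)
```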

October 7, 2024 · 2 min · Research Team

Price predictability in limit order book with deep learning model

Price predictability in limit order book with deep learning model ArXiv ID: 2409.14157 “View on arXiv” Authors: Unknown Abstract This study explores the prediction of high-frequency price changes using deep learning models. Although state-of-the-art methods perform well, their complexity impedes the understanding of successful predictions. We found that an inadequately defined target price process may render predictions meaningless by incorporating past information. The commonly used three-class problem in asset price prediction can generally be divided into volatility and directional prediction. When relying solely on the price process, directional prediction performance is not substantial. However, volume imbalance improves directional prediction performance. ...
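
The volume-imbalance feature highlighted above is straightforward to compute from top-of-book data, as the short sketch below shows, together with a simple three-class directional label from the next mid-price move. Column names and thresholds are assumptions; the paper's exact target definition may differ.

```python
# Order-book imbalance and a toy up/flat/down label from the next mid-price move.
import pandas as pd

lob = pd.DataFrame({
    "bid_size": [120, 80, 200, 60],
    "ask_size": [100, 150, 90, 180],
    "bid_px":   [99.99, 99.98, 100.00, 99.97],
    "ask_px":   [100.01, 100.02, 100.01, 100.03],
})

# Imbalance in [-1, 1]: positive when bid volume dominates.
lob["imbalance"] = (lob["bid_size"] - lob["ask_size"]) / (lob["bid_size"] + lob["ask_size"])

# Directional label from the next mid-price change (assumed threshold of 1e-6).
lob["mid"] = (lob["bid_px"] + lob["ask_px"]) / 2
move = lob["mid"].shift(-1) - lob["mid"]
lob["direction"] = pd.cut(move, bins=[-float("inf"), -1e-6, 1e-6, float("inf")],
                          labels=["down", "flat", "up"])
print(lob[["imbalance", "mid", "direction"]])
```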

September 21, 2024 · 2 min · Research Team

MANA-Net: Mitigating Aggregated Sentiment Homogenization with News Weighting for Enhanced Market Prediction

MANA-Net: Mitigating Aggregated Sentiment Homogenization with News Weighting for Enhanced Market Prediction ArXiv ID: 2409.05698 “View on arXiv” Authors: Unknown Abstract It is widely acknowledged that extracting market sentiments from news data benefits market predictions. However, existing methods of using financial sentiments remain simplistic, relying on equal-weight and static aggregation to manage sentiments from multiple news items. This leads to a critical issue termed “Aggregated Sentiment Homogenization”, which has been explored through our analysis of a large financial news dataset from industry practice. This phenomenon occurs when aggregating numerous sentiments, causing representations to converge towards the mean values of sentiment distributions and thereby smoothing out unique and important information. Consequently, the aggregated sentiment representations lose much predictive value of news data. To address this problem, we introduce the Market Attention-weighted News Aggregation Network (MANA-Net), a novel method that leverages a dynamic market-news attention mechanism to aggregate news sentiments for market prediction. MANA-Net learns the relevance of news sentiments to price changes and assigns varying weights to individual news items. By integrating the news aggregation step into the networks for market prediction, MANA-Net allows for trainable sentiment representations that are optimized directly for prediction. We evaluate MANA-Net using the S&P 500 and NASDAQ 100 indices, along with financial news spanning from 2003 to 2018. Experimental results demonstrate that MANA-Net outperforms various recent market prediction methods, enhancing Profit & Loss by 1.1% and the daily Sharpe ratio by 0.252. ...
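
The core idea, replacing equal-weight averaging of news sentiments with learned, market-conditioned attention weights, can be sketched in a few lines. The module below is a simplified stand-in for MANA-Net's aggregation step; the dimensions, the scoring network, and the softmax normalization are assumptions rather than the published architecture.

```python
# Simplified attention-weighted news aggregation (not the published MANA-Net).
import torch
import torch.nn as nn

class AttentionAggregator(nn.Module):
    def __init__(self, news_dim: int, market_dim: int, hidden: int = 32):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(news_dim + market_dim, hidden),
                                   nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, news: torch.Tensor, market: torch.Tensor) -> torch.Tensor:
        # news: (n_items, news_dim); market: (market_dim,)
        query = market.expand(news.size(0), -1)
        weights = torch.softmax(self.score(torch.cat([news, query], dim=-1)), dim=0)
        return (weights * news).sum(dim=0)       # weighted sentiment representation

news_embeddings = torch.randn(50, 8)             # e.g. 50 news items for one day
market_state = torch.randn(4)                    # e.g. recent returns/volatility
agg = AttentionAggregator(news_dim=8, market_dim=4)(news_embeddings, market_state)
print(agg.shape)                                 # torch.Size([8])
```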

September 9, 2024 · 2 min · Research Team

Pricing American Options using Machine Learning Algorithms

Pricing American Options using Machine Learning Algorithms ArXiv ID: 2409.03204 “View on arXiv” Authors: Unknown Abstract This study investigates the application of machine learning algorithms, particularly in the context of pricing American options using Monte Carlo simulations. Traditional models, such as the Black-Scholes-Merton framework, often fail to adequately address the complexities of American options, which include the ability for early exercise and non-linear payoff structures. Machine learning was applied within the Least Squares Method (LSM), in conjunction with Monte Carlo simulation. This research aims to improve the accuracy and efficiency of option pricing. The study evaluates several machine learning models, including neural networks and decision trees, highlighting their potential to outperform traditional approaches. The results from applying machine learning algorithms within LSM indicate that integrating machine learning with Monte Carlo simulations can enhance pricing accuracy and provide more robust predictions, offering significant insights into quantitative finance by merging classical financial theories with modern computational techniques. The dataset was split into features and the target variable representing bid prices, with an 80-20 train-validation split. LSTM and GRU models were constructed using TensorFlow’s Keras API, each with four hidden layers of 200 neurons and an output layer for bid price prediction, optimized with the Adam optimizer and MSE loss function. The GRU model outperformed the LSTM model across all evaluated metrics, demonstrating lower mean absolute error, mean squared error, and root mean squared error, along with greater stability and efficiency in training. ...
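
A minimal version of the approach, Longstaff-Schwartz (LSM) pricing in which the usual polynomial regression of continuation values is replaced by an ML regressor, is sketched below for an American put. The strike, volatility, and the decision-tree regressor are illustrative choices, not the paper's calibrated setup.

```python
# LSM for an American put with a decision-tree regressor for continuation values.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
s0, strike, r, sigma, T, steps, paths = 100.0, 100.0, 0.05, 0.2, 1.0, 50, 20_000
dt = T / steps

# Simulate geometric Brownian motion paths for the underlying.
z = rng.standard_normal((paths, steps))
s = s0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1))
payoff = lambda x: np.maximum(strike - x, 0.0)    # American put payoff

cashflow = payoff(s[:, -1])
for t in range(steps - 2, -1, -1):
    cashflow *= np.exp(-r * dt)                   # discount back one step
    itm = payoff(s[:, t]) > 0                     # regress only on in-the-money paths
    if itm.sum() < 10:
        continue
    reg = DecisionTreeRegressor(max_depth=4).fit(
        s[itm, t].reshape(-1, 1), cashflow[itm])
    continuation = reg.predict(s[itm, t].reshape(-1, 1))
    immediate = payoff(s[itm, t])
    exercise = immediate > continuation           # exercise where payoff beats continuation
    idx = np.where(itm)[0][exercise]
    cashflow[idx] = immediate[exercise]

price = np.exp(-r * dt) * cashflow.mean()         # discount first step back to today
print(round(price, 3))
```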

September 5, 2024 · 2 min · Research Team

Less is more: AI Decision-Making using Dynamic Deep Neural Networks for Short-Term Stock Index Prediction

Less is more: AI Decision-Making using Dynamic Deep Neural Networks for Short-Term Stock Index Prediction ArXiv ID: 2408.11740 “View on arXiv” Authors: Unknown Abstract In this paper we introduce a multi-agent deep-learning method which trades in the Futures markets based on the US S&P 500 index. The method (referred to as Model A) is an innovation founded on existing well-established machine-learning models which sample market prices and associated derivatives in order to decide whether the investment should be long/short or closed (zero exposure), on a day-to-day basis. We compare the predictions with some conventional machine-learning methods, namely Long Short-Term Memory, Random Forest and Gradient-Boosted Trees. Results are benchmarked against a passive model in which the Futures contracts are held (long) continuously with the same exposure (level of investment). Historical tests are based on daily daytime trading carried out over a period of 6 calendar years (2018-23). We find that Model A outperforms the passive investment in key performance metrics, placing it within the top quartile performance of US Large Cap active fund managers. Model A also outperforms the three machine-learning classification comparators over this period. We observe that Model A is extremely efficient (doing less and getting more) with an exposure to the market of only 41.95% compared to the 100% market exposure of the passive investment, and thus provides increased profitability with reduced risk. ...
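
The exposure and performance comparison described above can be reproduced in spirit with a few lines: map daily long/short/flat decisions to positions, then compare realized exposure and cumulative return against an always-long benchmark. The random signals and returns below are placeholders, not Model A's outputs or actual S&P 500 futures data.

```python
# Back-of-the-envelope comparison of a long/short/flat strategy vs. a passive
# always-long benchmark; signals and returns are random placeholders.
import numpy as np

rng = np.random.default_rng(1)
daily_returns = rng.normal(0.0004, 0.011, size=1500)            # ~6 years of daily returns
signals = rng.choice([1, 0, -1], size=1500, p=[0.3, 0.6, 0.1])  # long / flat / short

strategy = signals * daily_returns
exposure = np.mean(signals != 0)                  # fraction of days with a position
cum_strategy = np.prod(1 + strategy) - 1
cum_passive = np.prod(1 + daily_returns) - 1
print(f"exposure: {exposure:.1%}, strategy: {cum_strategy:.1%}, passive: {cum_passive:.1%}")
```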

August 21, 2024 · 2 min · Research Team

Deep-MacroFin: Informed Equilibrium Neural Network for Continuous Time Economic Models

Deep-MacroFin: Informed Equilibrium Neural Network for Continuous Time Economic Models ArXiv ID: 2408.10368 “View on arXiv” Authors: Unknown Abstract In this paper, we present Deep-MacroFin, a comprehensive framework designed to solve partial differential equations, with a particular focus on models in continuous time economics. This framework leverages deep learning methodologies, including Multi-Layer Perceptrons and the newly developed Kolmogorov-Arnold Networks. It is optimized using economic information encapsulated by Hamilton-Jacobi-Bellman (HJB) equations and coupled algebraic equations. The application of neural networks holds the promise of accurately resolving high-dimensional problems with fewer computational demands and limitations compared to other numerical methods. This framework can be readily adapted for systems of partial differential equations in high dimensions. Importantly, it offers a more efficient (5× less CUDA memory and 40× fewer FLOPs in 100D problems) and user-friendly implementation than existing libraries. We also incorporate a time-stepping scheme to enhance training stability for nonlinear HJB equations, enabling the solution of 50D economic models. ...
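
Frameworks of this kind train a network so that a PDE residual vanishes on sampled states. The sketch below shows that loop for a deliberately simplified linear stand-in equation, rho*V = u(x) + mu*V'(x) + 0.5*sigma^2*V''(x), with boundary conditions omitted; it illustrates residual-based training only and is not Deep-MacroFin's implementation.

```python
# Physics-informed training of an MLP value function against a PDE residual.
# The equation is a linear stand-in for an HJB equation; boundary/terminal
# conditions are omitted for brevity.
import torch
import torch.nn as nn

rho, mu, sigma = 0.05, 0.02, 0.2
u = lambda x: torch.log(1.0 + x**2)              # assumed flow-utility term

value = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 64),
                      nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(value.parameters(), lr=1e-3)

for step in range(2000):
    x = 2.0 * torch.rand(256, 1)                 # sample the state space [0, 2]
    x.requires_grad_(True)
    v = value(x)
    v_x = torch.autograd.grad(v.sum(), x, create_graph=True)[0]    # V'(x)
    v_xx = torch.autograd.grad(v_x.sum(), x, create_graph=True)[0]  # V''(x)
    residual = rho * v - (u(x) + mu * v_x + 0.5 * sigma**2 * v_xx)
    loss = (residual**2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

print(float(loss))   # residual loss shrinks as V fits the equation
```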

August 19, 2024 · 2 min · Research Team