
Forecasting Company Fundamentals

ArXiv ID: 2411.05791 · Authors: Unknown

Abstract: Company fundamentals are key to assessing companies' financial and overall success and stability. Forecasting them is important in multiple fields, including investing and econometrics. While statistical and contemporary machine learning methods have been applied to many time series tasks, these approaches have not been systematically compared on this particularly challenging data regime. To this end, we bridge this gap and thoroughly evaluate the theoretical properties and practical performance of 24 deterministic and probabilistic company fundamentals forecasting models on real company data. We observe that deep learning models provide superior forecasting performance to classical models, in particular when considering uncertainty estimation. To validate the findings, we compare the automatic forecasts to human analyst expectations and find the two to be comparable in accuracy. We further show how these high-quality forecasts can benefit automated stock allocation. We close by presenting possible ways of integrating domain experts to further improve performance and increase reliability. ...
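
The core of such a comparison is pairing a point-accuracy metric with a proper scoring rule for the probabilistic forecasts. Below is a minimal sketch of that pairing, assuming toy data and a naive last-value baseline (none of this is the paper's code): SMAPE scores the point forecast, and a sample-based CRPS estimator scores the distributional one.

```python
# Minimal sketch: scoring deterministic vs. probabilistic forecasts.
# `history` and `actual` are hypothetical placeholders, not paper data.
import numpy as np

def smape(forecast, actual):
    """Symmetric MAPE for a point forecast."""
    return 2.0 * np.mean(np.abs(forecast - actual) /
                         (np.abs(forecast) + np.abs(actual)))

def crps_from_samples(samples, actual):
    """Sample-based CRPS estimate for a probabilistic forecast."""
    samples = np.asarray(samples)
    term1 = np.mean(np.abs(samples - actual))
    term2 = 0.5 * np.mean(np.abs(samples[:, None] - samples[None, :]))
    return term1 - term2

rng = np.random.default_rng(0)
history = rng.normal(100.0, 5.0, size=20)   # e.g. quarterly revenue (toy)
actual = 104.0

point_forecast = history[-1]                # naive last-value baseline
prob_forecast = rng.normal(history[-1], history.std(), size=1000)

print("SMAPE:", smape(point_forecast, actual))
print("CRPS :", crps_from_samples(prob_forecast, actual))
```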

October 21, 2024 · 2 min · Research Team

Price predictability in limit order book with deep learning model

ArXiv ID: 2409.14157 · Authors: Unknown

Abstract: This study explores the prediction of high-frequency price changes using deep learning models. Although state-of-the-art methods perform well, their complexity impedes the understanding of successful predictions. We found that an inadequately defined target price process may render predictions meaningless by incorporating past information. The commonly used three-class problem in asset price prediction can generally be divided into volatility and directional prediction. When relying solely on the price process, directional prediction performance is not substantial. However, volume imbalance improves directional prediction performance. ...
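
The abstract contrasts two ingredients: the standard three-class mid-price label and the best-level volume imbalance used as a directional feature. A rough sketch of both follows; the horizon and threshold values are hypothetical, not taken from the paper.

```python
# Sketch of the two quantities named in the abstract (my own code).
import numpy as np

def three_class_labels(mid, horizon=10, threshold=1e-4):
    """Label each tick by its future mid-price return:
    +1 up, -1 down, 0 flat. `mid` is a float array of mid-prices."""
    future = np.roll(mid, -horizon)
    ret = (future - mid) / mid
    labels = np.zeros_like(ret, dtype=int)
    labels[ret > threshold] = 1
    labels[ret < -threshold] = -1
    return labels[:-horizon]        # drop wrapped-around tail entries

def volume_imbalance(bid_vol, ask_vol):
    """Best-level order book imbalance in [-1, 1]."""
    return (bid_vol - ask_vol) / (bid_vol + ask_vol)

# labels = three_class_labels(mid_prices)  # `mid_prices`: hypothetical array
# imb = volume_imbalance(best_bid_volume, best_ask_volume)
```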

September 21, 2024 · 2 min · Research Team

Signature of maturity in cryptocurrency volatility

ArXiv ID: 2409.03676 · Authors: Unknown

Abstract: We study the fluctuations, particularly the inequality of fluctuations, in cryptocurrency prices over the last ten years. We calculate the inequality in the price fluctuations through different measures, such as the Gini and Kolkata indices, and also the $Q$ factor (given by the ratio between the highest value and the average value) of these fluctuations. We compare the results with the equivalent quantities in some of the more prominent national currencies and see that while the fluctuations (or inequalities in such fluctuations) for cryptocurrencies were initially significantly higher than national currencies, over time the fluctuation levels of cryptocurrencies tend towards the levels characteristic of national currencies. We also compare similar quantities for a few prominent stock prices. ...
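
The three inequality measures are straightforward to reproduce. The sketch below (my own code, using the standard definitions and the $Q$ factor as defined in the abstract) computes them on toy heavy-tailed absolute returns.

```python
# Gini index, Kolkata index, and Q factor of a fluctuation sample.
import numpy as np

def gini(x):
    """Gini index of a non-negative sample."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    cum = np.cumsum(x)
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

def kolkata_index(x):
    """Kolkata index k: the fraction k where the Lorenz curve L(k) = 1 - k."""
    x = np.sort(np.asarray(x, dtype=float))
    lorenz = np.cumsum(x) / x.sum()
    frac = np.arange(1, len(x) + 1) / len(x)
    return frac[np.argmin(np.abs(lorenz - (1 - frac)))]

def q_factor(x):
    """Q factor: highest value divided by the average value."""
    x = np.asarray(x, dtype=float)
    return x.max() / x.mean()

rng = np.random.default_rng(1)
abs_returns = np.abs(rng.standard_t(df=3, size=5000))  # heavy-tailed toy data
print(gini(abs_returns), kolkata_index(abs_returns), q_factor(abs_returns))
```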

September 5, 2024 · 2 min · Research Team

Causal Hierarchy in the Financial Market Network -- Uncovered by the Helmholtz-Hodge-Kodaira Decomposition

ArXiv ID: 2408.12839 · Authors: Unknown

Abstract: Granger causality can uncover the cause and effect relationships in financial networks. However, such networks can be convoluted and difficult to interpret, but the Helmholtz-Hodge-Kodaira decomposition can split them into a rotational and a gradient component, the latter of which reveals the hierarchy of Granger causality flow. Using Kenneth French's business sector return time series, it is revealed that during the Covid crisis, precious metals and pharmaceutical products were causal drivers of the financial network. Moreover, the estimated Granger causality network shows high connectivity during crises, which means that the approach presented here can be especially useful for understanding market crises by revealing the dominant drivers of the crisis dynamics. ...
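
The gradient component of the decomposition can be recovered by solving a graph Laplacian system for a node potential. The sketch below is a simplification under two assumptions of mine (a complete unit-weight graph and a precomputed antisymmetric net-flow matrix), not the authors' pipeline; in particular, estimating the Granger causality network itself is omitted.

```python
# Extracting the gradient (hierarchy) component of a directed flow network.
import numpy as np

def hodge_potential(flow):
    """Given an antisymmetric net-flow matrix F (F[i, j] = causal flow
    from i to j), solve the Laplacian system L @ phi = div for the Hodge
    potential phi; higher phi means higher in the causal hierarchy."""
    n = flow.shape[0]
    div = flow.sum(axis=1)                        # net outflow at each node
    laplacian = n * np.eye(n) - np.ones((n, n))   # complete unit-weight graph
    phi = np.linalg.pinv(laplacian) @ div         # pinv: L is singular
    return phi - phi.mean()

# Toy example: sector 0 drives 1, which drives 2.
F = np.array([[0.0, 0.6, 0.2],
              [-0.6, 0.0, 0.5],
              [-0.2, -0.5, 0.0]])
print(hodge_potential(F))   # descending potential reveals the hierarchy
```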

August 23, 2024 · 2 min · Research Team

Comparative analysis of stationarity for Bitcoin and the S&P500

ArXiv ID: 2408.02973 · Authors: Unknown

Abstract: This paper compares and contrasts stationarity between the conventional stock market and cryptocurrency. The dataset used for the analysis is the intraday price indices of the S&P500 from 1996 to 2023 and the intraday Bitcoin indices from 2019 to 2023, both in USD. We adopt the definition of 'wide sense stationary', which constrains the time independence of the first and second moments of a time series. The testing method used in this paper follows the Wiener-Khinchin Theorem, i.e., that for a wide sense stationary process, the power spectral density and the autocorrelation are a Fourier transform pair. We demonstrate that localized stationarity can be achieved by truncating the time series into segments; for each segment, detrending and normalizing the price return are required. These results show that the S&P500 price return can achieve stationarity for the full 28-year period with a detrending window of 12 months and a constrained normalization window of 10 minutes. With truncated segments, a larger normalization window can be used to establish stationarity, indicating that within a segment the data are more homogeneous. For the Bitcoin price return, the segment with higher volatility presents stationarity with a normalization window of 60 minutes, whereas stationarity cannot be established in the other segments. ...
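
One way to operationalize the Wiener-Khinchin check is to compare a segment-averaged PSD estimate against the PSD implied by the full-sample autocorrelation: for a wide sense stationary series the two should agree. The sketch below is my own loose construction of such a test, not the paper's procedure.

```python
# A loose Wiener-Khinchin consistency check (my construction).
import numpy as np
from scipy.signal import welch

def wk_gap(x, m=256):
    """Mean gap between the two normalized PSD estimates; values near
    zero are consistent with wide sense stationarity. Needs len(x) >> m."""
    x = np.asarray(x, dtype=float)
    x = (x - x.mean()) / x.std()                     # normalize the series
    n = len(x)
    acf = np.correlate(x, x, mode="full")[n - 1:n - 1 + m] / n  # lags 0..m-1
    psd_acf = np.abs(np.fft.rfft(acf))               # PSD via autocorrelation
    _, psd_welch = welch(x, nperseg=m)               # segment-averaged PSD
    psd_acf /= psd_acf.sum()                         # compare shapes only
    psd_welch /= psd_welch.sum()
    return float(np.mean(np.abs(psd_acf - psd_welch)))

# e.g. wk_gap(np.diff(np.log(prices)))  # `prices` is a hypothetical series
```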

August 6, 2024 · 2 min · Research Team

Contrastive Learning of Asset Embeddings from Financial Time Series

ArXiv ID: 2407.18645 · Authors: Unknown

Abstract: Representation learning has emerged as a powerful paradigm for extracting valuable latent features from complex, high-dimensional data. In financial domains, learning informative representations for assets can be used for tasks like sector classification and risk management. However, the complex and stochastic nature of financial markets poses unique challenges. We propose a novel contrastive learning framework to generate asset embeddings from financial time series data. Our approach leverages the similarity of asset returns over many subwindows to generate informative positive and negative samples, using a statistical sampling strategy based on hypothesis testing to address the noisy nature of financial data. We explore various contrastive loss functions that capture the relationships between assets in different ways to learn a discriminative representation space. Experiments on real-world datasets demonstrate the effectiveness of the learned asset embeddings on benchmark industry classification and portfolio optimization tasks. In each case our novel approaches significantly outperform existing baselines, highlighting the potential for contrastive learning to capture meaningful and actionable relationships in financial data. ...
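
Among the many possible contrastive losses, an InfoNCE-style objective over paired return subwindows is a representative baseline. The sketch below is a generic variant under that assumption, not the paper's framework; the linear encoder, window length, and batch are toy placeholders.

```python
# Generic InfoNCE-style contrastive loss over asset return subwindows.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """z1[i] and z2[i] embed two subwindows of asset i (a positive pair);
    every other asset in the batch acts as a negative."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.T / temperature          # pairwise cosine similarities
    targets = torch.arange(len(z1))           # positives on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage: a linear encoder mapping 60-step return windows to 16 dims.
encoder = torch.nn.Linear(60, 16)
returns_a = torch.randn(32, 60)   # subwindow 1 for 32 assets (toy data)
returns_b = torch.randn(32, 60)   # subwindow 2 for the same 32 assets
loss = info_nce(encoder(returns_a), encoder(returns_b))
loss.backward()
```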

July 26, 2024 · 2 min · Research Team

HARd to Beat: The Overlooked Impact of Rolling Windows in the Era of Machine Learning

ArXiv ID: 2406.08041 · Authors: Unknown

Abstract: We investigate the predictive abilities of the heterogeneous autoregressive (HAR) model compared to machine learning (ML) techniques across an unprecedented dataset of 1,455 stocks. Our analysis focuses on the role of fitting schemes, particularly the training window and re-estimation frequency, in determining the HAR model's performance. Despite extensive hyperparameter tuning, ML models fail to surpass the linear benchmark set by HAR when utilizing a refined fitting approach for the latter. Moreover, the simplicity of HAR allows for an interpretable model with drastically lower computational costs. We assess performance using QLIKE, MSE, and realized utility metrics, finding that HAR consistently outperforms its ML counterparts when both rely solely on realized volatility and VIX as predictors. Our results underscore the importance of a correctly specified fitting scheme. They suggest that properly fitted HAR models provide superior forecasting accuracy, establishing robust guidelines for their practical application and use as a benchmark. This study not only reaffirms the efficacy of the HAR model but also provides a critical perspective on the practical limitations of ML approaches in realized volatility forecasting. ...
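
The HAR model itself is a plain OLS regression of next-day realized volatility on daily, weekly, and monthly RV averages, and the abstract's point is that the rolling fitting scheme matters. Below is a textbook-form sketch of both (not the authors' code); the window length and refit frequency are illustrative.

```python
# HAR-RV with rolling-window re-estimation (standard textbook form).
import numpy as np

def har_features(rv):
    """Daily, weekly (5-day), and monthly (22-day) realized-volatility
    averages, aligned so that row t predicts rv[t + 1]."""
    t = np.arange(21, len(rv) - 1)
    daily = rv[t]
    weekly = np.array([rv[i - 4:i + 1].mean() for i in t])
    monthly = np.array([rv[i - 21:i + 1].mean() for i in t])
    X = np.column_stack([np.ones_like(daily), daily, weekly, monthly])
    return X, rv[t + 1]

def rolling_har_forecasts(rv, window=1000, refit_every=1):
    """One-step-ahead forecasts, re-estimating the OLS betas on a
    rolling window every `refit_every` steps."""
    X, y = har_features(rv)
    preds, beta = [], None
    for t in range(window, len(y)):
        if beta is None or (t - window) % refit_every == 0:
            beta, *_ = np.linalg.lstsq(X[t - window:t], y[t - window:t],
                                       rcond=None)
        preds.append(X[t] @ beta)
    return np.array(preds), y[window:]
```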

June 12, 2024 · 2 min · Research Team

The Theory of Intrinsic Time: A Primer

ArXiv ID: 2406.07354 · Authors: Unknown

Abstract: The concept of time mostly plays a subordinate role in finance and economics. The assumption is that time flows continuously and that time series data should be analyzed at regular, equidistant intervals. Nonetheless, the concept of an event-based measure of time was first introduced nearly 60 years ago. This paper expands on this theme by discussing the paradigm of intrinsic time, its origins, history, and modern applications. Departing from traditional, continuous measures of time, intrinsic time proposes an event-based, algorithmic framework that captures the dynamic and fluctuating nature of real-world phenomena more accurately. Unsuspected implications arise in general for complex systems and specifically for financial markets. For instance, novel structures and regularities are revealed, otherwise obscured by any analysis utilizing equidistant time intervals. Of particular interest is the emergence of a multiplicity of scaling laws, a hallmark signature of an underlying organizational principle in complex systems. Moreover, a central insight from this novel paradigm is the realization that universal time does not exist; instead, time is observer-dependent, shaped by the intrinsic activity unfolding within complex systems. This research opens up new avenues for economic modeling and forecasting, paving the way for a deeper understanding of the invisible forces that guide the evolution and emergence of market dynamics and financial systems. An exciting and rich landscape of possibilities emerges within the paradigm of intrinsic time. ...
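
The canonical algorithmic clock of intrinsic time is the directional-change event: time "ticks" whenever the price reverses by a fixed threshold from its latest extreme. Below is a short sketch of that standard algorithm (my own code; the threshold value is arbitrary).

```python
# Standard directional-change event detector (event-based intrinsic time).
def directional_change_events(prices, delta=0.005):
    """Return (index, 'up'/'down') intrinsic-time events: an event fires
    when price reverses by fraction `delta` from its most recent extreme."""
    events, mode = [], None          # mode: current trend direction
    extreme = prices[0]
    for i, p in enumerate(prices):
        if mode in (None, "up"):
            if p > extreme:
                extreme = p                       # new high, trend continues
            elif (extreme - p) / extreme >= delta:
                events.append((i, "down"))        # downward reversal event
                mode, extreme = "down", p
        if mode == "down":
            if p < extreme:
                extreme = p                       # new low, trend continues
            elif (p - extreme) / extreme >= delta:
                events.append((i, "up"))          # upward reversal event
                mode, extreme = "up", p
    return events
```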

June 11, 2024 · 2 min · Research Team

Dissecting Multifractal detrended cross-correlation analysis

ArXiv ID: 2406.19406 · Authors: Unknown

Abstract: In this work we address the controversies surrounding the multifractal detrended cross-correlation analysis (MF-DCCA) method, which have persisted since its inception almost two decades ago. To this end we propose several new options for dealing with negative cross-covariance between two time series, which may serve to construct a more robust view of the multifractal spectrum shared by the series. We compare these novel options with the proposals already existing in the literature, and we provide fast code in C, R, and Python for both the new and the existing proposals. We test the different algorithms on synthetic series with an exact analytical solution, as well as on daily price series of ethanol and sugar in Brazil from 2010 to 2023. ...
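
For reference, the baseline fluctuation function that the new options modify is the standard MF-DCCA form, which takes absolute values of the segment cross-covariances. A condensed sketch of that baseline follows (my own code, not the paper's released C/R/Python implementations); the paper's novel options replace the absolute-value step.

```python
# Standard MF-DCCA fluctuation function F_q(s) (baseline variant).
import numpy as np

def mfdcca_fq(x, y, scales, qs, order=1):
    """F_q(s) for two equally long series; polynomial detrending of order
    `order` in each non-overlapping segment of length s (s < len(x))."""
    X, Y = np.cumsum(x - np.mean(x)), np.cumsum(y - np.mean(y))  # profiles
    out = np.empty((len(qs), len(scales)))
    for j, s in enumerate(scales):
        t = np.arange(s)
        covs = []
        for v in range(len(X) // s):
            xs, ys = X[v * s:(v + 1) * s], Y[v * s:(v + 1) * s]
            rx = xs - np.polyval(np.polyfit(t, xs, order), t)  # residuals
            ry = ys - np.polyval(np.polyfit(t, ys, order), t)
            covs.append(np.mean(rx * ry))       # detrended cross-covariance
        covs = np.abs(np.array(covs))           # the controversial step
        for i, q in enumerate(qs):
            if q == 0:                          # limit case, needs covs > 0
                out[i, j] = np.exp(0.5 * np.mean(np.log(covs)))
            else:
                out[i, j] = np.mean(covs ** (q / 2)) ** (1 / q)
    return out
```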

June 9, 2024 · 2 min · Research Team

Estimation of tail risk measures in finance: Approaches to extreme value mixture modeling

ArXiv ID: 2407.05933 · Authors: Unknown

Abstract: This thesis evaluates most of the extreme value mixture models and methods that have appeared in the literature and implements them in the context of finance and insurance. It also reviews and studies extreme value theory, time series, volatility clustering, and risk measurement methods in detail. Comparing the performance of extreme value mixture models and methods on different simulated distributions shows that the method based on kernel density estimation does not consistently achieve the best, or even near-best, performance, especially for the estimation of the extreme upper or lower tail of the distribution. Preprocessing time series data with a generalized autoregressive conditional heteroskedasticity (GARCH) model and applying extreme value mixture models to the residuals extracted from the GARCH fit can improve the goodness of fit and the estimation of the tail distribution. ...
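
The two-step pipeline in the last sentence is the familiar McNeil-Frey approach: filter returns with a GARCH(1,1), then fit a generalized Pareto distribution to the tail of the standardized residuals. The sketch below assumes the `arch` and `scipy` packages and is a generic illustration, not the thesis code.

```python
# Generic GARCH-then-EVT tail risk sketch (McNeil-Frey style).
import numpy as np
from arch import arch_model        # pip install arch
from scipy.stats import genpareto

def garch_evt_var(returns, alpha=0.01, tail_frac=0.10):
    """One-day-ahead VaR at level alpha, reported as a positive loss."""
    fit = arch_model(returns, vol="Garch", p=1, q=1).fit(disp="off")
    z = np.asarray(fit.std_resid)
    z = z[~np.isnan(z)]                           # standardized residuals
    losses = -z                                   # work with the loss tail
    u = np.quantile(losses, 1 - tail_frac)        # tail threshold
    excess = losses[losses > u] - u
    xi, _, beta = genpareto.fit(excess, floc=0)   # GPD shape and scale
    z_q = u + beta / xi * ((alpha / tail_frac) ** (-xi) - 1)  # needs xi != 0
    sigma = np.sqrt(fit.forecast(horizon=1).variance.values[-1, 0])
    mu = float(fit.params.get("mu", 0.0))
    return sigma * z_q - mu
```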

June 1, 2024 · 2 min · Research Team