
Multiple split approach – multidimensional probabilistic forecasting of electricity markets

Multiple split approach – multidimensional probabilistic forecasting of electricity markets ArXiv ID: 2407.07795 “View on arXiv” Authors: Unknown Abstract In this article, a multiple split method is proposed that enables construction of multidimensional probabilistic forecasts of a selected set of variables. The method uses repeated resampling to estimate the uncertainty of simultaneous multivariate predictions. This nonparametric approach bridges the gap between point and probabilistic predictions and can be combined with different point forecasting methods. The performance of the method is evaluated on data describing the German short-term electricity market. The results show that the proposed approach provides highly accurate predictions. The gains from multidimensional forecasting are largest when functions of variables, such as the price spread or residual load, are considered. Finally, the method is used to support the decision process of a moderate-sized generation utility that produces electricity from wind energy and sells it on either the day-ahead or the intraday market. The company makes decisions under high uncertainty because it knows neither its future production level nor the prices. We show that joint forecasting of both market prices and fundamentals can be used to predict the distribution of profit, and hence helps to design a strategy that balances income level against trading risk. ...
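
The general idea behind a repeated-split, residual-resampling scheme can be sketched in a few lines. This is a minimal illustration of the concept, not the paper's exact algorithm: the toy two-variable data, the mean-based point model, and all function names here are assumptions for demonstration only.

```python
import random
import statistics

random.seed(0)

# Toy data: two correlated target variables (think price and load).
data = [(x := random.gauss(0, 1), 0.8 * x + random.gauss(0, 0.5)) for _ in range(200)]

def point_forecast(train):
    # Placeholder point model: forecast each variable by its training mean.
    m1 = statistics.mean(a for a, _ in train)
    m2 = statistics.mean(b for _, b in train)
    return (m1, m2)

def multiple_split(data, n_splits=50, cal_frac=0.25):
    """Collect joint residuals over repeated random train/calibration splits."""
    residuals = []
    n_cal = int(len(data) * cal_frac)
    for _ in range(n_splits):
        shuffled = random.sample(data, len(data))
        train, cal = shuffled[n_cal:], shuffled[:n_cal]
        f1, f2 = point_forecast(train)
        # Storing residuals jointly preserves the dependence between variables.
        residuals.extend((a - f1, b - f2) for a, b in cal)
    return residuals

res = multiple_split(data)
# Empirical predictive scenarios: point forecast plus resampled joint residuals.
f1, f2 = point_forecast(data)
scenarios = [(f1 + r1, f2 + r2) for r1, r2 in res]
# A derived quantity such as a spread inherits the joint uncertainty for free,
# which is where multidimensional forecasting pays off most.
spread_samples = sorted(s1 - s2 for s1, s2 in scenarios)
```

Because the residuals are kept as pairs, any function of the forecast variables (a spread, a residual load) gets a predictive distribution without further modelling.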

July 10, 2024 · 2 min · Research Team

A Comprehensive Analysis of Machine Learning Models for Algorithmic Trading of Bitcoin

A Comprehensive Analysis of Machine Learning Models for Algorithmic Trading of Bitcoin ArXiv ID: 2407.18334 “View on arXiv” Authors: Unknown Abstract This study evaluates the performance of 41 machine learning models, including 21 classifiers and 20 regressors, in predicting Bitcoin prices for algorithmic trading. By examining these models under various market conditions, we highlight their accuracy, robustness, and adaptability to the volatile cryptocurrency market. Our comprehensive analysis reveals the strengths and limitations of each model, providing critical insights for developing effective trading strategies. We employ both machine learning metrics (e.g., Mean Absolute Error, Root Mean Squared Error) and trading metrics (e.g., Profit and Loss percentage, Sharpe Ratio) to assess model performance. Our evaluation includes backtesting on historical data, forward testing on recent unseen data, and real-world trading scenarios, ensuring the robustness and practical applicability of our models. Key findings demonstrate that certain models, such as Random Forest and Stochastic Gradient Descent, outperform others in terms of profit and risk management. These insights offer valuable guidance for traders and researchers aiming to leverage machine learning for cryptocurrency trading. ...
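
The abstract pairs machine-learning metrics with trading metrics. A minimal sketch of both kinds, assuming daily cryptocurrency returns (365 periods per year) and a zero risk-free rate; the sample prices and predictions are invented for illustration:

```python
import math
import statistics

def mae(y_true, y_pred):
    # Mean Absolute Error over paired true/predicted values.
    return statistics.mean(abs(t - p) for t, p in zip(y_true, y_pred))

def rmse(y_true, y_pred):
    # Root Mean Squared Error; penalises large misses more than MAE.
    return math.sqrt(statistics.mean((t - p) ** 2 for t, p in zip(y_true, y_pred)))

def pnl_pct(equity_curve):
    # Total profit-and-loss as a percentage of starting equity.
    return 100.0 * (equity_curve[-1] - equity_curve[0]) / equity_curve[0]

def sharpe(returns, periods_per_year=365, risk_free=0.0):
    # Annualised Sharpe ratio from per-period strategy returns.
    excess = [r - risk_free / periods_per_year for r in returns]
    return statistics.mean(excess) / statistics.stdev(excess) * math.sqrt(periods_per_year)

prices = [100, 102, 101, 105, 104]
preds = [101, 101, 102, 104, 105]
print(mae(prices, preds))   # every error is 1, so MAE is 1.0
```

Evaluating a model on both families matters because a low RMSE does not guarantee a profitable or low-risk strategy.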

July 9, 2024 · 2 min · Research Team

Advanced Financial Fraud Detection Using GNN-CL Model

Advanced Financial Fraud Detection Using GNN-CL Model ArXiv ID: 2407.06529 “View on arXiv” Authors: Unknown Abstract The innovative GNN-CL model proposed in this paper marks a breakthrough in the field of financial fraud detection by synergistically combining the advantages of graph neural networks (GNNs), convolutional neural networks (CNNs) and long short-term memory (LSTM) networks. This convergence enables multifaceted analysis of complex transaction patterns, improving detection accuracy and resilience against sophisticated fraudulent activities. A key novelty of this paper is the use of multilayer perceptrons (MLPs) to estimate node similarity, effectively filtering out neighborhood noise that can lead to false positives. This purification mechanism ensures that only the most relevant information is considered, thereby improving the model’s understanding of the network structure. Feature weakening often plagues graph-based models due to the dilution of key signals. To address this challenge, GNN-CL adopts reinforcement learning strategies: by dynamically adjusting the weights assigned to central nodes, it reinforces the importance of these influential entities and retains important clues of fraud even in less informative data. Experimental evaluations on the Yelp dataset highlight the superior performance of GNN-CL compared to existing methods. ...

July 9, 2024 · 2 min · Research Team

Gambling Away Stability: Sports Betting's Impact on Vulnerable Households

Gambling Away Stability: Sports Betting’s Impact on Vulnerable Households ArXiv ID: ssrn-4881086 “View on arXiv” Authors: Unknown Abstract We estimate the causal effect of online sports betting on households’ investment, spending, and debt management decisions using household transaction data and a ...

Keywords: Online Sports Betting, Household Finance, Risk-Taking Behavior, Consumer Debt, Transactional Data Analysis, Household Finance/Consumer Spending

Complexity vs Empirical Score – Math Complexity: 3.5/10 · Empirical Rigor: 7.0/10 · Quadrant: Street Traders

Why: The paper relies on causal econometric methods (e.g., difference-in-differences) which involve moderate statistical formulas but no advanced stochastic calculus, placing math complexity in the low-to-moderate range. The empirical rigor is high due to the use of detailed household transaction data, causal identification, and analysis of real financial outcomes like spending and debt, making it backtest-ready with real-world data.

```mermaid
flowchart TD
    A["Research Goal<br>Estimate causal effect of online sports betting<br>on household finance decisions"] --> B["Data Sources<br>Household transaction data<br>Online betting platform records"]
    B --> C["Key Methodology<br>Matched sample analysis<br>Investment/Spending comparison<br>Pre-Post betting event analysis"]
    C --> D["Computational Processes<br>Panel regression models<br>Propensity score matching<br>Event study methodology"]
    D --> E["Key Findings<br>Reduced risky investments<br>Increased consumption volatility<br>Higher debt accumulation<br>Impact on vulnerable households"]
```

July 9, 2024 · 1 min · Research Team

Stochastic Approaches to Asset Price Analysis

Stochastic Approaches to Asset Price Analysis ArXiv ID: 2407.06745 “View on arXiv” Authors: Unknown Abstract In this project, we propose to explore the Kalman filter’s performance for estimating asset prices. We begin by introducing a stochastic mean-reverting process, the Ornstein-Uhlenbeck (OU) model. We then discuss the Kalman filter in detail and its application with this model. After a demonstration of the Kalman filter on a simulated OU process and a discussion of maximum likelihood estimation (MLE) for estimating model parameters, we apply the Kalman filter with the OU process and trailing parameter estimation to real stock market data. We finish by proposing a simple day-trading algorithm using the Kalman filter with the OU process and backtest its performance using Apple’s stock price. We then move to the Heston model, a combination of Geometric Brownian Motion and the OU process. Maximum likelihood estimation is commonly used for Heston model parameter estimation, but it results in very complex forms. Here we propose an alternative and easier way of parameter estimation, called the method of moments (MOM). After deriving these estimators, we again apply this method to real stock data to assess its performance. ...
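
The core experiment, a Kalman filter tracking a simulated OU process, can be sketched with a scalar filter. The parameter values below are arbitrary illustration choices (the paper estimates them via MLE), and the exact-discretisation formulas are the standard ones for an OU process:

```python
import math
import random

random.seed(1)

# OU process dX = theta*(mu - X)dt + sigma*dW, observed with Gaussian noise.
theta, mu, sigma, dt = 2.0, 0.0, 0.5, 0.01
obs_noise_sd = 0.1

a = math.exp(-theta * dt)                 # exact state transition over dt
q = sigma**2 * (1 - a**2) / (2 * theta)   # exact process-noise variance
r = obs_noise_sd**2                       # observation-noise variance

# Simulate a latent OU path and noisy observations of it.
x, xs, ys = 1.0, [], []
for _ in range(2000):
    x = mu + (x - mu) * a + math.sqrt(q) * random.gauss(0, 1)
    xs.append(x)
    ys.append(x + random.gauss(0, obs_noise_sd))

# Scalar Kalman filter: predict with the OU dynamics, correct with each observation.
m, p, est = 0.0, 1.0, []
for y in ys:
    m_pred = mu + (m - mu) * a
    p_pred = a * a * p + q
    k = p_pred / (p_pred + r)             # Kalman gain
    m = m_pred + k * (y - m_pred)
    p = (1 - k) * p_pred
    est.append(m)

# Filtering should estimate the latent state better than the raw observations do.
mse_filter = sum((e, t) and (e - t) ** 2 for e, t in zip(est, xs)) / len(xs)
mse_obs = sum((y - t) ** 2 for y, t in zip(ys, xs)) / len(xs)
```

The filtered estimate blends the mean-reverting prediction with each noisy price, which is exactly the smoothing behaviour the day-trading signal in the paper relies on.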

July 9, 2024 · 2 min · Research Team

Unified Approach for Hedging Impermanent Loss of Liquidity Provision

Unified Approach for Hedging Impermanent Loss of Liquidity Provision ArXiv ID: 2407.05146 “View on arXiv” Authors: Unknown Abstract We develop static and dynamic approaches for hedging of the impermanent loss (IL) of liquidity provision (LP) staked at Decentralised Exchanges (DEXes) which employ Uniswap V2 and V3 protocols. We provide detailed definitions and formulas for computing the IL to unify different definitions occurring in the existing literature. We show that the IL can be seen as a contingent claim with a non-linear payoff for a fixed maturity date. Thus, we introduce the contingent claim termed the IL protection claim which delivers the negative of the IL payoff at the maturity date. We apply arbitrage-based methods for valuation and risk management of this claim. First, we develop the static model-independent replication method for the valuation of the IL protection claim using traded European vanilla call and put options. We extend and generalize an existing method to show that the IL protection claim can be hedged perfectly with options if there is a liquid options market. Second, we develop the dynamic model-based approach for the valuation and hedging of IL protection claims under a risk-neutral measure. We derive analytic valuation formulas using a wide class of price dynamics for which the characteristic function is available under the risk-neutral measure. As base cases, we derive analytic valuation formulas for the IL protection claim under the Black-Scholes-Merton model and the log-normal stochastic volatility model. We finally discuss estimation of the risk-reward of LP staking using our results. ...
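
For intuition, the standard Uniswap V2 impermanent-loss formula (one of the definitions the paper unifies) is IL(k) = 2√k / (1 + k) − 1, where k is the terminal-to-initial price ratio. A small sketch:

```python
import math

def impermanent_loss_v2(k):
    """Uniswap V2 impermanent loss for a price ratio k = P_T / P_0.

    Value of the LP position relative to simply holding the initial
    token amounts; always <= 0, with equality only at k = 1.
    """
    return 2.0 * math.sqrt(k) / (1.0 + k) - 1.0

# The IL payoff is non-linear and concave in the price ratio, which is why
# it resembles a short option position and can be replicated with vanilla
# calls and puts, as the paper's static hedge does.
for k in (0.5, 1.0, 2.0, 4.0):
    print(f"k={k}: IL = {impermanent_loss_v2(k):.4f}")
```

Note the symmetry IL(k) = IL(1/k): a doubling and a halving of the price produce the same loss relative to holding.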

July 6, 2024 · 2 min · Research Team

Longitudinal market structure detection using a dynamic modularity-spectral algorithm

Longitudinal market structure detection using a dynamic modularity-spectral algorithm ArXiv ID: 2407.04500 “View on arXiv” Authors: Unknown Abstract In this paper, we introduce the Dynamic Modularity-Spectral Algorithm (DynMSA), a novel approach to identify clusters of stocks with high intra-cluster correlations and low inter-cluster correlations by combining Random Matrix Theory with modularity optimisation and spectral clustering. The primary objective is to uncover hidden market structures and find diversifiers based on return correlations, thereby achieving a more effective risk-reducing portfolio allocation. We applied DynMSA to constituents of the S&P 500 and compared the results to sector- and market-based benchmarks. Beyond the design of the algorithm itself, our contributions include implementing a sector-based calibration for modularity optimisation and a correlation-based distance function for spectral clustering. Testing revealed that DynMSA outperforms baseline models in intra- and inter-cluster correlation differences, particularly over medium-term correlation look-backs. It also identifies stable clusters and detects regime changes due to exogenous shocks, such as the COVID-19 pandemic. Portfolios constructed using our clusters showed higher Sortino and Sharpe ratios, lower downside volatility, reduced maximum drawdown and higher annualised returns compared to an equally weighted market benchmark. ...
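
A correlation-based distance of the kind used for spectral clustering on returns can be sketched as follows. The mapping d(ρ) = √(2(1 − ρ)) is the standard Mantegna correlation metric; whether DynMSA uses exactly this form is an assumption here, and the toy factor-driven return series are invented for illustration:

```python
import math
import random
import statistics

random.seed(7)

def correlation(x, y):
    # Pearson correlation of two equal-length return series.
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / math.sqrt(sum((a - mx) ** 2 for a in x) *
                           sum((b - my) ** 2 for b in y))

def corr_distance(rho):
    # Maps correlation in [-1, 1] to a metric in [0, 2]: perfectly
    # correlated pairs sit at distance 0, anti-correlated pairs at 2.
    return math.sqrt(2.0 * (1.0 - rho))

# Toy returns: two "same-cluster" stocks driven by a common factor,
# plus one independent diversifier.
factor = [random.gauss(0, 1) for _ in range(500)]
stock_a = [f + random.gauss(0, 0.3) for f in factor]
stock_b = [f + random.gauss(0, 0.3) for f in factor]
stock_c = [random.gauss(0, 1) for _ in range(500)]

d_ab = corr_distance(correlation(stock_a, stock_b))
d_ac = corr_distance(correlation(stock_a, stock_c))
# Intra-cluster distances come out much smaller than distances to the
# diversifier, which is the separation the clustering objective exploits.
```

Feeding such a distance matrix into spectral clustering groups co-moving stocks together, while distant stocks become candidates for diversification.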

July 5, 2024 · 2 min · Research Team

Unified continuous-time q-learning for mean-field game and mean-field control problems

Unified continuous-time q-learning for mean-field game and mean-field control problems ArXiv ID: 2407.04521 “View on arXiv” Authors: Unknown Abstract This paper studies the continuous-time q-learning in mean-field jump-diffusion models when the population distribution is not directly observable. We propose the integrated q-function in decoupled form (decoupled Iq-function) from the representative agent’s perspective and establish its martingale characterization, which provides a unified policy evaluation rule for both mean-field game (MFG) and mean-field control (MFC) problems. Moreover, we consider the learning procedure where the representative agent updates the population distribution based on his own state values. Depending on the task to solve the MFG or MFC problem, we can employ the decoupled Iq-function differently to characterize the mean-field equilibrium policy or the mean-field optimal policy respectively. Based on these theoretical findings, we devise a unified q-learning algorithm for both MFG and MFC problems by utilizing test policies and the averaged martingale orthogonality condition. For several financial applications in the jump-diffusion setting, we obtain the exact parameterization of the decoupled Iq-functions and the value functions, and illustrate our q-learning algorithm with satisfactory performance. ...

July 5, 2024 · 2 min · Research Team

Unwinding Toxic Flow with Partial Information

Unwinding Toxic Flow with Partial Information ArXiv ID: 2407.04510 “View on arXiv” Authors: Unknown Abstract We consider a central trading desk which aggregates the inflow of clients’ orders with unobserved toxicity, i.e. persistent adverse directionality. The desk chooses either to internalise the inflow or externalise it to the market in a cost-effective manner. In this model, externalising the order flow creates both price impact costs and an additional market feedback reaction for the inflow of trades. The desk’s objective is to maximise the daily trading P&L subject to an end-of-day inventory penalization. We formulate this setting as a partially observable stochastic control problem and solve it in two steps. First, we derive the filtered dynamics of the inventory and toxicity, projected onto the observed filtration, which turns the stochastic control problem into a fully observed problem. Then we use a variational approach in order to derive the unique optimal trading strategy. We illustrate our results for various scenarios in which the desk is facing momentum and mean-reverting toxicity. Our implementation shows that the P&L performance gap between the partially observable problem and the full information case is of the order of $0.01\%$ in all tested scenarios. ...

July 5, 2024 · 2 min · Research Team

Block-diagonal idiosyncratic covariance estimation in high-dimensional factor models for financial time series

Block-diagonal idiosyncratic covariance estimation in high-dimensional factor models for financial time series ArXiv ID: 2407.03781 “View on arXiv” Authors: Unknown Abstract Estimation of high-dimensional covariance matrices in latent factor models is an important topic in many fields, and especially in finance. Since the number of financial assets grows while the estimation window length remains of limited size, the commonly used sample estimator yields noisy estimates which are not even positive definite. Under the assumption of latent factor models, the covariance matrix is decomposed into a common low-rank component and a full-rank idiosyncratic component. In this paper we focus on the estimation of the idiosyncratic component, under the assumption of a grouped structure of the time series, which may arise due to specific factors such as industries, asset classes or countries. We propose a generalized methodology for estimation of the block-diagonal idiosyncratic component by clustering the residual series and applying shrinkage to the obtained blocks in order to ensure positive definiteness. We derive two different estimators based on different clustering methods and test their performance using simulation and historical data. The proposed methods are shown to provide reliable estimates and outperform other state-of-the-art estimators based on thresholding methods. ...
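
The assemble-and-shrink step can be illustrated with a minimal sketch. The simple linear shrinkage toward the block diagonal and the fixed `alpha` below are illustrative assumptions; the paper derives its shrinkage from the data, and the two 2×2 "industry" blocks here are toy inputs:

```python
def shrink_block(block, alpha=0.3):
    """Linear shrinkage of a covariance block toward its diagonal.

    Off-diagonal entries are scaled by (1 - alpha), pushing the block
    toward a well-conditioned, positive-definite matrix. alpha is a
    fixed illustration value, not the paper's data-driven choice.
    """
    n = len(block)
    return [[block[i][j] if i == j else (1 - alpha) * block[i][j]
             for j in range(n)] for i in range(n)]

def assemble_block_diagonal(blocks):
    # Place the shrunk blocks on the diagonal; cross-block covariances are
    # set to 0, encoding the assumption that residuals covary only within
    # their own group (industry, asset class, country, ...).
    n = sum(len(b) for b in blocks)
    cov = [[0.0] * n for _ in range(n)]
    offset = 0
    for b in blocks:
        for i, row in enumerate(b):
            for j, v in enumerate(row):
                cov[offset + i][offset + j] = v
        offset += len(b)
    return cov

# Two residual clusters with their sample covariance blocks.
tech = [[1.0, 0.9], [0.9, 1.0]]
util = [[0.5, 0.2], [0.2, 0.5]]
cov = assemble_block_diagonal([shrink_block(tech), shrink_block(util)])
```

Keeping the diagonal intact while damping off-diagonal entries preserves each residual variance and improves conditioning, which is the point of shrinking per block rather than thresholding entries globally.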

July 4, 2024 · 2 min · Research Team