
Uncertain Regulations, Definite Impacts: The Impact of the US Securities and Exchange Commission's Regulatory Interventions on Crypto Assets

Uncertain Regulations, Definite Impacts: The Impact of the US Securities and Exchange Commission’s Regulatory Interventions on Crypto Assets ArXiv ID: 2412.02452 “View on arXiv” Authors: Unknown Abstract This study employs an event study methodology to investigate the market impact of the U.S. Securities and Exchange Commission’s (SEC) classification of crypto assets as securities. It explores how SEC interventions influence asset returns and trading volumes, focusing on explicitly named crypto assets. The empirical analysis highlights significant adverse market reactions: returns fall 12% in the week following an announcement, with the losses persisting for a month. We demonstrate that the severity of the market reaction depends on sentiment and on asset characteristics such as market size, age, volatility, and illiquidity. Further, we identify significant ex-ante trading volume effects indicative of pre-announcement informed trading. ...
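The paper's event study logic can be sketched with a standard market-model estimator: fit alpha and beta on a pre-event window, then cumulate abnormal returns after the announcement. This is a minimal illustration of the general methodology, not the paper's exact specification; the function name and window choices are assumptions.

```python
import numpy as np

def cumulative_abnormal_return(asset_returns, market_returns, event_idx, window=5):
    """Market-model event study: abnormal return = asset return minus the
    alpha/beta-predicted return, cumulated over the post-event window.
    (Illustrative sketch; the paper's exact model may differ.)"""
    # Estimate alpha/beta on the pre-event estimation window.
    pre_a = asset_returns[:event_idx]
    pre_m = market_returns[:event_idx]
    beta = np.cov(pre_a, pre_m)[0, 1] / np.var(pre_m, ddof=1)
    alpha = pre_a.mean() - beta * pre_m.mean()
    # Abnormal returns over the event window, then cumulate.
    post_a = asset_returns[event_idx:event_idx + window]
    post_m = market_returns[event_idx:event_idx + window]
    ar = post_a - (alpha + beta * post_m)
    return ar.sum()
```

A one-week (five trading day) window as above would correspond to the 12% post-announcement drop the abstract reports.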

December 3, 2024 · 2 min · Research Team

Unsupervised learning-based calibration scheme for Rough Bergomi model

Unsupervised learning-based calibration scheme for Rough Bergomi model ArXiv ID: 2412.02135 “View on arXiv” Authors: Unknown Abstract Current deep learning-based calibration schemes for rough volatility models are based on the supervised learning framework, which can be costly due to the large amount of training data that must be generated. In this work, we propose a novel unsupervised learning-based scheme for the rough Bergomi (rBergomi) model which does not require access to training data. The main idea is to use the backward stochastic differential equation (BSDE) derived in [Bayer, Qiu and Yao, SIAM J. Financial Math., 2022] and simultaneously learn the BSDE solutions with the model parameters. We establish that the mean squared error between the option prices under the learned model parameters and the historical data is bounded by the loss function. Moreover, the loss can be made arbitrarily small under suitable conditions on the fitting ability of the rBergomi model to the market and the universal approximation capability of neural networks. Numerical experiments on both simulated and historical data confirm the efficiency of the scheme. ...
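The core idea of calibration-by-loss-minimization (the loss bounds the pricing MSE) can be illustrated on a toy model. The sketch below calibrates a single Black-Scholes volatility rather than the rBergomi parameters, and uses a direct optimizer instead of the paper's BSDE/neural-network machinery; it is an analogy, not the paper's method.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """Black-Scholes call price (toy stand-in for the rBergomi pricer)."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def calibrate_sigma(market_prices, strikes, S, T, r):
    """Loss = mean squared pricing error; driving this loss to zero drives
    the model prices to the market prices, mirroring the paper's bound."""
    loss = lambda s: np.mean((bs_call(S, strikes, T, r, s) - market_prices) ** 2)
    res = minimize_scalar(loss, bounds=(0.01, 2.0), method="bounded")
    return res.x
```

In the paper the role of `minimize_scalar` is played by stochastic gradient descent over both the network weights (approximating the BSDE solution) and the model parameters.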

December 3, 2024 · 2 min · Research Team

Research on Optimizing Real-Time Data Processing in High-Frequency Trading Algorithms using Machine Learning

Research on Optimizing Real-Time Data Processing in High-Frequency Trading Algorithms using Machine Learning ArXiv ID: 2412.01062 “View on arXiv” Authors: Unknown Abstract High-frequency trading (HFT) represents a pivotal and intensely competitive domain within the financial markets. The velocity and accuracy of data processing exert a direct influence on profitability, underscoring the significance of this field. The objective of this work is to optimise the real-time processing of data in high-frequency trading algorithms. The dynamic feature selection mechanism is responsible for monitoring and analysing market data in real time through clustering and feature weight analysis, with the objective of automatically selecting the most relevant features. This process employs an adaptive feature extraction method, which enables the system to respond and adjust its feature set in a timely manner when the data input changes, thus ensuring the efficient utilisation of data. The lightweight neural networks are designed in a modular fashion, comprising fast convolutional layers and pruning techniques that facilitate the expeditious completion of data processing and output prediction. In contrast to conventional deep learning models, the neural network architecture has been specifically designed to minimise the number of parameters and computational complexity, thereby markedly reducing the inference time. The experimental results demonstrate that the model is capable of maintaining consistent performance in the context of varying market conditions, thereby illustrating its advantages in terms of processing speed and revenue enhancement. ...
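The dynamic feature selection step described above (feature weights computed from live data, top features re-selected as inputs change) can be sketched as follows. The weighting rule here is simple absolute correlation with the target; the paper's clustering-based mechanism is more elaborate, so treat this as an assumed simplification.

```python
import numpy as np

def select_features(X, y, k=3):
    """Weight each feature by |correlation| with the target on the most
    recent data window, then keep the top-k indices. Re-running this on
    each new window gives the adaptive re-selection the abstract describes."""
    weights = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
    top_k = np.argsort(weights)[::-1][:k]
    return top_k, weights
```

In a streaming setting this would be invoked on a rolling window, with the selected column indices feeding the lightweight prediction network.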

December 2, 2024 · 2 min · Research Team

Alpha Mining and Enhancing via Warm Start Genetic Programming for Quantitative Investment

Alpha Mining and Enhancing via Warm Start Genetic Programming for Quantitative Investment ArXiv ID: 2412.00896 “View on arXiv” Authors: Unknown Abstract Traditional genetic programming (GP) often struggles in stock alpha factor discovery due to its vast search space, overwhelming computational burden, and the rarity of effective alphas. We find that GP performs better when focusing on promising regions rather than searching at random. This paper proposes a new GP framework with carefully chosen initialization and structural constraints to enhance search performance and improve the interpretability of the alpha factors. The approach is motivated by, and mimics, practitioners' alpha-search process, and aims to boost the efficiency of that process. Analysis of 2020-2024 Chinese stock market data shows that our method yields superior out-of-sample prediction results and higher portfolio returns than the benchmark. ...
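The "warm start" idea, seeding the initial GP population from known-good alpha templates instead of uniformly random expression trees, can be sketched like this. The seed formulas and function names below are hypothetical illustrations, not the paper's actual templates.

```python
import numpy as np

# Hypothetical seed templates: classic alpha shapes (reversal, mean
# deviation, volume change) that practitioners would start from.
SEED_ALPHAS = [
    lambda c, v: -np.diff(np.log(c), prepend=np.log(c[0])),        # short-term reversal
    lambda c, v: (c - np.convolve(c, np.ones(5) / 5, "same")) / c, # price vs 5-bar mean
    lambda c, v: np.log1p(v) - np.log1p(np.roll(v, 1)),            # volume change
]

def init_population(n, rng):
    """Warm-start initialization: draw the initial population from seed
    templates rather than random trees; GP mutation/crossover then
    explores the neighbourhood of these promising regions."""
    return [SEED_ALPHAS[rng.integers(len(SEED_ALPHAS))] for _ in range(n)]
```

The structural constraints the abstract mentions would additionally restrict which mutations are legal, keeping evolved factors close to interpretable template shapes.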

December 1, 2024 · 2 min · Research Team

Probabilistic Predictions of Option Prices Using Multiple Sources of Data

Probabilistic Predictions of Option Prices Using Multiple Sources of Data ArXiv ID: 2412.00658 “View on arXiv” Authors: Unknown Abstract A new modular approximate Bayesian inferential framework is proposed that enables fast calculation of probabilistic predictions of future option prices. We exploit multiple information sources, including daily spot returns, high-frequency spot data and option prices. A benefit of this modular Bayesian approach is that it allows us to work with the theoretical option pricing model, without needing to specify an arbitrary statistical model that links the theoretical prices to their observed counterparts. We show that our approach produces accurate probabilistic predictions of option prices in realistic scenarios and, despite not explicitly modelling pricing errors, the method is shown to be robust to their presence. Predictive accuracy based on the Heston stochastic volatility model, with predictions produced via rapid real-time updates, is illustrated empirically for short-maturity options. ...

December 1, 2024 · 2 min · Research Team

SeQwen at the Financial Misinformation Detection Challenge Task: Sequential Learning for Claim Verification and Explanation Generation in Financial Domains

SeQwen at the Financial Misinformation Detection Challenge Task: Sequential Learning for Claim Verification and Explanation Generation in Financial Domains ArXiv ID: 2412.00549 “View on arXiv” Authors: Unknown Abstract This paper presents the system description of our entry for the COLING 2025 FMD challenge, focusing on misinformation detection in financial domains. We experimented with a combination of large language models, including Qwen, Mistral, and Gemma-2, and leveraged pre-processing and sequential learning for not only identifying fraudulent financial content but also generating coherent, and concise explanations that clarify the rationale behind the classifications. Our approach achieved competitive results with an F1-score of 0.8283 for classification, and ROUGE-1 of 0.7253 for explanations. This work highlights the transformative potential of LLMs in financial applications, offering insights into their capabilities for combating misinformation and enhancing transparency while identifying areas for future improvement in robustness and domain adaptation. ...

November 30, 2024 · 2 min · Research Team

Capital Asset Pricing Model with Size Factor and Normalizing by Volatility Index

Capital Asset Pricing Model with Size Factor and Normalizing by Volatility Index ArXiv ID: 2411.19444 “View on arXiv” Authors: Unknown Abstract The Capital Asset Pricing Model (CAPM) relates a well-diversified stock portfolio to a benchmark portfolio. We insert size effect in CAPM, capturing the observation that small stocks have higher risk and return than large stocks, on average. Dividing stock index returns by the Volatility Index makes them independent and normal. In this article, we combine these ideas to create a new discrete-time model, which includes volatility, relative size, and CAPM. We fit this model using real-world data, prove the long-term stability, and connect this research to Stochastic Portfolio Theory. We fill important gaps in our previous article on CAPM with the size factor. ...
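The normalization step described above, dividing returns by the volatility index to make them closer to i.i.d. normal before fitting CAPM, can be sketched directly. This is a minimal single-stock illustration with assumed function names; the paper additionally carries a size factor and a long-term stability analysis.

```python
import numpy as np

def normalized_beta(stock_ret, bench_ret, vix):
    """Divide each day's return by the same-day volatility index level
    (the paper's normalization), then estimate CAPM alpha and beta by
    ordinary least squares with an intercept."""
    y = stock_ret / vix
    x = bench_ret / vix
    A = np.column_stack([np.ones_like(x), x])
    (alpha, beta), *_ = np.linalg.lstsq(A, y, rcond=None)
    return alpha, beta
```

Extending this to the paper's setting would add a relative-size regressor alongside the benchmark term.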

November 29, 2024 · 2 min · Research Team

Dynamic ETF Portfolio Optimization Using Enhanced Transformer-Based Models for Covariance and Semi-Covariance Prediction (Work in Progress)

Dynamic ETF Portfolio Optimization Using Enhanced Transformer-Based Models for Covariance and Semi-Covariance Prediction (Work in Progress) ArXiv ID: 2411.19649 “View on arXiv” Authors: Unknown Abstract This study explores the use of Transformer-based models to predict both covariance and semi-covariance matrices for ETF portfolio optimization. Traditional portfolio optimization techniques often rely on static covariance estimates or impose strict model assumptions, which may fail to capture the dynamic and non-linear nature of market fluctuations. Our approach leverages the power of Transformer models to generate adaptive, real-time predictions of asset covariances, with a focus on the semi-covariance matrix to account for downside risk. The semi-covariance matrix emphasizes negative correlations between assets, offering a more nuanced approach to risk management compared to traditional methods that treat all volatility equally. Through a series of experiments, we demonstrate that Transformer-based predictions of both covariance and semi-covariance significantly enhance portfolio performance. Our results show that portfolios optimized using the semi-covariance matrix outperform those optimized with the standard covariance matrix, particularly in volatile market conditions. Moreover, the use of the Sortino ratio, a risk-adjusted performance metric that focuses on downside risk, further validates the effectiveness of our approach in managing risk while maximizing returns. These findings have important implications for asset managers and investors, offering a dynamic, data-driven framework for portfolio construction that adapts more effectively to shifting market conditions. By integrating Transformer-based models with the semi-covariance matrix for improved risk management, this research contributes to the growing field of machine learning in finance and provides valuable insights for optimizing ETF portfolios. ...
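The semi-covariance matrix the abstract centers on has several definitions in the literature; one common estimator keeps only below-threshold returns (shortfalls) and forms their cross-products. A minimal sketch, assuming that variant:

```python
import numpy as np

def semi_covariance(returns, threshold=0.0):
    """Downside semi-covariance estimator: clip returns at the threshold
    so only shortfalls contribute, then average their cross-products.
    One common definition; others condition on joint downside moves."""
    downside = np.minimum(returns - threshold, 0.0)  # keep only shortfalls
    return downside.T @ downside / len(returns)
```

In the paper's pipeline, matrices like this (predicted by the Transformer rather than estimated from history) would feed a mean-variance-style optimizer, and performance would be scored with the Sortino ratio.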

November 29, 2024 · 3 min · Research Team

Ergodic optimal liquidations in DeFi

Ergodic optimal liquidations in DeFi ArXiv ID: 2411.19637 “View on arXiv” Authors: Unknown Abstract We address the liquidation problem arising from credit risk management in decentralised finance (DeFi) by formulating it as an ergodic optimal control problem. In decentralised derivatives exchanges, liquidation is triggered whenever the parties fail to maintain sufficient collateral for their open positions. Consequently, effectively managing the disposal of positions accrued through liquidations is a critical concern for decentralised derivatives exchanges. By simplifying the model (linear temporary and permanent price impacts, simplified cash balance dynamics), we derive closed-form solutions for the optimal liquidation strategies, which balance immediate execution against the temporary and permanent price impacts, and for the optimal long-term average reward. Numerical simulations further highlight the effectiveness of the proposed optimal strategy and demonstrate that the simplified model closely approximates the original market environment. Finally, we provide a method for calibrating the model parameters from available data. ...
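The trade-off the abstract describes, immediate execution versus linear temporary and permanent impact, can be made concrete with a toy cost model. The parameter names and values below are illustrative assumptions, not the paper's calibrated model, and the sketch omits the ergodic (long-run average) control layer.

```python
import numpy as np

def execution_cost(schedule, eta=0.1, gamma=0.05):
    """Toy linear-impact cost of a liquidation schedule (shares per step):
    temporary impact costs eta*v per share traded, while permanent impact
    gamma*v moves the price against all inventory still to be sold."""
    v = np.asarray(schedule, dtype=float)
    temp = eta * np.sum(v ** 2)
    # Shares sold after step t bear the permanent impact of step t's trade.
    remaining_after = np.cumsum(v[::-1])[::-1] - v
    perm = gamma * np.sum(v * remaining_after)
    return temp + perm
```

Spreading the same quantity over more steps lowers this impact cost but holds inventory longer; the paper's ergodic control balances that tension when liquidations arrive continuously.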

November 29, 2024 · 2 min · Research Team

BPQP: A Differentiable Convex Optimization Framework for Efficient End-to-End Learning

BPQP: A Differentiable Convex Optimization Framework for Efficient End-to-End Learning ArXiv ID: 2411.19285 “View on arXiv” Authors: Unknown Abstract Data-driven decision-making processes increasingly utilize end-to-end learnable deep neural networks to render final decisions. Sometimes, the output of the forward functions in certain layers is determined by the solutions to mathematical optimization problems, leading to the emergence of differentiable optimization layers that permit gradient back-propagation. However, real-world scenarios often involve large-scale datasets and numerous constraints, presenting significant challenges. Current methods for differentiating optimization problems typically rely on implicit differentiation, which necessitates costly computations on the Jacobian matrices, resulting in low efficiency. In this paper, we introduce BPQP, a differentiable convex optimization framework designed for efficient end-to-end learning. To enhance efficiency, we reformulate the backward pass as a simplified and decoupled quadratic programming problem by leveraging the structural properties of the KKT matrix. This reformulation enables the use of first-order optimization algorithms in calculating the backward pass gradients, allowing our framework to potentially utilize any state-of-the-art solver. As solver technologies evolve, BPQP can continuously adapt and improve its efficiency. Extensive experiments on both simulated and real-world datasets demonstrate that BPQP achieves a significant improvement in efficiency: typically an order of magnitude faster in overall execution time than other differentiable optimization layers. Our results not only highlight the efficiency gains of BPQP but also underscore its superiority over differentiable optimization layer baselines. ...
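The implicit-differentiation baseline that BPQP improves on can be shown concretely for the simplest case, an equality-constrained QP: the forward pass solves the KKT system, and the backward pass is one more linear solve with the same (symmetric) KKT matrix. This sketch is the classical baseline, not BPQP's decoupled-QP reformulation.

```python
import numpy as np

def solve_qp_eq(P, q, A, b):
    """Forward pass: min 1/2 x'Px + q'x  s.t. Ax = b, solved directly
    via the symmetric KKT linear system [[P, A'], [A, 0]]."""
    n, m = P.shape[0], A.shape[0]
    K = np.block([[P, A.T], [A, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([-q, b]))
    return sol[:n], K

def grad_wrt_q(K, n, dL_dx):
    """Backward pass by implicit differentiation of the KKT conditions:
    K [x; nu] = [-q; b], so dL/dq = -(K^{-1} [dL/dx; 0])[:n]."""
    rhs = np.concatenate([dL_dx, np.zeros(K.shape[0] - n)])
    d = np.linalg.solve(K, rhs)
    return -d[:n]
```

With inequality constraints the KKT system gains complementarity terms and the Jacobian solves become the bottleneck; BPQP's contribution is recasting that backward solve as a small decoupled QP amenable to first-order solvers.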

November 28, 2024 · 2 min · Research Team