Can a GPT4-Powered AI Agent Be a Good Enough Performance Attribution Analyst?

Can a GPT4-Powered AI Agent Be a Good Enough Performance Attribution Analyst? ArXiv ID: 2403.10482 “View on arXiv” Authors: Unknown Abstract Performance attribution analysis, defined as the process of explaining the drivers of the excess performance of an investment portfolio against a benchmark, stands as a significant feature of portfolio management and plays a crucial role in the investment decision-making process, particularly within the fund management industry. Rooted in a solid financial and mathematical framework, the importance and methodologies of this analytical technique are extensively documented across numerous academic research papers and books. The integration of large language models (LLMs) and AI agents marks a groundbreaking development in this field. These agents are designed to automate and enhance performance attribution analysis by accurately calculating and analyzing portfolio performance against benchmarks. In this study, we introduce the application of an AI agent to a variety of essential performance attribution tasks, including the analysis of performance drivers and the use of LLMs as a calculation engine for multi-level attribution analysis and question-answering (QA) tasks. Leveraging advanced prompt engineering techniques such as Chain-of-Thought (CoT) and Plan-and-Solve (PS), and employing a standard agent framework from LangChain, the research achieves promising results: accuracy rates exceeding 93% in analyzing performance drivers, 100% in multi-level attribution calculations, and over 84% in QA exercises that simulate official examination standards. These findings affirm the impactful role of AI agents, prompt engineering, and evaluation in advancing portfolio management processes, highlighting a significant development in the practical application and evaluation of Generative AI technologies within the domain. ...
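
For intuition, the sketch below shows the two pieces such an agent combines: a Plan-and-Solve style prompt and a deterministic attribution formula for cross-checking the model's arithmetic. The prompt wording and the `brinson_effects` helper are our own illustrations, not the paper's actual prompts or code.

```python
# Hypothetical sketch: a Plan-and-Solve style prompt for single-level
# attribution, plus a deterministic check of the LLM's arithmetic.

PS_PROMPT = """You are a performance attribution analyst.
First devise a plan: list the steps needed to compute the allocation,
selection, and interaction effects for each sector. Then carry out the
plan step by step, showing intermediate numbers, and finish with a
one-line summary of the main performance driver.

Portfolio vs. benchmark weights and returns by sector:
{table}
"""

def brinson_effects(wp, rp, wb, rb):
    """Brinson-Hood-Beebower effects for one sector, used to verify
    the agent's attribution calculations bottom-up."""
    allocation = (wp - wb) * rb          # effect of over/underweighting
    selection = wb * (rp - rb)           # effect of picking better assets
    interaction = (wp - wb) * (rp - rb)  # cross term
    return allocation, selection, interaction

# Sector with portfolio 30% weight / 8% return vs. benchmark 20% / 5%:
print(brinson_effects(0.30, 0.08, 0.20, 0.05))  # ~ (0.005, 0.006, 0.003)
```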

March 15, 2024 · 2 min · Research Team

Chain-structured neural architecture search for financial time series forecasting

Chain-structured neural architecture search for financial time series forecasting ArXiv ID: 2403.14695 “View on arXiv” Authors: Unknown Abstract Neural architecture search (NAS) emerged as a way to automatically optimize neural networks for a specific task and dataset. Despite an abundance of research on NAS for image and natural language applications, similar studies for time series data are lacking. Among NAS search spaces, chain-structured spaces are the simplest and most applicable to small datasets like time series. We compare three popular NAS strategies on chain-structured search spaces: Bayesian optimization (specifically the Tree-structured Parzen Estimator), the hyperband method, and reinforcement learning, in the context of financial time series forecasting. These strategies were employed to optimize simple, well-understood neural architectures such as the MLP, 1D CNN, and RNN, with more complex temporal fusion transformers (TFT) and their own optimizers included for comparison. We find that Bayesian optimization and the hyperband method perform best among the strategies, and that the RNN and 1D CNN perform best among the architectures, though all methods were very close to each other, with high variance due to the difficulty of working with financial datasets. We discuss our approach to overcoming this variance and provide implementation recommendations for future users and researchers. ...
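
As a concrete illustration of one of the compared strategies, here is a minimal TPE search over a chain-structured MLP space using Optuna; `train_and_validate` is a hypothetical stand-in for a training loop on the target time series, not code from the paper.

```python
# Sketch: Tree-structured Parzen Estimator over a chain-structured
# search space (layer count, per-layer width, learning rate, dropout).
import optuna

def objective(trial):
    n_layers = trial.suggest_int("n_layers", 1, 4)
    widths = [trial.suggest_int(f"width_{i}", 8, 128, log=True)
              for i in range(n_layers)]
    lr = trial.suggest_float("lr", 1e-4, 1e-1, log=True)
    dropout = trial.suggest_float("dropout", 0.0, 0.5)
    # Hypothetical helper: trains the chain MLP, returns validation loss.
    return train_and_validate(widths, lr, dropout)

study = optuna.create_study(
    direction="minimize",
    sampler=optuna.samplers.TPESampler(seed=0),
)
study.optimize(objective, n_trials=100)
print(study.best_params)
```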

March 15, 2024 · 2 min · Research Team

Empowering Credit Scoring Systems with Quantum-Enhanced Machine Learning

Empowering Credit Scoring Systems with Quantum-Enhanced Machine Learning ArXiv ID: 2404.00015 “View on arXiv” Authors: Unknown Abstract Quantum kernels are projected to provide early-stage usefulness for quantum machine learning. However, highly sophisticated classical models are hard to surpass without losing interpretability, particularly when vast datasets can be exploited. Nonetheless, classical models struggle once data is scarce and skewed. Quantum feature spaces are projected to find better links between data features and the target class to be predicted even in such challenging scenarios and, most importantly, to offer enhanced generalization capabilities. In this work, we propose a novel approach called Systemic Quantum Score (SQS) and provide preliminary results indicating a potential advantage over purely classical models in a production-grade use case for the finance sector. In our study, SQS shows an increased capacity to extract patterns from fewer data points, as well as improved performance over data-hungry algorithms such as XGBoost, providing an advantage in competitive markets such as the FinTech and neobank sectors. ...
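
The quantum-kernel pattern is easy to see in code: a classical SVM only ever consumes a Gram matrix, so a quantum fidelity kernel can be swapped in wherever a classical kernel would go. The sketch below assumes a `quantum_kernel(x, y)` callable (hypothetical; in practice it would be estimated on quantum hardware or a simulator).

```python
# Minimal sketch of the quantum-kernel pattern with scikit-learn.
import numpy as np
from sklearn.svm import SVC

def gram_matrix(X1, X2, kernel):
    """Pairwise kernel evaluations k(a, b) for rows of X1 and X2."""
    return np.array([[kernel(a, b) for b in X2] for a in X1])

def fit_kernel_svm(X_train, y_train, quantum_kernel):
    # The SVM sees only the Gram matrix, classical or quantum alike.
    K_train = gram_matrix(X_train, X_train, quantum_kernel)
    clf = SVC(kernel="precomputed", C=1.0)
    clf.fit(K_train, y_train)
    return clf

def predict(clf, X_test, X_train, quantum_kernel):
    K_test = gram_matrix(X_test, X_train, quantum_kernel)
    return clf.predict(K_test)
```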

March 15, 2024 · 2 min · Research Team

Missing Data Imputation With Granular Semantics and AI-driven Pipeline for Bankruptcy Prediction

Missing Data Imputation With Granular Semantics and AI-driven Pipeline for Bankruptcy Prediction ArXiv ID: 2404.00013 “View on arXiv” Authors: Unknown Abstract This work focuses on designing a pipeline for the prediction of bankruptcy. The presence of missing values, high-dimensional data, and highly class-imbalanced databases are the major challenges in this task. A new method for missing data imputation with granular semantics is introduced here, exploring the merits of granular computing. The missing values are predicted using the feature semantics and reliable observations in a low-dimensional granular space. Granules are formed around every missing entry, considering a few of the most highly correlated features and the most reliable closest observations, preserving the relevance and reliability, i.e., the context, of the database around the missing entries. An intergranular prediction is then carried out for the imputation within those contextual granules. That is, the contextual granules enable a small, relevant fraction of the huge database to be used for imputation and overcome the need to access the entire database repeatedly for each missing value. This method is implemented and tested for the prediction of bankruptcy with the Polish Bankruptcy dataset. It provides an efficient solution for big, high-dimensional datasets even with large imputation rates. An AI-driven pipeline for bankruptcy prediction is then designed using the proposed granular-semantics-based data-filling method, followed by solutions to the issues of high dimensionality and severe class imbalance. The rest of the pipeline consists of feature selection with a random forest for reducing dimensionality, data balancing with SMOTE, and prediction with six popular classifiers, including a deep neural network. All methods defined here have been experimentally verified with suitable comparative studies and shown to be effective on all datasets, which were captured over five years. ...
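
The downstream stages of such a pipeline map naturally onto scikit-learn and imbalanced-learn; a minimal sketch of that composition (ours, with the granular imputation step assumed to have already produced `X_imputed`):

```python
# Sketch: random-forest feature selection, SMOTE balancing, and a
# classifier, chained with imbalanced-learn's SMOTE-aware Pipeline.
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel

pipeline = Pipeline([
    # Keep features the forest ranks above median importance.
    ("select", SelectFromModel(
        RandomForestClassifier(n_estimators=200, random_state=0),
        threshold="median")),
    # Oversample the minority (bankrupt) class during fit only.
    ("smote", SMOTE(random_state=0)),
    ("clf", RandomForestClassifier(n_estimators=400, random_state=0)),
])
# pipeline.fit(X_imputed, y)  # SMOTE is applied to training folds only.
```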

March 15, 2024 · 3 min · Research Team

Optimal Portfolio Choice with Cross-Impact Propagators

Optimal Portfolio Choice with Cross-Impact Propagators ArXiv ID: 2403.10273 “View on arXiv” Authors: Unknown Abstract We consider a class of optimal portfolio choice problems in continuous time where the agent's transactions create both transient cross-impact driven by a matrix-valued Volterra propagator and temporary price impact. We formulate this problem as the maximization of a revenue-risk functional, where the agent also exploits available information on a progressively measurable price-predicting signal. We solve the maximization problem explicitly in terms of operator resolvents, by reducing the corresponding first-order condition to a coupled system of stochastic Fredholm equations of the second kind and deriving its solution. We then give sufficient conditions on the matrix-valued propagator so that the model does not permit price manipulation. We also provide an implementation of the solutions to the optimal portfolio choice problem and to the associated optimal execution problem. Our solutions yield financial insights into the influence of cross-impact on the optimal strategies and its interplay with alpha decays. ...
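
For orientation, a generic form of such price dynamics reads as follows (our notation for the standard propagator setup, not necessarily the paper's exact model):

```latex
% Price dynamics with transient cross-impact and temporary impact:
% G is the matrix-valued Volterra propagator, Q_t the holdings vector,
% \Lambda the temporary-impact matrix, M_t a martingale noise term.
P_t = P_0
  + \underbrace{\int_0^t G(t-s)\,\dot{Q}_s\,\mathrm{d}s}_{\text{transient cross-impact}}
  + \underbrace{\Lambda\,\dot{Q}_t}_{\text{temporary impact}}
  + M_t
```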

March 15, 2024 · 2 min · Research Team

Deep Limit Order Book Forecasting

Deep Limit Order Book Forecasting ArXiv ID: 2403.09267 “View on arXiv” Authors: Unknown Abstract We exploit cutting-edge deep learning methodologies to explore the predictability of high-frequency Limit Order Book mid-price changes for a heterogeneous set of stocks traded on the NASDAQ exchange. In so doing, we release “LOBFrame”, an open-source code base to efficiently process large-scale Limit Order Book data and quantitatively assess state-of-the-art deep learning models' forecasting capabilities. Our results are twofold. We demonstrate that the stocks' microstructural characteristics influence the efficacy of deep learning methods and that their high forecasting power does not necessarily correspond to actionable trading signals. We argue that traditional machine learning metrics fail to adequately assess the quality of forecasts in the Limit Order Book context. As an alternative, we propose an innovative operational framework that evaluates predictions' practicality by focusing on the probability of accurately forecasting complete transactions. This work offers academics and practitioners an avenue to make informed and robust decisions on the application of deep learning techniques, their scope and limitations, effectively exploiting emergent statistical properties of the Limit Order Book. ...
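
For readers new to the task, the usual labeling scheme behind "mid-price change" forecasting compares the current mid-price with a smoothed future mid-price; a minimal sketch (our helper, not necessarily LOBFrame's API):

```python
# Sketch: up/stationary/down labels from best bid/ask series.
import numpy as np

def midprice_labels(best_bid, best_ask, horizon=10, threshold=1e-4):
    """Label tick t by comparing the mean mid-price over the next
    `horizon` ticks against the current mid-price."""
    mid = (np.asarray(best_bid) + np.asarray(best_ask)) / 2.0
    labels = np.full(len(mid) - horizon, 1)  # 1 = stationary
    for t in range(len(labels)):
        change = (mid[t + 1 : t + 1 + horizon].mean() - mid[t]) / mid[t]
        if change > threshold:
            labels[t] = 2                    # up
        elif change < -threshold:
            labels[t] = 0                    # down
    return mid, labels
```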

March 14, 2024 · 2 min · Research Team

Layer 2 be or Layer not 2 be: Scaling on Uniswap v3

Layer 2 be or Layer not 2 be: Scaling on Uniswap v3 ArXiv ID: 2403.09494 “View on arXiv” Authors: Unknown Abstract This paper studies the market-structure impact of cheaper and faster chains on the Uniswap v3 Protocol. The Uniswap Protocol is the largest decentralized application on Ethereum by both gas and blockspace used, and user behavior on the protocol is very sensitive to fluctuations in gas prices and market structure due to the economic factors of the Protocol. We focus on the chains where Uniswap v3 has the most activity, giving us the best comparison to Ethereum mainnet. Because of cheaper gas and lower block times, we find evidence that the majority of swaps get better gas-adjusted execution on these chains, that liquidity providers are more capital efficient, and that liquidity providers earn increased fee returns from more arbitrage. We also present evidence that two-second block times may be too long for optimal liquidity provider returns, compared to first-come, first-served transaction ordering. We argue that many of the current drawbacks of AMMs may be due to chain dynamics and are vastly improved by cheaper and faster transactions. ...
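
"Gas-adjusted execution" can be made concrete by folding the gas fee into the effective swap price, which is what makes executions on chains with very different fee levels comparable; a toy sketch with hypothetical figures:

```python
# Toy sketch: effective output per unit of input, net of gas
# (gas expressed in the output token). All figures are hypothetical.

def gas_adjusted_execution(amount_in, amount_out, gas_fee):
    return (amount_out - gas_fee) / amount_in

# Same nominal 1 ETH -> 3000 USDC swap, mainnet-like vs. L2-like gas:
print(gas_adjusted_execution(1.0, 3000.0, 25.0))   # 2975.0 USDC/ETH
print(gas_adjusted_execution(1.0, 3000.0, 0.05))   # 2999.95 USDC/ETH
```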

March 14, 2024 · 2 min · Research Team

Mean-Field Microcanonical Gradient Descent

Mean-Field Microcanonical Gradient Descent ArXiv ID: 2403.08362 “View on arXiv” Authors: Unknown Abstract Microcanonical gradient descent is a sampling procedure for energy-based models that allows efficient sampling of high-dimensional distributions. It works by transporting samples from a high-entropy distribution, such as Gaussian white noise, to a low-energy region using gradient descent. We place this model in the framework of normalizing flows, showing how it can often overfit by losing an unnecessary amount of entropy during the descent. As a remedy, we propose a mean-field microcanonical gradient descent that samples several weakly coupled data points simultaneously, allowing for better control of the entropy loss while paying little in terms of likelihood fit. We study these models in the context of financial time series, illustrating the improvements on both synthetic and real data. ...
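
The mean-field idea can be illustrated with a toy energy: instead of forcing each sample's statistics to the target (which burns entropy), the batch is coupled so that only the batch-averaged statistics must match. The statistic and step sizes below are our own toy choices, not the authors' construction.

```python
# Toy sketch: couple a batch through its averaged (mean, variance)
# statistic and descend the shared energy E = ||s_batch - target||^2.
import numpy as np

def mean_field_mgd(batch, target, steps=500, lr=10.0):
    x = batch.copy()
    B, d = x.shape
    for _ in range(steps):
        m = x.mean(axis=1)                              # per-sample mean
        s = np.array([m.mean(), x.var(axis=1).mean()])  # batch statistic
        err = s - target
        # Analytic gradient of E w.r.t. each entry of x; the 1/(B*d)
        # scale is why lr looks large for a toy of this size.
        grad = (2.0 * err[0] / (B * d)
                + 4.0 * err[1] * (x - m[:, None]) / (B * d))
        x -= lr * grad
    return x

rng = np.random.default_rng(0)
out = mean_field_mgd(rng.standard_normal((64, 128)), np.array([0.5, 2.0]))
```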

March 13, 2024 · 2 min · Research Team

Trading Large Orders in the Presence of Multiple High-Frequency Anticipatory Traders

Trading Large Orders in the Presence of Multiple High-Frequency Anticipatory Traders ArXiv ID: 2403.08202 “View on arXiv” Authors: Unknown Abstract We investigate a market with a normal-speed informed trader (IT), who may employ a mixed strategy, and multiple anticipatory high-frequency traders (HFTs) under different inventory pressures, in a three-period Kyle model. The pure- and mixed-strategy equilibria are considered, and the results provide recommendations for the IT's randomization strategy for different numbers of HFTs. Some surprising results about investors' profits arise: an improvement in the anticipatory traders' speed, or a more precise prediction, may harm the HFTs themselves yet benefit the IT. ...

March 13, 2024 · 2 min · Research Team

Pairs Trading Using a Novel Graphical Matching Approach

Pairs Trading Using a Novel Graphical Matching Approach ArXiv ID: 2403.07998 “View on arXiv” Authors: Unknown Abstract Pairs trading, a strategy that capitalizes on price movements of asset pairs driven by similar factors, has gained significant popularity among traders. Common practice involves selecting highly cointegrated pairs to form a portfolio, which often leads to the inclusion of multiple pairs sharing common assets. This approach, while intuitive, inadvertently elevates portfolio variance and diminishes risk-adjusted returns by concentrating on a small number of highly cointegrated assets. Our study introduces an innovative pair-selection method employing graphical matchings to tackle this challenge. We model all assets and their cointegration levels as a weighted graph, where edges signify pairs and their weights indicate the extent of cointegration; a portfolio of pairs is a subgraph of this graph. We construct a portfolio that is a maximum weighted matching of this graph, selecting pairs with strong cointegration while ensuring that no asset is shared across selected pairs. This guarantees each asset is included in at most one pair, leading to significantly lower variance in the matching-based portfolio than in a baseline approach that selects pairs purely by cointegration. Theoretical analysis and empirical testing using data from the S&P 500 between 2017 and 2023 affirm the efficacy of our method. Notably, our matching-based strategy shows a marked improvement in risk-adjusted performance, evidenced by a gross Sharpe ratio of 1.23, a significant enhancement over the baseline's 0.48 and the market's 0.59. Additionally, our approach demonstrates reduced trading costs attributable to lower turnover, alongside minimized single-asset risk due to a more diversified asset base. ...
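
The core selection step reduces to a classical maximum-weight matching, for which off-the-shelf solvers exist; a sketch of this reconstruction using NetworkX, with toy tickers and scores:

```python
# Sketch: assets as nodes, cointegration strength as edge weights;
# a maximum-weight matching picks strong pairs with no shared assets.
import networkx as nx

def select_pairs(coint_scores):
    """coint_scores: dict mapping (asset_a, asset_b) to cointegration
    strength (higher means more strongly cointegrated)."""
    G = nx.Graph()
    for (a, b), w in coint_scores.items():
        G.add_edge(a, b, weight=w)
    # Each asset ends up in at most one selected pair.
    return nx.max_weight_matching(G)

scores = {("KO", "PEP"): 0.9, ("KO", "MNST"): 0.7, ("XOM", "CVX"): 0.8}
print(select_pairs(scores))  # e.g. {('PEP', 'KO'), ('CVX', 'XOM')}
```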

March 12, 2024 · 2 min · Research Team