
Quantitative Investment Diversification Strategies via Various Risk Models

ArXiv ID: 2407.01550 · View on arXiv · Authors: Unknown

Abstract: This paper focuses on developing high-dimensional risk models to construct portfolios of securities listed on US stock exchanges. Investors seek the highest profit at the lowest risk in capital markets. We develop various risk models, and for each model we test different investment strategies. Out-of-sample tests are performed over a long horizon, from 1970 until 2023. ...
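As a rough illustration of the kind of pipeline this abstract describes, the sketch below builds a hypothetical linear factor risk model and computes fully invested minimum-variance weights from it. The factor structure, dimensions, and the choice of minimum-variance as the strategy are assumptions for illustration, not details from the paper.

```python
# Minimal sketch (not the paper's code): minimum-variance weights under an
# assumed linear factor risk model Sigma = B F B' + D.
import numpy as np

rng = np.random.default_rng(0)
n_assets, n_factors = 50, 5

B = rng.normal(size=(n_assets, n_factors))        # factor exposures (assumed)
F = np.diag(rng.uniform(0.01, 0.05, n_factors))   # factor covariance (assumed)
D = np.diag(rng.uniform(0.01, 0.04, n_assets))    # idiosyncratic variances

Sigma = B @ F @ B.T + D                           # high-dimensional risk model

# Fully invested minimum-variance portfolio: w = Sigma^{-1} 1 / (1' Sigma^{-1} 1)
ones = np.ones(n_assets)
w = np.linalg.solve(Sigma, ones)
w /= w.sum()

print("portfolio variance:", w @ Sigma @ w)
```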

April 27, 2024 · 1 min · Research Team

Recommender Systems in Financial Trading: Using machine-based conviction analysis in an explainable AI investment framework

ArXiv ID: 2404.11080 · View on arXiv · Authors: Unknown

Abstract: Traditionally, assets are selected for inclusion in a portfolio (long or short) by human analysts. Teams of human portfolio managers (PMs) seek to weigh and balance these securities using optimisation methods and other portfolio construction processes. Often, human PMs consider human analyst recommendations against the backdrop of the analyst’s recommendation track record and the applicability of the analyst to the recommendation they provide. Many firms regularly ask analysts to provide a “conviction” level on their recommendations. In the eyes of PMs, understanding a human analyst’s track record has typically come down to basic spreadsheet tabulation or, at best, a “virtual portfolio” paper trading book to keep track of the results of recommendations. Analysts’ conviction around their recommendations and their “paper trading” track record are two crucial workflow components between analysts and portfolio construction. Many human PMs may not even appreciate that they factor these data points into their decision-making logic. This chapter explores how Artificial Intelligence (AI) can be used to replicate these two steps and bridge the gap between AI data analytics and AI-based portfolio construction methods. This field of AI is referred to as Recommender Systems (RS). The chapter further explores what metadata RS systems functionally supply to downstream systems, and their features. ...
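As a toy illustration of the two workflow inputs discussed above, the sketch below computes a conviction-weighted hit rate from a hypothetical paper trading book. The fields and weighting scheme are invented for illustration, not the chapter's specification.

```python
# Illustrative sketch only: a conviction-weighted hit rate for an analyst's
# "paper trading" record; inputs and weighting are assumptions.
recommendations = [
    # (conviction in [0, 1], realized return of the recommendation)
    (0.9,  0.04),
    (0.6, -0.01),
    (0.3,  0.02),
    (0.8, -0.03),
]

weighted_hits = sum(c for c, r in recommendations if r > 0)
total_weight = sum(c for c, _ in recommendations)
score = weighted_hits / total_weight  # a possible input for a downstream RS

print(f"conviction-weighted hit rate: {score:.2f}")
```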

April 17, 2024 · 2 min · Research Team

Learning to Generate Explainable Stock Predictions using Self-Reflective Large Language Models

ArXiv ID: 2402.03659 · View on arXiv · Authors: Unknown

Abstract: Explaining stock predictions is generally a difficult task for traditional non-generative deep learning models, where explanations are limited to visualizing the attention weights on important texts. Today, Large Language Models (LLMs) present a solution to this problem, given their known capabilities to generate human-readable explanations for their decision-making process. However, the task of stock prediction remains challenging for LLMs, as it requires the ability to weigh the varying impacts of chaotic social texts on stock prices. The problem gets progressively harder with the introduction of the explanation component, which requires LLMs to explain verbally why certain factors are more important than others. On the other hand, to fine-tune LLMs for such a task, one would need expert-annotated samples of explanation for every stock movement in the training set, which is expensive and impractical to scale. To tackle these issues, we propose our Summarize-Explain-Predict (SEP) framework, which utilizes a self-reflective agent and Proximal Policy Optimization (PPO) to let an LLM teach itself how to generate explainable stock predictions in a fully autonomous manner. The reflective agent learns how to explain past stock movements through self-reasoning, while the PPO trainer trains the model to generate the most likely explanations from input texts. The training samples for the PPO trainer are also the responses generated during the reflective process, which eliminates the need for human annotators. Using our SEP framework, we fine-tune an LLM that can outperform both traditional deep-learning and LLM methods in prediction accuracy and Matthews correlation coefficient for the stock classification task. To justify the generalization capability of our framework, we further test it on the portfolio construction task, and demonstrate its effectiveness through various portfolio metrics. ...
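A heavily simplified sketch of the self-reflective loop described in the abstract: `llm_generate` is a hypothetical stand-in for a model call (simulated here with a random prediction), and the string-based correctness check is an assumption. In SEP, the responses collected this way are what feed the PPO trainer.

```python
# Simplified sketch of a self-reflection loop, not the SEP implementation.
import random

def llm_generate(prompt: str) -> str:
    # Hypothetical stand-in for an LLM call; a real system would query a model.
    move = random.choice(["up", "down"])
    return f"Explanation: ... Prediction: {move}"

def self_reflect(texts: str, realized_move: str, max_tries: int = 3):
    # Collect (prompt, response) pairs; in SEP, responses generated during
    # reflection become PPO training samples, with no human annotation.
    samples = []
    prompt = f"Texts: {texts}\nExplain the likely stock movement, then predict up or down."
    for _ in range(max_tries):
        response = llm_generate(prompt)
        samples.append((prompt, response))
        if realized_move in response:   # crude correctness check (assumption)
            break
        prompt += f"\nYour last prediction was wrong (actual: {realized_move}). Reflect and try again."
    return samples

print(len(self_reflect("CEO resigns amid probe.", "down")))
```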

February 6, 2024 · 3 min · Research Team

Markowitz Portfolio Construction at Seventy

ArXiv ID: 2401.05080 · View on arXiv · Authors: Unknown

Abstract: More than seventy years ago Harry Markowitz formulated portfolio construction as an optimization problem that trades off expected return and risk, defined as the standard deviation of the portfolio returns. Since then, the method has been extended to include many practical constraints and objective terms, such as transaction cost or leverage limits. Despite several criticisms of Markowitz’s method, for example its sensitivity to poor forecasts of the return statistics, it has become the dominant quantitative method for portfolio construction in practice. In this article we describe an extension of Markowitz’s method that addresses many practical effects and gracefully handles the uncertainty inherent in return statistics forecasting. Like Markowitz’s original formulation, the extension is also a convex optimization problem, which can be solved with high reliability and speed. ...
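The abstract's key point, that the extended formulation remains a convex problem, can be made concrete with a small CVXPY sketch. The transaction-cost and leverage terms below are illustrative stand-ins for the practical extensions mentioned, not the paper's exact model, and all numbers are invented.

```python
# Minimal sketch of an extended Markowitz problem in CVXPY; terms and
# parameters are assumptions for illustration.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n = 20
mu = rng.normal(0.05, 0.02, n)                # forecast expected returns
A = rng.normal(size=(n, n))
Sigma = A @ A.T / n + 0.01 * np.eye(n)        # forecast covariance (PSD)
w_prev = np.ones(n) / n                       # current holdings

w = cp.Variable(n)
gamma, kappa = 5.0, 0.001                     # risk aversion, linear trade cost
objective = cp.Maximize(
    mu @ w
    - gamma * cp.quad_form(w, Sigma)          # risk term
    - kappa * cp.norm1(w - w_prev)            # transaction cost term
)
constraints = [cp.sum(w) == 1, cp.norm1(w) <= 1.6]  # budget and leverage limit
cp.Problem(objective, constraints).solve()
print(w.value.round(3))
```

Because every term is convex, standard solvers return the global optimum reliably, which is the property the article builds on.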

January 10, 2024 · 2 min · Research Team

Improved Data Generation for Enhanced Asset Allocation: A Synthetic Dataset Approach for the Fixed Income Universe

ArXiv ID: 2311.16004 · View on arXiv · Authors: Unknown

Abstract: We present a novel process for generating synthetic datasets tailored to assess asset allocation methods and construct portfolios within the fixed income universe. Our approach begins by enhancing the CorrGAN model to generate synthetic correlation matrices. Subsequently, we propose an Encoder-Decoder model that samples additional data conditioned on a given correlation matrix. The resulting synthetic dataset facilitates in-depth analyses of asset allocation methods across diverse asset universes. Additionally, we provide a case study that exemplifies the use of the synthetic dataset to improve portfolios constructed within a simulation-based asset allocation process. ...
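As a rough sketch of the two-stage idea, the snippet below generates a valid synthetic correlation matrix from a random factor construction (standing in for CorrGAN) and then samples return paths conditioned on it via a Cholesky factor (standing in for the Encoder-Decoder). Everything here is an assumption for illustration.

```python
# Sketch under stated assumptions: random factor construction in place of
# CorrGAN, Gaussian sampling in place of the Encoder-Decoder.
import numpy as np

rng = np.random.default_rng(0)
n = 10

# 1) Generate a valid (PSD, unit-diagonal) correlation matrix.
W = rng.normal(size=(n, 3))
S = W @ W.T + np.diag(rng.uniform(0.5, 1.0, n))
d = np.sqrt(np.diag(S))
C = S / np.outer(d, d)                     # synthetic correlation matrix

# 2) Sample return paths conditioned on C via its Cholesky factor.
L = np.linalg.cholesky(C)
returns = rng.normal(size=(250, n)) @ L.T  # 250 days of correlated returns
print(np.corrcoef(returns.T).round(2))
```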

November 27, 2023 · 2 min · Research Team

Benchmarking Large Language Model Volatility

ArXiv ID: 2311.15180 · View on arXiv · Authors: Unknown

Abstract: The impact of non-deterministic outputs from Large Language Models (LLMs) is not well examined for financial text understanding tasks. Through a compelling case study on investing in the US equity market via news sentiment analysis, we uncover substantial variability in sentence-level sentiment classification results, underscoring the innate volatility of LLM outputs. These uncertainties cascade downstream, leading to more significant variations in portfolio construction and return. While tweaking the temperature parameter in the language model decoder presents a potential remedy, it comes at the expense of stifled creativity. Similarly, while ensembling multiple outputs mitigates the effect of volatile outputs, it demands a notable computational investment. This work furnishes practitioners with invaluable insights for adeptly navigating uncertainty in the integration of LLMs into financial decision-making, particularly in scenarios dictated by non-deterministic information. ...
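A minimal, self-contained sketch of the ensembling remedy mentioned above: `classify_sentiment` is a hypothetical stand-in for a non-deterministic LLM call, simulated here with random label flips, and the vote count illustrates the compute trade-off the abstract notes.

```python
# Illustrative sketch: majority-vote ensembling of repeated sentiment calls.
import random
from collections import Counter

def classify_sentiment(sentence: str) -> str:
    # Hypothetical stand-in for a non-deterministic LLM call: mostly
    # "positive" here, but occasionally flips, mimicking output volatility.
    return random.choices(["positive", "negative"], weights=[0.8, 0.2])[0]

def ensemble_sentiment(sentence: str, n_samples: int = 5) -> str:
    # Majority vote stabilizes the label, at n_samples times the compute cost.
    votes = Counter(classify_sentiment(sentence) for _ in range(n_samples))
    return votes.most_common(1)[0][0]

print(ensemble_sentiment("Shares jumped after the earnings beat."))
```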

November 26, 2023 · 2 min · Research Team

Portfolio Construction using Black-Litterman Model and Factors

ArXiv ID: 2311.04475 · View on arXiv · Authors: Unknown

Abstract: This paper presents a portfolio construction process consisting of two main parts: factor selection and weight allocation. For the factor selection part, we chose 20 factors by considering three aspects: the global market, different asset classes, and stock idiosyncratic characteristics. Each factor is proxied by a corresponding ETF. We then apply several weight allocation methods to those factors, including two fixed-weight allocation methods, three optimisation methods, and a Black-Litterman model. In addition, we fit a Deep Learning model to generate views periodically and incorporate these views with the prior, achieving dynamically updated weights via the Black-Litterman model. Finally, a robustness check shows how the weights change as time evolves and variance increases. Results using shrinkage variance are provided to alleviate the impact of the representativeness of historical data, but this has little effect. Overall, the Deep Learning plus Black-Litterman model outperforms the portfolios built with the other weight allocation schemes, although further improvement and robustness checking should be performed. ...
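For readers unfamiliar with the Black-Litterman update at the core of this process, here is a minimal numpy sketch of the posterior-return formula. The two-asset numbers and the single relative view are invented for illustration and do not reflect the paper's 20-factor setup.

```python
# Minimal Black-Litterman posterior sketch; all numbers are assumptions.
import numpy as np

Sigma = np.array([[0.04, 0.01], [0.01, 0.09]])  # prior covariance (2 assets)
pi = np.array([0.05, 0.07])                     # equilibrium (prior) returns
tau = 0.05                                      # prior uncertainty scaling

P = np.array([[1.0, -1.0]])                     # one relative view
Q = np.array([0.03])                            # view: asset 1 beats asset 2 by 3%
Omega = np.array([[0.02]])                      # view uncertainty

# Posterior mean: [(tau*Sigma)^-1 + P' Omega^-1 P]^-1 [(tau*Sigma)^-1 pi + P' Omega^-1 Q]
tS_inv = np.linalg.inv(tau * Sigma)
A = tS_inv + P.T @ np.linalg.inv(Omega) @ P
b = tS_inv @ pi + P.T @ np.linalg.inv(Omega) @ Q
mu_bl = np.linalg.solve(A, b)                   # posterior expected returns
print(mu_bl.round(4))
```

In the paper's setup, the views Q would come from the Deep Learning model each period, so the posterior weights update dynamically.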

November 8, 2023 · 2 min · Research Team

Statistical arbitrage portfolio construction based on preference relations

ArXiv ID: 2310.08284 · View on arXiv · Authors: Unknown

Abstract: Statistical arbitrage methods identify mispricings in securities with the goal of building portfolios which are weakly correlated with the market. In pairs trading, an arbitrage opportunity is identified by observing relative price movements between a pair of securities. By simultaneously observing multiple pairs, one can exploit different arbitrage opportunities and increase the performance of such methods. However, the use of a large number of pairs is difficult due to the increased probability of contradictory trade signals among different pairs. In this paper, we propose a novel portfolio construction method based on preference relation graphs, which can reconcile contradictory pairs trading signals across multiple security pairs. The proposed approach enables joint exploitation of arbitrage opportunities among a large number of securities. Experimental results using three decades of historical returns of roughly 500 stocks from the S&P 500 index show that the portfolios based on preference relations exhibit robust returns even with high transaction costs, and that their performance improves with the number of securities considered. ...
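To make the reconciliation idea concrete, here is a deliberately simplified sketch in which pairwise signals are tallied into a net preference score per security. The paper's preference relation graphs are more elaborate than this tally, so treat it as an assumption-laden illustration; tickers are placeholders.

```python
# Simplified sketch: reconcile contradictory pairwise signals by net score.
from collections import defaultdict

# (preferred, dispreferred) edges from pairs trading signals; note the cycle
# AAPL > MSFT > GOOG > AAPL, the kind of contradiction to be reconciled.
signals = [("AAPL", "MSFT"), ("MSFT", "GOOG"), ("GOOG", "AAPL"), ("AAPL", "GOOG")]

score = defaultdict(int)
for winner, loser in signals:
    score[winner] += 1   # "buy" vote
    score[loser] -= 1    # "sell" vote

ranking = sorted(score, key=score.get, reverse=True)
print(ranking)  # long the top of the ranking, short the bottom
```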

October 12, 2023 · 2 min · Research Team

NoxTrader: LSTM-Based Stock Return Momentum Prediction for Quantitative Trading

ArXiv ID: 2310.00747 · View on arXiv · Authors: Unknown

Abstract: We introduce NoxTrader, a sophisticated system designed for portfolio construction and trading execution, with the primary objective of achieving profitable outcomes in the stock market, specifically aiming to generate moderate- to long-term profits. The underlying learning process of NoxTrader is rooted in the assimilation of valuable insights derived from historical trading data, particularly focusing on time-series analysis due to the nature of the dataset employed. In our approach, we utilize price and volume data from the US stock market for feature engineering to generate effective features, including Return Momentum, Week Price Momentum, and Month Price Momentum. We choose a Long Short-Term Memory (LSTM) model to capture continuous price trends and implement dynamic model updates during the trading execution process, enabling the model to continuously adapt to current market trends. Notably, we have developed a comprehensive trading backtesting system, NoxTrader, which allows us to manage portfolios based on predictive scores and utilize custom evaluation metrics to conduct a thorough assessment of our trading performance. Our rigorous feature engineering and careful selection of prediction targets enable us to generate prediction data with an impressive correlation range between 0.65 and 0.75. Finally, we monitor the dispersion of our prediction data and perform a comparative analysis against actual market data. Through the use of filtering techniques, we improved the initial -60% investment return to 325%. ...
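A small pandas sketch of momentum features like those named in the abstract. The window lengths (5 and 21 trading days) and exact definitions are assumptions, not NoxTrader's specification, and the price series is simulated.

```python
# Sketch of momentum-style features under assumed window lengths.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
close = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, 300))))  # fake prices

features = pd.DataFrame({
    "return_momentum": close.pct_change().rolling(5).mean(),  # smoothed daily returns
    "week_price_momentum": close.pct_change(5),               # ~1 trading week
    "month_price_momentum": close.pct_change(21),             # ~1 trading month
}).dropna()
print(features.tail())
```

Features like these would then be fed to the LSTM as a rolling window of inputs, with the model refit periodically to track the current regime.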

October 1, 2023 · 2 min · Research Team

Hedging Forecast Combinations With an Application to the Random Forest

ArXiv ID: 2308.15384 · View on arXiv · Authors: Unknown

Abstract: This paper proposes a generic, high-level methodology for generating forecast combinations that would deliver the optimal linearly combined forecast in terms of the mean-squared forecast error if one had access to two population quantities: the mean vector and the covariance matrix of the vector of individual forecast errors. We point out that this problem is identical to a mean-variance portfolio construction problem, in which portfolio weights correspond to forecast combination weights. We allow negative forecast weights and interpret such weights as hedging over- and under-estimation risks across estimators. This interpretation follows directly as an implication of the portfolio analogy. We demonstrate our method’s improved out-of-sample performance relative to standard methods in combining tree forecasts to form weighted random forests in 14 data sets. ...
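The portfolio analogy translates directly into a few lines of numpy: with forecast error mean vector mu and covariance Sigma, the combined error's mean-squared error is w'(Sigma + mu mu')w, so the optimal weights summing to one (negatives allowed) come from a mean-variance-style solve. The numbers below are invented for illustration.

```python
# Minimal sketch of MSE-optimal forecast combination weights;
# minimize w'(Sigma + mu mu')w subject to 1'w = 1.
import numpy as np

mu = np.array([0.1, -0.2, 0.05])                   # forecast error means (assumed)
Sigma = np.array([[1.0, 0.3, 0.2],
                  [0.3, 1.5, 0.1],
                  [0.2, 0.1, 0.8]])                # forecast error covariance

M = Sigma + np.outer(mu, mu)                       # second moment matrix of errors
ones = np.ones(len(mu))
w = np.linalg.solve(M, ones)
w /= ones @ w                                      # normalize: weights sum to 1
print(w.round(3))                                  # negative weights hedge estimators
```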

August 29, 2023 · 2 min · Research Team