
Blending gradient boosted trees and neural networks for point and probabilistic forecasting of hierarchical time series

ArXiv ID: 2310.13029 · Authors: Unknown · Abstract: In this paper we tackle the problem of point and probabilistic forecasting by describing a blending methodology for machine learning models from the gradient boosted trees and neural network families. These principles were successfully applied in the recent M5 Competition on both the Accuracy and Uncertainty tracks. The key points of our methodology are: a) transform the task to regression on sales for a single day, b) information-rich feature engineering, c) create a diverse set of state-of-the-art machine learning models, and d) carefully construct validation sets for model tuning. We argue that the diversity of the machine learning models, along with the careful selection of validation examples, were the most important ingredients for the effectiveness of our approach. Although the forecasting data had an inherent hierarchical structure (12 levels), none of our proposed solutions exploited that hierarchical scheme. Using the proposed methodology, our team ranked within the gold medal range on both the Accuracy and Uncertainty tracks. Inference code along with already trained models is available at https://github.com/IoannisNasios/M5_Uncertainty_3rd_place ...
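The blending step itself can be sketched as a convex combination of model predictions whose weight is tuned on a validation set. A minimal illustration, assuming out-of-fold predictions from the two model families are already available (all names and data below are synthetic, not from the paper):

```python
import numpy as np

# Hypothetical validation-set predictions from two model families
# (in the paper these come from gradient boosted trees and neural nets).
rng = np.random.default_rng(0)
y_val = rng.normal(size=500)                         # validation targets
pred_gbt = y_val + rng.normal(scale=0.30, size=500)  # tree-model predictions
pred_nn = y_val + rng.normal(scale=0.40, size=500)   # neural-net predictions

def blend_weight(y, p1, p2, grid=np.linspace(0.0, 1.0, 101)):
    """Pick the convex blend weight minimising validation RMSE."""
    errors = [np.sqrt(np.mean((w * p1 + (1 - w) * p2 - y) ** 2)) for w in grid]
    return grid[int(np.argmin(errors))]

w = blend_weight(y_val, pred_gbt, pred_nn)
blended = w * pred_gbt + (1 - w) * pred_nn
```

Because the grid includes the endpoints 0 and 1, the blend is never worse on the validation set than either model alone.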

October 19, 2023 · 2 min · Research Team

Enhanced Local Explainability and Trust Scores with Random Forest Proximities

ArXiv ID: 2310.12428 · Authors: Unknown · Abstract: We initiate a novel approach to explaining the predictions and out-of-sample performance of random forest (RF) regression and classification models by exploiting the fact that any RF can be mathematically formulated as an adaptive weighted K-nearest-neighbors model. Specifically, we employ a recent result that, for both regression and classification tasks, any RF prediction can be rewritten exactly as a weighted sum of the training targets, where the weights are RF proximities between the corresponding pairs of data points. We show that this linearity facilitates a local notion of explainability of RF predictions that generates attributions for any model prediction across observations in the training set, thereby complementing established feature-based methods like SHAP, which generate attributions for a model prediction across input features. We show how this proximity-based approach to explainability can be used in conjunction with SHAP to explain not just model predictions but also out-of-sample performance, in the sense that proximities furnish a novel means of assessing when a given model prediction is more or less likely to be correct. We demonstrate this approach in the modeling of US corporate bond prices and returns in both regression and classification settings. ...
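The weighted-sum identity behind this approach can be checked numerically. A sketch using scikit-learn, with `bootstrap=False` so each leaf value is exactly a training-target mean and the proximity-weighted identity holds to numerical precision (the toy data and settings are illustrative, not the paper's bond dataset):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X[:, 0] + 0.1 * rng.normal(size=200)

rf = RandomForestRegressor(n_estimators=20, bootstrap=False, random_state=0)
rf.fit(X, y)

x_new = rng.normal(size=(1, 5))
leaves_train = rf.apply(X)       # (n_train, n_trees) leaf index per tree
leaves_new = rf.apply(x_new)     # (1, n_trees)

# Proximity weight of training point j: averaged over trees, the indicator of
# sharing x_new's leaf divided by that leaf's size in the tree.
same = leaves_train == leaves_new
weights = (same / same.sum(axis=0)).mean(axis=1)

# The RF prediction equals the proximity-weighted sum of training targets.
pred_weighted = float(weights @ y)
```

The weights sum to one, so each training observation's attribution is simply its proximity weight times its target.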

October 19, 2023 · 2 min · Research Team

A Framework for Treating Model Uncertainty in the Asset Liability Management Problem

ArXiv ID: 2310.11987 · Authors: Unknown · Abstract: The problem of asset liability management (ALM) is a classic problem of financial mathematics and of great interest to banking institutions and insurance companies. Several formulations of this problem under various model settings have been studied from the Mean-Variance (MV) principle perspective. In this paper, the ALM problem is revisited in the context of model uncertainty in the one-stage framework. In practice, uncertainty issues appear in several aspects of the problem, e.g. liability process characteristics, market conditions, inflation rates, inside-information effects, etc. A framework relying on the notion of the Wasserstein barycenter is presented which is able to treat this type of ambiguity robustly by appropriately handling the various information sources (models) and appropriately reformulating the relevant decision-making problem. The proposed framework can be applied to a number of different model settings, leading to the selection of investment portfolios that remain robust to the various uncertainties appearing in the market. The paper concludes with a numerical experiment for a static version of the ALM problem, employing standard modelling approaches and illustrating the capabilities of the proposed method, with very satisfactory results in retrieving the true optimal strategy even in high-noise cases. ...
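The paper's barycenter framework is general; as a minimal special case, the Wasserstein-2 barycenter of one-dimensional Gaussian models has a closed form (weighted averages of means and standard deviations), which can be sketched as follows (the model parameters and weights below are hypothetical):

```python
import numpy as np

def gaussian_w2_barycenter(mus, sigmas, weights):
    """Wasserstein-2 barycenter of 1-D Gaussian models N(mu_i, sigma_i^2).

    In one dimension the barycenter is again Gaussian, with mean equal to the
    weighted average of the means and standard deviation equal to the weighted
    average of the standard deviations."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()            # normalise source weights
    return float(weights @ np.asarray(mus)), float(weights @ np.asarray(sigmas))

# Three candidate models of, e.g., a liability driver, with source weights.
mu_bar, sigma_bar = gaussian_w2_barycenter(
    mus=[0.02, 0.03, 0.05], sigmas=[0.10, 0.12, 0.20], weights=[0.5, 0.3, 0.2]
)
```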

October 18, 2023 · 2 min · Research Team

Black-Litterman Asset Allocation under Hidden Truncation Distribution

ArXiv ID: 2310.12333 · Authors: Unknown · Abstract: In this paper, we study the Black-Litterman (BL) asset allocation model (Black and Litterman, 1990) under the hidden truncation skew-normal distribution (Arnold and Beaver, 2000). In particular, when returns are assumed to follow this skew-normal distribution, we show that the posterior returns, after incorporating views, are also skew normal. Using Simaan's three-moment risk model (Simaan, 1993), we can then obtain the optimal portfolio. Empirical data show that the optimal portfolio obtained this way has less risk compared to an optimal portfolio of the classical BL model, and that such portfolios become more negatively skewed as their expected returns increase, which suggests that investors trade negative skewness for a higher expected return. We also observe a negative relation between portfolio volatility and portfolio skewness. This observation suggests that investors may be making a trade-off, opting for lower volatility in exchange for higher skewness, or vice versa. This trade-off also indicates that stocks with significant price declines tend to exhibit increased volatility. ...
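For context, the classical Gaussian BL posterior mean, on which the skew-normal extension builds, can be sketched directly from the standard formula (the equilibrium returns, view, and uncertainties below are hypothetical):

```python
import numpy as np

def bl_posterior_mean(pi, Sigma, P, Omega, q, tau=0.05):
    """Classical Black-Litterman posterior mean:
    mu = [(tau*Sigma)^-1 + P' Omega^-1 P]^-1 [(tau*Sigma)^-1 pi + P' Omega^-1 q]."""
    tS_inv = np.linalg.inv(tau * Sigma)
    O_inv = np.linalg.inv(Omega)
    A = tS_inv + P.T @ O_inv @ P
    b = tS_inv @ pi + P.T @ O_inv @ q
    return np.linalg.solve(A, b)

pi = np.array([0.04, 0.06])                     # hypothetical equilibrium returns
Sigma = np.array([[0.04, 0.01], [0.01, 0.09]])  # hypothetical return covariance
P = np.array([[1.0, -1.0]])                     # one view: asset 1 minus asset 2
q = np.array([0.01])                            # view value
Omega = np.array([[0.02]])                      # view uncertainty
mu_bl = bl_posterior_mean(pi, Sigma, P, Omega, q)
```

As the view uncertainty grows, the posterior collapses back to the equilibrium prior, which is a useful sanity check for any implementation.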

October 18, 2023 · 2 min · Research Team

Robust Trading in a Generalized Lattice Market

ArXiv ID: 2310.11023 · Authors: Unknown · Abstract: This paper introduces a novel robust trading paradigm, called "multi-double linear policies", situated within a "generalized" lattice market. Distinctively, our framework departs from most existing robust trading strategies, which are predominantly limited to single or paired assets and typically embed asset correlation within the trading strategy itself rather than treating it as an inherent characteristic of the market. Our generalized lattice market model incorporates both serially correlated returns and asset correlation through a conditional probabilistic model. In the nominal case, where the parameters of the model are known, we demonstrate that the proposed policies ensure survivability and probabilistic positivity. We then derive an analytic expression for the worst-case expected gain-loss and prove sufficient conditions under which the proposed policies maintain a positive expected profit, even within a seemingly nonprofitable symmetric lattice market. When the parameters are unknown and require estimation, we show that the parameter space of the lattice model forms a convex polyhedron, and we present an efficient estimation method using constrained least squares. These theoretical findings are strengthened by extensive empirical studies using data from the top 30 companies within the S&P 500 index, substantiating the efficacy of the generalized model and the robustness of the proposed policies in sustaining positive expected profit and providing downside risk protection. ...
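A single-asset double linear policy can be sketched as follows; this is a simplified reading of the paradigm (one asset, i.i.d. symmetric lattice returns), not the paper's multi-asset construction. The worst-case loss of the policy is bounded by the allocated capital, which is the survivability property in miniature:

```python
import numpy as np

def double_linear_gain(returns, alpha, beta, v0=1.0):
    """Gain-loss of a double linear policy on one asset: a long leg alpha*v0
    compounding with the returns and a short leg beta*v0 compounding with the
    negated returns. Choosing alpha, beta >= 0 with alpha + beta <= 1 bounds
    the worst-case loss by the allocated capital (survivability)."""
    returns = np.asarray(returns, dtype=float)
    up = np.prod(1.0 + returns)     # long-leg growth factor
    down = np.prod(1.0 - returns)   # short-leg growth factor
    return alpha * v0 * (up - 1.0) + beta * v0 * (down - 1.0)

# Symmetric lattice: each period the return is +u or -u with equal probability.
rng = np.random.default_rng(0)
u = 0.05
path = rng.choice([u, -u], size=30)
gain = double_linear_gain(path, alpha=0.4, beta=0.4)
```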

October 17, 2023 · 2 min · Research Team

Few-Shot Learning Patterns in Financial Time-Series for Trend-Following Strategies

ArXiv ID: 2310.10500 · Authors: Unknown · Abstract: Forecasting models for systematic trading strategies do not adapt quickly when financial market conditions change rapidly, as was seen in the advent of the COVID-19 pandemic in 2020, which caused many forecasting models to take loss-making positions. To deal with such situations, we propose a novel time-series trend-following forecaster that can quickly adapt to new market conditions, referred to as regimes. We leverage recent developments from the deep learning community and use few-shot learning. We propose the Cross Attentive Time-Series Trend Network – X-Trend – which takes positions attending over a context set of financial time-series regimes. X-Trend transfers trends from similar patterns in the context set to make forecasts, then subsequently takes positions for a new distinct target regime. By quickly adapting to new financial regimes, X-Trend increases the Sharpe ratio by 18.9% over a neural forecaster and 10-fold over a conventional Time-series Momentum strategy during the turbulent market period from 2018 to 2023. Our strategy recovers twice as quickly from the COVID-19 drawdown compared to the neural forecaster. X-Trend can also take zero-shot positions on novel, unseen financial assets, obtaining a 5-fold Sharpe ratio increase versus a neural time-series trend forecaster over the same period. Furthermore, the cross-attention mechanism allows us to interpret the relationship between forecasts and patterns in the context set. ...
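The core cross-attention operation, in which the target regime attends over a context set of encoded regimes, can be sketched in NumPy (the dimensions and encodings below are placeholders, not the X-Trend architecture):

```python
import numpy as np

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention: the target regime (queries) attends
    over the context set of past regimes (keys/values). Returns the attended
    output and the attention weights, whose rows sum to one."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)
    return attn @ values, attn

rng = np.random.default_rng(0)
context_keys = rng.normal(size=(8, 16))   # 8 encoded context regimes
context_vals = rng.normal(size=(8, 16))
target_query = rng.normal(size=(1, 16))   # encoded target regime
out, attn = cross_attention(target_query, context_keys, context_vals)
```

The attention weights are what make the forecasts interpretable: each weight says how much a given context regime contributed to the target forecast.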

October 16, 2023 · 2 min · Research Team

Quantifying the relative importance of the spatial and temporal resolution in energy systems optimisation model

ArXiv ID: 2310.10518 · Authors: Unknown · Abstract: An increasing number of studies using energy system optimisation models are conducted at higher spatial and temporal resolution. This comes with a computational cost, which places a limit on the size, complexity, and detail of the model. In this paper, we explore the relative importance of structural aspects of energy system models, namely spatial and temporal resolution, compared to uncertainties in input parameters such as final energy demand, discount rate, and capital costs. We use global sensitivity analysis to uncover these interactions for two developing countries, Kenya and Benin, which still lack universal access to electricity. We find that temporal resolution has a high influence on all assessed result parameters, while spatial resolution has a significant influence on the expansion of distribution lines to the unelectrified population. The larger overall influence of temporal resolution indicates that it should be prioritised over spatial resolution. ...
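A crude variance-based first-order sensitivity index, in the spirit of the global sensitivity analysis used here, can be estimated by binning an input and measuring how much the conditional mean of the output varies (the toy cost model below is illustrative, not the authors' energy system model):

```python
import numpy as np

def first_order_sensitivity(x, y, bins=20):
    """Crude first-order index Var(E[Y|X]) / Var(Y): bin X by quantiles and
    compare the spread of within-bin means of Y to the total variance of Y."""
    edges = np.quantile(x, np.linspace(0.0, 1.0, bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    counts = np.array([(idx == b).sum() for b in range(bins)])
    var_cond = np.average((cond_means - y.mean()) ** 2, weights=counts)
    return float(var_cond / y.var())

rng = np.random.default_rng(0)
n = 20_000
demand = rng.uniform(size=n)      # stand-in for final energy demand
discount = rng.uniform(size=n)    # stand-in for the discount rate
cost = 3.0 * demand + 0.3 * discount + 0.05 * rng.normal(size=n)  # toy output

s_demand = first_order_sensitivity(demand, cost)
s_discount = first_order_sensitivity(discount, cost)
```

In the toy model, demand dominates the output variance, and the estimated indices reflect that ranking.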

October 16, 2023 · 2 min · Research Team

Towards reducing hallucination in extracting information from financial reports using Large Language Models

ArXiv ID: 2310.10760 · Authors: Unknown · Abstract: For a financial analyst, the question and answer (Q&A) segment of a company's financial report is a crucial piece of information for various analyses and investment decisions. However, extracting valuable insights from the Q&A section has posed considerable challenges, as conventional methods such as detailed reading and note-taking lack scalability and are susceptible to human error, while Optical Character Recognition (OCR) and similar techniques encounter difficulties in accurately processing unstructured transcript text, often missing subtle linguistic nuances that drive investor decisions. Here, we demonstrate the use of Large Language Models (LLMs) to efficiently and rapidly extract information from earnings report transcripts while ensuring high accuracy, transforming the extraction process and reducing hallucination by combining a retrieval-augmented generation technique with metadata. We evaluate the outcomes of various LLMs with and without our proposed approach, using several objective metrics for evaluating Q&A systems, and empirically demonstrate the superiority of our method. ...
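The retrieval step of such a retrieval-augmented pipeline, including the metadata attached to each retrieved chunk, can be sketched as follows (the scoring is a simple bag-of-words cosine; the chunk texts and metadata fields are hypothetical, not the paper's system):

```python
import math
from collections import Counter

def bow(text):
    """Bag-of-words term counts for a text."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-count vectors."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

# Transcript chunks with metadata the downstream LLM prompt can cite.
chunks = [
    {"text": "revenue guidance raised for next quarter", "speaker": "CFO", "quarter": "Q3"},
    {"text": "supply chain costs remain elevated", "speaker": "CEO", "quarter": "Q3"},
    {"text": "buyback program extended", "speaker": "CFO", "quarter": "Q2"},
]

def retrieve(question, chunks, k=2):
    """Rank chunks by similarity to the question and keep the top k."""
    q = bow(question)
    ranked = sorted(chunks, key=lambda c: cosine(q, bow(c["text"])), reverse=True)
    return ranked[:k]

top = retrieve("what is the revenue guidance for next quarter", chunks)
```

Grounding the generation step in only the retrieved chunks, plus their metadata, is what constrains the LLM and reduces hallucination.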

October 16, 2023 · 2 min · Research Team

A Portfolio Rebalancing Approach for the Indian Stock Market

ArXiv ID: 2310.09770 · Authors: Unknown · Abstract: This chapter presents a calendar rebalancing approach to portfolios of stocks in the Indian stock market. Ten important sectors of the Indian economy are first selected. For each of these sectors, the top ten stocks are identified based on their free-float market capitalization values, and a sector-specific portfolio is designed from them. The study uses historical stock prices from January 4, 2021, to September 20, 2023 (NSE Website). The portfolios are designed based on the training data from January 4, 2021, to June 30, 2022, and their performances are tested over the period from July 1, 2022, to September 20, 2023. The calendar rebalancing approach presented in the chapter is based on a yearly rebalancing method; however, the method is perfectly flexible and can be adapted for weekly or monthly rebalancing. The rebalanced portfolios for the ten sectors are analyzed in detail for their performances. The performance results are not only indicative of the relative performances of the sectors over the training (in-sample) and test (out-of-sample) data, but also reflect the overall effectiveness of the proposed portfolio rebalancing approach. ...
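The calendar rebalancing mechanic, restoring target weights every fixed number of trading days while holdings drift in between, can be sketched as follows (a yearly schedule corresponds roughly to `period=252`; the return data here is synthetic, not the NSE data used in the chapter):

```python
import numpy as np

def rebalanced_growth(returns, weights, period=252):
    """Growth of one unit of capital under calendar rebalancing: holdings are
    reset to the target weights every `period` trading days and drift with
    asset returns in between."""
    returns = np.asarray(returns, dtype=float)   # shape (n_days, n_assets)
    weights = np.asarray(weights, dtype=float)   # target weights, sum to 1
    value = 1.0
    holdings = value * weights
    for t, r in enumerate(returns):
        if t % period == 0:
            holdings = value * weights           # rebalance to target
        holdings = holdings * (1.0 + r)          # holdings drift with returns
        value = float(holdings.sum())
    return value

# Two synthetic assets with constant daily simple returns, held ten days.
rets = np.tile([0.01, 0.02], (10, 1))
final_value = rebalanced_growth(rets, [0.5, 0.5], period=1)
```

Changing `period` switches the same code between daily, weekly, monthly, or yearly rebalancing, mirroring the flexibility noted in the abstract.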

October 15, 2023 · 2 min · Research Team

All AMMs are CFMMs. All DeFi markets have invariants. A DeFi market is arbitrage-free if and only if it has an increasing invariant

ArXiv ID: 2310.09782 · Authors: Unknown · Abstract: In a universal framework that expresses any market system in terms of state transition rules, we prove that every DeFi market system has an invariant function and is thus by definition a CFMM; indeed, all automated market makers (AMMs) are CFMMs. Invariants connect directly to arbitrage and to completeness, according to two fundamental equivalences. First, a DeFi market system is, we prove, arbitrage-free if and only if it has a strictly increasing invariant, where arbitrage-free means that no state can be transformed into a dominated state by any sequence of transactions. Second, the invariant is, we prove, unique if and only if the market system is complete, meaning that it allows transitions between all pairs of states in the state space, in at least one direction. Thus a necessary and sufficient condition for no-arbitrage (respectively, for completeness) is the existence of the increasing (respectively, the uniqueness of the) invariant, which, therefore, fulfills in nonlinear DeFi theory the foundational role parallel to the existence (respectively, uniqueness) of the pricing measure in the Fundamental Theorem of Asset Pricing for linear markets. Moreover, a market system is recoverable by its invariant if and only if it is complete; and in all cases, complete or incomplete, every market system has, and is recoverable by, a multi-invariant. A market system is arbitrage-free if and only if its multi-invariant is increasing.
Our examples illustrate (non)existence of various specific types of arbitrage in the context of various specific types of market systems – with or without fees, with or without liquidity operations, and with or without coordination among multiple pools – but the fundamental theorems have full generality, applicable to any DeFi market system and to any notion of arbitrage expressible as a strict partial order. ...
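A concrete instance of an increasing invariant is the constant-product AMM: with a proportional fee, every swap strictly increases the product of reserves (the pool sizes and fee below are illustrative):

```python
def swap_x_for_y(x, y, dx, fee=0.003):
    """Trade dx of asset X into a constant-product pool with reserves (x, y).
    Only the fee-adjusted input moves the price, but the full dx enters the
    reserves, so with fee > 0 the invariant k = x*y strictly increases."""
    dx_eff = dx * (1.0 - fee)
    dy = y * dx_eff / (x + dx_eff)   # keeps x*y constant for the fee-free part
    return x + dx, y - dy

x0, y0 = 1_000.0, 1_000.0
x1, y1 = swap_x_for_y(x0, y0, 50.0)
```

With `fee=0.0` the product of reserves is preserved exactly, which is the boundary case between an increasing and a merely constant invariant.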

October 15, 2023 · 3 min · Research Team