
Using quantile time series and historical simulation to forecast financial risk multiple steps ahead

ArXiv ID: 2502.20978 · Authors: Unknown

Abstract: A method for quantile-based, semi-parametric historical simulation estimation of multiple-step-ahead Value-at-Risk (VaR) and Expected Shortfall (ES) models is developed. It uses the quantile loss function, analogous to how the quasi-likelihood is employed by standard historical simulation methods. The returns data are scaled by the estimated quantile series, and resampling is then employed to estimate the forecast distribution one and multiple steps ahead, allowing tail risk forecasting. The proposed method is applicable to any data or model where the relationship between VaR and ES does not change over time, and it can be extended to allow a measurement equation incorporating realized measures, thus covering Realized GARCH and Realized CAViaR type models. Its finite-sample properties, and its comparison with existing historical simulation methods, are evaluated via a simulation study. A forecasting study assesses the relative accuracy of one-day-ahead and ten-day-ahead 1% and 2.5% VaR and ES forecasts from the proposed class of models against several competitors. ...
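
The mechanics of the quantile-scaled resampling are easy to prototype. Below is a minimal sketch, assuming a symmetric-absolute-value CAViaR-style recursion with made-up parameters (`b0`, `b1`, `b2`) standing in for the paper's quantile-loss-estimated series: returns are standardized by the quantile series, then bootstrapped forward through the recursion to obtain multi-step VaR and ES.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: daily returns and a CAViaR-style quantile recursion.
# In the paper the quantile series is estimated by minimizing the quantile
# loss; here we simply posit illustrative parameters.
returns = rng.standard_t(df=5, size=2000) * 0.01
b0, b1, b2 = -0.0002, 0.90, -0.15       # hypothetical CAViaR parameters
alpha = 0.025                           # VaR level

q = np.empty_like(returns)
q[0] = np.quantile(returns[:250], alpha)  # crude initialization
for t in range(1, len(returns)):
    q[t] = b0 + b1 * q[t - 1] + b2 * abs(returns[t - 1])

# Scale the returns by the estimated quantile series, as in the abstract.
z = returns / np.abs(q)

def var_es_forecast(r_last, q_last, horizon=10, n_paths=20000):
    """Bootstrap scaled returns through the quantile recursion and read off
    H-step-ahead VaR/ES from the simulated cumulative returns."""
    r = np.full(n_paths, r_last)
    qq = np.full(n_paths, q_last)
    cum = np.zeros(n_paths)
    for _ in range(horizon):
        qq = b0 + b1 * qq + b2 * np.abs(r)
        r = np.abs(qq) * rng.choice(z, size=n_paths, replace=True)
        cum += r
    var = np.quantile(cum, alpha)
    es = cum[cum <= var].mean()
    return var, es

var10, es10 = var_es_forecast(returns[-1], q[-1])
print(f"10-day {alpha:.1%} VaR: {var10:.4f}, ES: {es10:.4f}")
```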

February 28, 2025 · 2 min · Research Team

Risk forecasting using Long Short-Term Memory Mixture Density Networks

ArXiv ID: 2501.01278 · Authors: Unknown

Abstract: This work implements Long Short-Term Memory mixture density networks (LSTM-MDNs) for Value-at-Risk forecasting and compares their performance with established models (historical simulation, CMM, and GARCH) using a defined backtesting procedure. The focus is on the neural network's ability to capture volatility clustering and its real-world applicability. Three architectures were tested: a 2-component mixture density network, a regularized 2-component model (Arimond et al., 2020), and a 3-component mixture model, the latter tested for the first time in Value-at-Risk forecasting. Backtesting was performed on three stock indices (FTSE 100, S&P 500, EURO STOXX 50) over two distinct two-year periods (2017-2018 as a calm period, 2021-2022 as a turbulent one). Model performance was assessed through unconditional coverage and independence tests. The neural network's ability to handle volatility clustering was validated via correlation analysis and graphical evaluation. Results show limited success for the neural network approach: LSTM-MDNs performed poorly in 2017/2018 but outperformed the benchmark models in 2021/2022. The LSTM mechanism allowed the neural network to capture volatility clustering similarly to GARCH models. However, several issues were identified, notably the need for proper model initialization and the reliance on large datasets for effective learning. The findings suggest that while LSTM-MDNs provide adequate risk forecasts, further research and adjustments are necessary for stable performance. ...
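
To make the architecture concrete, here is a minimal PyTorch sketch of a 2-component Gaussian LSTM-MDN along the lines described. The layer sizes, training loop, and Monte Carlo VaR read-out are illustrative choices, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class LSTMMDN(nn.Module):
    """Minimal 2-component Gaussian LSTM-MDN; sizes are illustrative guesses."""
    def __init__(self, n_components=2, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 3 * n_components)  # weight, mean, scale per component

    def forward(self, x):                                # x: (batch, seq_len, 1) returns
        h, _ = self.lstm(x)
        logit_w, mu, log_sigma = self.head(h[:, -1]).chunk(3, dim=-1)
        return logit_w.log_softmax(-1), mu, log_sigma.exp()

def nll(log_w, mu, sigma, y):
    """Negative log-likelihood of next-day returns under the mixture."""
    comp = torch.distributions.Normal(mu, sigma).log_prob(y.unsqueeze(-1))
    return -torch.logsumexp(log_w + comp, dim=-1).mean()

def mixture_var(log_w, mu, sigma, alpha=0.01, n=100_000):
    """Monte Carlo read-out of the mixture's alpha-quantile (VaR)."""
    idx = torch.multinomial(log_w.exp(), n, replacement=True)
    return torch.quantile(torch.normal(mu[idx], sigma[idx]), alpha)

# Toy training run on synthetic windows of 20 daily returns.
model = LSTMMDN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(64, 20, 1) * 0.01, torch.randn(64) * 0.01
for _ in range(200):
    log_w, mu, sigma = model(x)
    loss = nll(log_w, mu, sigma, y)
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    log_w, mu, sigma = model(x[:1])
    print(mixture_var(log_w[0], mu[0], sigma[0], alpha=0.01))
```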

January 2, 2025 · 2 min · Research Team

Shifting the yield curve for fixed-income and derivatives portfolios

ArXiv ID: 2412.15986 · Authors: Unknown

Abstract: We use granular regulatory data on euro interest rate swap trades between January 2021 and June 2023 to assess whether the derivative positions of Italian banks can offset losses on their debt securities holdings should interest rates rise unexpectedly. At the aggregate level of the banking system, we find that a 100-basis-point upward shift of the yield curve increases the value of swaps by 3.65% of Common Equity Tier 1 (CET1) on average, partly compensating for the losses of 2.64% and 5.98% of CET1 recorded on debt securities valued at fair value and at amortised cost, respectively. Variation exists across institutions, with some banks' swap positions offsetting and others exacerbating bond-market exposures to interest rate risk. Nevertheless, we conclude that, on aggregate, Italian banks use swaps as hedging instruments to reduce their interest rate exposures, which improves their ability to cope with the recent tightening of monetary policy. Finally, we draw on our swap pricing model to conduct an extensive data quality analysis of the transaction-level information available to authorities, and we show that the errors in fitting value changes over time are significantly lower than those in fitting the values themselves. ...
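
The headline exercise, repricing swaps under a 100-basis-point parallel shift, can be illustrated with a toy pricer. The sketch below values a payer swap on a flat, continuously compounded curve (a deliberate simplification of the paper's transaction-level pricing model) and shows the sign of the value change when rates rise.

```python
import numpy as np

def swap_value(zero_rate, fixed_rate, notional, maturity, freq=1.0):
    """Value of a payer swap (pay fixed, receive floating) on a flat,
    continuously compounded zero curve, a crude stand-in for the paper's
    full swap pricing model."""
    times = np.arange(freq, maturity + 1e-9, freq)
    dfs = np.exp(-zero_rate * times)
    float_leg = notional * (1.0 - dfs[-1])            # standard replication at reset
    fixed_leg = notional * fixed_rate * freq * dfs.sum()
    return float_leg - fixed_leg

base, shift = 0.02, 0.01                              # 2% curve, +100 bp shift
v0 = swap_value(base, fixed_rate=0.02, notional=100e6, maturity=5)
v1 = swap_value(base + shift, fixed_rate=0.02, notional=100e6, maturity=5)
print(f"Payer-swap value change under +100 bp: {v1 - v0:,.0f} EUR")
# Positive: paying fixed gains when rates rise, which is how such positions
# can offset mark-to-market losses on bond holdings.
```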

December 20, 2024 · 2 min · Research Team

Enhancing Risk Assessment in Transformers with Loss-at-Risk Functions

ArXiv ID: 2411.02558 · Authors: Unknown

Abstract: In the financial field, precise risk assessment tools are essential for decision-making. Recent studies have challenged the notion that traditional network loss functions such as Mean Square Error (MSE) are adequate, especially under extreme risk conditions that can lead to significant losses during market upheavals. Transformers and Transformer-based models are now widely used in financial forecasting owing to their outstanding performance in time-series prediction. However, these models typically lack sensitivity to extreme risks and often underestimate large financial losses. To address this problem, we introduce a novel loss function, the Loss-at-Risk, which incorporates Value at Risk (VaR) and Conditional Value at Risk (CVaR) into Transformer models. This integration allows Transformer models to recognize potential extreme losses and improves their capability to handle high-stakes financial decisions. Moreover, we conduct a series of experiments with highly volatile financial datasets to demonstrate that our Loss-at-Risk function improves the Transformers' risk prediction and management capabilities without compromising their decision-making accuracy or efficiency. The results demonstrate that integrating risk-aware metrics during training enhances the Transformers' risk assessment capabilities while preserving their core strengths in decision-making and reasoning across diverse scenarios. ...
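
The abstract does not spell out the exact functional form, but one plausible reading is a weighted blend of MSE with VaR- and CVaR-flavoured penalty terms. The PyTorch sketch below is a hypothetical implementation in that spirit; `alpha` and `lam` are illustrative hyperparameters, not values from the paper.

```python
import torch

def loss_at_risk(pred, target, alpha=0.05, lam=0.5):
    """Hypothetical tail-sensitive loss; the paper's exact Loss-at-Risk
    definition may differ. The pinball term penalizes quantile (VaR-like)
    errors, the top-k term re-weights the worst residuals (CVaR-like)."""
    err = (target - pred).flatten()
    mse = err.pow(2).mean()
    pinball = torch.maximum(alpha * err, (alpha - 1) * err).mean()
    k = max(1, int(alpha * err.numel()))
    cvar = err.abs().topk(k).values.mean()
    return mse + lam * (pinball + cvar)

pred, target = torch.randn(256), torch.randn(256)
print(loss_at_risk(pred, target))
```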

November 4, 2024 · 2 min · Research Team

Portfolio Stress Testing and Value at Risk (VaR) Incorporating Current Market Conditions

ArXiv ID: 2409.18970 · Authors: Unknown

Abstract: Value at Risk (VaR) and stress testing are two of the most widely used approaches in portfolio risk management to estimate potential market value losses under adverse market moves. VaR quantifies the potential loss in value over a specified horizon (such as one day or ten days) at a desired confidence level (such as the 95th percentile). In scenario design and stress testing, the goal is to construct extreme market scenarios, such as those involving a severe recession or a specific event of concern (a rapid increase in rates, say, or a geopolitical event), and quantify the potential impact of such scenarios on the portfolio. The goal of this paper is to propose an approach for incorporating prevailing market conditions in stress scenario design and in the estimation of VaR, so that they provide more accurate and realistic insights about portfolio risk over the near term. The proposed approach is based on historical data, where historical observations of market changes are given more weight if the corresponding period in history is "more similar" to the prevailing market conditions. Clusters of market conditions are identified using a Machine Learning approach called Variational Inference (VI), such that within each cluster future changes in portfolio value are similar. The VI-based algorithm uses optimization techniques to obtain analytical approximations of the posterior probability density of cluster assignments (market regimes) and of the probabilities of different outcomes for changes in portfolio value. The volatile Covid-related period around 2020 is used to illustrate the performance of the proposed approach, and in particular to show how VaR and stress scenarios adapt quickly to changing market conditions. Another advantage of the proposed approach is that the classification of market conditions into clusters can provide useful insights about portfolio performance under different market conditions. ...
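
A rough stand-in for this pipeline can be built with scikit-learn's `BayesianGaussianMixture`, which is fitted by variational inference. In the sketch below, regime posteriors are computed from a toy volatility feature and historical P&L observations are reweighted by the similarity of their regime posterior to today's; the feature set and the weighting rule are illustrative assumptions, not the paper's specification.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(1)

# Hypothetical inputs: daily portfolio P&L changes and a market-condition
# feature (a rolling volatility proxy); the paper uses richer features.
pnl = rng.standard_normal(1500) * (1 + 0.5 * np.sin(np.arange(1500) / 100))
vol = np.abs(pnl)
feat = np.convolve(vol, np.ones(20) / 20, mode="same").reshape(-1, 1)

# Variational inference over market regimes (sklearn's VI-fitted mixture).
bgm = BayesianGaussianMixture(n_components=5, random_state=0).fit(feat)
resp = bgm.predict_proba(feat)        # posterior regime probabilities per day

# Weight history by similarity of each day's regime posterior to today's.
today = resp[-1]
w = resp @ today
w /= w.sum()

# Weighted empirical 95% VaR: loss level where weighted tail mass reaches 5%.
order = np.argsort(pnl)
cum = np.cumsum(w[order])
var95 = pnl[order][np.searchsorted(cum, 0.05)]
print(f"Condition-weighted 95% VaR: {var95:.3f}")
```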

September 12, 2024 · 3 min · Research Team

Adaptive Multilevel Stochastic Approximation of the Value-at-Risk

ArXiv ID: 2408.06531 · Authors: Unknown

Abstract: Crépey, Frikha, and Louzi (2023) introduced a multilevel stochastic approximation scheme to compute the value-at-risk of a financial loss that is only simulatable by Monte Carlo. The optimal complexity of the scheme is in $O(\varepsilon^{-5/2})$, $\varepsilon > 0$ being a prescribed accuracy, which is suboptimal when compared to the canonical multilevel Monte Carlo performance. This suboptimality stems from the discontinuity of the Heaviside function involved in the biased stochastic gradient that is recursively evaluated to derive the value-at-risk. To mitigate this issue, this paper proposes and analyzes a multilevel stochastic approximation algorithm that adaptively selects the number of inner samples at each level, and proves that its optimal complexity is in $O(\varepsilon^{-2} |\ln \varepsilon|^{5/2})$. Our theoretical analysis is illustrated through numerical experiments. ...
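
The scheme builds on the classical Robbins-Monro stochastic approximation of a quantile, whose update involves exactly the Heaviside indicator mentioned above. The sketch below shows only that single-level recursion on a toy Gaussian loss; the paper's adaptive multilevel machinery (per-level inner-sample selection) is deliberately omitted.

```python
import numpy as np

rng = np.random.default_rng(2)
alpha = 0.975                  # VaR confidence level
xi = 0.0                       # running VaR estimate

# Single-level Robbins-Monro recursion for the alpha-quantile of a
# simulatable loss L; at equilibrium P(L > xi) = 1 - alpha.
for n in range(1, 200_001):
    L = rng.standard_normal()  # stand-in for a nested Monte Carlo loss
    gamma = 1.0 / n ** 0.75    # decreasing step size
    xi += gamma * (float(L > xi) - (1 - alpha))

print(f"SA estimate: {xi:.3f}  vs exact N(0,1) quantile: 1.960")
```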

August 12, 2024 · 2 min · Research Team

Elicitability and identifiability of tail risk measures

ArXiv ID: 2404.14136 · Authors: Unknown

Abstract: Tail risk measures are fully determined by the distribution of the underlying loss beyond its quantile at a certain level, with Value-at-Risk, Expected Shortfall and Range Value-at-Risk being prime examples. They are induced by law-based risk measures, called their generators, evaluated on the tail distribution. This paper establishes joint identifiability and elicitability results for tail risk measures together with the corresponding quantile, provided that their generators are identifiable and elicitable, respectively. As an example, we establish the joint identifiability and elicitability of the tail expectile together with the quantile. The corresponding consistent scores constitute a novel class of weighted scores, nesting the known class of scores of Fissler and Ziegel for the Expected Shortfall together with the quantile. For statistical purposes, our results pave the way to easier model fitting for tail risk measures via regression and the generalized method of moments, but also to model comparison and model validation in terms of established backtesting procedures. ...
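
For context, the sketch below implements two standard scores from this literature: the pinball score, which elicits the quantile (VaR), and what is, to our understanding, the zero-degree-homogeneous Fissler-Ziegel joint score for (VaR, ES) used by, e.g., Patton, Ziegel and Chen (2019). The paper's new weighted scores for tail risk measures such as the tail expectile are not reproduced here.

```python
import numpy as np

def pinball(y, v, alpha):
    """Consistent scoring function for the alpha-quantile (VaR)."""
    return (np.where(y <= v, 1.0, 0.0) - alpha) * (v - y)

def fz0(y, v, e, alpha):
    """A member of the Fissler-Ziegel class of joint (VaR, ES) scores
    (zero-homogeneous form); requires e < v < 0, i.e. losses expressed
    as negative returns."""
    return (-1.0 / (alpha * e)) * np.where(y <= v, v - y, 0.0) \
           + v / e + np.log(-e) - 1.0

y = np.random.default_rng(3).standard_t(df=5, size=1000) * 0.01
v = np.quantile(y, 0.025)               # empirical 2.5% VaR
e = y[y <= v].mean()                    # empirical 2.5% ES
print(pinball(y, v, 0.025).mean(), fz0(y, v, e, 0.025).mean())
```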

April 22, 2024 · 2 min · Research Team

The Boosted Difference of Convex Functions Algorithm for Value-at-Risk Constrained Portfolio Optimization

ArXiv ID: 2402.09194 · Authors: Unknown

Abstract: A highly relevant problem of modern finance is the design of Value-at-Risk (VaR) optimal portfolios. Owing to contemporary financial regulations, banks and other financial institutions are required to use the risk measure to control their credit, market, and operational risks. Despite its practical relevance, the non-convexity induced by VaR constraints in portfolio optimization problems remains a major challenge. To address this complexity more effectively, this paper proposes the use of the Boosted Difference-of-Convex Functions Algorithm (BDCA) to approximately solve a Markowitz-style portfolio selection problem with a VaR constraint. As one of the key contributions, we derive a novel line search framework that allows the application of the algorithm to difference-of-convex (DC) programs where both components are non-smooth. Moreover, we prove that the BDCA converges linearly to a Karush-Kuhn-Tucker point of the problem at hand using the Kurdyka-Lojasiewicz property. We also outline that this result can be generalized to a broader class of piecewise-linear DC programs with linear equality and inequality constraints. In the practical part, extensive numerical experiments conducted according to best practices demonstrate the robustness of the BDCA under challenging constraint settings and adverse initialization. In particular, the algorithm consistently identifies the highest number of feasible solutions even under the most challenging conditions, while other approaches from chance-constrained programming fail completely in these settings. Thanks to the open availability of all data sets and code, this paper further provides a practical guide for transparent and easily reproducible comparisons of VaR-constrained portfolio selection problems in Python. ...
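
The boosted step itself is simple to illustrate. Below is a toy BDCA on a two-dimensional DC function with a closed-form convex subproblem; it shows only the DCA step plus the backtracking line search along the difference direction, not the paper's non-smooth framework or the VaR-constrained portfolio problem.

```python
import numpy as np

a = np.array([0.3, -1.2])                                      # toy data
f = lambda x: 0.5 * np.sum((x - a) ** 2) - np.sum(np.abs(x))   # DC: f = g - h

def bdca(x, iters=50, rho=1e-4):
    """Toy BDCA for f = g - h with g(x) = 0.5||x - a||^2, h(x) = ||x||_1."""
    for _ in range(iters):
        s = np.sign(x)                  # subgradient of h at x
        y = a + s                       # argmin of g(.) - <s, .>, closed form
        d = y - x                       # boosted direction past the DCA point
        t = 2.0
        while t > 1e-8 and f(y + t * d) > f(y) - rho * t**2 * (d @ d):
            t *= 0.5                    # backtrack; t -> 0 recovers plain DCA
        x = y + (t if t > 1e-8 else 0.0) * d
    return x

x_star = bdca(np.zeros(2))
print(x_star, f(x_star))
```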

February 14, 2024 · 2 min · Research Team

Navigating Market Turbulence: Insights from Causal Network Contagion Value at Risk

ArXiv ID: 2402.06032 · Authors: Unknown

Abstract: Accurately defining, measuring and mitigating risk is a cornerstone of financial risk management, especially in the presence of financial contagion. Traditional correlation-based risk assessment methods often struggle under volatile market conditions, particularly in the face of external shocks, highlighting the need for a more robust and invariant predictive approach. This paper introduces the Causal Network Contagion Value at Risk (Causal-NECO VaR), a novel methodology that significantly advances causal inference in financial risk analysis. Embracing a causal network framework, this method adeptly captures and analyses volatility and spillover effects, effectively setting it apart from conventional contagion-based VaR models. Causal-NECO VaR's key innovation lies in its ability to derive directional influences among assets from observational data, thereby offering robust risk predictions that remain invariant to market shocks and systemic changes. A comprehensive simulation study and the application to the Forex market show the robustness of the method. Causal-NECO VaR not only demonstrates predictive accuracy, but also maintains its reliability in unstable financial environments, offering clearer risk assessments even amidst unforeseen market disturbances. This research makes a significant contribution to the field of risk management and financial stability, presenting a causal approach to the computation of VaR. It emphasises the model's superior resilience and invariant predictive power, essential for navigating the complexities of today's ever-evolving financial markets. ...
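
The paper's causal discovery procedure is not reproducible from the abstract, but the overall pipeline (estimate a directed network among assets, then propagate shocks through it to a VaR) can be caricatured. The sketch below uses a simple lagged least-squares (Granger-style) adjacency as an avowed stand-in for the causal step.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy 3-asset system with a known directional chain 0 -> 1 -> 2.
T, k = 2000, 3
r = np.zeros((T, k))
for t in range(1, T):
    e = rng.standard_normal(k) * 0.01
    r[t, 0] = e[0]
    r[t, 1] = 0.6 * r[t - 1, 0] + e[1]
    r[t, 2] = 0.6 * r[t - 1, 1] + e[2]

# Stand-in for the causal-network step: lagged regression coefficients as a
# directed adjacency (Granger-style; the paper's causal discovery is richer).
X, Y = r[:-1], r[1:]
A = np.linalg.lstsq(X, Y, rcond=None)[0].T      # A[i, j]: effect of j on i

# Contagion VaR: propagate bootstrapped shock vectors through the network.
n = 50_000
shocks = r[rng.integers(1, T, size=n)]
next_day = shocks @ A.T + rng.standard_normal((n, k)) * 0.01
port = next_day.mean(axis=1)                    # equal-weight portfolio
print(f"Network-contagion 1% VaR: {np.quantile(port, 0.01):.4f}")
```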

February 8, 2024 · 2 min · Research Team

Deep Generative Modeling for Financial Time Series with Application in VaR: A Comparative Review

ArXiv ID: 2401.10370 · Authors: Unknown

Abstract: In the financial services industry, forecasting the risk factor distribution conditional on the history and the current market environment is the key to market risk modeling in general and to value-at-risk (VaR) modeling in particular. As one of the most widely adopted VaR models in commercial banks, historical simulation (HS) uses the empirical distribution of daily returns in a historical window as the forecast distribution of risk factor returns on the next day. The objective of financial time series generation is to produce synthetic data paths with good variety and with distribution and dynamics similar to the original historical data. In this paper, we apply multiple existing deep generative methods (e.g., CGAN, CWGAN, Diffusion, and Signature WGAN) for conditional time series generation, and we propose and test two new methods for conditional multi-step time series generation, namely Encoder-Decoder CGAN and Conditional TimeVAE. Furthermore, we introduce a comprehensive framework with a set of KPIs to measure the quality of the generated time series for financial modeling. The KPIs cover distribution distance, autocorrelation and backtesting. All models (HS, parametric and neural networks) are tested on both historical USD yield curve data and additional data simulated from GARCH and CIR processes. The study shows that the top-performing models are the HS, GARCH and CWGAN models. Future research directions in this area are also discussed. ...
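
The historical simulation baseline that the study highlights is essentially a one-liner, shown below for reference; the window length and VaR level are conventional choices, not values taken from the paper.

```python
import numpy as np

def hs_var(returns, window=250, alpha=0.01):
    """Plain historical simulation: the empirical alpha-quantile of the last
    `window` daily returns is the next day's VaR forecast."""
    r = np.asarray(returns)[-window:]
    return np.quantile(r, alpha)

rng = np.random.default_rng(5)
rets = rng.standard_t(df=4, size=1000) * 0.01   # toy daily returns
print(f"1-day 1% HS VaR: {hs_var(rets):.4f}")
```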

January 18, 2024 · 3 min · Research Team