
Informative Risk Measures in the Banking Industry: A Proposal based on the Magnitude-Propensity Approach

Informative Risk Measures in the Banking Industry: A Proposal based on the Magnitude-Propensity Approach ArXiv ID: 2511.21556 “View on arXiv” Authors: Michele Bonollo, Martino Grasselli, Gianmarco Mori, Havva Nilsu Oz Abstract Despite decades of research in risk management, most of the literature has focused on scalar risk measures (such as Value-at-Risk and Expected Shortfall). While such scalar measures provide compact and tractable summaries, their informative value is poor because they miss the intrinsic multivariate nature of risk. To contribute to a paradigmatic enhancement, and building on recent theoretical work by Faugeras and Pagès (2024), we propose a novel multivariate representation of risk that better reflects the structure of potential portfolio losses, while maintaining desirable properties of interpretability and analytical coherence. The proposed framework extends the classical frequency-severity approach and provides a more comprehensive characterization of extreme events. Several empirical applications based on real-world data demonstrate the feasibility, robustness and practical relevance of the methodology, suggesting its potential for both regulatory and managerial applications. ...
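
The frequency-severity flavour of the idea can be pictured with a minimal two-number summary: instead of a single quantile, report the pair (propensity of a loss, typical size of the loss when it occurs). The sketch below is only an illustration of that general idea on a simulated loss sample, with a zero threshold and a naive conditional mean; it is not necessarily the specific construction of Faugeras and Pagès (2024) or of the paper.

```python
import numpy as np

def magnitude_propensity(losses, threshold=0.0):
    """Two-number risk summary: (propensity, magnitude).

    propensity: empirical probability that the loss exceeds `threshold`
    magnitude:  average loss size given that it exceeds `threshold`

    Illustrative only; not necessarily the paper's exact definition.
    """
    losses = np.asarray(losses, dtype=float)
    exceed = losses > threshold
    propensity = exceed.mean()
    magnitude = losses[exceed].mean() if exceed.any() else 0.0
    return propensity, magnitude

# Toy portfolio P&L: small gains most of the time, occasional heavy losses.
rng = np.random.default_rng(0)
pnl = rng.normal(0.5, 1.0, 100_000) - rng.pareto(3.0, 100_000) * (rng.random(100_000) < 0.05)
loss = -pnl
p, m = magnitude_propensity(loss)
var_99 = np.quantile(loss, 0.99)
print(f"propensity={p:.3f}, magnitude={m:.3f}, VaR99={var_99:.3f}")
```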

November 26, 2025 · 2 min · Research Team

Lévy-stable scaling of risk and performance functionals

Lévy-stable scaling of risk and performance functionals ArXiv ID: 2511.07834 “View on arXiv” Authors: Dmitrii Vlasiuk Abstract We develop a finite-horizon model in which liquid-asset returns exhibit Lévy-stable scaling on a data-driven window [tau_UV, tau_IR] and aggregate into a finite-variance regime outside. The window and the tail index alpha are identified from the log-log slope of the central body and a two-segment fit of scale versus horizon. With an anchor horizon tau_0, we derive horizon-correct formulas for Value-at-Risk, Expected Shortfall, Sharpe and Information ratios, Kelly under a Value-at-Risk constraint, and one-step drawdown, where each admits a closed-form Gaussian-bias term driven by the exponent gap (1/alpha - 1/2). The implementation is nonparametric up to alpha and fixed tail quantiles. The formulas are reproducible across horizons on the Lévy window. ...
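
The horizon-scaling logic is easy to make concrete: under Gaussian aggregation a quantile scales like (tau/tau_0)^(1/2), while on the Lévy window it scales like (tau/tau_0)^(1/alpha), so the exponent gap (1/alpha - 1/2) drives the bias of a square-root-of-time rule. The code below is a minimal illustration under assumed parameter values, not the paper's estimation procedure.

```python
import numpy as np

def scale_var(var_anchor, tau, tau0, alpha):
    """Scale an anchor-horizon VaR to horizon `tau` with exponent 1/alpha.

    alpha = 2 recovers the Gaussian square-root-of-time rule.
    """
    return var_anchor * (tau / tau0) ** (1.0 / alpha)

# Assumed inputs for illustration: 1-day 99% VaR of 2.0%, tail index 1.7.
var_1d, alpha = 0.02, 1.7
for tau in (5, 10, 20):
    levy = scale_var(var_1d, tau, 1, alpha)
    gauss = scale_var(var_1d, tau, 1, 2.0)
    print(f"{tau:>2}d horizon: Levy-scaled VaR {levy:.4f}, "
          f"sqrt-of-time VaR {gauss:.4f}, Gaussian bias {gauss - levy:+.4f}")
```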

November 11, 2025 · 2 min · Research Team

Optimized Multi-Level Monte Carlo Parametrization and Antithetic Sampling for Nested Simulations

Optimized Multi-Level Monte Carlo Parametrization and Antithetic Sampling for Nested Simulations ArXiv ID: 2510.18995 “View on arXiv” Authors: Alexandre Boumezoued, Adel Cherchali, Vincent Lemaire, Gilles Pagès, Mathieu Truc Abstract Estimating risk measures such as large loss probabilities and Value-at-Risk is fundamental in financial risk management and often relies on computationally intensive nested Monte Carlo methods. While Multi-Level Monte Carlo (MLMC) techniques and their weighted variants are typically more efficient, their effectiveness tends to deteriorate when dealing with irregular functions, notably indicator functions, which are intrinsic to these risk measures. We address this issue by introducing a novel MLMC parametrization that significantly improves performance in practical, non-asymptotic settings while maintaining theoretical asymptotic guarantees. We also prove that antithetic sampling of MLMC levels enhances efficiency regardless of the regularity of the underlying function. Numerical experiments motivated by the calculation of economic capital in a life insurance context confirm the practical value of our approach for estimating loss probabilities and quantiles, bridging theoretical advances and practical requirements in financial risk estimation. ...
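
As a point of reference for the nested setting, the sketch below contrasts a plain nested Monte Carlo estimate of a large-loss probability P(E[L|X] > c) with a two-level correction that splits the inner samples into two antithetic halves, in the spirit of antithetic MLMC for nested expectations. The toy Gaussian inner/outer model and all parameter values are assumptions for illustration, not the paper's parametrization.

```python
import numpy as np

rng = np.random.default_rng(1)
c = 1.0  # loss threshold

def inner_loss(x, n_inner):
    """Simulate inner losses L | X = x; a toy Gaussian model for illustration."""
    return x + rng.normal(0.0, 1.0, n_inner)

def nested_estimate(n_outer, n_inner):
    """Plain nested MC estimate of P(E[L|X] > c)."""
    x = rng.normal(0.0, 1.0, n_outer)
    hits = [inner_loss(xi, n_inner).mean() > c for xi in x]
    return np.mean(hits)

def antithetic_level_correction(n_outer, n_inner):
    """Level correction: fine indicator minus the average of two half-sample
    ('antithetic') coarse indicators computed from the same inner draws."""
    x = rng.normal(0.0, 1.0, n_outer)
    corr = []
    for xi in x:
        z = inner_loss(xi, n_inner)
        fine = float(z.mean() > c)
        half = n_inner // 2
        coarse = 0.5 * (float(z[:half].mean() > c) + float(z[half:].mean() > c))
        corr.append(fine - coarse)
    return np.mean(corr)

coarse_level = nested_estimate(20_000, 32)
correction = antithetic_level_correction(20_000, 64)
print(f"two-level estimate: {coarse_level + correction:.4f}")
```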

October 21, 2025 · 2 min · Research Team

Minimizing the Value-at-Risk of Loan Portfolio via Deep Neural Networks

Minimizing the Value-at-Risk of Loan Portfolio via Deep Neural Networks ArXiv ID: 2510.07444 “View on arXiv” Authors: Albert Di Wang, Ye Du Abstract Risk management is a prominent issue in peer-to-peer lending. An investor may naturally reduce his risk exposure by diversifying instead of putting all his money on one loan. In that case, an investor may want to minimize the Value-at-Risk (VaR) or Conditional Value-at-Risk (CVaR) of his loan portfolio. We propose a low degree of freedom deep neural network model, DeNN, as well as a high degree of freedom model, DSNN, to tackle the problem. In particular, our models predict not only the default probability of a loan but also the time when it will default. The experiments demonstrate that both models can significantly reduce the portfolio VaRs at different confidence levels, compared to benchmarks. More interestingly, the low degree of freedom model, DeNN, outperforms DSNN in most scenarios. ...
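
Independently of the specific network architectures (DeNN/DSNN), the downstream quantity being minimized, the VaR of a loan portfolio given predicted default behaviour, can be sketched as follows. The default probabilities, loss-given-default figures, and weights below are toy inputs, and the simulation treats defaults as independent Bernoulli draws, a simplification rather than the paper's model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed per-loan inputs (e.g. produced by a default-prediction model).
pd_hat = np.array([0.02, 0.05, 0.10, 0.03, 0.08])   # predicted default probabilities
lgd = np.array([0.6, 0.5, 0.7, 0.4, 0.55])          # loss given default
weights = np.array([0.3, 0.2, 0.1, 0.25, 0.15])     # portfolio allocation

def portfolio_var(weights, pd_hat, lgd, level=0.95, n_sims=100_000):
    """Monte Carlo VaR of portfolio loss under independent Bernoulli defaults."""
    defaults = rng.random((n_sims, len(pd_hat))) < pd_hat
    losses = defaults.astype(float) @ (weights * lgd)
    return np.quantile(losses, level)

for level in (0.95, 0.99):
    print(f"VaR at {level:.0%}: {portfolio_var(weights, pd_hat, lgd, level):.4f}")
```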

October 8, 2025 · 2 min · Research Team

The Interplay between Utility and Risk in Portfolio Selection

The Interplay between Utility and Risk in Portfolio Selection ArXiv ID: 2509.10351 “View on arXiv” Authors: Leonardo Baggiani, Martin Herdegen, Nazem Khan Abstract We revisit the problem of portfolio selection, where an investor maximizes utility subject to a risk constraint. Our framework is very general and accommodates a wide range of utility and risk functionals, including non-concave utilities such as S-shaped utilities from prospect theory and non-convex risk measures such as Value at Risk. Our main contribution is a novel and complete characterization of well-posedness for utility-risk portfolio selection in one period that takes the interplay between the utility and the risk objectives fully into account. We show that under mild regularity conditions the minimal necessary and sufficient condition for well-posedness is given by a very simple either-or criterion: either the utility functional or the risk functional needs to satisfy the axiom of sensitivity to large losses. This makes it easy to describe well-posedness or ill-posedness for many utility-risk pairs, which we illustrate with a large number of examples. In the special case of expected utility maximization without a risk constraint (but including non-concave utilities), we show that well-posedness is fully characterized by the asymptotic loss-gain ratio, a simple and interpretable quantity that describes the investor’s asymptotic relative weighting of large losses versus large gains. ...
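
The kind of ill-posedness at stake can be seen in a toy one-period example: with an S-shaped utility and a VaR constraint, a binary bet whose loss state has probability below the VaR confidence level can be scaled up indefinitely while the constraint stays slack and expected utility diverges. The payoff, utility specification, and probabilities below are assumptions chosen only to exhibit the phenomenon, not an example taken from the paper.

```python
import numpy as np

def s_shaped_utility(x, loss_aversion=2.0, curvature=0.5):
    """Prospect-theory style S-shaped utility (toy specification)."""
    x = np.asarray(x, dtype=float)
    return np.where(x >= 0, np.abs(x) ** curvature,
                    -loss_aversion * np.abs(x) ** curvature)

def two_point_var(loss_small, loss_big, p_big, level=0.95):
    """VaR at `level` of a loss taking two values; the rare big loss is
    ignored whenever its probability is below 1 - level."""
    return loss_small if (1.0 - p_big) >= level else loss_big

p_loss = 0.01
for scale in (1.0, 100.0, 10_000.0):
    payoff = np.array([scale, -scale])            # gain w.p. 0.99, loss w.p. 0.01
    probs = np.array([1.0 - p_loss, p_loss])
    eu = float(probs @ s_shaped_utility(payoff))
    var95 = two_point_var(-scale, scale, p_loss)  # losses are -payoff
    print(f"scale={scale:>8}: expected utility={eu:10.2f}, 95% VaR={var95:>10}")
```

Expected utility grows without bound while the 95% VaR never binds, so the problem has no optimizer; ruling this out is exactly what a sensitivity-to-large-losses type condition is meant to do.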

September 12, 2025 · 2 min · Research Team

Dependency Network-Based Portfolio Design with Forecasting and VaR Constraints

Dependency Network-Based Portfolio Design with Forecasting and VaR Constraints ArXiv ID: 2507.20039 “View on arXiv” Authors: Zihan Lin, Haojie Liu, Randall R. Rojas Abstract This study proposes a novel portfolio optimization framework that integrates statistical social network analysis with time series forecasting and risk management. Using daily stock data from the S&P 500 (2020-2024), we construct dependency networks via Vector Autoregression (VAR) and Forecast Error Variance Decomposition (FEVD), transforming influence relationships into a cost-based network. Specifically, FEVD breaks down the VAR’s forecast error variance to quantify how much each stock’s shocks contribute to another’s uncertainty, information we then invert to form influence-based edge weights in our network. By applying the Minimum Spanning Tree (MST) algorithm, we extract the core inter-stock structure and identify central stocks through degree centrality. A dynamic portfolio is constructed using the top-ranked stocks, with capital allocated based on Value at Risk (VaR). To refine stock selection, we incorporate forecasts from ARIMA and Neural Network Autoregressive (NNAR) models. Trading simulations over a one-year period demonstrate that the MST-based strategies outperform a buy-and-hold benchmark, with the tuned NNAR-enhanced strategy achieving a 63.74% return versus 18.00% for the benchmark. Our results highlight the potential of combining network structures, predictive modeling, and risk metrics to improve adaptive financial decision-making. ...
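
The network-construction step can be sketched directly: given a spillover matrix of FEVD contributions, invert influence into cost, extract a minimum spanning tree, and rank stocks by degree centrality. The spillover matrix below is synthetic and networkx is an assumed library choice; the paper builds the matrix from a fitted VAR via FEVD.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(7)
tickers = ["AAPL", "MSFT", "JPM", "XOM", "PG", "NVDA"]

# Synthetic stand-in for an FEVD spillover matrix: entry (i, j) is the share of
# stock i's forecast-error variance attributed to shocks in stock j.
spill = rng.random((len(tickers), len(tickers)))
np.fill_diagonal(spill, 0.0)
spill /= spill.sum(axis=1, keepdims=True)

# Turn influence into cost (stronger influence -> cheaper edge) and build a graph.
G = nx.Graph()
for i, a in enumerate(tickers):
    for j, b in enumerate(tickers):
        if i < j:
            influence = spill[i, j] + spill[j, i]
            G.add_edge(a, b, weight=1.0 / influence)

# The MST keeps the core dependency structure; rank nodes by degree centrality.
mst = nx.minimum_spanning_tree(G, weight="weight")
centrality = nx.degree_centrality(mst)
top = sorted(centrality, key=centrality.get, reverse=True)[:3]
print("central stocks:", top)
```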

July 26, 2025 · 2 min · Research Team

Optimization Method of Multi-factor Investment Model Driven by Deep Learning for Risk Control

Optimization Method of Multi-factor Investment Model Driven by Deep Learning for Risk Control ArXiv ID: 2507.00332 “View on arXiv” Authors: Ruisi Li, Xinhui Gu Abstract We propose a deep-learning-driven optimization method for multi-factor investment models aimed at risk control. By constructing a deep learning model based on Long Short-Term Memory (LSTM) networks and combining it with a multi-factor investment model, we optimize factor selection and weight determination to enhance the model’s adaptability and robustness to market changes. Empirical analysis shows that the LSTM model is significantly superior to the benchmark model on risk-control indicators such as maximum drawdown, Sharpe ratio and Value-at-Risk (VaR), and exhibits strong adaptability and robustness in different market environments. Furthermore, the model is applied to an actual portfolio to optimize asset allocation, which significantly improves portfolio performance, provides investors with a more scientific and accurate basis for investment decisions, and effectively balances return and risk. ...
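
The risk-control indicators used for evaluation (maximum drawdown, Sharpe ratio, historical VaR) are standard and can be computed from a backtest return series as sketched below; the return series here is simulated, and the metric definitions follow the usual conventions rather than anything specific to the paper.

```python
import numpy as np

def max_drawdown(returns):
    """Largest peak-to-trough decline of the cumulative equity curve."""
    equity = np.cumprod(1.0 + np.asarray(returns))
    peaks = np.maximum.accumulate(equity)
    return float(np.max(1.0 - equity / peaks))

def sharpe_ratio(returns, periods_per_year=252):
    """Annualized Sharpe ratio (zero risk-free rate assumed)."""
    r = np.asarray(returns)
    return float(r.mean() / r.std(ddof=1) * np.sqrt(periods_per_year))

def historical_var(returns, level=0.99):
    """Historical VaR of the one-period loss at the given confidence level."""
    return float(np.quantile(-np.asarray(returns), level))

rng = np.random.default_rng(3)
daily = rng.normal(0.0004, 0.01, 750)  # simulated daily strategy returns
print(f"max drawdown:  {max_drawdown(daily):.2%}")
print(f"Sharpe ratio:  {sharpe_ratio(daily):.2f}")
print(f"99% daily VaR: {historical_var(daily):.2%}")
```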

July 1, 2025 · 2 min · Research Team

Copula Analysis of Risk: A Multivariate Risk Analysis for VaR and CoVaR using Copulas and DCC-GARCH

Copula Analysis of Risk: A Multivariate Risk Analysis for VaR and CoVaR using Copulas and DCC-GARCH ArXiv ID: 2505.06950 “View on arXiv” Authors: Aryan Singh, Paul O Reilly, Daim Sharif, Patrick Haughey, Eoghan McCarthy, Sathvika Thorali Suresh, Aakhil Anvar, Adarsh Sajeev Kumar Abstract A multivariate risk analysis for VaR and CVaR using different copula families is performed on historical financial time series fitted with DCC-GARCH models. A theoretical background is provided alongside a comparison of goodness-of-fit across different copula families to estimate the validity and effectiveness of approaches discussed. ...
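
A stripped-down version of the simulation step looks like this: sample dependent uniforms from a Gaussian copula, map them through heavy-tailed marginals standing in for GARCH-filtered return distributions, and read portfolio VaR and CVaR off the simulated loss distribution. The correlation matrix, Student-t marginals, and equal weights are assumptions for illustration; the paper fits DCC-GARCH marginals and compares several copula families.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
n_sims, weights = 200_000, np.array([0.5, 0.5])

# Gaussian copula: correlated normals -> uniforms via the normal CDF.
corr = np.array([[1.0, 0.6], [0.6, 1.0]])
z = rng.multivariate_normal(np.zeros(2), corr, size=n_sims)
u = stats.norm.cdf(z)

# Heavy-tailed marginals (stand-ins for GARCH-standardized return distributions).
returns = np.column_stack([
    stats.t.ppf(u[:, 0], df=4) * 0.010,
    stats.t.ppf(u[:, 1], df=6) * 0.012,
])

losses = -(returns @ weights)
level = 0.99
var = np.quantile(losses, level)
cvar = losses[losses >= var].mean()
print(f"99% VaR: {var:.4f}, 99% CVaR: {cvar:.4f}")
```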

May 11, 2025 · 1 min · Research Team

Bridging Econometrics and AI: VaR Estimation via Reinforcement Learning and GARCH Models

Bridging Econometrics and AI: VaR Estimation via Reinforcement Learning and GARCH Models ArXiv ID: 2504.16635 “View on arXiv” Authors: Fredy Pokou, Jules Sadefo Kamdem, François Benhmad Abstract In an environment of increasingly volatile financial markets, the accurate estimation of risk remains a major challenge. Traditional econometric models, such as GARCH and its variants, are based on assumptions that are often too rigid to adapt to the complexity of current market dynamics. To overcome these limitations, we propose a hybrid framework for Value-at-Risk (VaR) estimation, combining GARCH volatility models with deep reinforcement learning. Our approach incorporates directional market forecasting using the Double Deep Q-Network (DDQN) model, treating the task as an imbalanced classification problem. This architecture enables the dynamic adjustment of risk-level forecasts according to market conditions. Empirical validation on daily Eurostoxx 50 data covering periods of crisis and high volatility shows a significant improvement in the accuracy of VaR estimates, as well as a reduction in the number of breaches and in capital requirements, while respecting regulatory risk thresholds. The model’s ability to adjust risk levels in real time reinforces its relevance to modern, proactive risk management. ...
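
Schematically, the hybrid can be read as a GARCH(1,1) volatility recursion whose VaR output is scaled up or down by a directional signal from the learning agent. The sketch below implements only that mechanical skeleton with placeholder signals; the DDQN policy, its state representation, and the adjustment rule are the paper's contribution and are not reproduced here, so the `adjustment` map is a hypothetical stand-in.

```python
import numpy as np
from scipy.stats import norm

def garch_var_path(returns, omega, alpha, beta, level=0.99, signals=None):
    """One-step-ahead VaR from a GARCH(1,1) recursion, optionally scaled by a
    directional signal in {-1, 0, +1} (e.g. from an RL classifier)."""
    adjustment = {-1: 1.2, 0: 1.0, 1: 0.9}  # hypothetical scaling rule
    z = norm.ppf(level)
    sigma2 = np.var(returns)
    var_path = []
    for t, r in enumerate(returns):
        scale = adjustment[int(signals[t])] if signals is not None else 1.0
        var_path.append(scale * z * np.sqrt(sigma2))  # forecast made before observing r
        sigma2 = omega + alpha * r ** 2 + beta * sigma2
    return np.array(var_path)

rng = np.random.default_rng(5)
rets = rng.normal(0.0, 0.012, 500)       # simulated daily returns
sigs = rng.choice([-1, 0, 1], size=500)  # placeholder directional signals
var99 = garch_var_path(rets, omega=1e-6, alpha=0.08, beta=0.9, signals=sigs)
breaches = np.sum(-rets > var99)         # losses exceeding the forecast VaR
print(f"breaches: {breaches} of {len(rets)} (about {0.01 * len(rets):.0f} expected)")
```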

April 23, 2025 · 2 min · Research Team

Unified GARCH-Recurrent Neural Network in Financial Volatility Forecasting

Unified GARCH-Recurrent Neural Network in Financial Volatility Forecasting ArXiv ID: 2504.09380 “View on arXiv” Authors: Unknown Abstract In this study, we develop a unified volatility modeling framework that embeds GARCH dynamics directly within recurrent neural networks. We propose two interpretable hybrid architectures, GARCH-GRU and GARCH-LSTM, that integrate the GARCH(1,1) volatility update into the multiplicative gating structure of GRU and LSTM cells. This unified design preserves economically meaningful GARCH parameters while enabling the networks to learn nonlinear temporal dependencies in financial time series. Comprehensive out-of-sample evaluations across major U.S. equity indices show that both models consistently outperform classical GARCH specifications, pipeline-style hybrids, and neural baselines such as the Transformer across multiple metrics (MSE, MAE, SMAPE, and out-of-sample R²). Within this family, the GARCH-GRU achieves the strongest accuracy-efficiency tradeoff, training nearly three times faster than GARCH-LSTM while maintaining comparable or superior forecasting accuracy under normal market conditions and delivering stable and economically plausible parameter estimates. The advantages persist during extreme market turbulence. In the COVID-19 stress period, both architectures retain superior forecasting accuracy and deliver well-calibrated 99 percent Value-at-Risk forecasts, achieving lower violation ratios and competitive Pinball losses relative to all benchmarks. ...
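
One way to picture the coupling is a recurrent cell whose hidden update is gated, GRU-style, while a GARCH(1,1) recursion on the return innovation supplies a conditional-variance input to the gate. The numpy cell below is a schematic interpretation of that idea under assumed shapes and initialization; the paper's GARCH-GRU and GARCH-LSTM architectures embed the recursion into the gates in their own specific way.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GarchGatedCell:
    """Schematic recurrent cell: a GARCH(1,1) variance recursion on the return
    innovation modulates a GRU-like multiplicative gate on the hidden state."""

    def __init__(self, hidden, omega=1e-6, alpha=0.08, beta=0.9, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0, 0.1, (hidden, hidden + 2))  # acts on [h, r, sigma2]
        self.omega, self.alpha, self.beta = omega, alpha, beta

    def run(self, returns):
        h = np.zeros(self.W.shape[0])
        sigma2, vols = np.var(returns), []
        for r in returns:
            sigma2 = self.omega + self.alpha * r ** 2 + self.beta * sigma2  # GARCH(1,1)
            x = np.concatenate([h, [r, sigma2]])
            gate = sigmoid(self.W @ x)                 # GRU-style update gate
            h = gate * np.tanh(self.W @ x) + (1 - gate) * h
            vols.append(np.sqrt(sigma2))
        return h, np.array(vols)

cell = GarchGatedCell(hidden=8)
rets = np.random.default_rng(2).normal(0.0, 0.01, 250)
h_final, vol_path = cell.run(rets)
print("final hidden norm:", float(np.linalg.norm(h_final)), " last vol:", vol_path[-1])
```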

April 13, 2025 · 2 min · Research Team