
Insights into Tail-Based and Order Statistics

ArXiv ID: 2511.04784 · Authors: Hamidreza Maleki Almani · Abstract: Heavy-tailed phenomena appear across diverse domains, from wealth and firm sizes in economics to network traffic, biological systems, and physical processes, all characterized by the disproportionate influence of extreme values. These distributions challenge classical statistical models, as their tails decay too slowly for conventional approximations to hold. Among their key descriptive measures are quantile contributions, which quantify the proportion of a total quantity (such as income, energy, or risk) attributed to observations above a given quantile threshold. This paper presents a theoretical study of the quantile contribution statistic and its relationship with order statistics. We derive a closed-form expression for the joint cumulative distribution function (CDF) of order statistics and, based on it, obtain an explicit CDF for quantile contributions applicable to small samples. We then investigate the asymptotic behavior of these contributions as the sample size increases, establishing the asymptotic normality of the numerator and characterizing the limiting distribution of the quantile contribution. Finally, simulation studies illustrate the convergence properties and empirical accuracy of the theoretical results, providing a foundation for applying quantile contributions in the analysis of heavy-tailed data. ...
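The statistic at the center of this paper is easy to illustrate numerically. A minimal sketch, assuming the empirical version of the quantile contribution (share of the total sum due to observations above the q-th sample quantile) on a toy Pareto sample; the function name and parameters are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def quantile_contribution(x, q):
    """Empirical quantile contribution: share of the total sum due to
    observations strictly above the q-th sample quantile."""
    threshold = np.quantile(x, q)
    return x[x > threshold].sum() / x.sum()

# Pareto with tail index 1.5: heavy-tailed, finite mean, infinite variance
x = rng.pareto(1.5, size=100_000) + 1.0
top_share = quantile_contribution(x, 0.99)  # share of the total held by the top 1%
```

For a tail index of 1.5 the top 1% typically holds a large multiple of the 1% share a thin-tailed model would suggest, which is exactly the small-sample sensitivity the paper's CDF results address.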

November 6, 2025 · 2 min · Research Team

Optimized Multi-Level Monte Carlo Parametrization and Antithetic Sampling for Nested Simulations

ArXiv ID: 2510.18995 · Authors: Alexandre Boumezoued, Adel Cherchali, Vincent Lemaire, Gilles Pagès, Mathieu Truc · Abstract: Estimating risk measures such as large loss probabilities and Value-at-Risk is fundamental in financial risk management and often relies on computationally intensive nested Monte Carlo methods. While Multi-Level Monte Carlo (MLMC) techniques and their weighted variants are typically more efficient, their effectiveness tends to deteriorate when dealing with irregular functions, notably indicator functions, which are intrinsic to these risk measures. We address this issue by introducing a novel MLMC parametrization that significantly improves performance in practical, non-asymptotic settings while maintaining theoretical asymptotic guarantees. We also prove that antithetic sampling of MLMC levels enhances efficiency regardless of the regularity of the underlying function. Numerical experiments motivated by the calculation of economic capital in a life insurance context confirm the practical value of our approach for estimating loss probabilities and quantiles, bridging theoretical advances and practical requirements in financial risk estimation. ...
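To convey the nested-simulation setting, the toy model below (a Gaussian stand-in, not the paper's insurance setting or its MLMC parametrization) estimates a loss probability P(E[(S+Z)^2 | S] > tau) and shows where antithetic inner sampling enters; note the paper's antithetic construction acts across MLMC levels, whereas this single-level sketch only conveys the variance-reduction idea:

```python
import numpy as np

rng = np.random.default_rng(1)

def nested_prob(tau, n_outer=20_000, n_inner=32, antithetic=True):
    """Nested Monte Carlo estimate of P(E[(S+Z)^2 | S] > tau) for
    standard-normal S, Z; the exact value is P(S^2 + 1 > tau)."""
    s = rng.standard_normal(n_outer)
    z = rng.standard_normal((n_outer, n_inner))
    payoff = (s[:, None] + z) ** 2
    if antithetic:
        # pair each inner draw z with -z before averaging
        payoff = 0.5 * (payoff + (s[:, None] - z) ** 2)
    inner_mean = payoff.mean(axis=1)
    # indicator function applied to the noisy inner estimate
    return (inner_mean > tau).mean()

p_hat = nested_prob(tau=2.0)  # exact: P(S^2 + 1 > 2) = P(|S| > 1) ≈ 0.3173
```

The indicator's irregularity is visible here: noise in `inner_mean` distorts the threshold crossing, which is the difficulty the paper's parametrization targets.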

October 21, 2025 · 2 min · Research Team

Coherent estimation of risk measures

ArXiv ID: 2510.05809 · Authors: Martin Aichele, Igor Cialenco, Damian Jelito, Marcin Pitera · Abstract: We develop a statistical framework for risk estimation, inspired by the axiomatic theory of risk measures. Coherent risk estimators – functionals of P&L samples inheriting the economic properties of risk measures – are defined and characterized through robust representations linked to $L$-estimators. The framework provides a canonical methodology for constructing estimators with sound financial and statistical properties, unifying risk measure theory, principles for capital adequacy, and practical statistical challenges in market risk. A numerical study illustrates the approach, focusing on expected shortfall estimation under both i.i.d. and overlapping samples relevant for regulatory FRTB model applications. ...
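The link to $L$-estimators is concrete: the standard empirical expected shortfall is an average of upper order statistics, i.e., an $L$-estimator with weights concentrated on the tail. A minimal sketch (one common convention; names illustrative):

```python
import numpy as np

def expected_shortfall(losses, alpha=0.975):
    """L-estimator of expected shortfall: the average of the largest
    ceil((1 - alpha) * n) order statistics of the loss sample."""
    x = np.sort(np.asarray(losses, dtype=float))
    k = int(np.ceil((1 - alpha) * len(x)))
    return x[-k:].mean()

sample = np.array([1.0, -0.5, 3.2, 0.7, 5.1, -1.2, 2.4, 0.1])
es = expected_shortfall(sample, alpha=0.75)  # mean of the two largest: (5.1 + 3.2) / 2 = 4.15
```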

October 7, 2025 · 2 min · Research Team

Uncertainty-Aware Strategies: A Model-Agnostic Framework for Robust Financial Optimization through Subsampling

ArXiv ID: 2506.07299 · Authors: Hans Buehler, Blanka Horvath, Yannick Limmer, Thorsten Schmidt · Abstract: This paper addresses the challenge of model uncertainty in quantitative finance, where decisions in portfolio allocation, derivative pricing, and risk management rely on estimating stochastic models from limited data. In practice, the unavailability of the true probability measure forces reliance on an empirical approximation, and even small misestimations can lead to significant deviations in decision quality. Building on the framework of Klibanoff et al. (2005), we enhance the conventional objective (whether expected utility in an investing context or a hedging metric) by superimposing an outer “uncertainty measure”, motivated by traditional monetary risk measures, on the space of models. In scenarios where a natural model distribution is lacking or Bayesian methods are impractical, we propose an ad hoc subsampling strategy, analogous to bootstrapping in statistical finance and related to mini-batch sampling in deep learning, to approximate model uncertainty. To address the quadratic memory demands of naive implementations, we also present an adapted stochastic gradient descent algorithm that enables efficient parallelization. Through analytical, simulated, and empirical studies (including multi-period, real-data, and high-dimensional examples), we demonstrate that uncertainty measures outperform traditional mixture-of-measures strategies, and that our model-agnostic, subsampling-based approach not only enhances robustness against model risk but also achieves performance comparable to more elaborate Bayesian methods. ...
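A rough sketch of the subsampling idea: evaluate the objective on many random subsamples of the data, then superimpose an outer uncertainty measure over the resulting values. The mean-return objective and the "mean minus lambda times standard deviation" outer measure below are illustrative stand-ins, not the paper's specification:

```python
import numpy as np

rng = np.random.default_rng(2)

def uncertainty_adjusted(returns, weights, n_sub=200, frac=0.5, lam=1.0):
    """Evaluate a mean-return objective on random subsamples of the data,
    then apply an outer uncertainty measure over the subsample values:
    here, mean minus lam times standard deviation."""
    port = returns @ weights                     # per-period portfolio returns
    n = len(port)
    vals = np.array([
        port[rng.choice(n, size=int(frac * n), replace=False)].mean()
        for _ in range(n_sub)
    ])
    return vals.mean() - lam * vals.std()

R = rng.normal(0.001, 0.02, size=(500, 3))       # toy return panel
w = np.array([0.5, 0.3, 0.2])
score = uncertainty_adjusted(R, w)               # penalized when subsamples disagree
```

A strategy whose performance is fragile across subsamples is penalized, which is the robustness mechanism the abstract describes.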

June 8, 2025 · 2 min · Research Team

Multi-period Mean-Buffered Probability of Exceedance in Defined Contribution Portfolio Optimization

ArXiv ID: 2505.22121 · Authors: Duy-Minh Dang, Chang Chen · Abstract: We investigate multi-period mean-risk portfolio optimization for long-horizon Defined Contribution plans, focusing on buffered Probability of Exceedance (bPoE), a more intuitive, dollar-based alternative to Conditional Value-at-Risk (CVaR). We formulate both pre-commitment and time-consistent Mean-bPoE and Mean-CVaR portfolio optimization problems under realistic investment constraints (e.g., no leverage, no short selling) and jump-diffusion dynamics. These formulations are naturally framed as bilevel optimization problems, with an outer search over the shortfall threshold and an inner optimization over rebalancing decisions. We establish an equivalence between the pre-commitment formulations through a one-to-one correspondence of their scalarization optimal sets, while showing that no such equivalence holds in the time-consistent setting. We develop provably convergent numerical schemes for the value functions associated with both pre-commitment and time-consistent formulations of these mean-risk control problems. Using nearly a century of market data, we find that time-consistent Mean-bPoE strategies closely resemble their pre-commitment counterparts. In particular, they maintain alignment with investors’ preferences for a minimum acceptable terminal wealth level, unlike time-consistent Mean-CVaR, which often leads to counterintuitive control behavior. We further show that bPoE, as a strictly tail-oriented measure, prioritizes guarding against catastrophic shortfalls while allowing meaningful upside exposure, making it especially appealing for long-horizon wealth security. These findings highlight bPoE’s practical advantages for Defined Contribution investment planning. ...
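One standard representation of bPoE (due to Mafusalov and Uryasev) is a one-dimensional convex minimization, bPoE_tau(X) = min over a >= 0 of E[max(a(X - tau) + 1, 0)]. The sketch below approximates this empirically by a grid search over a; this is illustrative and not the paper's bilevel numerical scheme:

```python
import numpy as np

def bpoe(x, tau, grid=np.logspace(-3, 3, 2001)):
    """Empirical buffered probability of exceedance via the standard
    convex representation bPoE_tau(X) = min_{a >= 0} E[max(a*(X - tau) + 1, 0)],
    approximated by a grid search over a (a = 0 gives the value 1)."""
    x = np.asarray(x, dtype=float)
    vals = [np.maximum(a * (x - tau) + 1.0, 0.0).mean() for a in grid]
    return min(1.0, min(vals))

x = np.array([-1.0, 0.0, 1.0, 2.0])
p_exceed = (x > 1.5).mean()     # plain probability of exceedance: 0.25
p_buffer = bpoe(x, 1.5)         # buffered version; always >= plain PoE
```

Here the buffer shows up directly: the average of the two largest observations is exactly 1.5, so the bPoE at threshold 1.5 counts half the sample, not just the single point above it.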

May 28, 2025 · 2 min · Research Team

Modern Computational Methods in Reinsurance Optimization: From Simulated Annealing to Quantum Branch & Bound

ArXiv ID: 2504.16530 · Authors: George Woodman, Ruben S. Andrist, Thomas Häner, Damian S. Steiger, Martin J. A. Schuetz, Helmut G. Katzgraber, Marcin Detyniecki · Abstract: We propose and implement modern computational methods to enhance catastrophe excess-of-loss reinsurance contracts in practice. The underlying optimization problem involves attachment points, limits, and reinstatement clauses, and the objective is to maximize the expected profit while considering risk measures and regulatory constraints. We study the problem formulation for two very different approaches, paving the way for practitioners: a local search optimizer using simulated annealing, which handles realistic constraints, and a branch & bound approach exploring the potential of a future speedup via quantum branch & bound. On the one hand, local search effectively generates contract structures within several constraints, proving useful for complex treaties that have multiple local optima. On the other hand, although our branch & bound formulation only confirms that solving the full problem with a future quantum computer would require a stronger, less expensive bound and substantial hardware improvements, we believe that the designed application-specific bound is sufficiently strong to serve as a basis for further works. Concisely, we provide insurance practitioners with a robust numerical framework for contract optimization that handles realistic constraints today, as well as an outlook and initial steps towards an approach which could leverage quantum computers in the future. ...
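The local-search component can be sketched generically. The toy objective below (mean retained loss, plus a loaded premium for the ceded layer, plus a CVaR tail penalty) and the simple neighborhood moves over (attachment, limit) are illustrative stand-ins; the paper's treaties additionally involve reinstatement clauses and regulatory constraints:

```python
import numpy as np

rng = np.random.default_rng(3)
losses = rng.pareto(2.0, size=20_000) * 10.0    # simulated gross losses

def objective(attach, limit, rate=1.1, lam=0.5):
    """Toy cedent objective (lower is better): mean retained loss, plus a
    loaded premium for the ceded layer, plus a CVaR(99%) tail penalty."""
    ceded = np.clip(losses - attach, 0.0, limit)
    retained = losses - ceded
    premium = rate * ceded.mean()
    var99 = np.quantile(retained, 0.99)
    cvar99 = retained[retained >= var99].mean()
    return retained.mean() + premium + lam * cvar99

def anneal(n_iter=500, t0=1.0, step=5.0):
    """Simulated annealing over (attachment, limit) with local moves."""
    a, l = 50.0, 50.0
    cur = objective(a, l)
    best = (a, l, cur)
    for k in range(n_iter):
        t = t0 * (1.0 - k / n_iter) + 1e-9       # linear cooling schedule
        a2 = max(0.0, a + step * rng.choice([-1.0, 1.0]))
        l2 = max(step, l + step * rng.choice([-1.0, 1.0]))
        val = objective(a2, l2)
        # accept improvements always, worse moves with Boltzmann probability
        if val < cur or rng.random() < np.exp((cur - val) / t):
            a, l, cur = a2, l2, val
            if cur < best[2]:
                best = (a, l, cur)
    return best

a_star, l_star, obj_star = anneal()
```

Occasionally accepting worse moves is what lets the search escape the multiple local optima the abstract mentions for complex treaties.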

April 23, 2025 · 2 min · Research Team

Optimal payoff under Bregman-Wasserstein divergence constraints

ArXiv ID: 2411.18397 · Authors: Unknown · Abstract: We study optimal payoff choice for an expected utility maximizer under the constraint that their payoff is not allowed to deviate “too much” from a given benchmark. We solve this problem when the deviation is assessed via a Bregman-Wasserstein (BW) divergence, generated by a convex function $φ$. Unlike the Wasserstein distance (i.e., when $φ(x)=x^2$), the inherent asymmetry of the BW divergence makes it possible to penalize positive deviations differently from negative ones. As a main contribution, we provide the optimal payoff in this setting. Numerical examples illustrate that the choice of $φ$ allows the payoff choice to be better aligned with the objectives of investors. ...
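For reference, the pointwise object generating the BW divergence is the classical Bregman divergence of a differentiable convex $φ$ (the lift to distributions via a coupling follows the paper):

```latex
B_{\varphi}(x, y) \;=\; \varphi(x) - \varphi(y) - \varphi'(y)\,(x - y),
```

and the quadratic case recovers symmetry: for $\varphi(x) = x^2$,

```latex
B_{\varphi}(x, y) \;=\; x^2 - y^2 - 2y(x - y) \;=\; (x - y)^2,
```

which is the squared distance underlying the Wasserstein case. For non-quadratic $φ$ one generally has $B_{\varphi}(x,y) \neq B_{\varphi}(y,x)$, which is exactly what allows positive and negative deviations from the benchmark to be penalized differently.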

November 27, 2024 · 2 min · Research Team

Diversification quotient based on expectiles

ArXiv ID: 2411.14646 · Authors: Unknown · Abstract: A diversification quotient (DQ) quantifies diversification in stochastic portfolio models based on a family of risk measures. We study DQ based on expectiles, offering a useful alternative to conventional risk measures such as Value-at-Risk (VaR) and Expected Shortfall (ES). The expectile-based DQ admits simple formulas and has a natural connection to the Omega ratio. Moreover, the expectile-based DQ is not affected by small-sample issues faced by VaR-based or ES-based DQ due to the scarcity of tail data. The expectile-based DQ exhibits pseudo-convexity in portfolio weights, allowing gradient descent algorithms for portfolio selection. We show that the corresponding optimization problem can be efficiently solved using linear programming techniques in real-data applications. Explicit formulas for DQ based on expectiles are also derived for elliptical and multivariate regularly varying distribution models. Our findings enhance the understanding of the DQ’s role in financial risk management and highlight its potential to improve portfolio construction strategies. ...
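Expectiles themselves are cheap to compute from data: the alpha-expectile is the unique root of an asymmetric first-order condition, found here by bisection (a minimal sketch; the function name is illustrative and this is not the paper's LP formulation for the DQ):

```python
import numpy as np

def expectile(x, alpha=0.9, tol=1e-10):
    """Empirical alpha-expectile: the unique e solving
    alpha * E[(X - e)_+] = (1 - alpha) * E[(e - X)_+],
    located by bisection (the left side minus the right is decreasing in e)."""
    x = np.asarray(x, dtype=float)
    lo, hi = x.min(), x.max()
    while hi - lo > tol:
        e = 0.5 * (lo + hi)
        g = alpha * np.maximum(x - e, 0).mean() \
            - (1 - alpha) * np.maximum(e - x, 0).mean()
        if g > 0:
            lo = e      # e is still too small
        else:
            hi = e
    return 0.5 * (lo + hi)

x = np.array([1.0, 2.0, 3.0, 4.0])
m = expectile(x, alpha=0.5)   # the 0.5-expectile is the sample mean, 2.5
```

Unlike a VaR or ES estimate, every observation enters this condition, which is why expectile-based quantities avoid the tail-data scarcity issue the abstract highlights.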

November 22, 2024 · 2 min · Research Team

Mirror Descent Algorithms for Risk Budgeting Portfolios

ArXiv ID: 2411.12323 · Authors: Unknown · Abstract: This paper introduces and examines numerical approximation schemes for computing risk budgeting portfolios associated with positive homogeneous and sub-additive risk measures. We employ Mirror Descent algorithms to determine the optimal risk budgeting weights in both deterministic and stochastic settings, establishing convergence along with an explicit non-asymptotic quantitative rate for the averaged algorithm. A comprehensive numerical analysis follows, illustrating our theoretical findings across various risk measures – including standard deviation, Expected Shortfall, deviation measures, and variantiles – and comparing the performance with that of the standard stochastic gradient descent method recently proposed in the literature. ...
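A sketch of the deterministic case, assuming the standard-deviation risk measure and the usual reduction of risk budgeting to the convex problem min over y > 0 of rho(y) minus the budget-weighted sum of log y_i, solved with entropic (multiplicative) mirror updates; the step size and iteration count are illustrative, not the paper's tuned scheme:

```python
import numpy as np

def risk_budgeting_md(Sigma, b, eta=0.05, n_iter=5000):
    """Mirror descent with entropic mirror map (multiplicative updates) on
    min_y sqrt(y' Sigma y) - sum_i b_i log y_i; the normalized minimizer is
    the risk budgeting portfolio for the standard-deviation risk measure."""
    y = np.ones(len(b))
    for _ in range(n_iter):
        sigma = np.sqrt(y @ Sigma @ y)
        grad = Sigma @ y / sigma - b / y       # gradient of the objective
        y = y * np.exp(-eta * grad)            # entropic mirror step
    return y / y.sum()

Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])              # toy covariance (vols 20%, 30%)
b = np.array([0.5, 0.5])                      # equal risk budgets
w = risk_budgeting_md(Sigma, b)
# at the solution, the risk contributions w_i * (Sigma w)_i are equal
```

The multiplicative update keeps the iterates strictly positive automatically, which is the main reason the entropic geometry suits this constraint set.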

November 19, 2024 · 2 min · Research Team

Worst-case values of target semi-variances with applications to robust portfolio selection

ArXiv ID: 2410.01732 · Authors: Unknown · Abstract: The expected regret and target semi-variance are two of the most important risk measures for downside risk. When the distribution of a loss is uncertain, and only partial information of the loss is known, their worst-case values play important roles in robust risk management for finance, insurance, and many other fields. Jagannathan (1977) derived the worst-case expected regrets when only the mean and variance of a loss are known and the loss is arbitrary, symmetric, or non-negative, while Chen et al. (2011) obtained the worst-case target semi-variances under similar conditions but focusing on arbitrary losses. In this paper, we first complement the study of Chen et al. (2011) on the worst-case target semi-variances and derive closed-form expressions for the worst-case target semi-variance when only the mean and variance of a loss are known and the loss is symmetric or non-negative. Then, we investigate worst-case target semi-variances over uncertainty sets that represent undesirable scenarios faced by investors. Our methods for deriving these worst-case values differ from those used in Jagannathan (1977) and Chen et al. (2011). As applications of the results derived in this paper, we propose robust portfolio selection methods that minimize the worst-case target semi-variance of a portfolio loss over different uncertainty sets. To illustrate our robust portfolio selection methods, we conduct numerical experiments with real financial data and compare them with several existing portfolio selection models related to those proposed in this paper. ...
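Under one common convention for a loss X and target t, the target semi-variance is E[((X - t)_+)^2], so only exceedances of the target contribute. The empirical version is a one-liner (a minimal sketch; the worst-case closed forms are the paper's contribution and are not reproduced here):

```python
import numpy as np

def target_semivariance(x, t):
    """Empirical target semi-variance E[((X - t)_+)^2]: the mean squared
    exceedance of the loss sample x over the target t."""
    x = np.asarray(x, dtype=float)
    return (np.maximum(x - t, 0.0) ** 2).mean()

losses = np.array([0.5, 1.0, 2.0, 4.0])
tsv = target_semivariance(losses, t=1.5)  # (0.5**2 + 2.5**2) / 4 = 1.625
```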

October 2, 2024 · 2 min · Research Team