
Black-Litterman and ESG Portfolio Optimization

Black-Litterman and ESG Portfolio Optimization ArXiv ID: 2511.21850 Authors: Aviv Alpern, Svetlozar Rachev Abstract We introduce a simple portfolio optimization strategy using ESG data with the Black-Litterman allocation framework. ESG scores are used as a bias for Stein shrinkage estimation of the equilibrium risk premiums used in assigning Black-Litterman asset weights. Assets are modeled as multivariate affine normal-inverse Gaussian variables, with CVaR as the risk measure. Though very simple, this strategy is exceptionally successful when employed with a soft turnover constraint. Portfolios are reallocated daily over a 4.7-year period, each strategy using a different set of hyperparameters for optimization. The most successful strategies return approximately 40-45% annually. ...
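The CVaR (expected shortfall) risk measure that recurs throughout these papers has a simple empirical estimator: sort the sampled losses and average everything beyond the VaR quantile. A minimal sketch, assuming a 95% level and a simulated standard-normal loss sample that are illustrative choices, not values from the paper:

```python
import random

def empirical_cvar(losses, alpha=0.95):
    """Empirical CVaR: the mean of the losses beyond the alpha-quantile (VaR)."""
    ordered = sorted(losses)
    tail = ordered[int(alpha * len(ordered)):]
    return sum(tail) / len(tail)

random.seed(0)
# Illustrative loss sample: 10,000 standard-normal draws.
sample = [random.gauss(0.0, 1.0) for _ in range(10_000)]
# For N(0, 1) losses, the theoretical 95% expected shortfall is about 2.06.
print(empirical_cvar(sample, 0.95))
```

By construction the estimate always sits above the corresponding VaR, since it averages only the losses beyond that quantile.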

November 26, 2025 · 2 min · Research Team

HODL Strategy or Fantasy? 480 Million Crypto Market Simulations and the Macro-Sentiment Effect

HODL Strategy or Fantasy? 480 Million Crypto Market Simulations and the Macro-Sentiment Effect ArXiv ID: 2512.02029 Authors: Weikang Zhang, Alison Watts Abstract Crypto enthusiasts claim that buying and holding crypto assets yields high returns, often citing Bitcoin’s past performance to promote other tokens and fuel fear of missing out. However, understanding the real risk-return trade-off and what factors affect future crypto returns is crucial as crypto becomes increasingly accessible to retail investors through major brokerages. We examine the HODL strategy through two independent analyses. First, we implement 480 million Monte Carlo simulations across 378 non-stablecoin crypto assets, net of trading fees and the opportunity cost of 1-month Treasury bills, and find strong evidence of survivorship bias and extreme downside concentration. At the 2-3 year horizon, the median excess return is -28.4 percent, the 1 percent conditional value at risk indicates that tail scenarios wipe out principal after all costs, and only the top quartile achieves very large gains, with a mean excess return of 1,326.7 percent. These results challenge the HODL narrative: across a broad set of assets, simple buy-and-hold loads extreme downside risk onto most investors, and the miracles mostly belong to the luckiest quarter. Second, using a Bayesian multi-horizon local projection framework, we find that endogenous predictors based on realized risk-return metrics have economically negligible and unstable effects, while macro-finance factors, especially the 24-week exponential moving average of the Fear and Greed Index, display persistent long-horizon impacts and high cross-basket stability.
Where significant, a one-standard-deviation sentiment shock reduces forward top-quartile mean excess returns by 15-22 percentage points and median returns by 6-10 percentage points over 1-3 year horizons, suggesting that macro-sentiment conditions, rather than realized return histories, are the dominant indicators for future outcomes. ...
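The qualitative pattern the authors report, a deeply negative median outcome alongside a lucky top quartile, can be reproduced with a toy Monte Carlo. The return process, fee, horizon, and run count below are illustrative assumptions, not the paper's 480-million-run design:

```python
import random, statistics

random.seed(42)

def hodl_excess_return(n_days=750, rf_daily=0.0002, fee=0.002):
    """One simulated buy-and-hold path: compound volatile daily returns,
    deduct round-trip fees, and subtract the compounded T-bill benchmark."""
    price = 1.0
    for _ in range(n_days):
        # Assumed daily return process: high volatility drags the median down
        # even though the arithmetic mean return is positive.
        price *= max(0.0, 1.0 + random.gauss(0.001, 0.05))
    gross = price * (1 - fee) * (1 - fee)   # fee on entry and exit
    benchmark = (1 + rf_daily) ** n_days    # opportunity cost of T-bills
    return gross - benchmark

runs = sorted(hodl_excess_return() for _ in range(2_000))
median_excess = statistics.median(runs)
top_quartile_mean = statistics.mean(runs[int(0.75 * len(runs)):])
print(median_excess, top_quartile_mean)
```

Even in this toy setting the median excess return is negative while the top-quartile mean is large and positive, echoing the paper's "miracles belong to the luckiest quarter" finding.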

November 19, 2025 · 3 min · Research Team

Multi-Agent Regime-Conditioned Diffusion (MARCD) for CVaR-Constrained Portfolio Decisions

Multi-Agent Regime-Conditioned Diffusion (MARCD) for CVaR-Constrained Portfolio Decisions ArXiv ID: 2510.10807 Authors: Ali Atiah Alzahrani Abstract We examine whether regime-conditioned generative scenarios combined with a convex CVaR allocator improve portfolio decisions under regime shifts. We present MARCD, a generative-to-decision framework with: (i) a Gaussian HMM to infer latent regimes; (ii) a diffusion generator that produces regime-conditioned scenarios; (iii) signal extraction via blended, shrunk moments; and (iv) a governed CVaR epigraph quadratic program. Contributions: Within the Scenario stage we introduce a tail-weighted diffusion objective that up-weights low-quantile outcomes relevant for drawdowns and a regime-expert (MoE) denoiser whose gate increases with crisis posteriors; both are evaluated end-to-end through the allocator. Under strict walk-forward on liquid multi-asset ETFs (2005-2025), MARCD exhibits stronger scenario calibration and materially smaller drawdowns: MaxDD 9.3% versus 14.1% for BL (a 34% reduction) over 2020-2025 out-of-sample. The framework provides an auditable pipeline with explicit budget, box, and turnover constraints, demonstrating the value of decision-aware generative modeling in finance. ...
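The regime-inference step (i) can be sketched with a minimal two-state Gaussian HMM forward filter: predict with the transition matrix, reweight by each regime's likelihood, renormalize. The regime means, volatilities, and transition probabilities below are assumed for illustration, not fitted as in the paper:

```python
import math

# Illustrative calm-vs-crisis parameters (assumed, not the paper's estimates).
means, stds = [0.0005, -0.002], [0.01, 0.03]
trans = [[0.98, 0.02], [0.05, 0.95]]   # sticky regime transitions
post = [0.5, 0.5]                      # initial regime probabilities

def normal_pdf(x, mu, sigma):
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

def filter_step(post, obs):
    """One forward-filter update: predict with the transition matrix, then
    reweight by each regime's Gaussian likelihood and renormalize."""
    pred = [sum(post[i] * trans[i][j] for i in range(2)) for j in range(2)]
    like = [pred[j] * normal_pdf(obs, means[j], stds[j]) for j in range(2)]
    total = sum(like)
    return [l / total for l in like]

for r in [0.001, 0.0, -0.04, -0.05, -0.03]:   # a short daily-return sequence
    post = filter_step(post, r)
print(post)  # the crisis posterior rises after the large negative returns
```

In MARCD this crisis posterior is what gates the mixture-of-experts denoiser, so scenarios become more tail-aware exactly when the filter detects stress.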

October 12, 2025 · 2 min · Research Team

Minimizing the Value-at-Risk of Loan Portfolio via Deep Neural Networks

Minimizing the Value-at-Risk of Loan Portfolio via Deep Neural Networks ArXiv ID: 2510.07444 Authors: Albert Di Wang, Ye Du Abstract Risk management is a prominent issue in peer-to-peer lending. An investor may naturally reduce his risk exposure by diversifying instead of putting all his money on one loan. In that case, an investor may want to minimize the Value-at-Risk (VaR) or Conditional Value-at-Risk (CVaR) of his loan portfolio. We propose a low degree of freedom deep neural network model, DeNN, as well as a high degree of freedom model, DSNN, to tackle the problem. In particular, our models predict not only the default probability of a loan but also the time when it will default. The experiments demonstrate that both models can significantly reduce the portfolio VaRs at different confidence levels, compared to benchmarks. More interestingly, the low degree of freedom model, DeNN, outperforms DSNN in most scenarios. ...

October 8, 2025 · 2 min · Research Team

Deep Hedging to Manage Tail Risk

Deep Hedging to Manage Tail Risk ArXiv ID: 2506.22611 Authors: Yuming Ma Abstract Extending Buehler et al.’s 2019 Deep Hedging paradigm, we innovatively employ deep neural networks to parameterize convex-risk minimization (CVaR/ES) for the portfolio tail-risk hedging problem. Through comprehensive numerical experiments on crisis-era bootstrap market simulators – customizable with transaction costs, risk budgets, liquidity constraints, and market impact – our end-to-end framework not only achieves significant one-day 99% CVaR reduction but also yields practical insights into friction-aware strategy adaptation, demonstrating robustness and operational viability in realistic markets. ...

June 27, 2025 · 1 min · Research Team

Copula Analysis of Risk: A Multivariate Risk Analysis for VaR and CoVaR using Copulas and DCC-GARCH

Copula Analysis of Risk: A Multivariate Risk Analysis for VaR and CoVaR using Copulas and DCC-GARCH ArXiv ID: 2505.06950 Authors: Aryan Singh, Paul O Reilly, Daim Sharif, Patrick Haughey, Eoghan McCarthy, Sathvika Thorali Suresh, Aakhil Anvar, Adarsh Sajeev Kumar Abstract A multivariate risk analysis for VaR and CVaR using different copula families is performed on historical financial time series fitted with DCC-GARCH models. A theoretical background is provided alongside a comparison of goodness-of-fit across different copula families to estimate the validity and effectiveness of approaches discussed. ...
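A minimal example of the copula idea: a Gaussian copula couples two uniform marginals through a correlated bivariate normal, which concentrates joint tail events relative to independence. The correlation and tail level below are illustrative assumptions, and this sketch uses only the simplest copula family, not the DCC-GARCH fitting from the paper:

```python
import math, random

random.seed(1)

def gaussian_copula_pair(rho):
    """Draw one pair of dependent uniforms via a Gaussian copula: correlate
    two standard normals with a 2x2 Cholesky factor, then map each margin
    through the standard normal CDF."""
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    x1 = z1
    x2 = rho * z1 + math.sqrt(1 - rho * rho) * z2
    cdf = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))
    return cdf(x1), cdf(x2)

pairs = [gaussian_copula_pair(0.8) for _ in range(20_000)]
# Joint tail probability P(U1 < 0.05, U2 < 0.05): under independence this
# would be 0.05 * 0.05 = 0.0025; positive dependence inflates it.
joint_tail = sum(u < 0.05 and v < 0.05 for u, v in pairs) / len(pairs)
print(joint_tail)
```

This joint-tail inflation is exactly what copula-based VaR/CoVaR estimates capture and what an independence assumption misses.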

May 11, 2025 · 1 min · Research Team

Bayesian Optimization for CVaR-based portfolio optimization

Bayesian Optimization for CVaR-based portfolio optimization ArXiv ID: 2503.17737 Authors: Unknown Abstract Optimal portfolio allocation is often formulated as a constrained risk problem, where one aims to minimize a risk measure subject to some performance constraints. This paper presents new Bayesian Optimization algorithms for such constrained minimization problems, seeking to minimize the conditional value-at-risk (a computationally intensive risk measure) under a minimum expected return constraint. The proposed algorithms utilize a new acquisition function, which drives sampling towards the optimal region. Additionally, a new two-stage procedure is developed, which significantly reduces the number of evaluations of the expensive-to-evaluate objective function. The proposed algorithm’s competitive performance is demonstrated through practical examples. ...
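The constrained problem the paper targets, minimizing CVaR subject to a minimum expected return, can be stated concretely with a brute-force baseline on scenario returns. The two hypothetical assets and the weight grid below are illustrative stand-ins for the expensive objective that Bayesian Optimization is designed to sample efficiently; this is the naive baseline, not the paper's algorithm:

```python
import random

random.seed(7)

# Two hypothetical assets: a volatile high-return one and a calm low-return one.
scenarios = [(random.gauss(0.08, 0.20), random.gauss(0.03, 0.05))
             for _ in range(5_000)]

def cvar(losses, alpha=0.95):
    ordered = sorted(losses)
    tail = ordered[int(alpha * len(ordered)):]
    return sum(tail) / len(tail)

def portfolio_cvar(w):
    """CVaR of portfolio losses for weight w on asset 1 (rest in asset 2)."""
    losses = [-(w * r1 + (1 - w) * r2) for r1, r2 in scenarios]
    return cvar(losses)

def expected_return(w):
    return sum(w * r1 + (1 - w) * r2 for r1, r2 in scenarios) / len(scenarios)

# Minimize CVaR over a weight grid, subject to a minimum expected return.
min_ret = 0.04
best = min((w / 100 for w in range(101) if expected_return(w / 100) >= min_ret),
           key=portfolio_cvar)
print(best, portfolio_cvar(best))
```

Each grid point re-evaluates the CVaR over all scenarios, which is precisely the expense that motivates replacing this exhaustive search with an acquisition-driven sampler.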

March 22, 2025 · 2 min · Research Team

Enhancing Risk Assessment in Transformers with Loss-at-Risk Functions

Enhancing Risk Assessment in Transformers with Loss-at-Risk Functions ArXiv ID: 2411.02558 Authors: Unknown Abstract In the financial field, precise risk assessment tools are essential for decision-making. Recent studies have challenged the notion that traditional network loss functions like Mean Square Error (MSE) are adequate, especially under extreme risk conditions that can lead to significant losses during market upheavals. Transformers and Transformer-based models are now widely used in financial forecasting owing to their outstanding performance in time-series-related predictions. However, these models typically lack sensitivity to extreme risks and often underestimate large financial losses. To address this problem, we introduce a novel loss function, the Loss-at-Risk, which incorporates Value at Risk (VaR) and Conditional Value at Risk (CVaR) into Transformer models. This integration allows Transformer models to recognize potential extreme losses and further improves their capability to handle high-stakes financial decisions. Moreover, we conduct a series of experiments with highly volatile financial datasets to demonstrate that our Loss-at-Risk function improves the Transformers’ risk prediction and management capabilities without compromising their decision-making accuracy or efficiency. The results demonstrate that integrating risk-aware metrics during training enhances the Transformers’ risk assessment capabilities while preserving their core strengths in decision-making and reasoning across diverse scenarios. ...
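The idea of folding VaR/CVaR into a training loss can be sketched framework-free: blend ordinary MSE with the CVaR of the worst per-sample errors, so tail misses are penalized disproportionately. The alpha/lambda blend below is an illustrative form, not the paper's exact Loss-at-Risk definition:

```python
def loss_at_risk(preds, targets, alpha=0.9, lam=0.5):
    """Blend MSE with the CVaR of per-sample squared errors: the mean of the
    worst (1 - alpha) fraction of errors, weighted by lam. Illustrative form,
    not the paper's exact definition."""
    errs = [(p - t) ** 2 for p, t in zip(preds, targets)]
    mse = sum(errs) / len(errs)
    tail = sorted(errs)[int(alpha * len(errs)):]
    cvar = sum(tail) / len(tail)
    return (1 - lam) * mse + lam * cvar

preds   = [0.1, 0.0, -0.2, 0.05,  0.5, 0.0, 0.1, -0.1, 0.2, 0.0]
targets = [0.1, 0.0, -0.1, 0.0,  -0.5, 0.1, 0.1,  0.0, 0.2, 0.0]
print(loss_at_risk(preds, targets))
```

Because the single large miss (0.5 versus -0.5) dominates the CVaR term, this loss is several times larger than plain MSE on the same predictions, which is the sensitivity to extreme errors the paper is after.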

November 4, 2024 · 2 min · Research Team

Multilevel Monte Carlo in Sample Average Approximation: Convergence, Complexity and Application

Multilevel Monte Carlo in Sample Average Approximation: Convergence, Complexity and Application ArXiv ID: 2407.18504 Authors: Unknown Abstract In this paper, we examine the Sample Average Approximation (SAA) procedure within a framework where the Monte Carlo estimator of the expectation is biased. We also introduce Multilevel Monte Carlo (MLMC) in the SAA setup to enhance the computational efficiency of solving optimization problems. In this context, we conduct a thorough analysis, exploiting Cramér’s large deviation theory, to establish uniform convergence, quantify the convergence rate, and determine the sample complexity for both standard Monte Carlo and MLMC paradigms. Additionally, we perform a root-mean-squared error analysis utilizing tools from empirical process theory to derive sample complexity without relying on the finite moment condition typically required for uniform convergence results. Finally, we validate our findings and demonstrate the advantages of the MLMC estimator through numerical examples, estimating Conditional Value-at-Risk (CVaR) in the Geometric Brownian Motion and nested expectation framework. ...
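The MLMC idea, combining many cheap biased coarse samples with a few coupled fine-minus-coarse correction samples, can be sketched on an Euler-discretized GBM call payoff. All parameters and level sizes below are illustrative assumptions, not the paper's numerical setup; the Black-Scholes value of roughly 0.1045 serves only as a sanity anchor:

```python
import math, random

random.seed(3)

# Illustrative GBM call setting (assumed, not from the paper).
S0, r, sigma, T, K = 1.0, 0.05, 0.2, 1.0, 1.0

def euler_terminal(n_steps, normals):
    """Euler scheme for GBM: S_{k+1} = S_k * (1 + r*dt + sigma*sqrt(dt)*Z_k)."""
    dt = T / n_steps
    s = S0
    for z in normals:
        s *= 1 + r * dt + sigma * math.sqrt(dt) * z
    return s

def payoff(s):
    return max(s - K, 0.0)

def mlmc_two_level(n0=20_000, n1=5_000):
    # Level 0: many cheap coarse paths (4 Euler steps each).
    coarse = sum(payoff(euler_terminal(4, [random.gauss(0, 1) for _ in range(4)]))
                 for _ in range(n0)) / n0
    # Level 1: fewer coupled paths estimating E[P_fine - P_coarse]; the coarse
    # path reuses the fine path's randomness by summing paired increments.
    corr = 0.0
    for _ in range(n1):
        z = [random.gauss(0, 1) for _ in range(8)]
        zc = [(z[2 * i] + z[2 * i + 1]) / math.sqrt(2) for i in range(4)]
        corr += payoff(euler_terminal(8, z)) - payoff(euler_terminal(4, zc))
    return coarse + corr / n1

# Black-Scholes value for these parameters is roughly 0.1045.
print(mlmc_two_level())
```

The coupling is what makes the correction term low-variance, so far fewer fine-level samples are needed than in a single-level estimator of the same accuracy.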

July 26, 2024 · 2 min · Research Team

A Multi-step Approach for Minimizing Risk in Decentralized Exchanges

A Multi-step Approach for Minimizing Risk in Decentralized Exchanges ArXiv ID: 2406.07200 Authors: Unknown Abstract Decentralized exchanges are becoming increasingly prominent in today’s financial markets. Driven by the need to study this phenomenon from an academic perspective, the SIAG/FME Code Quest 2023 was announced. Specifically, participating teams were asked to implement, in Python, the basic functions of an Automated Market Maker and a liquidity provision strategy in an Automated Market Maker to minimize the Conditional Value at Risk, a critical measure of investment risk. As the competition’s winning team, we highlight our approach in this work. In particular, as the dependence of the final return on the initial wealth distribution is highly non-linear, we cannot use standard ad-hoc approaches. Additionally, classical minimization techniques would require a significant computational load due to the cost of the target function. For these reasons, we propose a three-step approach. In the first step, the target function is approximated by a Kernel Ridge Regression. Then, the approximating function is minimized. In the final step, the previously discovered minimum is utilized as the starting point for directly optimizing the desired target function. By using this procedure, we can both reduce the computational complexity and increase the accuracy of the solution. Finally, the overall computational load is further reduced thanks to an algorithmic trick concerning the returns simulation and the usage of Cython. ...
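The three-step recipe (fit a surrogate, minimize the surrogate, refine locally on the true function) can be sketched in one dimension. The toy objective, RBF kernel, ridge level, and window sizes below are illustrative stand-ins for the expensive CVaR target, not the winning team's competition code:

```python
import math

# Step 0: a toy "expensive" objective standing in for the CVaR target.
def expensive(x):
    return (x - 0.3) ** 2 + 0.05 * math.sin(6 * x)

def rbf(a, b, gamma=50.0):
    return math.exp(-gamma * (a - b) ** 2)

def solve(A, rhs):
    """Gaussian elimination with partial pivoting for the KRR linear system."""
    n = len(A)
    M = [row[:] + [rhs[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda i: abs(M[i][c]))
        M[c], M[p] = M[p], M[c]
        for i in range(c + 1, n):
            f = M[i][c] / M[c][c]
            for k in range(c, n + 1):
                M[i][k] -= f * M[c][k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][k] * x[k] for k in range(i + 1, n))) / M[i][i]
    return x

# Step 1: fit Kernel Ridge Regression on a handful of expensive evaluations.
xs = [i / 14 for i in range(15)]
ys = [expensive(x) for x in xs]
ridge = 1e-6
K = [[rbf(a, b) + (ridge if i == j else 0.0) for j, b in enumerate(xs)]
     for i, a in enumerate(xs)]
alpha = solve(K, ys)
surrogate = lambda x: sum(a * rbf(x, xi) for a, xi in zip(alpha, xs))

# Step 2: minimize the cheap surrogate on a fine grid.
x0 = min((i / 1000 for i in range(1001)), key=surrogate)

# Step 3: local refinement of the true function around the surrogate minimum.
best = min((x0 + k * 0.001 for k in range(-10, 11)), key=expensive)
print(best, expensive(best))
```

The expensive function is evaluated only at the 15 fit points plus the small refinement window, rather than across the whole grid, which is the computational saving the three-step approach is built around.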

June 11, 2024 · 2 min · Research Team