
Target-Date Funds: A State-of-the-Art Review with Policy Applications to Chile's Pension Reform

ArXiv ID: 2504.17713 · View on arXiv
Authors: Fernando Suárez, José Manuel Peña, Omar Larré
Abstract: This review paper explores the evolution and implementation of target-date funds (TDFs), focusing on their application within the context of Chile’s 2025 pension reform. The introduction of TDFs marks a significant shift in Chile’s pension system, which has traditionally relied on a multifund structure (essentially a target-risk fund system). We offer a comprehensive review of the theoretical foundations and practical considerations of TDFs, highlighting key challenges and opportunities for Chilean regulators and fund managers. Notably, we recommend that the glide path design be dynamic, incorporating adjustments based on total accumulated wealth, with flexibility tailored to each investor’s risk tolerance. Furthermore, we propose that the benchmark for generational funds feature a wide permitted deviation band relative to the benchmark portfolio, which could foster a market with more diverse investment strategies and stronger competition among fund managers, encourage the inclusion of alternative assets, and promote greater diversification. Lastly, we highlight the need for future work to define a glide path model that incorporates the theoretical frameworks described, tailored to the unique parameters of the Chilean pension system. These recommendations aim to optimize long-term retirement outcomes for Chilean workers under the new pension structure. ...
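The wealth-aware glide path recommendation lends itself to a simple illustration. Below is a minimal sketch, assuming a linear age-based baseline with a funding-ratio tilt; the functional form, the parameters, and the `equity_weight` helper are illustrative assumptions, not the paper's model.

```python
# A minimal sketch of a wealth-aware dynamic glide path. The linear baseline
# and the funding-ratio adjustment are illustrative assumptions only.

def equity_weight(age: float, wealth: float, target_wealth: float,
                  risk_tolerance: float = 1.0,
                  start_age: float = 25.0, retirement_age: float = 65.0) -> float:
    """Equity share that declines with age but shifts with funding status."""
    # Baseline: 90% equity at career start, gliding down to 30% at retirement.
    t = min(max((age - start_age) / (retirement_age - start_age), 0.0), 1.0)
    baseline = 0.9 - 0.6 * t
    # Funding adjustment: under-funded members take somewhat more risk,
    # over-funded members de-risk, scaled by individual risk tolerance.
    funding_ratio = wealth / target_wealth if target_wealth > 0 else 1.0
    adjustment = risk_tolerance * 0.2 * (1.0 - funding_ratio)
    return min(max(baseline + adjustment, 0.0), 1.0)

# Example: a 50-year-old at 70% of target wealth holds more equity
# than the age-only baseline would prescribe.
print(equity_weight(age=50, wealth=70_000, target_wealth=100_000))
```

Here an under-funded member holds more equity than the age-only baseline would prescribe, while risk tolerance scales the size of that tilt.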

April 24, 2025 · 2 min · Research Team

Tokenizing Stock Prices for Enhanced Multi-Step Forecast and Prediction

ArXiv ID: 2504.17313 · View on arXiv
Authors: Zhuohang Zhu, Haodong Chen, Qiang Qu, Xiaoming Chen, Vera Chung
Abstract: Effective stock price forecasting (estimating future prices) and prediction (estimating future price changes) are pivotal for investors, regulatory agencies, and policymakers. These tasks enable informed decision-making, risk management, strategic planning, and superior portfolio returns. Despite their importance, forecasting and prediction are challenging due to the dynamic nature of stock price data, which exhibits significant temporal variation in distribution and statistical properties. Additionally, while both forecasting and prediction targets are derived from the same dataset, their statistical characteristics differ significantly: forecasting targets typically follow a log-normal distribution, characterized by significant shifts in mean and variance over time, whereas prediction targets adhere to a normal distribution. Furthermore, although multi-step forecasting and prediction offer a broader perspective and richer information than single-step approaches, they are much more challenging due to factors such as cumulative errors and long-term temporal variance. As a result, many previous works have tackled either single-step stock price forecasting or prediction instead. To address these issues, we introduce a novel model, termed Patched Channel Integration Encoder (PCIE), to tackle both stock price forecasting and prediction. In this model, we utilize multiple stock channels that cover both historical prices and price changes, and design a novel tokenization method to effectively embed these channels in a cross-channel and temporally efficient manner. Specifically, the tokenization process involves univariate patching and temporal learning with a channel-mixing encoder to reduce cumulative errors. Comprehensive experiments validate that PCIE outperforms current state-of-the-art models in forecasting and prediction tasks. ...
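To make the tokenization idea concrete, here is a minimal sketch of univariate patching over a price channel and a price-change channel, assuming fixed-length non-overlapping patches; PCIE's actual embedding and channel-mixing encoder are not reproduced, and `patchify` is a hypothetical helper.

```python
# A minimal sketch of univariate patching over price and price-change
# channels; the patch length and channel setup are illustrative assumptions.
import numpy as np

def patchify(series: np.ndarray, patch_len: int) -> np.ndarray:
    """Split a univariate series of length T into T//patch_len patches."""
    n = len(series) // patch_len
    return series[: n * patch_len].reshape(n, patch_len)

prices = np.cumsum(np.random.default_rng(0).normal(0, 1, 96)) + 100.0
changes = np.diff(prices, prepend=prices[0])   # prediction-style channel

# Each channel is patched independently (univariate patching) and then
# stacked so a channel-mixing encoder can attend across both channels.
tokens = np.stack([patchify(prices, 8), patchify(changes, 8)], axis=0)
print(tokens.shape)  # (channels=2, patches=12, patch_len=8)
```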

April 24, 2025 · 2 min · Research Team

Automated Market Makers: A Stochastic Optimization Approach for Profitable Liquidity Concentration

ArXiv ID: 2504.16542 · View on arXiv
Authors: Simon Caspar Zeller, Paul-Niklas Ken Kandora, Daniel Kirste, Niclas Kannengießer, Steffen Rebennack, Ali Sunyaev
Abstract: Concentrated liquidity automated market makers (AMMs), such as Uniswap v3, enable liquidity providers (LPs) to earn liquidity rewards by depositing tokens into liquidity pools. However, LPs often face significant financial losses driven by poorly selected liquidity provision intervals and the high costs associated with frequent liquidity reallocation. To support LPs in achieving more profitable liquidity concentration, we develop a tractable stochastic optimization problem that computes optimal liquidity provision intervals. The formulation accounts for the relationships between liquidity rewards, divergence loss, and reallocation costs, supporting a better understanding of how these quantities interact, and offers a foundation for more profitable liquidity concentration. ...
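As a rough illustration of the trade-off the optimization captures, the sketch below scores candidate interval widths by Monte Carlo under a geometric Brownian motion price with stylized fee, divergence-loss, and reallocation terms; all functional forms and parameters are assumptions, not the paper's formulation.

```python
# A minimal sketch of choosing a liquidity interval by Monte Carlo; the
# fee, divergence-loss, and reallocation models are stylized assumptions.
import numpy as np

rng = np.random.default_rng(1)
p0, mu, sigma, T, n_paths = 1.0, 0.0, 0.5, 30 / 365, 2000
paths = p0 * np.exp((mu - 0.5 * sigma**2) * T
                    + sigma * np.sqrt(T) * rng.normal(size=n_paths))

def expected_profit(half_width: float, fee_rate: float = 0.003,
                    realloc_cost: float = 0.002) -> float:
    lo, hi = p0 * (1 - half_width), p0 * (1 + half_width)
    in_range = (paths >= lo) & (paths <= hi)
    # Narrower ranges concentrate liquidity -> higher fee share while in range.
    fees = fee_rate / half_width * in_range.mean()
    # Stylized divergence loss grows with realized price moves inside the range.
    div_loss = np.mean(np.abs(paths - p0) * in_range)
    # Leaving the range triggers a costly reallocation.
    realloc = realloc_cost * (~in_range).mean()
    return fees - div_loss - realloc

widths = np.linspace(0.02, 0.30, 15)
best = max(widths, key=expected_profit)
print(f"best half-width ≈ {best:.2f}")
```

The sketch makes the core tension visible: narrow intervals earn more fees per unit of liquidity but get abandoned (and reallocated) more often.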

April 23, 2025 · 2 min · Research Team

Bridging Econometrics and AI: VaR Estimation via Reinforcement Learning and GARCH Models

ArXiv ID: 2504.16635 · View on arXiv
Authors: Fredy Pokou, Jules Sadefo Kamdem, François Benhmad
Abstract: In an environment of increasingly volatile financial markets, accurate risk estimation remains a major challenge. Traditional econometric models, such as GARCH and its variants, rest on assumptions that are often too rigid to adapt to the complexity of current market dynamics. To overcome these limitations, we propose a hybrid framework for Value-at-Risk (VaR) estimation that combines GARCH volatility models with deep reinforcement learning. Our approach incorporates directional market forecasting using a Double Deep Q-Network (DDQN), treating the task as an imbalanced classification problem. This architecture enables the dynamic adjustment of risk-level forecasts according to market conditions. Empirical validation on daily Euro Stoxx 50 data covering periods of crisis and high volatility shows a significant improvement in the accuracy of VaR estimates, as well as reductions in both the number of breaches and capital requirements, while respecting regulatory risk thresholds. The model’s ability to adjust risk levels in real time reinforces its relevance to modern, proactive risk management. ...
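A minimal sketch of the hybrid idea, assuming a hand-rolled GARCH(1,1) variance recursion and a ternary directional signal standing in for the DDQN forecast; the mapping from signal to confidence level, and all parameters, are illustrative assumptions.

```python
# A minimal sketch of a GARCH(1,1) conditional VaR whose confidence level
# is tightened or relaxed by a directional signal (a stand-in for the
# paper's DDQN market-direction forecast). Parameters are illustrative.
import numpy as np
from scipy.stats import norm

def garch_var(returns, omega=1e-6, alpha=0.08, beta=0.90,
              base_level=0.99, signal=0):
    """One-step-ahead VaR; signal=-1 (bearish) widens the risk buffer."""
    sigma2 = np.var(returns)
    for r in returns:                       # GARCH(1,1) variance recursion
        sigma2 = omega + alpha * r**2 + beta * sigma2
    # A bearish directional forecast maps to a more conservative level.
    level = {1: 0.975, 0: base_level, -1: 0.995}[signal]
    return -norm.ppf(1 - level) * np.sqrt(sigma2)

rng = np.random.default_rng(2)
rets = rng.normal(0, 0.01, 500)
print(garch_var(rets, signal=-1))   # larger buffer under a bearish signal
```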

April 23, 2025 · 2 min · Research Team

Collective Defined Contribution Schemes Without Intergenerational Cross-Subsidies

ArXiv ID: 2504.16892 · View on arXiv
Authors: John Armstrong, James Dalby, Rohan Hobbs
Abstract: We present an architecture for managing Collective Defined Contribution (CDC) schemes. The current approach to UK CDC can be described as shared indexation, where the nominal benefit of every member in a scheme receives the same level of indexation each year. The design of such schemes relies on approximate discounting methodologies to value liabilities, which leads to intergenerational cross-subsidies that can be large and unpredictable. We present an alternative approach, which we call Collective-Drawdown CDC, that does not result in intergenerational cross-subsidies because all pooling is performed through explicit insurance contracts; it is therefore completely fair. Moreover, this scheme results in better pension outcomes than shared-indexation CDC under the same model parameters. ...
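To see why a single shared indexation rate can transfer value between cohorts, the sketch below compares the present-value impact of the same indexation award on members with different horizons; the discount rate, the two cohorts, and the `pv` helper are illustrative assumptions, not the paper's valuation model.

```python
# A minimal sketch of the cross-subsidy mechanism: the same indexation rate
# compounds for more years on a young member's benefit, so one rate set for
# everyone shifts value across cohorts. All numbers are illustrative.
discount = 0.04
cohorts = {"young": (100.0, 30), "old": (100.0, 5)}   # nominal benefit, years out

def pv(nominal: float, years: int, index_rate: float) -> float:
    return nominal * (1 + index_rate) ** years / (1 + discount) ** years

# PV gain from awarding 2% indexation instead of 0%, per cohort: the young
# member's longer horizon makes their PV far more sensitive to the rate.
for name, (nominal, years) in cohorts.items():
    print(name, round(pv(nominal, years, 0.02) - pv(nominal, years, 0.00), 2))
```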

April 23, 2025 · 2 min · Research Team

Modern Computational Methods in Reinsurance Optimization: From Simulated Annealing to Quantum Branch & Bound

ArXiv ID: 2504.16530 · View on arXiv
Authors: George Woodman, Ruben S. Andrist, Thomas Häner, Damian S. Steiger, Martin J. A. Schuetz, Helmut G. Katzgraber, Marcin Detyniecki
Abstract: We propose and implement modern computational methods to enhance catastrophe excess-of-loss reinsurance contracts in practice. The underlying optimization problem involves attachment points, limits, and reinstatement clauses, and the objective is to maximize the expected profit while considering risk measures and regulatory constraints. We study the problem formulation for two very different approaches, paving the way for practitioners: a local search optimizer using simulated annealing, which handles realistic constraints, and a branch & bound approach exploring the potential of a future speedup via quantum branch & bound. On the one hand, local search effectively generates contract structures within several constraints, proving useful for complex treaties that have multiple local optima. On the other hand, although our branch & bound formulation only confirms that solving the full problem with a future quantum computer would require a stronger, less expensive bound and substantial hardware improvements, we believe that the designed application-specific bound is sufficiently strong to serve as a basis for further work. Concisely, we provide insurance practitioners with a robust numerical framework for contract optimization that handles realistic constraints today, as well as an outlook and initial steps toward an approach that could leverage quantum computers in the future. ...
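The local-search approach can be sketched in a few lines: simulated annealing over an excess-of-loss layer's attachment point and limit, assuming lognormal catastrophe losses and a stylized rate-on-line premium; the actual treaty objective, reinstatements, and constraints are richer than this toy.

```python
# A minimal sketch of simulated annealing over an excess-of-loss layer
# (attachment point, limit). Loss model and premium rule are assumptions.
import numpy as np

rng = np.random.default_rng(3)
losses = rng.lognormal(mean=2.0, sigma=1.0, size=5000)   # simulated cat losses

def profit(attach: float, limit: float, rate_on_line: float = 0.15) -> float:
    recovered = np.clip(losses - attach, 0.0, limit)
    premium = rate_on_line * limit
    return recovered.mean() - premium      # cedent's expected net benefit

x = np.array([5.0, 10.0])                  # (attachment, limit)
best, best_val = x.copy(), profit(*x)
temp = 1.0
for step in range(2000):
    cand = np.maximum(x + rng.normal(0, 0.5, 2), 0.1)   # random neighbor
    delta = profit(*cand) - profit(*x)
    if delta > 0 or rng.random() < np.exp(delta / temp):  # Metropolis rule
        x = cand
        if profit(*x) > best_val:
            best, best_val = x.copy(), profit(*x)
    temp *= 0.999                          # geometric cooling schedule
print(best, best_val)
```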

April 23, 2025 · 2 min · Research Team

Towards a fast and robust deep hedging approach

ArXiv ID: 2504.16436 · View on arXiv
Authors: Fabienne Schmid, Daniel Oeltz
Abstract: We present a robust Deep Hedging framework for the pricing and hedging of option portfolios that significantly improves training efficiency and model robustness. In particular, we propose a neural model for training model embeddings that utilizes paths from several advanced equity option models with stochastic volatility to learn the relationships between hedging strategies. A key advantage of the proposed method is its ability to adapt rapidly and reliably to new market regimes by recalibrating a low-dimensional embedding vector rather than retraining the entire network. Moreover, we examine the observed profit-and-loss distributions over the parameter space of the models used to learn the embeddings. The results show that the proposed framework works well with data generated by complex models and can serve as a construction basis for an efficient and robust simulation tool for the systematic development of an entirely model-independent hedging strategy. ...
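A minimal sketch of the embedding idea, assuming a toy architecture: each pricing model gets a learnable embedding vector fed into a shared hedging network, and adapting to a new regime means re-fitting only the embedding while the network stays frozen. The layer sizes, state variables, and class name are illustrative assumptions.

```python
# A minimal sketch of a hedging network conditioned on a low-dimensional
# model embedding; only the embedding is re-fit for a new market regime.
import torch
import torch.nn as nn

class EmbeddedHedger(nn.Module):
    def __init__(self, n_models: int, emb_dim: int = 4):
        super().__init__()
        self.emb = nn.Embedding(n_models, emb_dim)   # one vector per model
        self.net = nn.Sequential(
            nn.Linear(2 + emb_dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, state: torch.Tensor, model_id: torch.Tensor):
        # state = (spot, time-to-maturity); output = hedge ratio.
        z = self.emb(model_id)
        return self.net(torch.cat([state, z], dim=-1))

hedger = EmbeddedHedger(n_models=5)
print(hedger(torch.tensor([[100.0, 0.5]]), torch.tensor([0])))

# Adapting to a new regime: freeze the shared network, fit only the embedding.
for p in hedger.net.parameters():
    p.requires_grad = False
opt = torch.optim.Adam(hedger.emb.parameters(), lr=1e-2)
```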

April 23, 2025 · 2 min · Research Team

Unbiased simulation of Asian options

ArXiv ID: 2504.16349 · View on arXiv
Authors: Bruno Bouchard, Xiaolu Tan
Abstract: We provide an extension of the unbiased simulation method for SDEs developed in Henry-Labordère et al. [Ann. Appl. Probab. 27:6 (2017) 1-37] to a class of path-dependent dynamics pertinent to Asian options. In our setting, both the payoff and the SDE’s coefficients depend on the (weighted) average of the process or, more precisely, on the integral of the solution to the SDE against a continuous function of bounded variation. In particular, this applies to the numerical resolution of the class of path-dependent PDEs whose regularity, in the sense of Dupire, is studied in Bouchard and Tan [Ann. I.H.P., to appear]. ...
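For context, the sketch below sets up the path-dependent dynamics in question with a plain Euler scheme, where both the drift and the payoff depend on the running average of the solution; plain Euler carries exactly the discretization bias the paper's unbiased method removes. All coefficients are illustrative assumptions.

```python
# A minimal sketch of the path-dependent setting (NOT the unbiased
# estimator itself): an Euler scheme for an SDE whose coefficients and
# payoff depend on the running average of the solution.
import numpy as np

rng = np.random.default_rng(4)
x0, T, n_steps, n_paths = 1.0, 1.0, 100, 20_000
dt = T / n_steps

x = np.full(n_paths, x0)
avg = np.zeros(n_paths)                   # running integral (1/T) ∫ X_s ds
for k in range(n_steps):
    # Both coefficients may depend on the weighted average, as in the paper.
    drift = 0.02 * (1.0 + 0.1 * avg)
    vol = 0.2 * x
    x = x + drift * dt + vol * np.sqrt(dt) * rng.normal(size=n_paths)
    avg += x * dt / T

payoff = np.maximum(avg - 1.0, 0.0)       # fixed-strike Asian call payoff
print(payoff.mean())                      # biased Euler estimate of the price
```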

April 23, 2025 · 2 min · Research Team

A Line Graph-Based Framework for Identifying Optimal Routing Paths in Decentralized Exchanges

ArXiv ID: 2504.15809 · View on arXiv
Authors: Unknown
Abstract: Decentralized exchanges (DEXs), such as those employing constant product market makers (CPMMs) like Uniswap V2, play a crucial role in the blockchain ecosystem by enabling peer-to-peer token swaps without intermediaries. Despite the increasing volume of transactions, there remains limited research on identifying optimal trading paths across multiple DEXs. This paper presents a novel line-graph-based algorithm (LG) designed to efficiently discover profitable trading routes within DEX environments. We benchmark LG against the widely adopted depth-first search (DFS) algorithm under a linear routing scenario, encompassing platforms such as Uniswap, SushiSwap, and PancakeSwap. Experimental results demonstrate that LG consistently identifies trading paths that are as profitable as, or more profitable than, those found by DFS, while incurring comparable gas costs. Evaluations on Uniswap V2 token graphs across two temporal snapshots further validate LG’s performance. Although LG exhibits exponential runtime growth with respect to graph size in empirical tests, it remains viable for practical, real-world use cases. Our findings underscore the potential of the LG algorithm for industrial adoption, offering tangible benefits to traders and market participants in the DeFi space. ...
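The core line-graph construction can be shown with networkx: pools are edges of the token graph, so they become nodes of the line graph, and adjacency encodes "shares a token", i.e. a feasible multi-hop route. The LG algorithm's pricing and pruning logic are not reproduced here, and the pool list is made up.

```python
# A minimal sketch of the line-graph transformation for DEX routing:
# token graph -> line graph whose nodes are pools. Pools are illustrative.
import networkx as nx

pools = [("ETH", "USDC"), ("USDC", "DAI"), ("ETH", "DAI"), ("DAI", "WBTC")]
G = nx.Graph(pools)        # token graph: tokens as nodes, pools as edges

L = nx.line_graph(G)       # pools become nodes of the line graph
print(sorted(L.nodes()))   # e.g. ('DAI', 'WBTC'), ('ETH', 'DAI'), ...
# Each line-graph edge joins two pools sharing a token: a two-hop route.
print(sorted(L.edges()))
```

A route search on L then walks pool-to-pool, which makes per-pool costs (fees, gas) natural edge attributes.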

April 22, 2025 · 2 min · Research Team

Learning the Spoofability of Limit Order Books With Interpretable Probabilistic Neural Networks

ArXiv ID: 2504.15908 · View on arXiv
Authors: Unknown
Abstract: This paper investigates real-time detection of spoofing activity in limit order books, focusing on centralized cryptocurrency exchanges. We first introduce novel order flow variables based on multi-scale Hawkes processes that account for both the size of new limit orders and their placement distance from the current best prices. Using a Level-3 data set, we train a neural network model to predict the conditional probability distribution of mid-price movements based on these features. Our empirical analysis highlights the critical role of the posting distance of limit orders in the price formation process, showing that spoofing detection models that do not take the posting distance into account are inadequate to describe the data. Next, we propose a spoofing detection framework based on the probabilistic market manipulation gain of a spoofing agent and use the previously trained neural network to compute the expected gain. Running this algorithm on all limit orders submitted between 2024-12-04 and 2024-12-07, we find that 31% of large orders could spoof the market. Because of its simple neural architecture, our model can be run in real time. This work contributes to enhancing market integrity by providing a robust tool for monitoring and mitigating spoofing in both cryptocurrency exchanges and traditional financial markets. ...
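A minimal sketch of the expected-gain test, assuming the trained network outputs a distribution over mid-price moves: compare that distribution with and without a candidate order posted, and flag the order if the induced shift yields a positive expected gain on the agent's true position. The probabilities and tick value below are placeholders, not model output.

```python
# A minimal sketch of the expected-gain spoofing test; the two probability
# vectors stand in for the trained network's conditional predictions.
import numpy as np

moves = np.array([-1, 0, 1])                 # mid-price move in ticks
p_without = np.array([0.30, 0.40, 0.30])     # predicted dist., no spoof order
p_with = np.array([0.20, 0.35, 0.45])        # predicted dist., spoof posted

tick_value = 0.5                             # payoff per tick on the true position
expected_gain = tick_value * (moves @ p_with - moves @ p_without)
print("flag as spoofing candidate:", expected_gain > 0)
```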

April 22, 2025 · 2 min · Research Team