
Efficient Importance Sampling under Heston Model: Short Maturity and Deep Out-of-the-Money Options

Efficient Importance Sampling under Heston Model: Short Maturity and Deep Out-of-the-Money Options ArXiv ID: 2511.19826 “View on arXiv” Authors: Yun-Feng Tu, Chuan-Hsiang Han Abstract This paper investigates asymptotically optimal importance sampling (IS) schemes for pricing European call options under the Heston stochastic volatility model. We focus on two distinct rare-event regimes where standard Monte Carlo methods suffer from significant variance deterioration: the limit as maturity approaches zero and the limit as the strike price tends to infinity. Leveraging the large deviation principle (LDP), we design a state-dependent change of measure derived from the asymptotic behavior of the log-price cumulant generating functions. In the short-maturity regime, we rigorously prove that our proposed IS drift, inspired by the variational characterization of the rate function, achieves logarithmic efficiency (asymptotic optimality) in the sense of attaining the optimal decay rate of the estimator's second moment. In the deep OTM regime, we introduce a novel slow mean-reversion scaling for the variance process, where the mean-reversion speed scales as the inverse square of the small-noise parameter (defined as the reciprocal of the log-moneyness). We establish that under this specific scaling, the variance process contributes non-trivially to the large deviation rate function, requiring a specialized Riccati analysis to verify optimality. Numerical experiments demonstrate that the proposed method yields variance reduction of several orders of magnitude compared to standard estimators in both asymptotic regimes. ...
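Below is a minimal Python sketch of the general mechanism the abstract relies on: importance sampling for a deep OTM call under Heston via a Girsanov drift change, reweighting the payoff by the likelihood ratio. Here a constant drift `h` is applied only to the Brownian component that drives the asset independently of the variance; the constant `h`, the Euler scheme, and all parameter values are illustrative placeholders, whereas the paper derives a state-dependent, asymptotically optimal drift from the LDP rate function.

```python
import numpy as np

def heston_is_call(S0=100.0, K=120.0, T=0.1, r=0.0, v0=0.04,
                   kappa=1.5, theta=0.04, sigma=0.3, rho=-0.7,
                   h=13.0, n_paths=100_000, n_steps=100, seed=0):
    """Price a European call under Heston with a constant IS drift `h` on the
    asset's independent Brownian component.  `h` is a placeholder roughly sized
    to push the mean log-price toward the strike; compare the standard error
    with the plain estimator by setting h=0."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    logS = np.full(n_paths, np.log(S0))
    v = np.full(n_paths, v0)
    lr_exp = np.zeros(n_paths)                       # log of dP/dQ accumulated per step

    for _ in range(n_steps):
        z_v = rng.standard_normal(n_paths)
        z_p = rng.standard_normal(n_paths)
        dW_v = np.sqrt(dt) * z_v                     # drives the variance (unchanged measure)
        dW_p = np.sqrt(dt) * z_p + h * dt            # shifted increment under the sampling measure
        lr_exp += -h * np.sqrt(dt) * z_p - 0.5 * h**2 * dt
        vp = np.maximum(v, 0.0)                      # full-truncation Euler
        dW_s = rho * dW_v + np.sqrt(1.0 - rho**2) * dW_p
        logS += (r - 0.5 * vp) * dt + np.sqrt(vp) * dW_s
        v += kappa * (theta - vp) * dt + sigma * np.sqrt(vp) * dW_v

    weighted_payoff = np.maximum(np.exp(logS) - K, 0.0) * np.exp(lr_exp)
    disc = np.exp(-r * T)
    price = disc * weighted_payoff.mean()
    stderr = disc * weighted_payoff.std(ddof=1) / np.sqrt(n_paths)
    return price, stderr

if __name__ == "__main__":
    print(heston_is_call())          # deep OTM, short maturity
    print(heston_is_call(h=0.0))     # plain Monte Carlo for comparison
```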

November 25, 2025 · 2 min · Research Team

Limit Order Book Dynamics in Matching Markets: Microstructure, Spread, and Execution Slippage

Limit Order Book Dynamics in Matching Markets: Microstructure, Spread, and Execution Slippage ArXiv ID: 2511.20606 “View on arXiv” Authors: Yao Wu Abstract Conventional models of matching markets assume that monetary transfers can clear markets by compensating for utility differentials. However, empirical patterns show that such transfers often fail to close structural preference gaps. This paper introduces a market microstructure framework that models matching decisions as a limit order book system with rigid bid-ask spreads. Individual preferences are represented by a latent preference state matrix, where the spread between an agent’s internal ask price (the unconditional maximum) and the market’s best bid (the reachable maximum) creates a structural liquidity constraint. We establish a Threshold Impossibility Theorem showing that linear compensation cannot close these spreads unless it induces a categorical identity shift. A dynamic discrete choice execution model further demonstrates that matches occur only when the market-to-book ratio crosses a time-decaying liquidity threshold, analogous to order execution under inventory pressure. Numerical experiments validate persistent slippage, regional invariance of preference orderings, and high-tier zero-spread executions. The model provides a unified microstructure explanation for matching failures, compensation inefficiency, and post-match regret in illiquid, order-driven environments. ...
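As a purely illustrative sketch (the exponential functional form and parameter values below are assumptions, not taken from the paper), the execution rule described in the abstract can be read as: a match executes at the first time the market-to-book ratio crosses a liquidity threshold that decays over time.

```python
import numpy as np

def first_execution_time(market_to_book, threshold0=1.5, decay=0.1):
    """Return the first period t with market_to_book[t] >= threshold0 * exp(-decay * t),
    or None if the order never executes.  Hypothetical threshold dynamics."""
    for t, mtb in enumerate(market_to_book):
        if mtb >= threshold0 * np.exp(-decay * t):
            return t
    return None

# A constant ratio below the initial threshold eventually executes once the
# threshold has decayed far enough (prints 3 with these toy numbers).
print(first_execution_time([1.2] * 20))
```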

November 25, 2025 · 2 min · Research Team

Carbon-Penalised Portfolio Insurance Strategies in a Stochastic Factor Model with Partial Information

Carbon-Penalised Portfolio Insurance Strategies in a Stochastic Factor Model with Partial Information ArXiv ID: 2511.19186 “View on arXiv” Authors: Katia Colaneri, Federico D’Amario, Daniele Mancinelli Abstract Given the increasing importance of environmental, social and governance (ESG) factors, particularly carbon emissions, we investigate optimal proportional portfolio insurance (PPI) strategies accounting for carbon footprint reduction. PPI strategies enable investors to mitigate downside risk while retaining the potential for upside gains. This paper aims to determine the multiplier of the PPI strategy that maximises the expected utility of the terminal cushion, where the terminal cushion is penalised proportionally to the realised volatility of stocks issued by firms operating in carbon-intensive sectors. We model the risky assets’ dynamics using geometric Brownian motions whose drift rates are modulated by an unobservable common stochastic factor to capture market-specific or economy-wide state variables that are typically not directly observable. Using classical stochastic filtering theory, we formulate a suitable optimization problem and solve it for a CRRA utility function. We characterise optimal carbon-penalised PPI strategies and optimal value functions under full and partial information and quantify the loss of utility due to incomplete information. Finally, we carry out a numerical analysis showing that the proposed strategy reduces carbon emission intensity without compromising financial performance. ...
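For readers unfamiliar with the mechanism, here is a minimal sketch of a plain proportional portfolio insurance rule (risky exposure equal to a constant multiplier times the cushion), without the carbon penalty, the stochastic factor, or the filtering layer studied in the paper; all parameter values are illustrative.

```python
import numpy as np

def ppi_path(m=3.0, V0=100.0, floor_T=90.0, mu=0.06, sigma=0.2, r=0.02,
             T=1.0, n_steps=252, seed=0):
    """Simulate one PPI/CPPI wealth path with a single lognormal risky asset."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    V = V0
    values = [V]
    for k in range(n_steps):
        t = k * dt
        floor_t = floor_T * np.exp(-r * (T - t))        # present value of the guarantee
        cushion = max(V - floor_t, 0.0)
        exposure = min(m * cushion, V)                  # cap leverage at 100% of wealth
        risky_log_ret = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        V = exposure * np.exp(risky_log_ret) + (V - exposure) * np.exp(r * dt)
        values.append(V)
    return np.array(values)

path = ppi_path()
print(f"terminal value {path[-1]:.2f}, terminal cushion {path[-1] - 90.0:.2f}")
```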

November 24, 2025 · 2 min · Research Team

Re(Visiting) Time Series Foundation Models in Finance

Re(Visiting) Time Series Foundation Models in Finance ArXiv ID: 2511.18578 “View on arXiv” Authors: Eghbal Rahimikia, Hao Ni, Weiguan Wang Abstract Financial time series forecasting is central to trading, portfolio optimization, and risk management, yet it remains challenging due to noisy, non-stationary, and heterogeneous data. Recent advances in time series foundation models (TSFMs), inspired by large language models, offer a new paradigm for learning generalizable temporal representations from large and diverse datasets. This paper presents the first comprehensive empirical study of TSFMs in global financial markets. Using a large-scale dataset of daily excess returns across diverse markets, we evaluate zero-shot inference, fine-tuning, and pre-training from scratch against strong benchmark models. We find that off-the-shelf pre-trained TSFMs perform poorly in zero-shot and fine-tuning settings, whereas models pre-trained from scratch on financial data achieve substantial forecasting and economic improvements, underscoring the value of domain-specific adaptation. Increasing the dataset size, incorporating synthetic data augmentation, and applying hyperparameter tuning further enhance performance. ...
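A minimal harness in the spirit of this comparison is sketched below: it scores any sequence of next-day excess-return forecasts both statistically (MSE) and economically (annualized Sharpe ratio of a sign-based strategy). The forecasting models themselves (zero-shot TSFM, fine-tuned TSFM, pre-trained from scratch) and the paper's exact metrics are not reproduced; the synthetic data is only for illustration.

```python
import numpy as np

def evaluate_forecasts(y_true, y_pred, periods_per_year=252):
    """Report a statistical metric (MSE) and an economic one (Sharpe of a
    long/short strategy that trades on the forecast sign)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    mse = np.mean((y_true - y_pred) ** 2)
    strat = np.sign(y_pred) * y_true
    sharpe = np.sqrt(periods_per_year) * strat.mean() / strat.std(ddof=1)
    return {"mse": mse, "sharpe": sharpe}

# Example with synthetic excess returns and a noisy but informative forecast.
rng = np.random.default_rng(0)
r = 0.01 * rng.standard_normal(1000)
print(evaluate_forecasts(r, r + 0.02 * rng.standard_normal(1000)))
```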

November 23, 2025 · 2 min · Research Team

Arbitrage-Free Bond and Yield Curve Forecasting with Neural Filters under HJM Constraints

Arbitrage-Free Bond and Yield Curve Forecasting with Neural Filters under HJM Constraints ArXiv ID: 2511.17892 “View on arXiv” Authors: Xiang Gao, Cody Hyndman Abstract We develop an arbitrage-free deep learning framework for yield curve and bond price forecasting based on the Heath-Jarrow-Morton (HJM) term-structure model and a dynamic Nelson-Siegel parameterization of forward rates. Our approach embeds a no-arbitrage drift restriction into a neural state-space architecture by combining Kalman, extended Kalman, and particle filters with recurrent neural networks (LSTM/CLSTM), and introduces an explicit arbitrage error regularization (AER) term during training. The model is applied to U.S. Treasury and corporate bond data, and its performance is evaluated for both yield-space and price-space predictions at 1-day and 5-day horizons. Empirically, arbitrage regularization leads to its strongest improvements at short maturities, particularly in 5-day-ahead forecasts, increasing market-consistency as measured by bid-ask hit rates and reducing dollar-denominated prediction errors. ...
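For reference, the classical HJM no-arbitrage drift restriction under the risk-neutral measure, together with a Nelson-Siegel form of the instantaneous forward curve, reads as follows; the paper's exact dynamic parameterization and regularization term may differ.

$$
df(t,T) \;=\; \sigma(t,T)\!\left(\int_t^T \sigma(t,u)\,du\right) dt \;+\; \sigma(t,T)\,dW_t,
\qquad
f(t,\,t+\tau) \;=\; \beta_0 + \beta_1\,e^{-\lambda\tau} + \beta_2\,\lambda\tau\,e^{-\lambda\tau}.
$$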

November 22, 2025 · 2 min · Research Team

Hybrid LSTM and PPO Networks for Dynamic Portfolio Optimization

Hybrid LSTM and PPO Networks for Dynamic Portfolio Optimization ArXiv ID: 2511.17963 “View on arXiv” Authors: Jun Kevin, Pujianto Yugopuspito Abstract This paper introduces a hybrid framework for portfolio optimization that fuses Long Short-Term Memory (LSTM) forecasting with a Proximal Policy Optimization (PPO) reinforcement learning strategy. The proposed system leverages the predictive power of deep recurrent networks to capture temporal dependencies, while the PPO agent adaptively refines portfolio allocations in continuous action spaces, allowing the system to anticipate trends while adjusting dynamically to market shifts. Using multi-asset datasets covering U.S. and Indonesian equities, U.S. Treasuries, and major cryptocurrencies from January 2018 to December 2024, the framework is benchmarked against equal-weighted, index-based, and single-model baselines (LSTM-only and PPO-only) using annualized return, volatility, Sharpe ratio, and maximum drawdown metrics, each adjusted for transaction costs. The results indicate that the hybrid architecture delivers higher returns and stronger resilience under non-stationary market regimes, suggesting its promise as a robust, AI-driven framework for dynamic portfolio optimization. ...
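The evaluation metrics named in the abstract can be computed from a daily return series as sketched below; the transaction-cost treatment (a fixed number of basis points charged on portfolio turnover) is an illustrative assumption, not the paper's exact procedure.

```python
import numpy as np

def performance_metrics(returns, weights=None, cost_bps=10, periods_per_year=252):
    """Annualized return, volatility, Sharpe ratio, and maximum drawdown,
    net of a turnover-based transaction cost when a weight history is given."""
    returns = np.asarray(returns, dtype=float).copy()
    if weights is not None:
        turnover = np.abs(np.diff(np.asarray(weights), axis=0)).sum(axis=1)
        returns[1:] -= turnover * cost_bps / 1e4
    ann_ret = periods_per_year * returns.mean()
    ann_vol = np.sqrt(periods_per_year) * returns.std(ddof=1)
    wealth = np.cumprod(1.0 + returns)
    max_dd = np.max(1.0 - wealth / np.maximum.accumulate(wealth))
    return {"ann_return": ann_ret, "ann_vol": ann_vol,
            "sharpe": ann_ret / ann_vol, "max_drawdown": max_dd}

rng = np.random.default_rng(1)
print(performance_metrics(0.0005 + 0.01 * rng.standard_normal(1000)))
```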

November 22, 2025 · 2 min · Research Team

Partial multivariate transformer as a tool for cryptocurrencies time series prediction

Partial multivariate transformer as a tool for cryptocurrencies time series prediction ArXiv ID: 2512.04099 “View on arXiv” Authors: Andrzej Tokajuk, Jarosław A. Chudziak Abstract Forecasting cryptocurrency prices is hindered by extreme volatility and a methodological dilemma between information-scarce univariate models and noise-prone full-multivariate models. This paper investigates a partial-multivariate approach to balance this trade-off, hypothesizing that a strategic subset of features offers superior predictive power. We apply the Partial-Multivariate Transformer (PMformer) to forecast daily returns for BTCUSDT and ETHUSDT, benchmarking it against eleven classical and deep learning models. Our empirical results yield two primary contributions. First, we demonstrate that the partial-multivariate strategy achieves significant statistical accuracy, effectively balancing informative signals with noise. Second, we document and discuss an observable disconnect between this statistical performance and practical trading utility; lower prediction error did not consistently translate to higher financial returns in simulations. This finding challenges the reliance on traditional error metrics and highlights the need to develop evaluation criteria more aligned with real-world financial objectives. ...
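The reported disconnect between error metrics and trading utility is easy to illustrate on synthetic data: a forecast with lower MSE can carry less directional information, and therefore earn less, than a higher-error forecast. The example below is a toy construction, not the paper's experiment.

```python
import numpy as np

rng = np.random.default_rng(42)
r = 0.02 * rng.standard_normal(500)        # "true" daily returns
pred_a = 0.001 * rng.standard_normal(500)  # near-zero forecast: tiny MSE, no directional signal
pred_b = 0.1 * np.sign(r)                  # biased forecast: large MSE, perfect direction

for name, p in [("A", pred_a), ("B", pred_b)]:
    mse = np.mean((r - p) ** 2)
    pnl = np.mean(np.sign(p) * r)          # mean daily P&L of trading the forecast sign
    print(f"forecast {name}: MSE={mse:.6f}, mean daily P&L={pnl:.5f}")
```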

November 22, 2025 · 2 min · Research Team

Random processes for long-term market simulations

Random processes for long-term market simulations ArXiv ID: 2511.18125 “View on arXiv” Authors: Gilles Zumbach Abstract For long-term investments, model portfolios are defined at the level of indexes, a setup known as Strategic Asset Allocation (SAA). The possible outcomes at a scale of a few decades can be obtained by Monte Carlo simulations, resulting in a probability density for the possible portfolio values at the investment horizon. Such studies are critical for long-term wealth planning, for example in the financial component of social insurance or in capital accumulated for retirement. The quality of the results depends on two inputs: the process used for the simulations and its parameters. The base model is a constant drift, a constant covariance and normal innovations, as pioneered by Bachelier. Beyond this model, this document presents in detail a multivariate process that incorporates the most recent advances in models for financial time series. This includes the negative correlations of the returns at a scale of a few years, the heteroskedasticity (i.e. the volatility dynamics), and the fat tails and asymmetry of the return distributions. For the parameters, the quantitative outcomes depend critically on the estimate of the drift, because this is a non-random contribution acting at each time step. Replacing the point forecast by a probabilistic forecast allows us to analyze the impact of the drift values, and then to incorporate this uncertainty in the Monte Carlo simulations. ...
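A minimal sketch of the baseline model mentioned in the abstract (constant drift, constant covariance, normal innovations) over a multi-decade horizon is given below; the paper's extensions (multi-year return mean reversion, heteroskedasticity, fat tails, drift uncertainty) are not included, and all parameter values are illustrative.

```python
import numpy as np

def terminal_value_distribution(weights, mu, cov, years=30, n_paths=10_000, seed=0):
    """Monte Carlo distribution of terminal wealth for a portfolio rebalanced
    annually to fixed weights, with constant drift and covariance of log-returns."""
    rng = np.random.default_rng(seed)
    chol = np.linalg.cholesky(cov)
    wealth = np.ones(n_paths)
    for _ in range(years):
        z = rng.standard_normal((n_paths, len(weights)))
        asset_log_ret = mu + z @ chol.T                 # annual log-returns per asset
        port_ret = np.expm1(asset_log_ret) @ weights    # simple portfolio return
        wealth *= 1.0 + port_ret
    return wealth

w = np.array([0.6, 0.4])                                # e.g. equities / bonds
mu = np.array([0.05, 0.02])
cov = np.array([[0.030, 0.002], [0.002, 0.004]])
wT = terminal_value_distribution(w, mu, cov)
print(np.percentile(wT, [5, 50, 95]))                   # terminal wealth quantiles
```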

November 22, 2025 · 2 min · Research Team

Reinforcement Learning for Portfolio Optimization with a Financial Goal and Defined Time Horizons

Reinforcement Learning for Portfolio Optimization with a Financial Goal and Defined Time Horizons ArXiv ID: 2511.18076 “View on arXiv” Authors: Fermat Leukam, Rock Stephane Koffi, Prudence Djagba Abstract This research proposes an enhancement of a portfolio optimization approach based on the G-Learning algorithm, combined with parametric optimization via the GIRL algorithm (a G-learning approach to inverse reinforcement learning), as presented in prior work. The goal is to maximize portfolio value by a target date while minimizing the investor’s periodic contributions. Our model operates in a highly volatile market with a well-diversified portfolio, ensuring a low risk level for the investor, and leverages reinforcement learning to dynamically adjust portfolio positions over time. Results show that we improved the Sharpe ratio from 0.42, as reported in recent studies using the same approach, to 0.483, a notable achievement in highly volatile markets with diversified portfolios. The comparison between G-Learning and GIRL reveals that while GIRL optimizes the reward function parameters (e.g., lambda = 0.0012 compared to 0.002), its impact on portfolio performance remains marginal. This suggests that reinforcement learning methods like G-Learning already enable robust optimization. This research contributes to the growing development of reinforcement learning applications in financial decision-making, demonstrating that probabilistic learning algorithms can effectively align portfolio management strategies with investor needs. ...
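For context, a common formulation of the G-learning recursion is the KL-regularized Bellman equation below, with prior policy ρ and inverse temperature β; the wealth-management specialization and the GIRL parameter-estimation step used in the paper are not reproduced here.

$$
G(s,a) \;=\; r(s,a) \;+\; \gamma\,\mathbb{E}_{s'\mid s,a}\!\left[\frac{1}{\beta}\,
\ln \sum_{a'} \rho(a'\mid s')\, e^{\beta G(s',a')}\right],
\qquad
\pi(a\mid s) \;\propto\; \rho(a\mid s)\, e^{\beta G(s,a)} .
$$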

November 22, 2025 · 2 min · Research Team

Emergence of Randomness in Temporally Aggregated Financial Tick Sequences

Emergence of Randomness in Temporally Aggregated Financial Tick Sequences ArXiv ID: 2511.17479 “View on arXiv” Authors: Silvia Onofri, Andrey Shternshis, Stefano Marmi Abstract Market efficiency implies that stock returns are intrinsically unpredictable, a property that makes markets comparable to random number generators. We present a novel methodology to investigate ultra-high-frequency financial data and to evaluate the extent to which tick-by-tick returns resemble random sequences. We extend the analysis of ultra-high-frequency stock market data by applying comprehensive sets of randomness tests, beyond the usual reliance on serial correlation or entropy measures. Our purpose is to extensively analyze the randomness of these data using statistical tests from standard batteries that evaluate different aspects of randomness. We illustrate the effect of time aggregation in transforming highly correlated high-frequency trade data into random streams. More specifically, we use many of the tests in the NIST Statistical Test Suite and in the TestU01 battery (in particular the Rabbit and Alphabit sub-batteries) to show that the degree of randomness of financial tick data increases with the aggregation level in transaction time. Additionally, the comprehensive nature of our tests also uncovers novel patterns, such as non-monotonic behaviors in predictability for certain assets. This study demonstrates a model-free approach for both assessing randomness in financial time series and generating pseudo-random sequences from them, with potential relevance in several applications. ...
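The pipeline described above can be sketched in a few lines: binarize returns by sign, aggregate in transaction time, and apply a randomness test. Only the NIST SP 800-22 runs test is shown below, on synthetic autocorrelated ticks; the paper applies the full NIST and TestU01 (Rabbit/Alphabit) batteries to real data.

```python
import numpy as np
from math import erfc, sqrt

def runs_test_p_value(bits):
    """NIST SP 800-22 runs test on a 0/1 sequence; small p-values indicate serial dependence."""
    bits = np.asarray(bits)
    n = len(bits)
    pi = bits.mean()
    if abs(pi - 0.5) >= 2.0 / sqrt(n):        # frequency pre-test fails
        return 0.0
    v_obs = 1 + np.count_nonzero(bits[1:] != bits[:-1])
    return erfc(abs(v_obs - 2.0 * n * pi * (1 - pi)) /
                (2.0 * sqrt(2.0 * n) * pi * (1 - pi)))

def binarized_aggregates(tick_returns, level):
    """Sum returns over blocks of `level` consecutive ticks and record the sign as a bit."""
    r = np.asarray(tick_returns)
    n = (len(r) // level) * level
    return (r[:n].reshape(-1, level).sum(axis=1) > 0).astype(int)

# Synthetic, serially correlated "tick returns": the p-value moves toward the
# uniform range as the aggregation level grows.
rng = np.random.default_rng(0)
eps = rng.standard_normal(300_000)
ticks = eps + 0.4 * np.roll(eps, 1)
for level in (1, 10, 100):
    print(level, runs_test_p_value(binarized_aggregates(ticks, level)))
```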

November 21, 2025 · 2 min · Research Team