American Option Pricing Under Time-Varying Rough Volatility: A Signature-Based Hybrid Framework

ArXiv ID: 2508.07151 · View on arXiv
Authors: Roshan Shah

Abstract: We introduce a modular framework that extends the signature method to handle American option pricing under evolving volatility roughness. Building on the signature-pricing framework of Bayer et al. (2025), we add three practical innovations. First, we train a gradient-boosted ensemble to estimate the time-varying Hurst parameter H(t) from rolling windows of recent volatility data. Second, we feed these forecasts into a regime switch that chooses either a rough Bergomi or a calibrated Heston simulator, depending on the predicted roughness. Third, we accelerate signature-kernel evaluations with Random Fourier Features (RFF), cutting computational cost while preserving accuracy. Empirical tests on S&P 500 equity-index options reveal that the assumption of persistent roughness is frequently violated, particularly during stable market regimes when H(t) approaches or exceeds 0.5. The proposed hybrid framework adapts to changing volatility roughness, improving on fixed-roughness baselines and reducing duality gaps in some regimes. By integrating a dynamic Hurst-parameter estimation pipeline with efficient kernel approximations, we aim to enable tractable, real-time pricing of American options in dynamic volatility environments. ...
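The first two pipeline steps, estimating H(t) from a rolling window and routing to a simulator based on the predicted roughness, can be sketched as follows. This is not the paper's method: the paper trains a gradient-boosted ensemble, whereas this stand-in uses a classical scaling-law regression on absolute increments of log-volatility, and the 0.5 routing threshold is an illustrative assumption taken from the abstract's discussion of H(t) near 0.5.

```python
import numpy as np

def estimate_hurst(log_vol: np.ndarray, max_lag: int = 20) -> float:
    """Crude Hurst estimate: regress log mean absolute increment
    on log lag; the slope approximates H. A stand-in for the
    paper's gradient-boosted estimator."""
    lags = np.arange(2, max_lag)
    m = np.array([np.mean(np.abs(log_vol[lag:] - log_vol[:-lag]))
                  for lag in lags])
    slope, _ = np.polyfit(np.log(lags), np.log(m), 1)
    return float(slope)

def choose_simulator(h: float, threshold: float = 0.5) -> str:
    """Regime switch: rough Bergomi when volatility looks rough
    (H below the threshold), otherwise a calibrated Heston model."""
    return "rough_bergomi" if h < threshold else "heston"
```

Applied to a rolling window of recent volatility data, `estimate_hurst` produces the time series H(t) that drives the regime switch.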

August 10, 2025 · 2 min · Research Team

High-Dimensional Learning in Finance

ArXiv ID: 2506.03780 · View on arXiv
Authors: Hasan Fallahgoul

Abstract: Recent advances in machine learning have shown promising results for financial prediction using large, over-parameterized models. This paper provides theoretical foundations and empirical validation for understanding when and how these methods achieve predictive success. I examine two key aspects of high-dimensional learning in finance. First, I prove that within-sample standardization in Random Fourier Features implementations fundamentally alters the underlying Gaussian kernel approximation, replacing shift-invariant kernels with training-set-dependent alternatives. Second, I establish information-theoretic lower bounds that identify when reliable learning is impossible, no matter how sophisticated the estimator. A detailed quantitative calibration of the polynomial lower bound shows that with typical parameter choices (e.g., 12,000 features, 12 monthly observations, and an R-squared of 2-3%), the sample size required to escape the bound exceeds 25-30 years of data, well beyond any rolling window actually used. Thus, observed out-of-sample success must originate from lower-complexity artefacts rather than from the intended high-dimensional mechanism. ...
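Both abstracts lean on Random Fourier Features, so a minimal, generic sketch of the standard Gaussian-kernel RFF approximation (in the style of Rahimi and Recht) may help; the dimensions and bandwidth below are arbitrary illustrative choices. Note that it applies no within-sample standardization to the features, which is precisely the step Fallahgoul shows would replace the shift-invariant Gaussian kernel with a training-set-dependent one.

```python
import numpy as np

d, D, sigma = 5, 20_000, 1.0  # input dim, feature count, kernel bandwidth (illustrative)
rng = np.random.default_rng(0)
W = rng.normal(0.0, 1.0 / sigma, size=(D, d))  # frequencies ~ N(0, sigma^-2 I)
b = rng.uniform(0.0, 2.0 * np.pi, size=D)      # random phases ~ U(0, 2*pi)

def rff_features(X: np.ndarray) -> np.ndarray:
    """Map X of shape (n, d) to random features z(X) of shape (n, D)
    so that z(x) @ z(y) approximates exp(-||x - y||^2 / (2 sigma^2))."""
    return np.sqrt(2.0 / D) * np.cos(X @ W.T + b)
```

The approximation error shrinks as O(1/sqrt(D)), which is what makes RFF a cheap surrogate for exact kernel evaluations in both papers' settings.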

June 4, 2025 · 2 min · Research Team