
Robust Optimization in Causal Models and G-Causal Normalizing Flows

ArXiv ID: 2510.15458 · View on arXiv
Authors: Gabriele Visentin, Patrick Cheridito

Abstract: In this paper, we show that interventionally robust optimization problems in causal models are continuous under the $G$-causal Wasserstein distance, but may be discontinuous under the standard Wasserstein distance. This highlights the importance of using generative models that respect the causal structure when augmenting data for such tasks. To this end, we propose a new normalizing flow architecture that satisfies a universal approximation property for causal structural models and can be efficiently trained to minimize the $G$-causal Wasserstein distance. Empirically, we demonstrate that our model outperforms standard (non-causal) generative models in data augmentation for causal regression and mean-variance portfolio optimization in causal factor models. ...
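The abstract's central point — that a distance on observational laws can be blind to causal structure — can be illustrated with a toy numpy sketch. The two structural causal models below are hypothetical examples (not from the paper): they induce the *same* observational joint, so the standard Wasserstein distance between them is 0, yet they give different answers under an intervention.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# SCM A: X -> Y with mechanism Y := X.
# SCM B: Y -> X with mechanism X := Y.
# Both induce the same observational joint (X = Y, standard normal),
# so any distance on observational laws — Wasserstein included — is 0.
x_a = rng.standard_normal(n); y_a = x_a.copy()
y_b = rng.standard_normal(n); x_b = y_b.copy()

# Under the intervention do(X = 1):
# SCM A: Y := X, so Y = 1 everywhere.
# SCM B: Y keeps its own exogenous mechanism, unaffected by X.
y_a_do = np.ones(n)
y_b_do = rng.standard_normal(n)

print(y_a_do.mean(), y_b_do.mean())  # ≈ 1.0 vs ≈ 0.0
```

A generative model trained only to match the observational distribution could output either structure, which is why the paper argues for training under a causality-aware ($G$-causal) distance instead.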

October 17, 2025 · 2 min · Research Team

Co-Training Realized Volatility Prediction Model with Neural Distributional Transformation

ArXiv ID: 2310.14536 · View on arXiv
Authors: Unknown

Abstract: This paper presents a novel machine learning model for realized volatility (RV) prediction using a normalizing flow, an invertible neural network. Since RV is known to be skewed and to have a fat tail, previous methods transform RV into values that follow a latent distribution with an explicit shape and then apply a prediction model. However, choosing that shape is non-trivial, and the transformation result influences the prediction model. This paper proposes to train the transformation and the prediction model jointly. The training process follows a maximum-likelihood objective function derived from the assumption that the prediction residuals on the transformed RV time series are homogeneously Gaussian. The objective function is further approximated using an expectation-maximization algorithm. On a dataset of 100 stocks, our method significantly outperforms other methods using analytical or naive neural-network transformations. ...
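The co-training objective described above can be sketched in a few lines of numpy. This is a minimal illustration under stated assumptions, not the paper's architecture: a one-parameter Box-Cox transform stands in for the invertible neural network, an AR(1) stands in for the prediction model, and the log-likelihood combines the Gaussian residual term with the change-of-variables Jacobian of the transform.

```python
import numpy as np

def neg_log_likelihood(params, y):
    """Joint NLL for transform + predictor on a positive RV series y.

    params = (lam, phi, log_sigma):
      lam       — Box-Cox exponent (stand-in for the normalizing flow)
      phi       — AR(1) coefficient (stand-in for the prediction model)
      log_sigma — log residual scale in the latent space
    """
    lam, phi, log_sigma = params
    # Monotone transform z = (y**lam - 1)/lam, invertible for y > 0, lam != 0.
    z = (y**lam - 1.0) / lam
    log_jac = (lam - 1.0) * np.log(y)        # log |dz/dy|, change of variables
    resid = z[1:] - phi * z[:-1]             # AR(1) residuals in latent space
    sigma2 = np.exp(2.0 * log_sigma)
    # Gaussian NLL on residuals, minus the Jacobian credit for the transform.
    nll = 0.5 * np.sum(resid**2 / sigma2 + np.log(2.0 * np.pi * sigma2))
    return nll - np.sum(log_jac[1:])
```

Minimizing this over all three parameters at once trains the transformation and the predictor jointly, which is the co-training idea; the paper replaces the parametric transform with an invertible neural network and approximates the resulting objective with an EM-style algorithm.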

October 23, 2023 · 2 min · Research Team