
Learning to Manage Investment Portfolios beyond Simple Utility Functions

Learning to Manage Investment Portfolios beyond Simple Utility Functions ArXiv ID: 2510.26165 “View on arXiv” Authors: Maarten P. Scholl, Mahmoud Mahfouz, Anisoara Calinescu, J. Doyne Farmer Abstract While investment funds publicly disclose their objectives in broad terms, their managers optimize for complex combinations of competing goals that go beyond simple risk-return trade-offs. Traditional approaches attempt to model this through multi-objective utility functions, but face fundamental challenges in specification and parameterization. We propose a generative framework that learns latent representations of fund manager strategies without requiring explicit utility specification. Our approach directly models the conditional probability of a fund’s portfolio weights, given stock characteristics, historical returns, previous weights, and a latent variable representing the fund’s strategy. Unlike methods based on reinforcement learning or imitation learning, which require specified rewards or labeled expert objectives, our GAN-based architecture learns directly from the joint distribution of observed holdings and market data. We validate our framework on a dataset of 1436 U.S. equity mutual funds. The learned representations successfully capture known investment styles, such as “growth” and “value,” while also revealing implicit manager objectives. For instance, we find that while many funds exhibit characteristics of Markowitz-like optimization, they do so with heterogeneous realizations of turnover, concentration, and latent factors. To analyze and interpret the end-to-end model, we develop a series of explanatory tests, and we show that the benchmark’s expert labelings are contained in our model’s encoding in a linearly interpretable way. Our framework provides a data-driven approach for characterizing investment strategies for applications in market simulation, strategy attribution, and regulatory oversight. ...
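The core modeling idea, a generator mapping stock characteristics, previous weights, and a latent strategy vector to valid portfolio weights, can be sketched in a few lines. This is a minimal numpy illustration of the softmax output head only, not the paper's GAN architecture; all names and dimensions here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(characteristics, prev_weights, z, W):
    """Map (stock characteristics, previous weights, latent strategy z)
    to long-only portfolio weights via a softmax head."""
    n_stocks = characteristics.shape[0]
    # concatenate per-stock features with the (broadcast) latent strategy
    features = np.hstack([characteristics,
                          prev_weights[:, None],
                          np.tile(z, (n_stocks, 1))])
    scores = features @ W              # one score per stock
    scores -= scores.max()             # numerical stability
    w = np.exp(scores) / np.exp(scores).sum()
    return w

n_stocks, n_chars, latent_dim = 50, 4, 3
chars = rng.normal(size=(n_stocks, n_chars))
prev = np.full(n_stocks, 1.0 / n_stocks)     # start from equal weights
z = rng.normal(size=latent_dim)              # latent "strategy" vector
W = 0.5 * rng.normal(size=n_chars + 1 + latent_dim)

w = generator(chars, prev, z, W)
print(w.sum())   # weights are nonnegative and sum to one
```

Varying `z` with everything else fixed traces out different "strategies" over the same stocks, which is the role the latent variable plays in the paper's conditional model.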

October 30, 2025 · 2 min · Research Team

Chaotic Bayesian Inference: Strange Attractors as Risk Models for Black Swan Events

Chaotic Bayesian Inference: Strange Attractors as Risk Models for Black Swan Events ArXiv ID: 2509.08183 “View on arXiv” Authors: Crystal Rust Abstract We introduce a new risk modeling framework where chaotic attractors shape the geometry of Bayesian inference. By combining heavy-tailed priors with Lorenz and Rössler dynamics, the models naturally generate volatility clustering, fat tails, and extreme events. We compare two complementary approaches: Model A, which emphasizes geometric stability, and Model B, which highlights rare bursts using Fibonacci diagnostics. Together, they provide a dual perspective for systemic risk analysis, linking Black Swan theory to practical tools for stress testing and volatility monitoring. ...
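A quick sketch of the mechanism, chaotic dynamics driving return volatility, can be built from a Lorenz trajectory. This is an illustrative assumption of ours (the attractor's |z| coordinate modulating volatility), not the paper's specific construction:

```python
import numpy as np

def lorenz_volatility_returns(n=5000, dt=0.01, sigma=10.0, rho=28.0,
                              beta=8.0 / 3.0, vol_scale=0.02, seed=0):
    """Simulate returns whose volatility is driven by a Lorenz trajectory.

    The |z| coordinate of the chaotic attractor modulates the return
    standard deviation, producing volatility clustering and fat tails.
    """
    rng = np.random.default_rng(seed)
    x, y, z = 1.0, 1.0, 1.0
    rets = np.empty(n)
    for t in range(n):
        # Euler step of the Lorenz system
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
        vol = vol_scale * (0.5 + abs(z) / 30.0)   # state-dependent volatility
        rets[t] = vol * rng.standard_normal()
    return rets

r = lorenz_volatility_returns()
# a scale mixture of normals has positive excess kurtosis (fat tails)
kurt = ((r - r.mean()) ** 4).mean() / r.var() ** 2 - 3.0
print(kurt > 0)
```

Because the volatility inherits the attractor's slow chaotic wandering, calm and turbulent regimes alternate, the qualitative signature of volatility clustering.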

September 9, 2025 · 1 min · Research Team

An Interval Type-2 Version of Bayes Theorem Derived from Interval Probability Range Estimates Provided by Subject Matter Experts

An Interval Type-2 Version of Bayes Theorem Derived from Interval Probability Range Estimates Provided by Subject Matter Experts ArXiv ID: 2509.08834 “View on arXiv” Authors: John T. Rickard, William A. Dembski, James Rickards Abstract Bayesian inference is widely used in many different fields to test hypotheses against observations. In most such applications, an assumption is made of precise input values to produce a precise output value. However, this is unrealistic for real-world applications. Often the best available information from subject matter experts (SMEs) in a given field is interval range estimates of the input probabilities involved in Bayes Theorem. This paper provides two key contributions to extend Bayes Theorem to an interval type-2 (IT2) version. First, we develop an IT2 version of Bayes Theorem that uses a novel and conservative method to avoid potential inconsistencies in the input IT2 membership functions (MFs) that otherwise might produce invalid output results. We then describe a novel and flexible algorithm for encoding SME-provided intervals into IT2 fuzzy MFs, which we can use to specify the input probabilities in Bayes Theorem. Our algorithm generalizes and extends previous work on this problem that primarily addressed the encoding of intervals into word MFs for Computing with Words applications. ...
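The simplest case of the underlying idea, propagating interval-valued probabilities through Bayes' rule, can be worked out directly. This sketch covers only the plain (type-1) interval case, not the paper's IT2 fuzzy construction, and the inputs are made-up numbers:

```python
def interval_bayes(prior, like_h, like_not_h):
    """Posterior probability interval for P(H|E) from interval inputs.

    prior, like_h, like_not_h are (low, high) ranges for P(H), P(E|H),
    and P(E|~H).  The posterior is monotone increasing in P(H) and
    P(E|H) and decreasing in P(E|~H), so the output interval's
    endpoints come from the matching extremes of the inputs.
    """
    (pl, pu), (al, au), (bl, bu) = prior, like_h, like_not_h
    post = lambda p, a, b: (p * a) / (p * a + (1 - p) * b)
    return post(pl, al, bu), post(pu, au, bl)

# hypothetical SME ranges for a diagnostic-style question
lo, hi = interval_bayes(prior=(0.2, 0.4),
                        like_h=(0.7, 0.9),
                        like_not_h=(0.1, 0.3))
print(lo, hi)
```

Even modest input ranges widen noticeably after the update, which is why the paper's conservative handling of inconsistent interval inputs matters.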

August 29, 2025 · 2 min · Research Team

Modeling of Measurement Error in Financial Returns Data

Modeling of Measurement Error in Financial Returns Data ArXiv ID: 2408.07405 “View on arXiv” Authors: Unknown Abstract In this paper we consider the modeling of measurement error for fund returns data. In particular, given access to a time-series of discretely observed log-returns and the associated maximum over the observation period, we develop a stochastic model which models the true log-returns and maximum via a Lévy process and the data as a measurement error thereof. The main technical difficulty in trying to infer this model, for instance via Bayesian parameter estimation, is that the joint transition density of the return and maximum is seldom known, nor can it be simulated exactly. Based upon the novel stick breaking representation of [12] we provide an approximation of the model. We develop a Markov chain Monte Carlo (MCMC) algorithm to sample from the Bayesian posterior of the approximated model and then extend this to a multilevel MCMC method which can reduce the computational cost of approximating posterior expectations, relative to ordinary MCMC. We implement our methodology on several applications, including real data. ...
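The data-generating structure described above can be sketched with the simplest Lévy process, a Brownian motion with drift, standing in for the general case. This shows only the observation model (true returns and running maximum, both seen through Gaussian noise), not the stick-breaking approximation or the MCMC; parameter values are illustrative.

```python
import numpy as np

def simulate_observed_returns(n=250, mu=0.0005, sigma=0.01,
                              noise=0.002, seed=1):
    """True log-returns from a Brownian motion with drift (the simplest
    Levy process); the data are the returns and the running maximum of
    the log-price, each observed with Gaussian measurement error."""
    rng = np.random.default_rng(seed)
    true_ret = mu + sigma * rng.standard_normal(n)
    log_price = np.cumsum(true_ret)
    running_max = np.maximum.accumulate(log_price)   # latent maximum path
    obs_ret = true_ret + noise * rng.standard_normal(n)
    obs_max = running_max[-1] + noise * rng.standard_normal()
    return true_ret, obs_ret, running_max, obs_max

true_ret, obs_ret, running_max, obs_max = simulate_observed_returns()
print(len(obs_ret))
```

The inferential difficulty the abstract points to is visible here: the joint law of `(true_ret, running_max)` has no simple closed-form transition density once the driving process is a general Lévy process.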

August 14, 2024 · 2 min · Research Team

Electricity Spot Prices Forecasting Using Stochastic Volatility Models

Electricity Spot Prices Forecasting Using Stochastic Volatility Models ArXiv ID: 2406.19405 “View on arXiv” Authors: Unknown Abstract There are several approaches to modeling and forecasting time series as applied to prices of commodities and financial assets. One of the approaches is to model the price as a non-stationary time series process with heteroscedastic volatility (variance of price). The goal of the research is to generate probabilistic forecasts of day-ahead electricity prices in a spot market employing stochastic volatility models. A typical stochastic volatility model - that treats the volatility as a latent stochastic process in discrete time - is explored first. Then the research focuses on enriching the baseline model by introducing several exogenous regressors. A better fitting model - as compared to the baseline model - is derived as a result of the research. Out-of-sample forecasts confirm the applicability and robustness of the enriched model. This model may be used in financial derivative instruments for hedging the risk associated with electricity trading. Keywords: Electricity spot prices forecasting, Stochastic volatility, Exogenous regressors, Autoregression, Bayesian inference, Stan ...
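A typical discrete-time stochastic volatility model with one exogenous regressor can be simulated directly. The regressor `x_t` (e.g. a demand forecast) and all parameter values are our illustrative assumptions; the paper fits models of this family with Bayesian inference in Stan.

```python
import numpy as np

def simulate_sv(n=500, mu=-9.0, phi=0.95, sigma_eta=0.2, beta=0.5, seed=2):
    """Discrete-time stochastic volatility with an exogenous regressor:

        h_t = mu + phi*(h_{t-1} - mu) + beta*x_t + eta_t,  eta ~ N(0, sigma_eta^2)
        y_t = exp(h_t / 2) * eps_t,                        eps ~ N(0, 1)

    h_t is the latent log-variance; x_t is the exogenous driver.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)          # hypothetical exogenous regressor
    h = np.empty(n)
    y = np.empty(n)
    h_prev = mu
    for t in range(n):
        h[t] = (mu + phi * (h_prev - mu) + beta * x[t]
                + sigma_eta * rng.standard_normal())
        y[t] = np.exp(h[t] / 2.0) * rng.standard_normal()
        h_prev = h[t]
    return y, h, x

y, h, x = simulate_sv()
print(len(y))
```

The persistence parameter `phi` close to 1 is what produces the volatility clustering the baseline model captures; the `beta*x_t` term is the "enrichment" with exogenous information.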

June 9, 2024 · 2 min · Research Team

Quantifying neural network uncertainty under volatility clustering

Quantifying neural network uncertainty under volatility clustering ArXiv ID: 2402.14476 “View on arXiv” Authors: Unknown Abstract Time-series with volatility clustering pose a unique challenge to uncertainty quantification (UQ) for returns forecasts. Methods for UQ such as Deep Evidential regression offer a simple way of quantifying return forecast uncertainty without the costs of a full Bayesian treatment. However, the Normal-Inverse-Gamma (NIG) prior adopted by Deep Evidential regression is prone to miscalibration, as the NIG prior is assigned to latent mean and variance parameters in a hierarchical structure. Moreover, it also overparameterizes the marginal data distribution. These limitations may affect the accurate delineation of epistemic (model) and aleatoric (data) uncertainties. We propose a Scale Mixture Distribution as a simpler alternative which can provide a favorable complexity-accuracy trade-off and assign separate subnetworks to each model parameter. To illustrate the performance of our proposed method, we apply it to two sets of financial time-series exhibiting volatility clustering, cryptocurrencies and U.S. equities, and test the performance in ablation studies. ...
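The scale-mixture idea itself is easy to demonstrate: mixing a normal over a Gamma-distributed precision yields a Student-t marginal, so fat tails emerge from a simple two-level model. This is a generic sketch of the distributional family, not the paper's subnetwork parameterization; parameter values are illustrative.

```python
import numpy as np

def sample_scale_mixture(n=100000, mu=0.0, sigma=1.0, nu=5.0, seed=3):
    """Normal scale mixture: y = mu + sigma * eps / sqrt(lam),
    with lam ~ Gamma(nu/2, rate=nu/2) and eps ~ N(0, 1).
    Marginally, y is Student-t with nu degrees of freedom."""
    rng = np.random.default_rng(seed)
    lam = rng.gamma(shape=nu / 2.0, scale=2.0 / nu, size=n)  # rate = nu/2
    eps = rng.standard_normal(n)
    return mu + sigma * eps / np.sqrt(lam)

y = sample_scale_mixture()
# Student-t(5) has excess kurtosis 6, far above the Gaussian's 0
kurt = ((y - y.mean()) ** 4).mean() / y.var() ** 2 - 3.0
print(kurt > 1.0)
```

Because only `nu` and a scale govern the tails, the marginal has fewer effective parameters than the NIG construction, which is the complexity-accuracy trade-off the abstract alludes to.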

February 22, 2024 · 2 min · Research Team

Bayesian Analysis of High Dimensional Vector Error Correction Model

Bayesian Analysis of High Dimensional Vector Error Correction Model ArXiv ID: 2312.17061 “View on arXiv” Authors: Unknown Abstract The Vector Error Correction Model (VECM) is a classic method for analysing cointegration relationships amongst multivariate non-stationary time series. In this paper, we focus on the high-dimensional setting and seek a sample-size-efficient methodology to determine the level of cointegration. Our investigation centres on a Bayesian approach to analysing the cointegration matrix, and thereby determining the cointegration rank. We design two algorithms and implement them on simulated examples, yielding promising results particularly when dealing with a high number of variables and a relatively low number of observations. Furthermore, we extend this methodology to empirically investigate the constituents of the S&P 500 index, where low-volatility portfolios can be found during both in-sample training and out-of-sample testing periods. ...
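The error-correction mechanism behind a VECM can be illustrated in two dimensions: simulate a pair of series sharing a random-walk trend, then check that the lagged spread predicts mean reversion. This is a toy frequentist check of the cointegration structure, not the paper's Bayesian rank-determination algorithms; all numbers are made up.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2000
trend = np.cumsum(rng.standard_normal(n))      # shared random-walk trend
y1 = trend + 0.5 * rng.standard_normal(n)      # both series load on it,
y2 = trend + 0.5 * rng.standard_normal(n)      # so y1 - y2 is stationary

spread = y1 - y2
# Error-correction regression: dy1_t = alpha * spread_{t-1} + e_t.
# A clearly negative alpha is the cointegration signature: y1 is pulled
# back toward y2 whenever the spread widens.
dy1 = np.diff(y1)
X = spread[:-1]
alpha = (X @ dy1) / (X @ X)                    # OLS slope, no intercept
print(alpha < 0)
```

In the high-dimensional setting the paper targets, the analogue of `alpha` becomes a low-rank matrix, and its rank (the number of cointegrating relations) is what the Bayesian procedure estimates.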

December 28, 2023 · 2 min · Research Team

Black-Litterman, Bayesian Shrinkage, and Factor Models in Portfolio Selection: You Can Have It All

Black-Litterman, Bayesian Shrinkage, and Factor Models in Portfolio Selection: You Can Have It All ArXiv ID: 2308.09264 “View on arXiv” Authors: Unknown Abstract Mean-variance analysis is widely used in portfolio management to identify the best portfolio that makes an optimal trade-off between expected return and volatility. Yet, this method has its limitations, notably its vulnerability to estimation errors and its reliance on historical data. While shrinkage estimators and factor models have been introduced to improve estimation accuracy through bias-variance trade-offs, and the Black-Litterman model has been developed to integrate investor opinions, a unified framework combining all three approaches has been lacking. Our study introduces a Bayesian blueprint that fuses shrinkage estimation with view inclusion, conceptualizing both as Bayesian updates. This model is then applied within the context of Fama-French factor models, thereby integrating the advantages of each methodology. Finally, through a comprehensive empirical study of the US equity market spanning a decade, we show that the model outperforms both the simple $1/N$ portfolio and the optimal portfolios based on sample estimators. ...
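The "views as a Bayesian update" idea is exactly the standard Black-Litterman posterior mean, which a few lines of numpy make concrete. The formula below is the textbook BL update; the three-asset numbers are illustrative, and this sketch omits the shrinkage and factor-model layers the paper adds on top.

```python
import numpy as np

def black_litterman_mean(pi, Sigma, tau, P, q, Omega):
    """Black-Litterman posterior mean: a Bayesian update blending the
    equilibrium (prior) returns pi with investor views q = P @ mu + noise.

        mu_post = [(tau*Sigma)^-1 + P' Omega^-1 P]^-1
                  [(tau*Sigma)^-1 pi + P' Omega^-1 q]
    """
    A = np.linalg.inv(tau * Sigma)             # prior precision
    Oinv = np.linalg.inv(Omega)                # view precision
    return np.linalg.solve(A + P.T @ Oinv @ P,
                           A @ pi + P.T @ Oinv @ q)

# Three assets; one view: asset 0 outperforms asset 1 by 2%.
pi = np.array([0.04, 0.05, 0.06])              # equilibrium returns (prior)
Sigma = np.diag([0.04, 0.05, 0.06])
P = np.array([[1.0, -1.0, 0.0]])
q = np.array([0.02])
Omega = np.array([[0.001]])                    # view uncertainty
mu = black_litterman_mean(pi, Sigma, 0.05, P, q, Omega)
print(mu)
```

The update moves asset 0 above and asset 1 below their prior means while leaving asset 2 (not touched by the view) unchanged, which shows why view inclusion and shrinkage can both be read as the same kind of posterior computation.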

August 18, 2023 · 2 min · Research Team

Large Skew-t Copula Models and Asymmetric Dependence in Intraday Equity Returns

Large Skew-t Copula Models and Asymmetric Dependence in Intraday Equity Returns ArXiv ID: 2308.05564 “View on arXiv” Authors: Unknown Abstract Skew-t copula models are attractive for the modeling of financial data because they allow for asymmetric and extreme tail dependence. We show that the copula implicit in the skew-t distribution of Azzalini and Capitanio (2003) allows for a higher level of pairwise asymmetric dependence than two popular alternative skew-t copulas. Estimation of this copula in high dimensions is challenging, and we propose a fast and accurate Bayesian variational inference (VI) approach to do so. The method uses a generative representation of the skew-t distribution to define an augmented posterior that can be approximated accurately. A stochastic gradient ascent algorithm is used to solve the variational optimization. The methodology is used to estimate skew-t factor copula models with up to 15 factors for intraday returns from 2017 to 2021 on 93 U.S. equities. The copula captures substantial heterogeneity in asymmetric dependence over equity pairs, in addition to the variability in pairwise correlations. In a moving window study we show that the asymmetric dependencies also vary over time, and that intraday predictive densities from the skew-t copula are more accurate than those from benchmark copula models. Portfolio selection strategies based on the estimated pairwise asymmetric dependencies improve performance relative to the index. ...
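The generative representation the abstract mentions is concrete for the Azzalini-Capitanio skew-t and takes only a few lines to sample. This shows the univariate building block with illustrative parameters, not the factor copula or the variational estimator.

```python
import numpy as np

def sample_skew_t(n=100000, delta=0.8, nu=8.0, seed=5):
    """Generative representation of the Azzalini-Capitanio skew-t:

        skew-normal:  Z = delta*|U0| + sqrt(1 - delta^2)*U1
        skew-t:       X = Z / sqrt(V / nu),  V ~ chi^2_nu

    delta in (-1, 1) controls asymmetry; nu controls tail weight.
    """
    rng = np.random.default_rng(seed)
    u0 = np.abs(rng.standard_normal(n))        # half-normal skewing variable
    u1 = rng.standard_normal(n)
    z = delta * u0 + np.sqrt(1.0 - delta**2) * u1
    v = rng.chisquare(nu, size=n)
    return z / np.sqrt(v / nu)

x = sample_skew_t()
# positive delta produces a right-skewed sample
skew = ((x - x.mean()) ** 3).mean() / x.std() ** 3
print(skew > 0)
```

Conditioning this construction on latent factors is what allows the paper's augmented posterior to be approximated accurately by variational inference.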

August 10, 2023 · 2 min · Research Team

Bayesian framework for characterizing cryptocurrency market dynamics, structural dependency, and volatility using potential field

Bayesian framework for characterizing cryptocurrency market dynamics, structural dependency, and volatility using potential field ArXiv ID: 2308.01013 “View on arXiv” Authors: Unknown Abstract Identifying the structural dependence between cryptocurrencies and predicting market trends are fundamental for effective portfolio management in cryptocurrency trading. In this paper, we present a unified Bayesian framework based on potential field theory and Gaussian Processes to characterize the structural dependency of various cryptocurrencies, using historical price information. The following are our significant contributions: (i) Proposed a novel model for cryptocurrency price movements as a trajectory of a dynamical system governed by a time-varying non-linear potential field. (ii) Validated the existence of the non-linear potential function in the cryptocurrency market through Lyapunov stability analysis. (iii) Developed a Bayesian framework for inferring the non-linear potential function from observed cryptocurrency prices. (iv) Proposed that attractors and repellers inferred from the potential field are reliable cryptocurrency market indicators, surpassing existing attributes in the literature, such as the mean, open price, or close price of an observation window. (v) Analysis of the cryptocurrency market during various Bitcoin crash periods from April 2017 to November 2021 shows that attractors captured the market trend, volatility, and correlation. In addition, attractors aid explainability and visualization. (vi) The structural dependence inferred by the proposed approach was found to be consistent with results obtained using the popular wavelet coherence approach. (vii) The proposed market indicators (attractors and repellers) can be used to improve the prediction performance of state-of-the-art deep learning price prediction models. As an example, we show improvement in Litecoin price prediction up to a horizon of 12 days. ...
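The "price as a trajectory in a potential field" picture can be sketched with the simplest possible potential, a quadratic well, whose minimum acts as an attractor. This is our toy stand-in for the paper's time-varying non-linear potential and Gaussian-Process inference; all parameter values are hypothetical.

```python
import numpy as np

def simulate_price_in_potential(n=5000, dt=0.01, attractor=100.0, k=2.0,
                                noise=1.0, seed=6):
    """Price as a noisy trajectory in a potential U(p) = (k/2)*(p - a)^2:

        dp = -U'(p) dt + noise dW

    The minimum of U (p = attractor) is the market attractor the price
    keeps reverting toward; a maximum of U would act as a repeller.
    """
    rng = np.random.default_rng(seed)
    p = np.empty(n)
    p[0] = 120.0                               # start away from the attractor
    for t in range(1, n):
        drift = -k * (p[t - 1] - attractor)    # -dU/dp
        p[t] = (p[t - 1] + drift * dt
                + noise * np.sqrt(dt) * rng.standard_normal())
    return p

p = simulate_price_in_potential()
print(abs(p[-2000:].mean() - 100.0) < 2.0)
```

Inferring `U` itself from observed trajectories, rather than assuming a quadratic form, is where the paper's Bayesian framework with Gaussian Processes comes in.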

August 2, 2023 · 3 min · Research Team