
Robust Optimization in Causal Models and G-Causal Normalizing Flows

Robust Optimization in Causal Models and G-Causal Normalizing Flows ArXiv ID: 2510.15458 “View on arXiv” Authors: Gabriele Visentin, Patrick Cheridito Abstract In this paper, we show that interventionally robust optimization problems in causal models are continuous under the $G$-causal Wasserstein distance, but may be discontinuous under the standard Wasserstein distance. This highlights the importance of using generative models that respect the causal structure when augmenting data for such tasks. To this end, we propose a new normalizing flow architecture that satisfies a universal approximation property for causal structural models and can be efficiently trained to minimize the $G$-causal Wasserstein distance. Empirically, we demonstrate that our model outperforms standard (non-causal) generative models in data augmentation for causal regression and mean-variance portfolio optimization in causal factor models. ...
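
As a point of reference for the continuity result, the standard Wasserstein-p distance minimizes a transport cost over all couplings; my reading of the abstract is that the G-causal variant restricts that set to couplings compatible with the causal graph G, so treat the second display below as a sketch rather than the paper's exact definition.

```latex
% Standard Wasserstein-p distance between laws \mu and \nu:
W_p(\mu,\nu) = \Big( \inf_{\pi \in \Pi(\mu,\nu)} \int \lVert x-y \rVert^p \,\mathrm{d}\pi(x,y) \Big)^{1/p}
% Sketch of the G-causal variant: same objective, but the infimum runs over a
% graph-compatible subset \Pi_G(\mu,\nu) \subseteq \Pi(\mu,\nu) (assumption; the
% paper gives the precise restriction):
W_p^G(\mu,\nu) = \Big( \inf_{\pi \in \Pi_G(\mu,\nu)} \int \lVert x-y \rVert^p \,\mathrm{d}\pi(x,y) \Big)^{1/p}
```

Under this reading, restricting the couplings makes W_p^G at least as large as W_p, so convergence in the G-causal distance is the stronger requirement, which is consistent with the abstract's claim that the robust problems are continuous under the G-causal distance but can be discontinuous under the standard one.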

October 17, 2025 · 2 min · Research Team

SoK: Market Microstructure for Decentralized Prediction Markets (DePMs)

SoK: Market Microstructure for Decentralized Prediction Markets (DePMs) ArXiv ID: 2510.15612 “View on arXiv” Authors: Nahid Rahman, Joseph Al-Chami, Jeremy Clark Abstract Decentralized prediction markets (DePMs) allow open participation in event-based wagering without fully relying on centralized intermediaries. We review the history of DePMs, which dates back to 2011 and includes hundreds of proposals. Perhaps surprisingly, modern DePMs like Polymarket deviate materially from earlier designs such as Truthcoin and Augur v1. We use our review to present a modular workflow comprising seven stages: underlying infrastructure, market topic, share structure and pricing, trading, market resolution, settlement, and archiving. For each module, we enumerate the design variants, analyzing trade-offs around decentralization, expressiveness, and manipulation resistance. We also identify open problems for researchers interested in this ecosystem. ...
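
The seven-module workflow listed in the abstract maps naturally onto a pipeline skeleton. The sketch below uses the stage names verbatim from the abstract; the class and function names are hypothetical.

```python
# Skeleton of the seven-stage DePM workflow named in the abstract.
# Stage names follow the abstract; types and the handler signature are hypothetical.
from enum import Enum, auto
from typing import Callable, Dict

class DePMStage(Enum):
    UNDERLYING_INFRASTRUCTURE = auto()
    MARKET_TOPIC = auto()
    SHARE_STRUCTURE_AND_PRICING = auto()
    TRADING = auto()
    MARKET_RESOLUTION = auto()
    SETTLEMENT = auto()
    ARCHIVING = auto()

def run_market_lifecycle(handlers: Dict[DePMStage, Callable[[dict], dict]], market: dict) -> dict:
    """Push a market through each module in order; each handler is one design variant."""
    for stage in DePMStage:
        market = handlers[stage](market)
    return market
```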

October 17, 2025 · 2 min · Research Team

Toward Black Scholes for Prediction Markets: A Unified Kernel and Market Maker's Handbook

Toward Black Scholes for Prediction Markets: A Unified Kernel and Market Maker’s Handbook ArXiv ID: 2510.15205 “View on arXiv” Authors: Shaw Dalen Abstract Prediction markets, such as Polymarket, aggregate dispersed information into tradable probabilities, but they still lack a unifying stochastic kernel comparable to the one options gained from Black-Scholes. As these markets scale with institutional participation, exchange integrations, and higher volumes around elections and macro prints, market makers face belief volatility, jump, and cross-event risks without standardized tools for quoting or hedging. We propose such a foundation: a logit jump-diffusion with risk-neutral drift that treats the traded probability p_t as a Q-martingale and exposes belief volatility, jump intensity, and dependence as quotable risk factors. On top, we build a calibration pipeline that filters microstructure noise, separates diffusion from jumps using expectation-maximization, enforces the risk-neutral drift, and yields a stable belief-volatility surface. We then define a coherent derivative layer (variance, correlation, corridor, and first-passage instruments) analogous to volatility and correlation products in option markets. In controlled experiments on synthetic risk-neutral paths and real event data, the model reduces short-horizon belief-variance forecast error relative to diffusion-only and probability-space baselines, supporting both causal calibration and economic interpretability. Conceptually, the logit jump-diffusion kernel supplies an implied-volatility analogue for prediction markets: a tractable, tradable language for quoting, hedging, and transferring belief risk across venues such as Polymarket. ...
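
One plausible way to write down such a kernel (a sketch under my own assumptions, not the paper's stated equations) is to put a jump-diffusion on the log-odds and fix the drift so that the traded probability itself is a Q-martingale.

```latex
% Sketch only: logit jump-diffusion for a traded probability p_t, with the drift
% pinned by the martingale requirement. The paper's exact parameterization of the
% jump law and of the risk-neutral drift correction may differ.
X_t = \operatorname{logit}(p_t) = \log\!\frac{p_t}{1-p_t}, \qquad
\mathrm{d}X_t = \mu(X_{t-})\,\mathrm{d}t + \sigma\,\mathrm{d}W_t^{\mathbb{Q}} + \mathrm{d}J_t
```

Here W is a Q-Brownian motion, J a compound-Poisson jump component with intensity lambda, and the drift mu is chosen so that p_t = sigmoid(X_t) is a Q-martingale, i.e. it exactly offsets the Itô convexity and jump-compensator terms introduced by the sigmoid transform. Belief volatility sigma and jump intensity lambda then become the quotable risk factors the abstract refers to.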

October 17, 2025 · 2 min · Research Team

Wariness and Poverty Traps

Wariness and Poverty Traps ArXiv ID: 2510.14418 “View on arXiv” Authors: Hai Ha Pham, Ngoc-Sang Pham Abstract We investigate the effects of wariness (defined as individuals’ concern for their minimum utility over time) on poverty traps and equilibrium multiplicity in an overlapping generations (OLG) model. We explore conditions under which (i) wariness amplifies or mitigates the likelihood of poverty traps in the economy and (ii) it gives rise to multiple intertemporal equilibria. Furthermore, we conduct comparative statics to characterize these effects and to examine how the interplay between wariness, productivity, and factor substitutability influences the dynamics of the economy. ...

October 16, 2025 · 2 min · Research Team

Institutional Differences, Crisis Shocks, and Volatility Structure: A By-Window EGARCH/TGARCH Analysis of ASEAN Stock Markets

Institutional Differences, Crisis Shocks, and Volatility Structure: A By-Window EGARCH/TGARCH Analysis of ASEAN Stock Markets ArXiv ID: 2510.16010 “View on arXiv” Authors: Junlin Yang Abstract This study examines how institutional differences and external crises shape volatility dynamics in emerging Asian stock markets. Using daily stock index returns for Indonesia, Malaysia, and the Philippines from 2010 to 2024, we estimate EGARCH(1,1) and TGARCH(1,1) models in a by-window design. The sample is split into the 2013 Taper Tantrum, the 2020-2021 COVID-19 period, the 2022-2023 rate-hike cycle, and tranquil phases. Prior work typically studies a single market or a static period; to our knowledge no study unifies institutional comparison with multi-crisis dynamics within one GARCH framework. We address this gap and show that all three markets display strong volatility persistence and fat-tailed returns. During crises both persistence and asymmetry increase, while tail thickness rises, implying more frequent extreme moves. After crises, parameters revert toward pre-shock levels. Cross-country evidence indicates a buffering role of institutional maturity: Malaysia's stronger regulatory and information systems dampen amplification and speed recovery, whereas the Philippines' thinner market structure prolongs instability. We conclude that crises amplify volatility structures, while institutional robustness governs recovery speed. The results provide policy guidance on transparency, macroprudential communication, and liquidity support to reduce volatility persistence during global shocks. ...
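
For reference, the textbook forms of the two conditional-variance recursions used here are given below (standard parameterizations; the paper's exact specification, e.g. the Zakoian rather than the GJR form of TGARCH, may differ).

```latex
% EGARCH(1,1): log-variance recursion with sign asymmetry via \gamma,
% where z_{t-1} = \varepsilon_{t-1}/\sigma_{t-1} is the standardized shock.
\ln\sigma_t^2 = \omega + \beta\,\ln\sigma_{t-1}^2
              + \alpha\big(\lvert z_{t-1}\rvert - \mathbb{E}\lvert z_{t-1}\rvert\big)
              + \gamma\, z_{t-1}

% TGARCH(1,1) in its GJR form: extra weight \gamma on squared negative shocks.
\sigma_t^2 = \omega + \alpha\,\varepsilon_{t-1}^2
           + \gamma\,\varepsilon_{t-1}^2\,\mathbf{1}\{\varepsilon_{t-1}<0\}
           + \beta\,\sigma_{t-1}^2
```

In both recursions, a positive asymmetry term means negative shocks raise next-period volatility more than positive shocks of the same size, which is the leverage effect the by-window comparison tracks across crisis and tranquil phases.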

October 15, 2025 · 2 min · Research Team

Market-Based Variance of Market Portfolio and of Entire Market

Market-Based Variance of Market Portfolio and of Entire Market ArXiv ID: 2510.13790 “View on arXiv” Authors: Victor Olkhov Abstract We present the unified market-based description of returns and variances of the trades with shares of a particular security, of the trades with shares of all securities in the market, and of the trades with the market portfolio. We consider an investor who does not trade the shares of the portfolio he assembled at time t0 in the past. The investor observes the time series of the current trades with all securities made in the market during the averaging interval. The investor may convert these time series into time series that model the trades with all securities as trades with a single security, and into time series that model the trades with the market portfolio as trades with a single security. That establishes the same description of the returns and variances of the trades with a single security, the trades with all securities in the market, and the market portfolio. We show that the market-based variance, which accounts for the impact of random changes in the volumes of consecutive trades with securities, takes the form of Markowitz’s (1952) portfolio variance if the volumes of consecutive trades with all market securities are assumed constant. That highlights that Markowitz’s (1952) variance ignores the effects of random volumes of consecutive trades. We compare the market-based variances of the market portfolio and of the trades with all market securities, consider the importance of the duration of the averaging interval, and explain the economic obstacles that limit the accuracy of predictions of returns and variances to, at best, Gaussian distributions. The same methods describe the returns and variances of any portfolio and the trades with its securities. ...
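
The Markowitz (1952) benchmark referenced in the abstract is the familiar quadratic form below; the paper's market-based variance adds terms reflecting the randomness of consecutive trade volumes, which are not reproduced here.

```latex
% Markowitz (1952) portfolio variance for weights w and return covariance matrix \Sigma.
% Per the abstract, the market-based variance reduces to this form when the volumes
% of consecutive trades with all securities are assumed constant.
\sigma_p^2 = w^{\top}\Sigma\,w = \sum_{i}\sum_{j} w_i\, w_j\, \mathrm{Cov}(r_i, r_j)
```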

October 15, 2025 · 3 min · Research Team

Multifractality and its sources in the digital currency market

Multifractality and its sources in the digital currency market ArXiv ID: 2510.13785 “View on arXiv” Authors: Stanisław Drożdż, Robert Kluszczyński, Jarosław Kwapień, Marcin Wątorek Abstract Multifractality in time series analysis characterizes the presence of multiple scaling exponents, indicating heterogeneous temporal structures and complex dynamical behaviors beyond simple monofractal models. In the context of digital currency markets, multifractal properties arise due to the interplay of long-range temporal correlations and heavy-tailed distributions of returns, reflecting intricate market microstructure and trader interactions. Incorporating multifractal analysis into the modeling of cryptocurrency price dynamics enhances the understanding of market inefficiencies, may improve volatility forecasting and facilitate the detection of critical transitions or regime shifts. Building on multifractal cross-correlation analysis (MFCCA), whose special case, multifractal detrended fluctuation analysis (MFDFA), is the most commonly used practical tool for quantifying multifractality, the present contribution applies a recently proposed method for disentangling the sources of multifractality in time series to the most representative instruments of the digital market: Bitcoin (BTC), Ethereum (ETH), decentralized exchanges (DEX), and non-fungible tokens (NFT). The results indicate the significant role of heavy tails in generating a broad multifractal spectrum. However, they also clearly demonstrate that the primary source of multifractality is the temporal correlations in the series; without them, multifractality fades out. Characteristically, these temporal correlations are, to a large extent, independent of the thickness of the tails of the fluctuation distribution. These observations, made here in the context of the digital currency market, provide a further strong argument for the validity of the proposed methodology of disentangling sources of multifractality in time series. ...
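
For context, the MFDFA quantities usually reported in such studies are the q-th order fluctuation function and its scaling exponent (standard definitions; the paper's MFCCA-based approach generalizes these to cross-correlations).

```latex
% Standard MFDFA: F^2(\nu, s) is the detrended variance of the profile in segment \nu
% at scale s, N_s is the number of segments, and h(q) is the generalized Hurst exponent.
F_q(s) = \Bigg( \frac{1}{N_s} \sum_{\nu=1}^{N_s} \big[ F^2(\nu, s) \big]^{q/2} \Bigg)^{1/q}
\;\sim\; s^{\,h(q)}, \qquad q \neq 0
```

A q-dependent h(q), and hence a broad singularity spectrum, signals multifractality; for a monofractal series h(q) is constant and equal to the Hurst exponent H. Shuffling a series destroys temporal correlations while preserving the return distribution, which is one standard device for separating correlation-driven from distribution-driven multifractality of the kind the abstract discusses.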

October 15, 2025 · 3 min · Research Team

On Evaluating Loss Functions for Stock Ranking: An Empirical Analysis With Transformer Model

On Evaluating Loss Functions for Stock Ranking: An Empirical Analysis With Transformer Model ArXiv ID: 2510.14156 “View on arXiv” Authors: Jan Kwiatkowski, Jarosław A. Chudziak Abstract Quantitative trading strategies rely on accurately ranking stocks to identify profitable investments. Effective portfolio management requires models that can reliably order future stock returns. Transformer models are promising for understanding financial time series, but how different training loss functions affect their ability to rank stocks well is not yet fully understood. Financial markets are challenging due to their changing nature and complex relationships between stocks. Standard loss functions, which aim for simple prediction accuracy, often aren’t enough. They don’t directly teach models to learn the correct order of stock returns. While many advanced ranking losses exist from fields such as information retrieval, there hasn’t been a thorough comparison to see how well they work for ranking financial returns, especially when used with modern Transformer models for stock selection. This paper addresses this gap by systematically evaluating a diverse set of advanced loss functions, including pointwise, pairwise, and listwise formulations, for daily stock return forecasting to facilitate rank-based portfolio selection on S&P 500 data. We focus on assessing how each loss function influences the model’s ability to discern profitable relative orderings among assets. Our research contributes a comprehensive benchmark revealing how different loss functions impact a model’s ability to learn cross-sectional and temporal patterns crucial for portfolio selection, thereby offering practical guidance for optimizing ranking-based trading strategies. ...
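
As an illustration of the three loss families the abstract evaluates, here are generic textbook formulations (not necessarily the exact losses benchmarked in the paper); tensor shapes and function names are hypothetical.

```python
# Illustrative pointwise / pairwise / listwise losses for cross-sectional return ranking.
# `scores` are model outputs and `returns` realized next-period returns, both (batch, n_stocks).
import torch
import torch.nn.functional as F

def pointwise_mse(scores, returns):
    """Pointwise: regress each stock's score on its realized return."""
    return F.mse_loss(scores, returns)

def pairwise_logistic(scores, returns):
    """Pairwise (RankNet-style): penalize score orderings that disagree with return orderings."""
    s_diff = scores.unsqueeze(-1) - scores.unsqueeze(-2)   # (B, N, N) score differences
    r_diff = returns.unsqueeze(-1) - returns.unsqueeze(-2)
    sign = torch.sign(r_diff)
    mask = sign != 0                                       # skip tied pairs
    return F.softplus(-sign[mask] * s_diff[mask]).mean()

def listwise_listnet(scores, returns):
    """Listwise (ListNet-style): cross-entropy between top-one probability distributions."""
    p_true = F.softmax(returns, dim=-1)
    log_p_pred = F.log_softmax(scores, dim=-1)
    return -(p_true * log_p_pred).sum(dim=-1).mean()

if __name__ == "__main__":
    torch.manual_seed(0)
    scores, returns = torch.randn(8, 50), torch.randn(8, 50) * 0.02
    for loss in (pointwise_mse, pairwise_logistic, listwise_listnet):
        print(loss.__name__, float(loss(scores, returns)))
```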

October 15, 2025 · 3 min · Research Team

(Non-Parametric) Bootstrap Robust Optimization for Portfolios and Trading Strategies

(Non-Parametric) Bootstrap Robust Optimization for Portfolios and Trading Strategies ArXiv ID: 2510.12725 “View on arXiv” Authors: Daniel Cunha Oliveira, Grover Guzman, Nick Firoozye Abstract Robust optimization provides a principled framework for decision-making under uncertainty, with broad applications in finance, engineering, and operations research. In portfolio optimization, uncertainty in expected returns and covariances demands methods that mitigate estimation error, parameter instability, and model misspecification. Traditional approaches, including parametric, bootstrap-based, and Bayesian methods, enhance stability by relying on confidence intervals or probabilistic priors but often impose restrictive assumptions. This study introduces a non-parametric bootstrap framework for robust optimization in financial decision-making. By resampling empirical data, the framework constructs flexible, data-driven confidence intervals without assuming specific distributional forms, thus capturing uncertainty in statistical estimates, model parameters, and utility functions. Treating utility as a random variable enables percentile-based optimization, naturally suited for risk-sensitive and worst-case decision-making. The approach aligns with recent advances in robust optimization, reinforcement learning, and risk-aware control, offering a unified perspective on robustness and generalization. Empirically, the framework mitigates overfitting and selection bias in trading strategy optimization and improves generalization in portfolio allocation. Results across portfolio and time-series momentum experiments demonstrate that the proposed method delivers smoother, more stable out-of-sample performance, offering a practical, distribution-free alternative to traditional robust optimization methods. ...
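
A minimal sketch of the percentile-based idea described in the abstract (my reading, with hypothetical names; not the authors' code): resample the historical sample with replacement, recompute a utility on each resample, and score a candidate portfolio by a low percentile of the resulting utility distribution.

```python
# Non-parametric bootstrap of a portfolio utility (here a Sharpe-like ratio), scored
# at a low percentile so that optimization targets near-worst-case performance.
import numpy as np

def bootstrap_percentile_utility(returns, weights, n_boot=1000, pct=5, seed=0):
    """returns: (T, n_assets) historical returns; weights: (n_assets,) portfolio weights."""
    rng = np.random.default_rng(seed)
    T = returns.shape[0]
    utilities = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, T, size=T)          # resample rows with replacement
        port = returns[idx] @ weights             # bootstrapped portfolio return path
        utilities[b] = port.mean() / (port.std(ddof=1) + 1e-12)
    return np.percentile(utilities, pct)          # pessimistic (low-percentile) utility

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    rets = rng.normal(0.0005, 0.01, size=(750, 4))  # toy daily returns for 4 assets
    w = np.full(4, 0.25)                             # equal-weight candidate portfolio
    print("5th-percentile bootstrap Sharpe:", bootstrap_percentile_utility(rets, w))
```

Because the resampling is distribution-free, the same scoring function can wrap any strategy parameterization, which is what allows the framework to treat utility itself as a random variable.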

October 14, 2025 · 2 min · Research Team

Aligning Language Models with Investor and Market Behavior for Financial Recommendations

Aligning Language Models with Investor and Market Behavior for Financial Recommendations ArXiv ID: 2510.15993 “View on arXiv” Authors: Fernando Spadea, Oshani Seneviratne Abstract Financial recommendation systems often fail to account for key behavioral and regulatory factors, leading to advice that is misaligned with user preferences, difficult to interpret, or unlikely to be followed. We present FLARKO (Financial Language-model for Asset Recommendation with Knowledge-graph Optimization), a novel framework that integrates Large Language Models (LLMs), Knowledge Graphs (KGs), and Kahneman-Tversky Optimization (KTO) to generate asset recommendations that are both profitable and behaviorally aligned. FLARKO encodes users’ transaction histories and asset trends as structured KGs, providing interpretable and controllable context for the LLM. To demonstrate the adaptability of our approach, we develop and evaluate both a centralized architecture (CenFLARKO) and a federated variant (FedFLARKO). To our knowledge, this is the first demonstration of using KTO to fine-tune LLMs for financial asset recommendation. We also present the first use of structured KGs to ground LLM reasoning over behavioral financial data in a federated learning (FL) setting. Evaluated on the FAR-Trans dataset, FLARKO consistently outperforms state-of-the-art recommendation baselines on behavioral alignment and joint profitability, while remaining interpretable and resource-efficient. ...
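
As a toy illustration of the "transaction history as structured KG context" idea, one might serialize graph triples into the LLM prompt; the names and schema below are entirely hypothetical and not the FLARKO implementation.

```python
# Toy illustration: render a user's transaction history and asset trends as
# (subject, predicate, object) triples and pass them as plain-text LLM context.
# Names and schema are hypothetical; this is not the FLARKO implementation.
def triples_to_prompt(triples: list, question: str) -> str:
    """Render (subject, predicate, object) triples as a readable prompt context."""
    lines = [f"({s}) --[{p}]--> ({o})" for s, p, o in triples]
    return "Knowledge graph context:\n" + "\n".join(lines) + f"\n\nTask: {question}\n"

if __name__ == "__main__":
    kg = [
        ("user_123", "bought", "ASSET_A"),
        ("ASSET_A", "sector", "technology"),
        ("ASSET_A", "30d_trend", "up_4_percent"),
    ]
    print(triples_to_prompt(kg, "Recommend one asset consistent with this user's behavior."))
```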

October 14, 2025 · 2 min · Research Team