
Trading with market resistance and concave price impact

Trading with market resistance and concave price impact ArXiv ID: 2601.03215 “View on arXiv” Authors: Youssef Ouazzani Chahdi, Nathan De Carvalho, Grégoire Szymanski Abstract We consider an optimal trading problem under a market impact model with endogenous market resistance generated by a sophisticated trader who (partially) detects metaorders and trades against them to exploit price overreactions induced by the order flow. The model features a concave transient impact driven by a power-law propagator with a resistance term responding to the trader’s rate via a fixed-point equation involving a general resistance function. We derive a (non)linear stochastic Fredholm equation as the first-order optimality condition satisfied by optimal trading strategies. Existence and uniqueness of the optimal control are established when the resistance function is linear, and an existence result is obtained when it is strictly convex using coercivity and weak lower semicontinuity of the associated profit-and-loss functional. We also propose an iterative scheme to solve the nonlinear stochastic Fredholm equation and prove an exponential convergence rate. Numerical experiments confirm this behavior and illustrate optimal round-trip strategies under “buy” signals with various decay profiles and different market resistance specifications. ...
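
For intuition on how such an optimality condition can be solved numerically, here is a minimal sketch of a Picard-type iteration for a discretized Fredholm equation with a power-law kernel; the kernel, the resistance stand-in, and every constant are placeholder assumptions, not the paper's scheme or calibration.

```python
# A minimal sketch, not the paper's scheme: plain Picard iteration for a
# discretized Fredholm-type optimality condition
#   u(t) = b(t) - c * \int_0^t K(t, s) * phi(u(s)) ds,
# with a power-law kernel. The kernel, the resistance stand-in phi, and all
# constants are illustrative assumptions chosen so the fixed-point map contracts.
import numpy as np

T, n = 1.0, 200
t = np.linspace(0.0, T, n)
dt = t[1] - t[0]
gamma, eps, c = 0.5, 1e-3, 0.2

diff = t[:, None] - t[None, :]
# Causal power-law "propagator-like" kernel: only past times s <= t contribute.
K = np.where(diff >= 0, (np.abs(diff) + eps) ** (-gamma), 0.0)

b = np.ones(n)                          # stand-in for a constant "buy" signal
phi = lambda u: u + 0.1 * np.tanh(u)    # mild stand-in for a resistance function

u = np.zeros(n)
for k in range(100):
    u_new = b - c * dt * (K @ phi(u))   # fixed-point map; c*dt*||K|| < 1 here,
    err = np.max(np.abs(u_new - u))     # so the iterates converge geometrically
    u = u_new
    if err < 1e-12:
        break
print(f"converged in {k + 1} iterations; rate range [{u.min():.3f}, {u.max():.3f}]")
```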

January 6, 2026 · 2 min · Research Team

Uni-FinLLM: A Unified Multimodal Large Language Model with Modular Task Heads for Micro-Level Stock Prediction and Macro-Level Systemic Risk Assessment

Uni-FinLLM: A Unified Multimodal Large Language Model with Modular Task Heads for Micro-Level Stock Prediction and Macro-Level Systemic Risk Assessment ArXiv ID: 2601.02677 “View on arXiv” Authors: Gongao Zhang, Haijiang Zeng, Lu Jiang Abstract Financial institutions and regulators require systems that integrate heterogeneous data to assess risks from stock fluctuations to systemic vulnerabilities. Existing approaches often treat these tasks in isolation, failing to capture cross-scale dependencies. We propose Uni-FinLLM, a unified multimodal large language model that uses a shared Transformer backbone and modular task heads to jointly process financial text, numerical time series, fundamentals, and visual data. Through cross-modal attention and multi-task optimization, it learns a coherent representation for micro-, meso-, and macro-level predictions. Evaluated on stock forecasting, credit-risk assessment, and systemic-risk detection, Uni-FinLLM significantly outperforms baselines. It raises stock directional accuracy to 67.4% (from 61.7%), credit-risk accuracy to 84.1% (from 79.6%), and macro early-warning accuracy to 82.3%. Results validate that a unified multimodal LLM can jointly model asset behavior and systemic vulnerabilities, offering a scalable decision-support engine for finance. ...
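
As a rough illustration of the shared-backbone-plus-task-heads pattern the abstract describes, here is a minimal PyTorch sketch; the dimensions, head names, and loss weighting are placeholder assumptions, not Uni-FinLLM's architecture or its multimodal fusion pipeline.

```python
# Minimal sketch of the "shared backbone + modular task heads" pattern described
# above, written in PyTorch. Dimensions, head names, and the loss weighting are
# illustrative assumptions, not Uni-FinLLM's actual architecture.
import torch
import torch.nn as nn

class SharedBackboneMultiTask(nn.Module):
    def __init__(self, d_model=256, n_layers=4, n_heads=8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        # One lightweight head per task, all reading the shared representation.
        self.heads = nn.ModuleDict({
            "stock_direction": nn.Linear(d_model, 2),   # up / down
            "credit_risk":     nn.Linear(d_model, 2),   # default / no default
            "systemic_alert":  nn.Linear(d_model, 2),   # warning / normal
        })

    def forward(self, fused_tokens):
        # fused_tokens: (batch, seq_len, d_model), assumed to already combine
        # text, time-series, fundamentals, and image embeddings upstream.
        h = self.backbone(fused_tokens).mean(dim=1)     # simple pooled summary
        return {name: head(h) for name, head in self.heads.items()}

model = SharedBackboneMultiTask()
logits = model(torch.randn(8, 32, 256))
# Multi-task objective: a (hypothetical) unweighted sum of per-head losses.
targets = {k: torch.randint(0, 2, (8,)) for k in logits}
loss = sum(nn.functional.cross_entropy(logits[k], targets[k]) for k in logits)
loss.backward()
```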

January 6, 2026 · 2 min · Research Team

Dynamic Risk in the U.S. Banking System: An Analysis of Sentiment, Policy Shocks, and Spillover Effects

Dynamic Risk in the U.S. Banking System: An Analysis of Sentiment, Policy Shocks, and Spillover Effects ArXiv ID: 2601.01783 “View on arXiv” Authors: Haibo Wang, Jun Huang, Lutfu S Sua, Jaime Ortiz, Jinshyang Roan, Bahram Alidaee Abstract The 2023 U.S. banking crisis propagated not through direct financial linkages but through a high-frequency, information-based contagion channel. This paper moves beyond exploratory analysis to test the “too-similar-to-fail” hypothesis, arguing that risk spillovers were driven by perceived similarities in bank business models under acute interest rate pressure. Employing a Time-Varying Parameter Vector Autoregression (TVP-VAR) model with 30-day rolling windows, a method uniquely suited for capturing the rapid network shifts inherent in a panic, we analyze daily stock returns for the four failed institutions and a systematically selected peer group of surviving banks vulnerable to the same risks from March 18, 2022, to March 15, 2023. Our results provide strong evidence for this contagion channel: total system connectedness surged dramatically during the crisis peak, and we identify SIVB, FRC, and WAL as primary net transmitters of risk while their perceived peers became significant net receivers, a key dynamic indicator of systemic vulnerability that cannot be captured by asset-by-asset analysis. We further demonstrate that these spillovers were significantly amplified by market sentiment (as measured by the VIX) and economic policy uncertainty (EPU). By providing a clear conceptual framework and robust empirical validation, our findings confirm the persistence of systemic risks within the banking network and highlight the importance of real-time monitoring in strengthening financial stability. ...
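
For readers who want to see the connectedness machinery, here is a simplified sketch that uses a rolling OLS VAR(1) with a generalized variance decomposition, in the Diebold-Yilmaz spirit, as a stand-in for the paper's TVP-VAR; the window, horizon, and return data below are simulated assumptions.

```python
# Sketch: a rolling-window total-connectedness index in the Diebold-Yilmaz
# spirit, using a plain OLS VAR(1) and a generalized variance decomposition as
# a simplified stand-in for the paper's TVP-VAR. Window length, horizon, and
# the simulated returns are illustrative, not the paper's data or model.
import numpy as np

def gfevd_connectedness(returns, horizon=10):
    """Total connectedness (%) from a VAR(1) generalized FEVD."""
    y = returns - returns.mean(axis=0)
    X, Y = y[:-1], y[1:]
    A = np.linalg.lstsq(X, Y, rcond=None)[0].T          # VAR(1) coefficients
    Sigma = np.cov(Y - X @ A.T, rowvar=False)
    n = y.shape[1]
    num, den, Ah = np.zeros((n, n)), np.zeros(n), np.eye(n)
    for _ in range(horizon):
        num += (Ah @ Sigma) ** 2                        # (e_i' A_h Sigma e_j)^2
        den += np.diag(Ah @ Sigma @ Ah.T)               # forecast-error variance
        Ah = A @ Ah
    theta = num / np.outer(den, np.diag(Sigma))
    theta /= theta.sum(axis=1, keepdims=True)           # row-normalize shares
    return 100.0 * (theta.sum() - np.trace(theta)) / n  # off-diagonal share

rng = np.random.default_rng(0)
fake_returns = rng.standard_normal((300, 8)) @ rng.standard_normal((8, 8)) * 0.01
window = 30
index = [gfevd_connectedness(fake_returns[s:s + window])
         for s in range(len(fake_returns) - window)]
print(f"rolling connectedness: first {index[0]:.1f}%, last {index[-1]:.1f}%")
```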

January 5, 2026 · 2 min · Research Team

On lead-lag estimation of non-synchronously observed point processes

On lead-lag estimation of non-synchronously observed point processes ArXiv ID: 2601.01871 “View on arXiv” Authors: Takaaki Shiotani, Takaki Hayashi, Yuta Koike Abstract This paper introduces a new theoretical framework for analyzing lead-lag relationships between point processes, with a special focus on applications to high-frequency financial data. In particular, we are interested in lead-lag relationships between two sequences of order arrival timestamps. The seminal work of Dobrev and Schaumburg proposed model-free measures of cross-market trading activity based on cross-counts of timestamps. While their method is known to yield reliable results, it faces limitations because its original formulation inherently relies on discrete-time observations, an issue we address in this study. Specifically, we formulate the problem of estimating lead-lag relationships in two point processes as that of estimating the shape of the cross-pair correlation function (CPCF) of a bivariate stationary point process, a quantity well-studied in the neuroscience and spatial statistics literature. Within this framework, the prevailing lead-lag time is defined as the location of the CPCF’s sharpest peak. Under this interpretation, the peak location in Dobrev and Schaumburg’s cross-market activity measure can be viewed as an estimator of the lead-lag time in the aforementioned sense. We further propose an alternative lead-lag time estimator based on kernel density estimation and show that it possesses desirable theoretical properties and delivers superior numerical performance. Empirical evidence from high-frequency financial data demonstrates the effectiveness of our proposed method. ...
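
One simple way to read the peak-location idea: kernel-smooth the histogram of cross-event time differences and take the location of its peak as the lead-lag time. The sketch below does exactly that on simulated order-arrival times with a 50 ms true lag; it is an illustration of the idea, not the authors' exact estimator.

```python
# A rough sketch of the peak-location idea: kernel-smooth the histogram of
# cross-event time differences and read off the lead-lag as its peak. The
# simulated event streams (true lag 50 ms), bandwidth, and grid are assumptions,
# not the authors' exact CPCF estimator.
import numpy as np

rng = np.random.default_rng(1)
t_a = np.sort(rng.uniform(0, 300, 1000))                 # order times, market A
t_b = np.sort(np.concatenate([
    t_a + 0.05 + rng.normal(0, 0.01, t_a.size),          # market B reacts ~50 ms later
    rng.uniform(0, 300, 800),                            # plus unrelated activity
]))

max_lag, bw = 0.5, 0.005
lags = (t_b[None, :] - t_a[:, None]).ravel()
lags = lags[np.abs(lags) <= max_lag]                     # keep differences near zero

grid = np.linspace(-max_lag, max_lag, 1001)
# Gaussian-kernel smoothing of the cross differences (a smoothed cross-correlogram).
density = np.exp(-0.5 * ((grid[:, None] - lags[None, :]) / bw) ** 2).sum(axis=1)
print(f"estimated lead-lag: {grid[np.argmax(density)] * 1000:.0f} ms")
```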

January 5, 2026 · 2 min · Research Team

Temporal Kolmogorov-Arnold Networks (T-KAN) for High-Frequency Limit Order Book Forecasting: Efficiency, Interpretability, and Alpha Decay

Temporal Kolmogorov-Arnold Networks (T-KAN) for High-Frequency Limit Order Book Forecasting: Efficiency, Interpretability, and Alpha Decay ArXiv ID: 2601.02310 “View on arXiv” Authors: Ahmad Makinde Abstract High-frequency trading (HFT) environments are characterised by large volumes of limit order book (LOB) data, which is notoriously noisy and non-linear. Alpha decay represents a significant challenge, with traditional models such as DeepLOB losing predictive power as the time horizon (k) increases. In this paper, using data from the FI-2010 dataset, we introduce Temporal Kolmogorov-Arnold Networks (T-KAN) to replace the fixed, linear weights of standard LSTMs with learnable B-spline activation functions. This allows the model to learn the ‘shape’ of market signals as opposed to just their magnitude. This resulted in a 19.1% relative improvement in the F1-score at the k = 100 horizon. The efficacy of T-KAN networks cannot be overstated: they produce a 132.48% return, compared with DeepLOB's -82.76% drawdown, under 1.0 bps transaction costs. In addition, the T-KAN model proves highly interpretable, with the ‘dead-zones’ clearly visible in the splines. The T-KAN architecture is also uniquely optimized for low-latency FPGA implementation via High-Level Synthesis (HLS). The code for the experiments in this project can be found at https://github.com/AhmadMak/Temporal-Kolmogorov-Arnold-Networks-T-KAN-for-High-Frequency-Limit-Order-Book-Forecasting. ...
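
To give a feel for the core KAN ingredient (a learnable 1-D activation on each edge instead of a fixed scalar weight), here is a small PyTorch sketch. For brevity it uses a Gaussian radial-basis expansion in place of B-splines, and the LSTM pairing, dimensions, and grid are placeholder choices, not the T-KAN implementation linked above.

```python
# Sketch of the core KAN idea: replace a fixed scalar weight with a learnable
# 1-D activation per edge. A Gaussian radial-basis expansion stands in for the
# B-spline basis here; grid size, dimensions, and the LSTM wrapper are
# illustrative assumptions, not the repository's T-KAN code.
import torch
import torch.nn as nn

class LearnableActivationLayer(nn.Module):
    """y_j = sum_i f_ij(x_i), each f_ij a learnable curve over a fixed grid."""
    def __init__(self, in_dim, out_dim, grid_size=8, x_range=3.0):
        super().__init__()
        self.register_buffer("centers", torch.linspace(-x_range, x_range, grid_size))
        self.width = 2 * x_range / (grid_size - 1)
        self.coef = nn.Parameter(0.01 * torch.randn(out_dim, in_dim, grid_size))
        self.skip = nn.Parameter(0.1 * torch.randn(out_dim, in_dim))  # linear part

    def forward(self, x):                       # x: (batch, in_dim)
        basis = torch.exp(-((x[..., None] - self.centers) / self.width) ** 2)
        curve = torch.einsum("big,oig->bo", basis, self.coef)
        return curve + x @ self.skip.T

# Tiny "temporal" wrapper: an LSTM encoder followed by the learnable-activation
# readout, loosely mirroring the LSTM-plus-spline pairing described above.
encoder = nn.LSTM(input_size=40, hidden_size=64, batch_first=True)
head = LearnableActivationLayer(64, 3)          # 3 classes: down / flat / up
seq = torch.randn(16, 100, 40)                  # (batch, time, LOB features)
_, (h_n, _) = encoder(seq)
logits = head(h_n[-1])
print(logits.shape)                             # torch.Size([16, 3])
```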

January 5, 2026 · 2 min · Research Team

Wasserstein Distributionally Robust Rare-Event Simulation

Wasserstein Distributionally Robust Rare-Event Simulation ArXiv ID: 2601.01642 “View on arXiv” Authors: Dohyun Ahn, Huiyi Chen, Lewen Zheng Abstract Standard rare-event simulation techniques require exact distributional specifications, which limits their effectiveness in the presence of distributional uncertainty. To address this, we develop a novel framework for estimating rare-event probabilities subject to such distributional model risk. Specifically, we focus on computing worst-case rare-event probabilities, defined as a distributionally robust bound against a Wasserstein ambiguity set centered at a specific nominal distribution. By exploiting a dual characterization of this bound, we propose Distributionally Robust Importance Sampling (DRIS), a computationally tractable methodology designed to substantially reduce the variance associated with estimating the dual components. The proposed method is simple to implement and requires low sampling costs. Most importantly, it achieves vanishing relative error, the strongest efficiency guarantee that is notoriously difficult to establish in rare-event simulation. Our numerical studies confirm the superior performance of DRIS against existing benchmarks. ...
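
For orientation, the snippet below evaluates one standard dual form of the worst-case probability over a type-1 Wasserstein ball by plain Monte Carlo on a lambda grid. It is a naive baseline for intuition, not the paper's variance-reduced DRIS estimator, and the nominal model, event, and radius are made-up choices.

```python
# A naive Monte Carlo sketch of one standard dual form for the worst-case
# probability over a type-1 Wasserstein ball of radius delta around P0:
#   sup_{W1(P,P0)<=delta} P(X in A) = inf_{lam>=0} lam*delta + E_P0[(1 - lam*d(X,A))_+].
# Plain sampling plus a lambda grid stands in for the paper's variance-reduced
# DRIS estimator; the nominal N(0,1), event {X >= 4}, and delta are made up.
import numpy as np

rng = np.random.default_rng(7)
x = rng.standard_normal(1_000_000)              # samples from the nominal P0
threshold, delta = 4.0, 0.05
dist_to_A = np.maximum(threshold - x, 0.0)      # distance to the event set {x >= 4}

lam_grid = np.geomspace(0.05, 50.0, 300)
dual_vals = [lam * delta + np.mean(np.maximum(1.0 - lam * dist_to_A, 0.0))
             for lam in lam_grid]
print(f"nominal  P(X >= 4) ~ {np.mean(x >= threshold):.2e}")
print(f"worst-case bound   ~ {min(dual_vals):.2e}")
```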

January 4, 2026 · 2 min · Research Team

Order-Constrained Spectral Causality in Multivariate Time Series

Order-Constrained Spectral Causality in Multivariate Time Series ArXiv ID: 2601.01216 “View on arXiv” Authors: Alejandro Rodriguez Dominguez Abstract We introduce an operator-theoretic framework for causal analysis in multivariate time series based on order-constrained spectral non-invariance. Directional influence is defined as sensitivity of second-order dependence operators to admissible, order-preserving temporal deformations of a designated source component, yielding an intrinsically multivariate causal notion summarized through orthogonally invariant spectral functionals. Under linear Gaussian assumptions, the criterion coincides with linear Granger causality, while beyond this regime it captures collective and nonlinear directional dependence not reflected in pairwise predictability. We establish existence, uniform consistency, and valid inference for the resulting non-smooth supremum–infimum statistics using shift-based randomization that exploits order-induced group invariance, yielding finite-sample exactness under exact invariance and asymptotic validity under weak dependence without parametric assumptions. Simulations demonstrate correct size and strong power against distributed and bulk-dominated alternatives, including nonlinear dependence missed by linear Granger tests with appropriate feature embeddings. An empirical application to a high-dimensional panel of daily financial return series spanning major asset classes illustrates system-level causal monitoring in practice. Directional organization is episodic and stress-dependent, causal propagation strengthens while remaining multi-channel, dominant causal hubs reallocate rapidly, and statistically robust transmission channels are sparse and horizon-heterogeneous even when aggregate lead–lag asymmetry is weak. The framework provides a scalable and interpretable complement to correlation-, factor-, and pairwise Granger-style analyses for complex systems. ...
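
The causal statistic in the paper is operator-theoretic, so the sketch below illustrates only the shift-based randomization step, with a simple maximum lagged cross-correlation standing in as a placeholder directional statistic; everything here is an assumption made for illustration, not the paper's test.

```python
# Sketch of the shift-based randomization step only. A max lagged cross-
# correlation is a placeholder for the paper's operator-theoretic statistic;
# circularly shifting the candidate source series generates the null draws.
import numpy as np

def directional_stat(source, target, max_lag=5):
    """Placeholder statistic: largest |corr(source_{t-k}, target_t)| over k >= 1."""
    return max(abs(np.corrcoef(source[:-k], target[k:])[0, 1])
               for k in range(1, max_lag + 1))

def shift_randomization_pvalue(source, target, n_shifts=999, seed=0):
    rng = np.random.default_rng(seed)
    observed = directional_stat(source, target)
    null = [directional_stat(np.roll(source, s), target)
            for s in rng.integers(20, len(source) - 20, n_shifts)]
    return (1 + sum(v >= observed for v in null)) / (1 + n_shifts)

rng = np.random.default_rng(3)
x = rng.standard_normal(600)
y = 0.4 * np.roll(x, 2) + rng.standard_normal(600)   # x leads y by 2 steps
print("p-value (x -> y):", shift_randomization_pvalue(x, y))
print("p-value (y -> x):", shift_randomization_pvalue(y, x))
```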

January 3, 2026 · 2 min · Research Team

Capital allocation and tail central moments for the multivariate normal mean-variance mixture distribution

Capital allocation and tail central moments for the multivariate normal mean-variance mixture distribution ArXiv ID: 2601.00568 “View on arXiv” Authors: Enrique Calderín-Ojeda, Yuyu Chen, Soon Wei Tan Abstract Capital allocation is a procedure used to assess the risk contributions of individual risk components to the total risk of a portfolio. While the conditional tail expectation (CTE)-based capital allocation is arguably the most popular capital allocation method, its inability to reflect important tail behaviour of losses necessitates a more accurate approach. In this paper, we introduce a new capital allocation method based on the tail central moments (TCM), generalising the tail covariance allocation informed by the tail variance. We develop analytical expressions of the TCM as well as the TCM-based capital allocation for the class of normal mean-variance mixture distributions, which is widely used to model asymmetric and heavy-tailed data in finance and insurance. As demonstrated by a numerical analysis, the TCM-based capital allocation captures several significant patterns in the tail region of equity losses that remain undetected by the CTE, enhancing the understanding of the tail risk contributions of risk components. ...
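
As a back-of-the-envelope companion, the sketch below simulates losses from a normal mean-variance mixture and computes the CTE allocation together with empirical tail central moments; the mixing law and all parameters are illustrative, and the paper derives closed-form allocations rather than simulating them.

```python
# Monte Carlo sketch: CTE-based allocation and empirical tail central moments
# for losses from a normal mean-variance mixture X = mu + gamma*W + sqrt(W)*L*Z.
# The mixing law, parameters, and the way tail moments are reported here are
# illustrative assumptions; the paper works with analytical expressions instead.
import numpy as np

rng = np.random.default_rng(11)
n, d, p = 1_000_000, 3, 0.99
mu = np.array([1.0, 0.5, 0.8])
gamma = np.array([0.3, 0.6, 0.1])            # skewness direction
L = np.linalg.cholesky(np.array([[1.0, 0.4, 0.2],
                                 [0.4, 1.0, 0.3],
                                 [0.2, 0.3, 1.0]]))
W = rng.exponential(1.0, size=(n, 1))        # mixing variable -> heavy tails
X = mu + gamma * W + np.sqrt(W) * (rng.standard_normal((n, d)) @ L.T)

S = X.sum(axis=1)
var_p = np.quantile(S, p)
tail = X[S > var_p]                          # scenarios in the aggregate tail

cte_alloc = tail.mean(axis=0)                # E[X_i | S > VaR_p(S)]
tail_var = tail.var(axis=0)                  # 2nd tail central moment per line
tail_skew = ((tail - tail.mean(axis=0)) ** 3).mean(axis=0)   # 3rd tail central moment
print("CTE allocation   :", np.round(cte_alloc, 3))
print("tail variances   :", np.round(tail_var, 3))
print("tail 3rd moments :", np.round(tail_skew, 3))
```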

January 2, 2026 · 2 min · Research Team

Second Thoughts: How 1-second subslots transform CEX-DEX Arbitrage on Ethereum

Second Thoughts: How 1-second subslots transform CEX-DEX Arbitrage on Ethereum ArXiv ID: 2601.00738 “View on arXiv” Authors: Aleksei Adadurov, Sergey Barseghyan, Anton Chtepine, Antero Eloranta, Andrei Sebyakin, Arsenii Valitov Abstract This paper examines the impact of reducing Ethereum slot time on decentralized exchange activity, with a focus on CEX-DEX arbitrage behavior. We develop a trading model where the agent’s DEX transaction is not guaranteed to land, and the agent explicitly accounts for this execution risk when deciding whether to pursue arbitrage opportunities. We compare agent behavior under Ethereum’s default 12-second slot time environment with a faster regime that offers 1-second subslot execution. The simulations, calibrated to Binance and Uniswap v3 data from July to September 2025, show that faster slot times increase arbitrage transaction count by 535% and trading volume by 203% on average. The increase in CEX-DEX arbitrage activity under 1-second subslots is driven by the reduction in variance of both successful and failed trade outcomes, increasing the risk-adjusted returns and making CEX-DEX arbitrage more appealing. ...
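
A toy model of the execution-risk trade-off described above: the arbitrageur captures a price gap only if the DEX leg lands, and the CEX price drifts during the delay. All numbers are made up rather than taken from the paper's Binance/Uniswap v3 calibration, but they show how a shorter delay shrinks outcome variance and lifts risk-adjusted returns.

```python
# Toy sketch of the execution-risk trade-off: an arbitrageur sees a CEX-DEX gap,
# but the DEX leg lands only with some probability, and only after a delay during
# which the CEX price drifts. Gap, volatility, landing probability, and fees are
# made-up illustrations, not the paper's calibration.
import numpy as np

rng = np.random.default_rng(5)

def simulate_arb(delay_s, gap_bps=8.0, vol_bps_per_sqrt_s=4.0,
                 p_land=0.9, fee_bps=3.0, n=200_000):
    drift = vol_bps_per_sqrt_s * np.sqrt(delay_s) * rng.standard_normal(n)
    landed = rng.random(n) < p_land
    pnl = np.where(landed, gap_bps - fee_bps + drift, -fee_bps)  # a failed leg still pays fees
    return pnl.mean(), pnl.std()

for delay in (12.0, 1.0):
    mean, std = simulate_arb(delay)
    print(f"delay {delay:>4}s: mean {mean:5.2f} bps, std {std:5.2f} bps, "
          f"mean/std {mean / std:.2f}")
```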

January 2, 2026 · 2 min · Research Team

Uncertainty-Adjusted Sorting for Asset Pricing with Machine Learning

Uncertainty-Adjusted Sorting for Asset Pricing with Machine Learning ArXiv ID: 2601.00593 “View on arXiv” Authors: Yan Liu, Ye Luo, Zigan Wang, Xiaowei Zhang Abstract Machine learning is central to empirical asset pricing, but portfolio construction still relies on point predictions and largely ignores asset-specific estimation uncertainty. We propose a simple change: sort assets using uncertainty-adjusted prediction bounds instead of point predictions alone. Across a broad set of ML models and a U.S. equity panel, this approach improves portfolio performance relative to point-prediction sorting. These gains persist even when bounds are built from partial or misspecified uncertainty information. They arise mainly from reduced volatility and are strongest for flexible machine learning models. Identification and robustness exercises show that these improvements are driven by asset-level rather than time or aggregate predictive uncertainty. ...
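
The sorting rule itself is easy to state; the toy sketch below contrasts ranking by point predictions with ranking by a lower bound mu - k*sigma, using simulated placeholders for predictions and uncertainty. The numbers illustrate the mechanics of the sort only, not the paper's empirical results.

```python
# Toy sketch of the sorting rule: rank assets by an uncertainty-adjusted lower
# bound mu_i - k*sigma_i rather than the point prediction mu_i. Predictions,
# per-asset uncertainty, and k are simulated placeholders, not the paper's
# models or U.S. equity data; the point is the mechanics, not the numbers.
import numpy as np

rng = np.random.default_rng(42)
n_assets, k = 5000, 1.0
true_ret = rng.normal(0.0, 0.05, n_assets)                  # next-period returns
sigma = rng.uniform(0.01, 0.10, n_assets)                   # asset-level uncertainty
pred = true_ret + sigma * rng.standard_normal(n_assets)     # noisier where sigma is big

def top_decile_mean(score):
    """Realized mean return of the decile with the highest scores (the long leg)."""
    order = np.argsort(score)
    return true_ret[order[-n_assets // 10:]].mean()

print(f"point-prediction sort: {top_decile_mean(pred):.4f}")
print(f"lower-bound sort     : {top_decile_mean(pred - k * sigma):.4f}")
```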

January 2, 2026 · 2 min · Research Team