
A New Traders' Game? – Empirical Analysis of Response Functions in a Historical Perspective

A New Traders' Game? – Empirical Analysis of Response Functions in a Historical Perspective ArXiv ID: 2503.01629 View on arXiv Authors: Unknown Abstract Traders on financial markets generate non-Markovian effects in various ways, particularly through their competition with one another, which can be interpreted as a game between different (types of) traders. To quantify the market mechanisms, we empirically analyze self-response functions for pairs of different stocks and the corresponding trade sign correlators. While the non-Markovian dynamics in the self-responses are liquidity-driven, they are expectation-driven in the cross-responses, which is related to the emergence of correlations. We empirically study the non-stationarity of these responses over time. In our previous data analysis, we only investigated the crisis year 2008. We now considerably extend this by also analyzing the years 2007, 2014 and 2021. To improve statistics, we also work out averaged response functions for the different years. We find significant variations over time, revealing changes in the traders' game. ...
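
The self-response and sign correlator have standard empirical definitions in the market-microstructure literature: R(τ) = ⟨ε(t)·(ln p(t+τ) − ln p(t))⟩ and Θ(τ) = ⟨ε(t)·ε(t+τ)⟩, where ε(t) is the trade sign. Below is a minimal NumPy sketch of these estimators; the function names are ours, and the cross-response variant simply pairs the sign series of one stock with the price series of another.

```python
import numpy as np

def response_function(log_price, trade_sign, max_lag=50):
    """Empirical self-response R(tau) = <sign(t) * (log p(t+tau) - log p(t))>.

    log_price and trade_sign are aligned 1-D arrays on a trade-by-trade
    (or fixed-interval) time grid; trade_sign is +1 for buyer-initiated
    and -1 for seller-initiated trades.
    """
    n = len(log_price)
    lags = np.arange(1, max_lag + 1)
    resp = np.empty(len(lags))
    for k, tau in enumerate(lags):
        price_change = log_price[tau:] - log_price[:n - tau]
        resp[k] = np.mean(trade_sign[:n - tau] * price_change)
    return lags, resp

def sign_correlator(trade_sign, max_lag=50):
    """Trade sign autocorrelator Theta(tau) = <sign(t) * sign(t+tau)>."""
    n = len(trade_sign)
    return np.array([np.mean(trade_sign[:n - tau] * trade_sign[tau:])
                     for tau in range(1, max_lag + 1)])
```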

March 3, 2025 · 2 min · Research Team

Perseus: Tracing the Masterminds Behind Cryptocurrency Pump-and-Dump Schemes

Perseus: Tracing the Masterminds Behind Cryptocurrency Pump-and-Dump Schemes ArXiv ID: 2503.01686 View on arXiv Authors: Unknown Abstract Masterminds are entities organizing, coordinating, and orchestrating cryptocurrency pump-and-dump schemes, a form of trade-based manipulation that undermines market integrity and causes financial losses for unwitting investors. Previous research detects pump-and-dump activities in the market, predicts the target cryptocurrency, and examines investors and online social network (OSN) entities. However, these solutions do not address the root cause of the problem: there is a critical gap in identifying and tracing the masterminds involved in these schemes. In this research, we develop a detection system, Perseus, which collects real-time data from OSNs and cryptocurrency markets. Perseus then constructs temporal attributed graphs that preserve the direction of information diffusion and the structure of the community, leveraging graph neural networks (GNNs) to identify the masterminds behind pump-and-dump activities. Our design of Perseus leads to higher F1 scores and precision than the state-of-the-art fraud detection method, while achieving fast training and inference speeds. Deployed in the real world from February 16 to October 9, 2024, Perseus successfully detected 438 masterminds who are efficient in the pump-and-dump information diffusion networks. Perseus provides regulators with an explanation of the risks posed by masterminds and with oversight capabilities to mitigate cryptocurrency pump-and-dump schemes. ...
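
The abstract does not spell out Perseus's architecture; purely as an illustration of how a GNN layer aggregates node attributes over a directed diffusion graph, here is a single mean-aggregation convolution in NumPy. All names and the normalization choice are our assumptions, not the paper's design.

```python
import numpy as np

def gcn_layer(adj, features, weights):
    """One graph-convolution step on a directed attributed graph:
    H' = ReLU(D^-1 A H W), where rows of `features` are node attributes
    and `adj` keeps the direction of information diffusion
    (adj[i, j] = 1 means node i forwarded information to node j).
    """
    adj_hat = adj + np.eye(adj.shape[0])                 # add self-loops
    deg_inv = 1.0 / adj_hat.sum(axis=1, keepdims=True)   # row-normalize
    return np.maximum(deg_inv * (adj_hat @ features @ weights), 0.0)
```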

March 3, 2025 · 2 min · Research Team

Systemic Risk Management via Maximum Independent Set in Extremal Dependence Networks

Systemic Risk Management via Maximum Independent Set in Extremal Dependence Networks ArXiv ID: 2503.15534 View on arXiv Authors: Unknown Abstract The failure of key financial institutions may accelerate risk contagion due to their interconnections within the system. In this paper, we propose a robust portfolio strategy to mitigate systemic risks during extreme events. We use the stock returns of key financial institutions as an indicator of their performance, apply extreme value theory to assess the extremal dependence among stocks of financial institutions, and construct a network model based on a threshold approach that captures extremal dependence. Our analysis reveals different dependence structures in the Chinese and U.S. financial systems. By applying the maximum independent set (MIS) from graph theory, we identify a subset of institutions with minimal extremal dependence, facilitating the construction of diversified portfolios resilient to risk contagion. We also compare the performance of our proposed portfolios with that of the market portfolios in the two economies. ...
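
The two building blocks named in the abstract are easy to sketch: an empirical upper-tail dependence coefficient to define edges, and an independent-set routine on the resulting threshold network. Exact maximum independent set is NP-hard, so the sketch below substitutes a min-degree greedy heuristic; the 0.4 edge threshold in the comment is our assumption, not the paper's.

```python
import numpy as np

def tail_dependence(x, y, q=0.95):
    """Empirical upper-tail dependence: P(Y > its q-quantile | X > its q-quantile)."""
    xq, yq = np.quantile(x, q), np.quantile(y, q)
    hit_x = x > xq
    return np.mean(y[hit_x] > yq) if hit_x.any() else 0.0

def greedy_mis(adj):
    """Min-degree greedy heuristic for an independent set.

    `adj` maps each node to the set of its neighbours in the threshold
    network, e.g. an edge wherever tail_dependence exceeds 0.4.
    """
    adj = {v: set(nbrs) for v, nbrs in adj.items()}  # defensive copy
    independent = []
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))      # least-connected node
        independent.append(v)
        removed = adj[v] | {v}                       # drop v and its neighbours
        for u in removed:
            adj.pop(u, None)
        for nbrs in adj.values():
            nbrs -= removed
    return independent
```

A diversified portfolio would then be built from the institutions returned by greedy_mis, since by construction no two of them are joined by a strong extremal-dependence edge.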

March 3, 2025 · 2 min · Research Team

Liquidity-adjusted Return and Volatility, and Autoregressive Models

Liquidity-adjusted Return and Volatility, and Autoregressive Models ArXiv ID: 2503.08693 View on arXiv Authors: Unknown Abstract We construct liquidity-adjusted return and volatility using purposely designed liquidity metrics (liquidity jump and liquidity diffusion) that incorporate additional liquidity information. Based on these measures, we introduce a liquidity-adjusted ARMA-GARCH framework to address the limitations of traditional ARMA-GARCH models, which are not effective in modeling illiquid assets with high liquidity variability, such as cryptocurrencies. We demonstrate that the liquidity-adjusted model improves model fit for cryptocurrencies, with greater volatility sensitivity to past shocks and reduced persistence of erratic past volatility. Our model is validated by the empirical evidence that the liquidity-adjusted mean-variance (LAMV) portfolios outperform the traditional mean-variance (TMV) portfolios. ...
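
As a rough sketch of the modeling step, one can fit a GARCH model to returns rescaled by a liquidity series, e.g. with the arch package. The scaling rule below is a placeholder: the paper's actual construction uses its liquidity-jump and liquidity-diffusion metrics, which the abstract does not fully specify, and an ARMA rather than AR mean.

```python
import numpy as np
from arch import arch_model  # pip install arch

def liquidity_adjusted_fit(returns, liquidity, p=1, q=1):
    """Fit a GARCH(p, q) model on liquidity-adjusted returns.

    `liquidity` is a positive series on the same grid as `returns`
    (e.g. a rolling liquidity metric). The sqrt scaling is purely
    illustrative, standing in for the paper's construction.
    """
    adj_returns = returns / np.sqrt(liquidity / np.mean(liquidity))
    # AR(1) mean as a stand-in for the full ARMA mean equation;
    # returns are scaled by 100 for numerical stability, as arch recommends.
    model = arch_model(100 * adj_returns, mean='AR', lags=1,
                       vol='GARCH', p=p, q=q)
    return model.fit(disp='off')
```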

March 2, 2025 · 2 min · Research Team

Ornstein-Uhlenbeck Process for Horse Race Betting: A Micro-Macro Analysis of Herding and Informed Bettors

Ornstein-Uhlenbeck Process for Horse Race Betting: A Micro-Macro Analysis of Herding and Informed Bettors ArXiv ID: 2503.16470 View on arXiv Authors: Unknown Abstract We model the time evolution of single win odds in Japanese horse racing as a stochastic process, deriving an Ornstein–Uhlenbeck process by analyzing the probability dynamics of vote shares and the empirical time series of odds movements. Our framework incorporates two types of bettors: herders, who adjust their bets based on current odds, and fundamentalists, who wager based on a horse's true winning probability. Using data from 3450 Japan Racing Association races in 2008, we identify a microscopic probability rule governing individual bets and a mean-reverting macroscopic pattern in odds convergence. This structure parallels financial markets, where traders' decisions are influenced by market fluctuations, and the interplay between herding and fundamentalist strategies shapes price dynamics. These results highlight the broader applicability of our approach to non-equilibrium financial and betting markets, where mean-reverting dynamics emerge from simple behavioral interactions. ...
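
The mean-reverting backbone of the model is the Ornstein–Uhlenbeck SDE dX = θ(μ − X)dt + σ dW, which is straightforward to simulate by Euler–Maruyama. In the paper's setting X would be (a transform of) the win odds, μ the level implied by the horse's true winning probability, and θ the reversion speed generated by the herder/fundamentalist mix; the sketch below makes no claim about the paper's calibrated parameter values.

```python
import numpy as np

def simulate_ou(x0, theta, mu, sigma, dt=1.0, n_steps=500, rng=None):
    """Euler-Maruyama simulation of dX = theta*(mu - X) dt + sigma dW."""
    rng = rng or np.random.default_rng(0)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for t in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))          # Brownian increment
        x[t + 1] = x[t] + theta * (mu - x[t]) * dt + sigma * dw
    return x
```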

March 1, 2025 · 2 min · Research Team

Shifting Power: Leveraging LLMs to Simulate Human Aversion in ABMs of Bilateral Financial Exchanges, A bond market study

Shifting Power: Leveraging LLMs to Simulate Human Aversion in ABMs of Bilateral Financial Exchanges, A bond market study ArXiv ID: 2503.00320 View on arXiv Authors: Unknown Abstract Bilateral markets, such as those for government bonds, involve decentralized and opaque transactions between market makers (MMs) and clients, posing significant challenges for traditional modeling approaches. To address these complexities, we introduce TRIBE, an agent-based model augmented with a large language model (LLM) to simulate human-like decision-making in trading environments. TRIBE leverages publicly available data and stylized facts to capture realistic trading dynamics, integrating human biases like risk aversion and ambiguity sensitivity into the decision-making processes of agents. Our research yields three key contributions: first, we demonstrate that integrating LLMs into agent-based models to enhance client agency is feasible and enriches the simulation of agent behaviors in complex markets; second, we find that even slight trade aversion encoded within the LLM leads to a complete cessation of trading activity, highlighting the sensitivity of market dynamics to agents' risk profiles; third, we show that incorporating human-like variability shifts power dynamics towards clients and can disproportionately affect the entire system, often resulting in systemic agent collapse across simulations. These findings underscore the emergent properties that arise when introducing stochastic, human-like decision processes, revealing new system behaviors that enhance the realism and complexity of artificial societies. ...
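
The reported sensitivity to trade aversion can be illustrated with a toy loop in which the LLM's accept/reject call is replaced by a scalar threshold rule. TRIBE queries an actual LLM; everything below, including the parameter values, is a stylized stand-in of our own.

```python
import random

def client_accepts(quote_spread, aversion, noise=0.1):
    """Stylized stand-in for the LLM's accept/reject decision: the client
    trades only if its noisy perceived value exceeds an aversion threshold.
    """
    perceived = -quote_spread + random.gauss(0.0, noise)
    return perceived > aversion

def run_market(n_rounds=1000, aversion=0.0, spread=0.05):
    """Count accepted trades over n_rounds bilateral quotes."""
    random.seed(1)
    return sum(client_accepts(spread, aversion) for _ in range(n_rounds))
```

With these toy numbers, raising aversion from 0 to 0.2 cuts acceptance from roughly a third of rounds to almost none, a crude analogue of the cessation effect the paper observes.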

March 1, 2025 · 2 min · Research Team

Understanding the Commodity Futures Term Structure Through Signatures

Understanding the Commodity Futures Term Structure Through Signatures ArXiv ID: 2503.00603 View on arXiv Authors: Unknown Abstract Signature methods have been widely and effectively used as a tool for feature extraction in statistical learning methods, notably in mathematical finance. They lack, however, interpretability: in the general case, it is unclear why signatures actually work. The present article aims to address this issue directly, by introducing and developing the concept of signature perturbations. In particular, we construct a regular perturbation of the signature of the term structure of log prices for various commodities, in terms of the convenience yield. Our perturbation expansion and rigorous convergence estimates help explain the success of signature-based classification of commodities markets according to their term structure, with the volatility of the convenience yield as the major discriminant. ...
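
For a piecewise-linear path, such as a sampled term structure of log prices, the low-order signature terms are simple iterated sums: level 1 collects the total increments and level 2 the iterated integrals S^(i,j) = ∫(X^i − X^i_0) dX^j. Dedicated libraries such as iisignature or esig compute arbitrary depths; here is a depth-2 sketch in NumPy.

```python
import numpy as np

def signature_depth2(path):
    """Depth-2 signature of a piecewise-linear path.

    `path` is an (n_points, d) array, e.g. log prices across maturities.
    Returns the level-1 terms (total increments) and the (d, d) matrix
    of level-2 iterated integrals.
    """
    inc = np.diff(path, axis=0)                   # segment increments
    level1 = inc.sum(axis=0)
    running = np.cumsum(inc, axis=0) - inc        # X_k - X_0 at segment starts
    # sum of (X_k - X_0)_i * dX_j plus the within-segment triangle term
    level2 = running.T @ inc + 0.5 * inc.T @ inc
    return level1, level2
```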

March 1, 2025 · 2 min · Research Team

Chronologically Consistent Large Language Models

Chronologically Consistent Large Language Models ArXiv ID: 2502.21206 View on arXiv Authors: Unknown Abstract Large language models are increasingly used in social sciences, but their training data can introduce lookahead bias and training leakage. A good chronologically consistent language model requires efficient use of training data to maintain accuracy despite time-restricted data. Here, we overcome this challenge by training a suite of chronologically consistent large language models, ChronoBERT and ChronoGPT, which incorporate only the text data that would have been available at each point in time. Despite this strict temporal constraint, our models achieve strong performance on natural language processing benchmarks, outperforming or matching widely used models (e.g., BERT), and remain competitive with larger open-weight models. Lookahead bias is model- and application-specific because even if a chronologically consistent language model has poorer language comprehension, a regression or prediction model applied on top of the language model can compensate. In an asset pricing application predicting next-day stock returns from financial news, we find that ChronoBERT and ChronoGPT's real-time outputs achieve Sharpe ratios comparable to a much larger Llama model, indicating that lookahead bias is modest. Our results demonstrate a scalable, practical framework to mitigate training leakage, ensuring more credible backtests and predictions across finance and other social science domains. ...
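
The core data discipline is simple to state: every training document must predate the model's "as of" date. A minimal sketch of that filter follows; it is ours, not the authors' pipeline, which also has to manage leakage through tokenizers and evaluation benchmarks.

```python
from datetime import date

def chronological_corpus(documents, cutoff):
    """Keep only text available strictly before `cutoff`, so a model
    trained on the result cannot leak future information into backtests.

    `documents` is an iterable of (timestamp: date, text: str) pairs.
    """
    return [text for stamp, text in documents if stamp < cutoff]

# e.g. the training slice for a model "as of" a 2015 backtest start:
# train_texts = chronological_corpus(corpus, date(2015, 1, 1))
```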

February 28, 2025 · 2 min · Research Team

Enhanced Derivative-Free Optimization Using Adaptive Correlation-Induced Finite Difference Estimators

Enhanced Derivative-Free Optimization Using Adaptive Correlation-Induced Finite Difference Estimators ArXiv ID: 2502.20819 View on arXiv Authors: Unknown Abstract Gradient-based methods are well-suited for derivative-free optimization (DFO), where finite-difference (FD) estimates are commonly used as gradient surrogates. Traditional stochastic approximation methods, such as Kiefer-Wolfowitz (KW) and simultaneous perturbation stochastic approximation (SPSA), typically utilize only two samples per iteration, resulting in imprecise gradient estimates and necessitating diminishing step sizes for convergence. In this paper, we first explore an efficient FD estimate, referred to as correlation-induced FD estimate, which is a batch-based estimate. Then, we propose an adaptive sampling strategy that dynamically determines the batch size at each iteration. By combining these two components, we develop an algorithm designed to enhance DFO in terms of both gradient estimation efficiency and sample efficiency. Furthermore, we establish the consistency of our proposed algorithm and demonstrate that, despite using a batch of samples per iteration, it achieves the same convergence rate as the KW and SPSA methods. Additionally, we propose a novel stochastic line search technique to adaptively tune the step size in practice. Finally, comprehensive numerical experiments confirm the superior empirical performance of the proposed algorithm. ...
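
The variance-reduction idea behind batch-based FD estimates can be sketched with common random numbers: reusing the same noise seed on both sides of each central difference correlates the two noisy evaluations, so much of the noise cancels. This is in the same spirit as, but not identical to, the paper's correlation-induced estimator; the batch size is also fixed below, whereas the proposed algorithm adapts it at each iteration.

```python
import numpy as np

def batched_fd_gradient(f, x, h=1e-2, batch_size=32, rng=None):
    """Batch-averaged central finite-difference gradient of a noisy objective.

    `f(x, seed)` evaluates the objective at x with the given noise seed;
    sharing the seed across the +h and -h evaluations is the common
    random numbers trick.
    """
    rng = rng or np.random.default_rng(0)
    d = len(x)
    grad = np.zeros(d)
    for i in range(d):
        e = np.zeros(d)
        e[i] = h
        diffs = []
        for _ in range(batch_size):
            seed = int(rng.integers(2**31))       # shared noise per replication
            diffs.append((f(x + e, seed) - f(x - e, seed)) / (2 * h))
        grad[i] = np.mean(diffs)
    return grad
```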

February 28, 2025 · 2 min · Research Team

Using quantile time series and historical simulation to forecast financial risk multiple steps ahead

Using quantile time series and historical simulation to forecast financial risk multiple steps ahead ArXiv ID: 2502.20978 View on arXiv Authors: Unknown Abstract A method for quantile-based, semi-parametric historical simulation estimation of multiple step ahead Value-at-Risk (VaR) and Expected Shortfall (ES) models is developed. It uses the quantile loss function, analogous to how the quasi-likelihood is employed by standard historical simulation methods. The returns data are scaled by the estimated quantile series, then resampling is employed to estimate the forecast distribution one and multiple steps ahead, allowing tail risk forecasting. The proposed method is applicable to any data or model where the relationship between VaR and ES does not change over time and can be extended to allow a measurement equation incorporating realized measures, thus including Realized GARCH and Realized CAViaR type models. Its finite sample properties, and its comparison with existing historical simulation methods, are evaluated via a simulation study. A forecasting study assesses the relative accuracy of the 1% and 2.5% VaR and ES one-day-ahead and ten-day-ahead forecasting results for the proposed class of models compared to several competitors. ...
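
The mechanics resemble filtered historical simulation: standardize returns by the fitted (negative) quantile series, bootstrap the standardized returns over the horizon, and read VaR and ES off the simulated h-step distribution. The paper estimates the quantile series with a quantile-loss recursion and would update it along each simulated path; the sketch below takes the series as given and holds the scale fixed over the horizon, which is a simplification.

```python
import numpy as np

def multistep_var_es(returns, quantile_series, horizon=10, alpha=0.01,
                     n_paths=10_000, rng=None):
    """Bootstrap h-step-ahead VaR and ES from quantile-scaled returns.

    `quantile_series` is the fitted (negative) conditional quantile
    aligned with `returns`; both are 1-D arrays.
    """
    rng = rng or np.random.default_rng(0)
    z = returns / np.abs(quantile_series)           # standardized returns
    draws = rng.choice(z, size=(n_paths, horizon), replace=True)
    scale = np.abs(quantile_series[-1])             # last fitted quantile level
    path_returns = (draws * scale).sum(axis=1)      # aggregate h-step return
    var = -np.quantile(path_returns, alpha)         # positive loss convention
    es = -path_returns[path_returns <= -var].mean()
    return var, es
```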

February 28, 2025 · 2 min · Research Team