
Quantum Adaptive Self-Attention for Financial Rebalancing: An Empirical Study on Automated Market Makers in Decentralized Finance

ArXiv ID: 2509.16955 · View on arXiv. Authors: Chi-Sheng Chen, Aidan Hung-Wen Tsai. Abstract: We formulate automated market maker (AMM) rebalancing as a binary detection problem and study a hybrid quantum–classical self-attention block, Quantum Adaptive Self-Attention (QASA). QASA constructs quantum queries/keys/values via variational quantum circuits (VQCs) and applies standard softmax attention over Pauli-Z expectation vectors, yielding a drop-in attention module for financial time-series decision making. Using daily data for BTCUSDC over Jan-2024–Jan-2025 with a 70/15/15 time-series split, we compare QASA against classical ensembles, a transformer, and pure quantum baselines under Return, Sharpe, and Max Drawdown. The QASA-Sequence variant attains the best single-model risk-adjusted performance (13.99% return; Sharpe 1.76), while hybrid models average 11.2% return (vs. 9.8% classical; 4.4% pure quantum), indicating a favorable performance–stability–cost trade-off. ...
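The mechanism described above lends itself to a compact sketch: variational circuits produce Pauli-Z expectation vectors for queries, keys, and values, which then pass through ordinary softmax attention. The following is a minimal illustration, not the authors' implementation, assuming PennyLane for the circuits; the circuit templates, qubit count, and weight shapes are illustrative choices.

```python
import numpy as np
import pennylane as qml

n_qubits, n_layers = 4, 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def vqc(x, weights):
    # Encode one timestep of scaled features, entangle, read out Pauli-Z expectations.
    qml.AngleEmbedding(x, wires=range(n_qubits))
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def qasa_attention(X, w_q, w_k, w_v):
    """X: (T, n_qubits) window of scaled features; separate VQC weights per role."""
    Q = np.array([vqc(x, w_q) for x in X])   # quantum queries
    K = np.array([vqc(x, w_k) for x in X])   # quantum keys
    V = np.array([vqc(x, w_v) for x in X])   # quantum values
    scores = softmax(Q @ K.T / np.sqrt(n_qubits))
    return scores @ V                        # (T, n_qubits) attended representation

# Toy usage: random circuit weights and a 10-step window of 4 features.
shape = (n_layers, n_qubits, 3)
w_q, w_k, w_v = (np.random.uniform(0, 2 * np.pi, shape) for _ in range(3))
out = qasa_attention(np.random.uniform(-np.pi, np.pi, (10, n_qubits)), w_q, w_k, w_v)
```

In this reading, the quantum circuits only replace the linear Q/K/V projections; the attention weights themselves stay classical, which is what makes the block a drop-in module.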

September 21, 2025 · 2 min · Research Team

Is attention truly all we need? An empirical study of asset pricing in pretrained RNN sparse and global attention models

ArXiv ID: 2508.19006 · View on arXiv. Authors: Shanyan Lai. Abstract: This study investigates pretrained RNN attention models with mainstream attention mechanisms, including additive attention, Luong's three attention variants, global self-attention (Self-att), and sliding-window sparse attention (Sparse-att), for empirical asset pricing on the top 420 large-cap US stocks. It is the first paper to apply large-scale state-of-the-art (SOTA) attention mechanisms in the asset pricing context. These models overcome limitations of traditional machine learning (ML) based asset pricing, such as mis-captured temporal dependencies and short memory. Moreover, the enforced causal masks in the attention mechanisms address the future-data leakage issue ignored by more advanced attention-based models such as the classic Transformer. The proposed attention models also account for the temporal sparsity of asset pricing data and mitigate potential overfitting by using simplified model structures, providing insights for future empirical economic research. All models are examined over three periods, covering pre-COVID-19 (mild uptrend), COVID-19 (steep uptrend with a large drawdown), and one year post-COVID-19 (sideways movement with high fluctuations), to test their stability under extreme market conditions. The study finds that in value-weighted portfolio backtesting, Model Self-att and Model Sparse-att show strong ability to generate absolute returns and hedge downside risk, achieving annualized Sortino ratios of 2.0 and 1.80, respectively, during the COVID-19 period. Model Sparse-att is also more stable than Model Self-att in terms of absolute portfolio returns across stocks' market-capitalization sizes. ...
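Since the abstract emphasizes enforced causal masks and sliding-window sparse attention, a small NumPy sketch of that masking pattern may help. This is a generic illustration under assumed shapes, not the paper's code; `window` is a hypothetical hyperparameter name.

```python
import numpy as np

def sliding_window_causal_mask(T, window):
    # Position t may attend only to itself and the previous (window - 1) steps,
    # enforcing causality (no future leakage) plus the sparse local pattern.
    i = np.arange(T)[:, None]
    j = np.arange(T)[None, :]
    return (j <= i) & (j > i - window)

def masked_attention(Q, K, V, mask):
    # Q, K, V: (T, d); mask: (T, T) boolean, True where attention is allowed.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    scores = np.where(mask, scores, -np.inf)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)
    return w @ V

# Global causal self-attention is the special case window = T.
T, d = 12, 8
Q = K = V = np.random.randn(T, d)
local = masked_attention(Q, K, V, sliding_window_causal_mask(T, window=3))
```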

August 26, 2025 · 2 min · Research Team

Mitigating Distribution Shift in Stock Price Data via Return-Volatility Normalization for Accurate Prediction

ArXiv ID: 2508.20108 · View on arXiv. Authors: Hyunwoo Lee, Jihyeong Jeon, Jaemin Hong, U Kang. Abstract: How can we address distribution shifts in stock price data to improve stock price prediction accuracy? Stock price prediction has attracted attention from both academia and industry, driven by its potential to uncover complex market patterns and enhance decision-making. However, existing methods often fail to handle distribution shifts effectively, focusing on scaling or representation adaptation without fully addressing distributional discrepancies and shape misalignments between training and test data. We propose ReVol (Return-Volatility Normalization for Mitigating Distribution Shift in Stock Price Data), a robust method for stock price prediction that explicitly addresses the distribution shift problem. ReVol leverages three key strategies to mitigate these shifts: (1) normalizing price features to remove sample-specific characteristics, including return, volatility, and price scale; (2) employing an attention-based module to estimate these characteristics accurately, thereby reducing the influence of market anomalies; and (3) reintegrating the sample characteristics into the predictive process, restoring the traits lost during normalization. Additionally, ReVol combines geometric Brownian motion for long-term trend modeling with neural networks for short-term pattern recognition, unifying their complementary strengths. Extensive experiments on real-world datasets demonstrate that ReVol enhances the performance of state-of-the-art backbone models in most cases, achieving an average improvement of more than 0.03 in IC and over 0.7 in SR across various settings. ...
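To make the normalize-then-reintegrate idea concrete, here is a simplified sketch of return-volatility-scale normalization and its inverse. The function names and the use of plain window statistics (rather than the paper's attention-based estimator of the sample characteristics) are assumptions for illustration only.

```python
import numpy as np

def revol_normalize(prices):
    """prices: (T,) window of closes. Remove return, volatility, and price scale."""
    log_ret = np.diff(np.log(prices))
    mu = log_ret.mean()                 # sample-specific return level
    sigma = log_ret.std() + 1e-8        # sample-specific volatility
    scale = prices[-1]                  # sample-specific price scale
    z = (log_ret - mu) / sigma          # shape-only residual the model is trained on
    return z, (mu, sigma, scale)

def revol_reintegrate(z_pred, stats, horizon=1):
    """Map a predicted residual back to a price by restoring the removed traits."""
    mu, sigma, scale = stats
    log_ret_pred = mu + sigma * z_pred
    return scale * np.exp(horizon * log_ret_pred)

window = np.array([100.0, 101.2, 100.8, 102.5, 103.1])
z, stats = revol_normalize(window)
next_price = revol_reintegrate(z_pred=0.2, stats=stats)
```

The point of the round trip is that the backbone model only ever sees the distribution-stabilized residual series, while the sample-specific traits are restored at prediction time.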

August 13, 2025 · 2 min · Research Team

Forecasting Commodity Price Shocks Using Temporal and Semantic Fusion of Prices Signals and Agentic Generative AI Extracted Economic News

ArXiv ID: 2508.06497 · View on arXiv. Authors: Mohammed-Khalil Ghali, Cecil Pang, Oscar Molina, Carlos Gershenson-Garcia, Daehan Won. Abstract: Accurate forecasting of commodity price spikes is vital for countries with limited economic buffers, where sudden increases can strain national budgets, disrupt import-reliant sectors, and undermine food and energy security. This paper introduces a hybrid forecasting framework that combines historical commodity price data with semantic signals derived from global economic news, using an agentic generative AI pipeline. The architecture integrates dual-stream Long Short-Term Memory (LSTM) networks with attention mechanisms to fuse structured time-series inputs with semantically embedded, fact-checked news summaries collected from 1960 to 2023. The model is evaluated on a 64-year dataset comprising normalized commodity price series and temporally aligned news embeddings. Results show that the proposed approach achieves a mean AUC of 0.94 and an overall accuracy of 0.91, substantially outperforming traditional baselines such as logistic regression (AUC = 0.34), random forest (AUC = 0.57), and support vector machines (AUC = 0.47). Ablation studies reveal that removing attention or dimensionality reduction leads to moderate declines in performance, while eliminating the news component causes a steep drop in AUC to 0.46, underscoring the critical value of incorporating real-world context through unstructured text. These findings demonstrate that integrating agentic generative AI with deep learning can meaningfully improve early detection of commodity price shocks, offering a practical tool for economic planning and risk mitigation in volatile markets while avoiding the high cost of operating a full generative AI agent pipeline. ...
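A minimal PyTorch sketch of the dual-stream idea, one LSTM over price features and one over news embeddings, fused by attention before a shock-probability head, is given below. The module name, dimensions, and the choice of cross-attention (price queries attending to news keys/values) are assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class DualStreamShockModel(nn.Module):
    def __init__(self, price_dim=1, news_dim=384, hidden=64, heads=4):
        super().__init__()
        self.price_lstm = nn.LSTM(price_dim, hidden, batch_first=True)
        self.news_lstm = nn.LSTM(news_dim, hidden, batch_first=True)
        # Price stream queries the news stream, fusing temporal and semantic signals.
        self.cross_attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, prices, news):
        hp, _ = self.price_lstm(prices)          # (B, T, hidden)
        hn, _ = self.news_lstm(news)             # (B, T, hidden)
        fused, _ = self.cross_attn(hp, hn, hn)   # (B, T, hidden)
        return torch.sigmoid(self.head(fused[:, -1])).squeeze(-1)  # P(price shock)

model = DualStreamShockModel()
p_shock = model(torch.randn(8, 24, 1), torch.randn(8, 24, 384))  # 8 series, 24 steps
```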

July 24, 2025 · 2 min · Research Team

Trading Under Uncertainty: A Distribution-Based Strategy for Futures Markets Using FutureQuant Transformer

ArXiv ID: 2505.05595 · View on arXiv. Authors: Wenhao Guo, Yuda Wang, Zeqiao Huang, Changjiang Zhang, Shumin ma. Abstract: In the complex landscape of traditional futures trading, where vast data and variables such as real-time Limit Order Books (LOB) complicate price prediction, we introduce the FutureQuant Transformer model, which leverages attention mechanisms to navigate these challenges. Unlike conventional models focused on point predictions, the FutureQuant model excels at forecasting the range and volatility of future prices, offering richer insights for trading strategies. Its ability to parse and learn intricate market patterns allows for enhanced decision-making, significantly improving risk management and achieving a notable average gain of 0.1193% per 30-minute trade over state-of-the-art models, using a simple algorithm built on factors such as RSI, ATR, and Bollinger Bands. This innovation marks a substantial leap forward in predictive analytics within the volatile domain of futures trading. ...
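The abstract's key distinction, forecasting a range rather than a point, is commonly implemented with a quantile (pinball) loss over several predicted quantiles. The sketch below shows that loss under assumed quantile levels and is not claimed to be FutureQuant's exact objective.

```python
import torch

def pinball_loss(pred_q, target, quantiles=(0.1, 0.5, 0.9)):
    """pred_q: (B, Q) predicted price quantiles; target: (B,) realized prices.

    Under-prediction is penalized with weight q and over-prediction with (1 - q),
    so the model learns a band (e.g. 10th to 90th percentile) instead of one point.
    """
    err = target.unsqueeze(1) - pred_q                                   # (B, Q)
    q = torch.tensor(quantiles, dtype=pred_q.dtype).unsqueeze(0)          # (1, Q)
    return torch.maximum(q * err, (q - 1) * err).mean()

pred = torch.tensor([[99.0, 100.0, 101.5]])       # one forecast of three quantiles
loss = pinball_loss(pred, torch.tensor([100.4]))  # realized price inside the band
```

A distributional output like this is what lets a downstream strategy size positions on predicted volatility and range, not just direction.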

May 8, 2025 · 2 min · Research Team

Neural Operators Can Play Dynamic Stackelberg Games

ArXiv ID: 2411.09644 · View on arXiv. Authors: Unknown. Abstract: Dynamic Stackelberg games are a broad class of two-player games in which the leader acts first and the follower chooses a response strategy to the leader's strategy. Unfortunately, only stylized Stackelberg games are explicitly solvable, since the follower's best-response operator (as a function of the leader's control) is typically analytically intractable. This paper addresses this issue by showing that the follower's best-response operator can be approximately implemented by an attention-based neural operator, uniformly on compact subsets of adapted open-loop controls for the leader. We further show that the value of the Stackelberg game in which the follower uses the approximate best-response operator approximates the value of the original Stackelberg game. Our main result is obtained using our universal approximation theorem for attention-based neural operators between spaces of square-integrable adapted stochastic processes, as well as stability results for a general class of Stackelberg games. ...
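As one concrete reading of "attention-based neural operator", the sketch below maps a discretized leader control path to a follower response path with a causally masked Transformer encoder. The class name, dimensions, and time discretization are illustrative assumptions, and the causal mask is a stand-in for the adaptedness of the processes, not the paper's construction.

```python
import torch
import torch.nn as nn

class BestResponseOperator(nn.Module):
    """Approximate follower best response: leader path (B, T, d_u) -> follower path (B, T, d_v)."""
    def __init__(self, d_u=1, d_v=1, width=64, heads=4, layers=2):
        super().__init__()
        self.lift = nn.Linear(d_u, width)
        block = nn.TransformerEncoderLayer(width, heads, dim_feedforward=4 * width,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(block, num_layers=layers)
        self.proj = nn.Linear(width, d_v)

    def forward(self, u):
        T = u.shape[1]
        # Causal mask: the response at time t may depend on the leader's control only up to t.
        causal = torch.triu(torch.ones(T, T, dtype=torch.bool, device=u.device), diagonal=1)
        return self.proj(self.encoder(self.lift(u), mask=causal))

op = BestResponseOperator()
response = op(torch.randn(16, 50, 1))  # 16 sample paths of the leader's control, 50 time steps
```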

November 14, 2024 · 2 min · Research Team