
Learning from Expert Factors: Trajectory-level Reward Shaping for Formulaic Alpha Mining

Learning from Expert Factors: Trajectory-level Reward Shaping for Formulaic Alpha Mining ArXiv ID: 2507.20263 “View on arXiv” Authors: Junjie Zhao, Chengxi Zhang, Chenkai Wang, Peng Yang Abstract Reinforcement learning (RL) has successfully automated the complex process of mining formulaic alpha factors for creating interpretable and profitable investment strategies. However, existing methods are hampered by the sparse rewards of the underlying Markov Decision Process. This inefficiency limits the exploration of the vast symbolic search space and destabilizes the training process. To address this, Trajectory-level Reward Shaping (TLRS), a novel reward shaping method, is proposed. TLRS provides dense, intermediate rewards by measuring the subsequence-level similarity between partially generated expressions and a set of expert-designed formulas. Furthermore, a reward centering mechanism is introduced to reduce training variance. Extensive experiments on six major Chinese and U.S. stock indices show that TLRS significantly improves the predictive power of mined factors, boosting the Rank Information Coefficient by 9.29% over existing potential-based shaping algorithms. Notably, TLRS achieves a major leap in computational efficiency by reducing its time complexity with respect to the feature dimension from linear to constant, which is a significant improvement over distance-based baselines. ...
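
To make the shaping idea concrete, the minimal sketch below scores a partially generated expression against a set of expert formulas with a subsequence similarity (longest common subsequence is used as a stand-in for the paper's exact metric) and centers the per-step reward with a running mean. The class and function names (`TLRSShaper`, `shaped_reward`) and the toy expert formula are illustrative assumptions, not the paper's implementation.

```python
"""Illustrative sketch of trajectory-level reward shaping (TLRS).
The similarity metric (longest common subsequence), the expert formula set,
and the centering scheme are stand-ins chosen for illustration."""
from functools import lru_cache


def lcs_len(a: tuple, b: tuple) -> int:
    """Length of the longest common subsequence of two token tuples."""
    @lru_cache(maxsize=None)
    def rec(i: int, j: int) -> int:
        if i == len(a) or j == len(b):
            return 0
        if a[i] == b[j]:
            return 1 + rec(i + 1, j + 1)
        return max(rec(i + 1, j), rec(i, j + 1))
    return rec(0, 0)


class TLRSShaper:
    def __init__(self, expert_formulas: list[list[str]]):
        self.experts = [tuple(f) for f in expert_formulas]
        self.baseline = 0.0          # running mean for reward centering
        self.count = 0

    def similarity(self, partial_tokens: list[str]) -> float:
        p = tuple(partial_tokens)
        if not p:
            return 0.0
        return max(lcs_len(p, e) / max(len(e), 1) for e in self.experts)

    def shaped_reward(self, prev_tokens: list[str], new_tokens: list[str]) -> float:
        """Dense intermediate reward: gain in expert similarity, centered."""
        raw = self.similarity(new_tokens) - self.similarity(prev_tokens)
        self.count += 1
        self.baseline += (raw - self.baseline) / self.count
        return raw - self.baseline


# Example: partial expression tokens scored against a toy expert alpha formula
shaper = TLRSShaper(expert_formulas=[["div", "close", "delay", "close", "5"]])
print(shaper.shaped_reward(["div", "close"], ["div", "close", "delay"]))
```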

July 27, 2025 · 2 min · Research Team

FLAG-Trader: Fusion LLM-Agent with Gradient-based Reinforcement Learning for Financial Trading

FLAG-Trader: Fusion LLM-Agent with Gradient-based Reinforcement Learning for Financial Trading ArXiv ID: 2502.11433 “View on arXiv” Authors: Unknown Abstract Large language models (LLMs) fine-tuned on multimodal financial data have demonstrated impressive reasoning capabilities in various financial tasks. However, they often struggle with multi-step, goal-oriented scenarios in interactive financial markets, such as trading, where complex agentic approaches are required to improve decision-making. To address this, we propose FLAG-Trader, a unified architecture integrating linguistic processing (via LLMs) with gradient-driven reinforcement learning (RL) policy optimization, in which a partially fine-tuned LLM acts as the policy network, leveraging pre-trained knowledge while adapting to the financial domain through parameter-efficient fine-tuning. Through policy gradient optimization driven by trading rewards, our framework not only enhances LLM performance in trading but also improves results on other financial-domain tasks. We present extensive empirical evidence to validate these enhancements. ...
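
A minimal sketch of the core idea, an LLM acting as the policy network with only LoRA adapters trainable and a REINFORCE-style update on trading rewards, is shown below. The base model ("gpt2"), LoRA hyperparameters, the three-action head, and the placeholder reward are illustrative assumptions rather than FLAG-Trader's actual configuration.

```python
"""Sketch: LLM-as-policy with parameter-efficient fine-tuning and a
policy-gradient update on trading rewards. All hyperparameters, the action
set, and the reward value below are illustrative assumptions."""
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

tokenizer = AutoTokenizer.from_pretrained("gpt2")
base = AutoModelForCausalLM.from_pretrained("gpt2")
lora = LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"], task_type="CAUSAL_LM")
policy_lm = get_peft_model(base, lora)            # only LoRA adapters are trainable

action_head = torch.nn.Linear(base.config.n_embd, 3)   # buy / hold / sell
optim = torch.optim.AdamW(
    list(filter(lambda p: p.requires_grad, policy_lm.parameters()))
    + list(action_head.parameters()), lr=1e-4)

def act(state_text: str):
    """Encode the textual market state and sample a trading action."""
    ids = tokenizer(state_text, return_tensors="pt")
    hidden = policy_lm(**ids, output_hidden_states=True).hidden_states[-1][:, -1]
    logits = action_head(hidden)
    dist = torch.distributions.Categorical(logits=logits)
    a = dist.sample()
    return a.item(), dist.log_prob(a)

# REINFORCE-style update: the reward would come from the trading environment
action, logp = act("AAPL close up 1.2%, RSI 65, position flat")
reward = 0.5                                      # placeholder trading reward
loss = -(reward * logp).mean()
optim.zero_grad(); loss.backward(); optim.step()
```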

February 17, 2025 · 2 min · Research Team

Reinforcement-Learning Portfolio Allocation with Dynamic Embedding of Market Information

Reinforcement-Learning Portfolio Allocation with Dynamic Embedding of Market Information ArXiv ID: 2501.17992 “View on arXiv” Authors: Unknown Abstract We develop a portfolio allocation framework that leverages deep learning techniques to address challenges arising from high-dimensional, non-stationary, and low-signal-to-noise market information. Our approach includes a dynamic embedding method that reduces the non-stationary, high-dimensional state space into a lower-dimensional representation. We design a reinforcement learning (RL) framework that integrates generative autoencoders and online meta-learning to dynamically embed market information, enabling the RL agent to focus on the most impactful parts of the state space for portfolio allocation decisions. Empirical analysis based on the top 500 U.S. stocks demonstrates that our framework outperforms common portfolio benchmarks and the predict-then-optimize (PTO) approach using machine learning, particularly during periods of market stress. Traditional factor models do not fully explain this superior performance. The framework’s ability to time volatility reduces its market exposure during turbulent times. Ablation studies confirm the robustness of this performance across various reinforcement learning algorithms. Additionally, the embedding and meta-learning techniques effectively manage the complexities of high-dimensional, noisy, and non-stationary financial data, enhancing both portfolio performance and risk management. ...
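
A minimal sketch of the embedding idea is shown below, assuming a plain autoencoder rather than the paper's generative autoencoder with online meta-learning: a high-dimensional market state is compressed to a low-dimensional code, which is all the portfolio policy sees. The dimensions, architecture, and softmax portfolio head are illustrative.

```python
"""Sketch: dynamic state embedding for RL portfolio allocation.
Architecture, dimensions, and the policy head are illustrative stand-ins."""
import torch
import torch.nn as nn

STATE_DIM, EMBED_DIM, N_ASSETS = 500, 16, 10

class MarketAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                                     nn.Linear(64, EMBED_DIM))
        self.decoder = nn.Sequential(nn.Linear(EMBED_DIM, 64), nn.ReLU(),
                                     nn.Linear(64, STATE_DIM))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

ae = MarketAutoencoder()
policy = nn.Sequential(nn.Linear(EMBED_DIM, 32), nn.ReLU(),
                       nn.Linear(32, N_ASSETS), nn.Softmax(dim=-1))

market_state = torch.randn(1, STATE_DIM)          # noisy, high-dimensional features
recon, code = ae(market_state)
recon_loss = nn.functional.mse_loss(recon, market_state)   # embedding objective
weights = policy(code.detach())                   # portfolio weights from the embedding
print(weights.sum().item())                       # softmax keeps weights summing to 1
```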

January 29, 2025 · 2 min · Research Team

Exploratory Mean-Variance Portfolio Optimization with Regime-Switching Market Dynamics

Exploratory Mean-Variance Portfolio Optimization with Regime-Switching Market Dynamics ArXiv ID: 2501.16659 “View on arXiv” Authors: Unknown Abstract Considering the continuous-time Mean-Variance (MV) portfolio optimization problem, we study a regime-switching market setting and apply reinforcement learning (RL) techniques to assist informed exploration within the control space. We introduce and solve the Exploratory Mean Variance with Regime Switching (EMVRS) problem. We also present a Policy Improvement Theorem. Further, we recognize that the widely applied Temporal Difference (TD) learning is not adequate for the EMVRS context, hence we consider Orthogonality Condition (OC) learning, leveraging the martingale property of the induced optimal value function from the analytical solution to EMVRS. We design an RL algorithm with a more meaningful parameterization in terms of the market parameters and propose an updating scheme for each parameter. Our empirical results demonstrate the superiority of OC learning over TD learning, with clear convergence of the market parameters towards their corresponding ground-truth values in a simulated market scenario. In a real market data study, EMVRS with OC learning outperforms its counterparts with the highest mean and reasonably low volatility of the annualized portfolio returns. ...
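
For readers unfamiliar with the exploratory formulation, the display below recalls the generic entropy-regularized mean-variance objective and the martingale orthogonality condition that OC learning enforces, as they appear in the continuous-time RL literature; the paper's regime-switching version additionally conditions on a Markov market mode, which is not shown here.

```latex
% Hedged sketch: generic exploratory (entropy-regularized) MV objective and the
% martingale orthogonality condition behind OC learning; the regime-switching
% specifics of EMVRS are omitted.
\begin{align*}
\min_{\pi}\ & \operatorname{Var}\!\big[X_T^{\pi}\big]
  \;-\; \lambda\, \mathbb{E}\!\left[\int_0^T \mathcal{H}\big(\pi_t\big)\,dt\right]
  \quad \text{s.t. } \mathbb{E}\big[X_T^{\pi}\big] = z, \\[4pt]
\text{OC learning: choose } & \theta \ \text{ so that }\
  \mathbb{E}\!\left[\int_0^T \xi_t \, dM_t^{\theta}\right] = 0
  \quad \text{for a family of test processes } \xi ,
\end{align*}
```

where $M_t^{\theta}$ is the parameterized value process, which is a martingale at the true parameters.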

January 28, 2025 · 2 min · Research Team

Mean–Variance Portfolio Selection by Continuous-Time Reinforcement Learning: Algorithms, Regret Analysis, and Empirical Study

Mean–Variance Portfolio Selection by Continuous-Time Reinforcement Learning: Algorithms, Regret Analysis, and Empirical Study ArXiv ID: 2412.16175 “View on arXiv” Authors: Unknown Abstract We study continuous-time mean–variance portfolio selection in markets where stock prices are diffusion processes driven by observable factors that are also diffusion processes, yet the coefficients of these processes are unknown. Based on the recently developed reinforcement learning (RL) theory for diffusion processes, we present a general data-driven RL algorithm that learns the pre-committed investment strategy directly without attempting to learn or estimate the market coefficients. For multi-stock Black–Scholes markets without factors, we further devise a baseline algorithm and prove its performance guarantee by deriving a sublinear regret bound in terms of the Sharpe ratio. For performance enhancement and practical implementation, we modify the baseline algorithm and carry out an extensive empirical study to compare its performance, in terms of a host of common metrics, with a large number of widely employed portfolio allocation strategies on S&P 500 constituents. The results demonstrate that the proposed continuous-time RL strategy is consistently among the best, especially in a volatile bear market, and decisively outperforms the model-based continuous-time counterparts by significant margins. ...
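
As a reminder of what the sublinear guarantee asserts, the display below gives a generic episodic regret definition; the paper states its bound in terms of the Sharpe ratio, so the precise per-episode criterion $J$ may differ from this illustration.

```latex
% Hedged sketch: generic episodic regret. J^* is the optimal value of the
% performance criterion and \pi_n the strategy used in episode n.
\[
  \mathrm{Regret}(N) \;=\; \sum_{n=1}^{N}\Big( J^{*} - J(\pi_n)\Big)
  \;=\; o(N),
  \qquad \text{i.e. } \frac{\mathrm{Regret}(N)}{N} \;\to\; 0 .
\]
```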

December 8, 2024 · 2 min · Research Team

Pretrained LLM Adapted with LoRA as a Decision Transformer for Offline RL in Quantitative Trading

Pretrained LLM Adapted with LoRA as a Decision Transformer for Offline RL in Quantitative Trading ArXiv ID: 2411.17900 “View on arXiv” Authors: Unknown Abstract Developing effective quantitative trading strategies using reinforcement learning (RL) is challenging due to the high risks associated with online interaction with live financial markets. Consequently, offline RL, which leverages historical market data without additional exploration, becomes essential. However, existing offline RL methods often struggle to capture the complex temporal dependencies inherent in financial time series and may overfit to historical patterns. To address these challenges, we introduce a Decision Transformer (DT) initialized with pre-trained GPT-2 weights and fine-tuned using Low-Rank Adaptation (LoRA). This architecture leverages the generalization capabilities of pre-trained language models and the efficiency of LoRA to learn effective trading policies from expert trajectories derived solely from historical data. Our model performs competitively with established offline RL algorithms, including Conservative Q-Learning (CQL), Implicit Q-Learning (IQL), and Behavior Cloning (BC), as well as a baseline Decision Transformer with randomly initialized GPT-2 weights and LoRA. Empirical results demonstrate that our approach effectively learns from expert trajectories and secures superior rewards in certain trading scenarios, highlighting the effectiveness of integrating pre-trained language models and parameter-efficient fine-tuning in offline RL for quantitative trading. Replication code for our experiments is publicly available at https://github.com/syyunn/finrl-dt ...
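
A minimal sketch of the architecture is shown below, assuming a GPT-2 trunk wrapped with LoRA adapters and Decision-Transformer-style interleaving of return-to-go, state, and action tokens; the embedding layers, dimensions, and data handling are illustrative, and the authors' actual implementation is in the linked repository.

```python
"""Sketch: Decision-Transformer-style trunk on pre-trained GPT-2 with LoRA.
Embedding layers, dimensions, and the (return-to-go, state, action) handling
are illustrative; see https://github.com/syyunn/finrl-dt for the real code."""
import torch
import torch.nn as nn
from transformers import GPT2Model
from peft import LoraConfig, get_peft_model

STATE_DIM, ACT_DIM, H = 32, 3, 768                # H = GPT-2 hidden size

trunk = get_peft_model(
    GPT2Model.from_pretrained("gpt2"),
    LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"]))

embed_rtg = nn.Linear(1, H)                       # return-to-go token
embed_state = nn.Linear(STATE_DIM, H)
embed_action = nn.Linear(ACT_DIM, H)
predict_action = nn.Linear(H, ACT_DIM)

def forward(rtg, states, actions):
    """Interleave (R, s, a) tokens and predict actions from state positions."""
    B, T = states.shape[:2]
    tokens = torch.stack(
        [embed_rtg(rtg), embed_state(states), embed_action(actions)],
        dim=2).reshape(B, 3 * T, H)               # R_1 s_1 a_1 R_2 s_2 a_2 ...
    hidden = trunk(inputs_embeds=tokens).last_hidden_state
    return predict_action(hidden[:, 1::3])        # hidden state above each s_t

rtg = torch.randn(2, 10, 1)                       # toy batch of expert trajectories
out = forward(rtg, torch.randn(2, 10, STATE_DIM), torch.randn(2, 10, ACT_DIM))
print(out.shape)                                  # (2, 10, ACT_DIM)
```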

November 26, 2024 · 2 min · Research Team

Robot See, Robot Do: Imitation Reward for Noisy Financial Environments

Robot See, Robot Do: Imitation Reward for Noisy Financial Environments ArXiv ID: 2411.08637 “View on arXiv” Authors: Unknown Abstract The sequential nature of decision-making in financial asset trading aligns naturally with the reinforcement learning (RL) framework, making RL a common approach in this domain. However, the low signal-to-noise ratio in financial markets results in noisy estimates of environment components, including the reward function, which hinders effective policy learning by RL agents. Given the critical importance of reward function design in RL problems, this paper introduces a novel and more robust reward function by leveraging imitation learning, where a trend labeling algorithm acts as an expert. We integrate imitation (expert’s) feedback with reinforcement (agent’s) feedback in a model-free RL algorithm, effectively embedding the imitation learning problem within the RL paradigm to handle the stochasticity of reward signals. Empirical results demonstrate that this novel approach improves financial performance metrics compared to traditional benchmarks and RL agents trained solely using reinforcement feedback. ...
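
A minimal sketch of the reward design is given below: an imitation term rewards agreement with a trend-labeling expert, and a reinforcement term carries the realized profit and loss. The toy labeling rule, the agreement signal, and the mixing weight `alpha` are illustrative assumptions, not the paper's exact construction.

```python
"""Sketch: combining imitation (expert) feedback with reinforcement (agent)
feedback into a single reward. The labeling rule, agreement signal, and
mixing weight `alpha` are illustrative assumptions."""
import numpy as np

def trend_labels(prices: np.ndarray, window: int = 5) -> np.ndarray:
    """Toy expert: label +1 (up) / -1 (down) from a moving-average slope."""
    ma = np.convolve(prices, np.ones(window) / window, mode="same")
    return np.sign(np.gradient(ma))

def combined_reward(action: int, pnl: float, expert_label: int,
                    alpha: float = 0.5) -> float:
    """Imitation term rewards agreement with the expert's trend label;
    the reinforcement term is the (noisy) realized profit and loss."""
    imitation = 1.0 if action == expert_label else -1.0
    return alpha * imitation + (1.0 - alpha) * pnl

prices = np.cumsum(np.random.randn(100)) + 100.0
labels = trend_labels(prices)
print(combined_reward(action=1, pnl=0.02, expert_label=int(labels[10])))
```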

November 13, 2024 · 2 min · Research Team

Reinforcement Learning Framework for Quantitative Trading

Reinforcement Learning Framework for Quantitative Trading ArXiv ID: 2411.07585 “View on arXiv” Authors: Unknown Abstract The inherent volatility and dynamic fluctuations within the financial stock market underscore the necessity for investors to employ a comprehensive and reliable approach that integrates risk management strategies, market trends, and the movement trends of individual securities. By evaluating specific data, investors can make more informed decisions. However, the current body of literature lacks substantial evidence supporting the practical efficacy of reinforcement learning (RL) agents, as many models have only demonstrated success in backtesting on historical data. This highlights the urgent need for a more advanced methodology capable of addressing these challenges. There is also a significant disconnect in the effective use of financial indicators to better understand the potential market trends of individual securities. The disclosure of successful trading strategies is often restricted within financial markets, resulting in a scarcity of widely documented and published strategies leveraging RL. Furthermore, current research frequently overlooks the identification of financial indicators correlated with various market trends and their potential advantages. This research endeavors to address these complexities by enhancing the ability of RL agents to effectively differentiate between positive and negative buy/sell actions using financial indicators. While we do not address all concerns, this paper provides deeper insights and commentary on the utilization of technical indicators and their benefits within reinforcement learning. This work establishes a foundational framework for further exploration and investigation of more complex scenarios. ...
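
As an illustration of using technical indicators as RL state features, the sketch below builds a small state vector from moving-average gaps and a normalized RSI; the indicator choice and window lengths are assumptions for the example, not the paper's feature set.

```python
"""Sketch: building a technical-indicator state vector for an RL agent.
Indicator choices (SMA gaps, RSI) and window lengths are illustrative."""
import numpy as np

def sma(prices: np.ndarray, window: int) -> float:
    return float(prices[-window:].mean())

def rsi(prices: np.ndarray, window: int = 14) -> float:
    diffs = np.diff(prices[-(window + 1):])
    gains, losses = diffs[diffs > 0].sum(), -diffs[diffs < 0].sum()
    if losses == 0:
        return 100.0
    return 100.0 - 100.0 / (1.0 + gains / losses)

def state_vector(prices: np.ndarray) -> np.ndarray:
    """Features an agent can use to separate favorable from unfavorable buy/sell actions."""
    return np.array([
        prices[-1] / sma(prices, 10) - 1.0,   # short-term trend
        prices[-1] / sma(prices, 50) - 1.0,   # longer-term trend
        rsi(prices) / 100.0,                  # normalized momentum
    ])

prices = np.cumsum(np.random.randn(200)) + 100.0
print(state_vector(prices))
```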

November 12, 2024 · 2 min · Research Team

Optimal Execution with Reinforcement Learning

Optimal Execution with Reinforcement Learning ArXiv ID: 2411.06389 “View on arXiv” Authors: Unknown Abstract This study investigates the development of an optimal execution strategy through reinforcement learning, aiming to determine the most effective approach for traders to buy and sell inventory within a finite time horizon. Our proposed model leverages input features derived from the current state of the limit order book and operates at a high frequency to maximize control. To simulate this environment and overcome the limitations associated with relying on historical data, we utilize the multi-agent market simulator ABIDES, which provides a diverse range of depth levels within the limit order book. We present a custom MDP formulation followed by the results of our methodology and benchmark the performance against standard execution strategies. Results show that the reinforcement learning agent outperforms standard strategies and offers a practical foundation for real-world trading applications. ...
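
An illustrative (not the paper's) MDP formulation for execution is sketched below: the state tracks time remaining, residual inventory, and an order-book imbalance feature; the action is a child-order size; and the reward is the negative implementation shortfall of each slice.

```python
"""Sketch: a toy MDP for optimal execution. State, action, and reward
definitions are illustrative; the paper's custom formulation on top of the
ABIDES simulator may differ."""
from dataclasses import dataclass

@dataclass
class ExecState:
    steps_left: int        # remaining decision points in the horizon
    inventory: int         # shares still to be executed
    imbalance: float       # bid/ask volume imbalance from the limit order book

def step(state: ExecState, child_order: int, fill_price: float,
         arrival_price: float) -> tuple[ExecState, float]:
    """Execute a child order and return (next state, reward).
    Reward is the negative implementation shortfall of the slice."""
    filled = min(child_order, state.inventory)
    shortfall = (fill_price - arrival_price) * filled
    next_state = ExecState(state.steps_left - 1,
                           state.inventory - filled,
                           state.imbalance)       # imbalance would be re-observed
    return next_state, -shortfall

s0 = ExecState(steps_left=10, inventory=1_000, imbalance=0.2)
s1, r = step(s0, child_order=100, fill_price=100.05, arrival_price=100.00)
print(s1, r)
```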

November 10, 2024 · 2 min · Research Team

A Review of Reinforcement Learning in Financial Applications

A Review of Reinforcement Learning in Financial Applications ArXiv ID: 2411.12746 “View on arXiv” Authors: Unknown Abstract In recent years, there has been a growing trend of applying Reinforcement Learning (RL) in financial applications. This approach has shown great potential to solve decision-making tasks in finance. In this survey, we present a comprehensive study of the applications of RL in finance and conduct a series of meta-analyses to investigate the common themes in the literature, such as the factors that most significantly affect RL’s performance compared to traditional methods. Moreover, we identify challenges including explainability, Markov Decision Process (MDP) modeling, and robustness that hinder the broader utilization of RL in the financial industry and discuss recent advancements in overcoming these challenges. Finally, we propose future research directions, such as benchmarking, contextual RL, multi-agent RL, and model-based RL to address these challenges and to further enhance the implementation of RL in finance. ...

November 1, 2024 · 2 min · Research Team