
Pretrained LLM Adapted with LoRA as a Decision Transformer for Offline RL in Quantitative Trading

ArXiv ID: 2411.17900 · View on arXiv · Authors: Unknown

Abstract: Developing effective quantitative trading strategies using reinforcement learning (RL) is challenging due to the high risks of online interaction with live financial markets. Consequently, offline RL, which leverages historical market data without additional exploration, becomes essential. However, existing offline RL methods often struggle to capture the complex temporal dependencies inherent in financial time series and may overfit to historical patterns. To address these challenges, we introduce a Decision Transformer (DT) initialized with pre-trained GPT-2 weights and fine-tuned using Low-Rank Adaptation (LoRA). This architecture leverages the generalization capabilities of pre-trained language models and the efficiency of LoRA to learn effective trading policies from expert trajectories drawn solely from historical data. Our model performs competitively with established offline RL algorithms, including Conservative Q-Learning (CQL), Implicit Q-Learning (IQL), and Behavior Cloning (BC), as well as a baseline Decision Transformer with randomly initialized GPT-2 weights and LoRA. Empirical results demonstrate that our approach effectively learns from expert trajectories and secures superior rewards in certain trading scenarios, highlighting the effectiveness of integrating pre-trained language models and parameter-efficient fine-tuning in offline RL for quantitative trading. Replication code for our experiments is publicly available at https://github.com/syyunn/finrl-dt ...
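As a rough illustration of the architecture the abstract describes, the sketch below wires a pretrained GPT-2 backbone into a Decision Transformer and injects LoRA adapters via the peft library. The embedding heads, dimensions, and LoRA hyperparameters here are our own assumptions, not the paper's published configuration; see the authors' repository linked above for the actual code.

```python
# Hedged sketch: pretrained GPT-2 as a Decision Transformer backbone with
# LoRA adapters (HuggingFace transformers + peft). Heads and hyperparameters
# are illustrative assumptions, not the paper's exact setup.
import torch
import torch.nn as nn
from transformers import GPT2Model
from peft import LoraConfig, get_peft_model


class LoRADecisionTransformer(nn.Module):
    def __init__(self, state_dim, act_dim, hidden=768, lora_rank=8):
        super().__init__()
        backbone = GPT2Model.from_pretrained("gpt2")  # 768 = gpt2 hidden size
        # Freeze the pretrained weights; inject low-rank adapters into the
        # fused attention projection used by GPT-2.
        cfg = LoraConfig(r=lora_rank, lora_alpha=16,
                         target_modules=["c_attn"], lora_dropout=0.05)
        self.backbone = get_peft_model(backbone, cfg)
        # Per-modality embeddings for (return-to-go, state, action) tokens.
        self.embed_rtg = nn.Linear(1, hidden)
        self.embed_state = nn.Linear(state_dim, hidden)
        self.embed_action = nn.Linear(act_dim, hidden)
        self.predict_action = nn.Linear(hidden, act_dim)

    def forward(self, rtg, states, actions):
        # rtg: (B, T, 1), states: (B, T, state_dim), actions: (B, T, act_dim)
        B, T, _ = states.shape
        tokens = torch.stack(
            [self.embed_rtg(rtg), self.embed_state(states),
             self.embed_action(actions)], dim=2,
        ).reshape(B, 3 * T, -1)              # interleave (R_t, s_t, a_t)
        h = self.backbone(inputs_embeds=tokens).last_hidden_state
        # Predict a_t from the hidden state at each state token s_t.
        return self.predict_action(h[:, 1::3])
```

With get_peft_model, only the low-rank adapter weights remain trainable, which is what makes the fine-tuning parameter-efficient relative to updating all GPT-2 weights.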

November 26, 2024 · 2 min · Research Team

Consistent time travel for realistic interactions with historical data: reinforcement learning for market making

ArXiv ID: 2408.02322 · View on arXiv · Authors: Unknown

Abstract: Reinforcement learning works best when the impact of the agent's actions on its environment can be perfectly simulated or fully appraised from available data. Some systems are, however, both hard to simulate and very sensitive to small perturbations. An additional difficulty arises when an RL agent is trained offline to be part of a multi-agent system using only anonymous data, which makes it impossible to infer the state of each agent and thus to use the data directly. Typical examples are competitive systems without agent-resolved data, such as financial markets. We introduce consistent data time travel for offline RL as a remedy for these problems: instead of using historical data sequentially, we argue that one needs to perform time travel in the historical data, i.e., to adjust the time index so that both the past state and the influence of the RL agent's action on the system coincide with real data. This both alleviates the need to resort to imperfect models and consistently accounts for both the immediate and long-term reactions of the system when using anonymous historical data. We apply this idea to market making in limit order books, a notoriously difficult task for RL; it turns out that the agent's gain is significantly higher with data time travel than with naive sequential data, which suggests that the difficulty of this task for RL may have been overestimated. ...
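A minimal sketch of how one such time-travel step might look, under our reading of the abstract: after the agent acts, search forward in the historical record for the nearest state consistent with the action's effect and jump there, rather than simply replaying the next tick. The history array, the nearest-neighbour matching criterion, and the tolerance are illustrative assumptions, not the paper's method in detail.

```python
# Hedged sketch of a "data time travel" step: jump to a historical index
# whose market state matches the state the agent's action would have
# produced, instead of sequential replay. Matching rule is an assumption.
import numpy as np


def time_travel_step(history, t, post_action_state, horizon=5000, tol=1e-3):
    """Return the next time index consistent with the agent's action.

    history: (N, d) array of historical market states (e.g. LOB features).
    t: current index; post_action_state: state implied by the agent's action.
    """
    window = history[t + 1 : t + 1 + horizon]
    if len(window) == 0:
        return t + 1                       # end of record: sequential step
    # Distance between each candidate historical state and the state the
    # agent's action would have created.
    dist = np.linalg.norm(window - post_action_state, axis=1)
    j = int(np.argmin(dist))
    if dist[j] > tol:
        # No consistent continuation nearby: fall back to sequential replay.
        return t + 1
    return t + 1 + j                       # "travel" to the matching point
```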

August 5, 2024 · 2 min · Research Team

Towards Generalizable Reinforcement Learning for Trade Execution

ArXiv ID: 2307.11685 · View on arXiv · Authors: Unknown

Abstract: Optimized trade execution aims to sell (or buy) a given amount of assets within a given time at the lowest possible trading cost. Recently, reinforcement learning (RL) has been applied to optimized trade execution to learn smarter policies from market data. However, we find that many existing RL methods exhibit considerable overfitting, which prevents them from real deployment. In this paper, we provide an extensive study of the overfitting problem in optimized trade execution. First, we model optimized trade execution as offline RL with dynamic context (ORDC), where the context represents market variables that cannot be influenced by the trading policy and are collected in an offline manner. Under this framework, we derive a generalization bound and find that the overfitting issue is caused by the large context space and limited context samples in the offline setting. Accordingly, we propose to learn compact representations of the context to address the overfitting problem, either by leveraging prior knowledge or in an end-to-end manner. To evaluate our algorithms, we also implement a carefully designed simulator based on historical limit order book (LOB) data to provide a high-fidelity benchmark for different algorithms. Our experiments on the high-fidelity simulator demonstrate that our algorithms can effectively alleviate overfitting and achieve better performance. ...
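To illustrate the end-to-end variant of the proposed remedy, the sketch below compresses a high-dimensional market context into a compact latent code before the policy head, shrinking the effective context space the policy can overfit to. Layer sizes, the latent dimension, and the split between market context and the agent's private execution state are assumptions on our part.

```python
# Hedged sketch: learn a compact context representation end-to-end with the
# execution policy. Architecture details are illustrative assumptions.
import torch
import torch.nn as nn


class CompactContextPolicy(nn.Module):
    def __init__(self, ctx_dim, private_dim, n_actions, latent_dim=8):
        super().__init__()
        # Context encoder: many raw LOB/market features -> small latent code.
        self.encoder = nn.Sequential(
            nn.Linear(ctx_dim, 64), nn.ReLU(),
            nn.Linear(64, latent_dim),
        )
        # The policy conditions on the compact context plus the agent's own
        # controllable state, e.g. remaining inventory and remaining time.
        self.policy = nn.Sequential(
            nn.Linear(latent_dim + private_dim, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, context, private_state):
        z = self.encoder(context)          # compact context representation
        return self.policy(torch.cat([z, private_state], dim=-1))  # logits
```

Training the encoder jointly with the policy (rather than hand-crafting context features) is the end-to-end route the abstract mentions; the prior-knowledge route would replace the encoder with domain-designed summary statistics.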

May 12, 2023 · 2 min · Research Team