Pretrained LLM Adapted with LoRA as a Decision Transformer for Offline RL in Quantitative Trading
ArXiv ID: 2411.17900
Authors: Unknown
Abstract
Developing effective quantitative trading strategies using reinforcement learning (RL) is challenging due to the high risks associated with online interaction with live financial markets. Consequently, offline RL, which leverages historical market data without additional exploration, becomes essential. However, existing offline RL methods often struggle to capture the complex temporal dependencies inherent in financial time series and may overfit to historical patterns. To address these challenges, we introduce a Decision Transformer (DT) initialized with pre-trained GPT-2 weights and fine-tuned using Low-Rank Adaptation (LoRA). This architecture leverages the generalization capabilities of pre-trained language models and the efficiency of LoRA to learn effective trading policies from expert trajectories drawn solely from historical data. Our model performs competitively with established offline RL algorithms, including Conservative Q-Learning (CQL), Implicit Q-Learning (IQL), and Behavior Cloning (BC), as well as a baseline Decision Transformer with randomly initialized GPT-2 weights and LoRA. Empirical results demonstrate that our approach effectively learns from expert trajectories and achieves superior rewards in certain trading scenarios, highlighting the effectiveness of integrating pre-trained language models and parameter-efficient fine-tuning in offline RL for quantitative trading. Replication code for our experiments is publicly available at https://github.com/syyunn/finrl-dt
Keywords: Decision Transformers, Reinforcement Learning (RL), Low-Rank Adaptation (LoRA), Offline RL, Quantitative Trading, Equities
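The core mechanism described in the abstract, loading pretrained GPT-2 weights and training only low-rank adapters, can be illustrated in a few lines. The sketch below is not the authors' code; it assumes the Hugging Face `transformers` and `peft` libraries, and the rank and alpha values are illustrative rather than the paper's hyperparameters.

```python
from transformers import GPT2Model
from peft import LoraConfig, get_peft_model

# Pretrained GPT-2 backbone whose transformer blocks serve as the
# sequence model inside the Decision Transformer.
backbone = GPT2Model.from_pretrained("gpt2")

# LoRA: inject trainable low-rank updates into GPT-2's fused attention
# projection ("c_attn", a Conv1D layer, hence fan_in_fan_out=True).
# r=8 / lora_alpha=16 are illustrative choices, not the paper's settings.
lora_cfg = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["c_attn"],
    fan_in_fan_out=True,
)

model = get_peft_model(backbone, lora_cfg)
model.print_trainable_parameters()  # only the adapters train; the backbone stays frozen
```

Because the GPT-2 backbone stays frozen, fine-tuning typically touches only a fraction of a percent of the parameters, which is what makes the approach practical on modest hardware.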
Complexity vs Empirical Score
- Math Complexity: 6.5/10
- Empirical Rigor: 7.0/10
- Quadrant: Holy Grail
- Why: The paper combines Decision Transformers, LoRA fine-tuning, and offline RL, giving its architecture and methodology substantial mathematical complexity (the return-conditioned sequence-modeling view is sketched below). The empirical setup is clear: specific financial data (DJIA stocks), comparisons against baselines such as CQL and IQL, and publicly available replication code, indicating high empirical rigor.
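As a concrete reference for the Decision Transformer framing noted above: offline trajectories are converted into (return-to-go, state, action) triples, and the model is trained to predict each action from the preceding sequence. A minimal sketch of the return-to-go computation, using toy numbers rather than actual market data:

```python
import numpy as np

def returns_to_go(rewards: np.ndarray) -> np.ndarray:
    """Undiscounted suffix sums: rtg[t] = rewards[t] + rewards[t+1] + ..."""
    return np.cumsum(rewards[::-1])[::-1]

# Toy per-step trading rewards from one historical trajectory (illustrative).
rewards = np.array([0.5, -0.2, 1.0, 0.3])
rtg = returns_to_go(rewards)  # [1.6, 1.1, 1.3, 0.3]

# Each timestep t contributes the triple (rtg[t], state[t], action[t]);
# the interleaved sequence is what the GPT-2 backbone models.
```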
flowchart TD
A["Research Goal<br>Offline RL for Quantitative Trading"] --> B["Data Input<br>Historical Market Data"]
B --> C["Methodology: Decision Transformer<br>GPT-2 + LoRA"]
C --> D["Training<br>Parameter-Efficient Fine-tuning"]
D --> E["Comparison<br>vs CQL, IQL, BC, Random DT"]
E --> F["Outcome 1<br>Competitive Performance"]
E --> G["Outcome 2<br>Superior Rewards in Scenarios"]
F & G --> H["Final Result<br>Effective Trading Policy"]
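The "Comparison" and "Outcome" stages in the flowchart correspond to return-conditioned rollouts: at evaluation time the Decision Transformer is prompted with a target return, which is then decremented by each realized reward. A hedged sketch under assumed interfaces (`model.predict` and the Gym-style `env` are placeholders, not the paper's objects):

```python
def rollout_with_target_return(model, env, target_return, max_steps=252):
    """Roll out a Decision Transformer conditioned on a desired return.

    `model.predict(states, actions, rtgs)` is a hypothetical interface and
    `env` is assumed to follow the classic Gym API; neither is the paper's code.
    """
    states, actions, rtgs = [env.reset()], [], [target_return]
    total_reward = 0.0
    for _ in range(max_steps):
        action = model.predict(states, actions, rtgs)
        state, reward, done, _ = env.step(action)
        total_reward += reward
        states.append(state)
        actions.append(action)
        rtgs.append(rtgs[-1] - reward)  # the return still to be achieved
        if done:
            break
    return total_reward
```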