Deep reinforcement learning with positional context for intraday trading
ArXiv ID: 2406.08013
Authors: Unknown
Abstract
Deep reinforcement learning (DRL) is a well-suited approach to financial decision-making, where an agent makes decisions based on a trading strategy developed from market observations. Existing DRL intraday trading strategies mainly use price-based features to construct the state space. They neglect the contextual information related to the position of the strategy, which is an important aspect given the sequential nature of intraday trading. In this study, we propose a novel DRL model for intraday trading that introduces positional features encapsulating this contextual information into its sparse state space. The model is evaluated over an extended period of almost a decade and across various assets, including commodities and foreign exchange securities, taking transaction costs into account. The results show notable performance in terms of profitability and risk-adjusted metrics. The feature importance results show that each feature incorporating contextual information contributes to the overall performance of the model. Additionally, through an exploration of the agent's intraday trading activity, we unveil patterns that substantiate the effectiveness of our proposed model.
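To make the abstract's central idea concrete, the sketch below shows one plausible way a state vector could combine price-based features with positional (contextual) features. The specific feature choices here (current position, unrealized return, holding-time fraction, time remaining in the session) and the function signature are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np

def build_state(prices, position, entry_price, entry_step, step, session_len, window=10):
    """Illustrative state vector: price features + positional context.

    prices      : 1-D array of recent prices (length >= window + 1)
    position    : -1 (short), 0 (flat), or +1 (long)
    entry_price : price at which the current position was opened
    entry_step  : time step at which the current position was opened
    step        : current time step within the trading session
    session_len : total number of steps in the intraday session
    """
    # Price-based features: log returns over the lookback window.
    log_returns = np.diff(np.log(prices[-(window + 1):]))

    # Positional (contextual) features -- hypothetical choices.
    if position == 0:
        unrealized = 0.0      # no open position, so no unrealized P&L
        holding_frac = 0.0    # not holding anything
    else:
        # Signed unrealized return of the open position.
        unrealized = position * (prices[-1] - entry_price) / entry_price
        # Fraction of the session spent holding the current position.
        holding_frac = (step - entry_step) / session_len
    # Fraction of the intraday session still remaining.
    time_remaining = (session_len - step) / session_len

    positional = np.array([position, unrealized, holding_frac, time_remaining])
    return np.concatenate([log_returns, positional])
```

With `window=10` this yields a 14-dimensional state; the four trailing entries carry the contextual information that purely price-based state spaces omit.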
Keywords: Deep Reinforcement Learning (DRL), Intraday Trading, State Space Construction, Positional Features, Sequential Decision Making, Commodities
Complexity vs Empirical Score
- Math Complexity: 7.5/10
- Empirical Rigor: 8.0/10
- Quadrant: Holy Grail
- Why: The paper employs advanced deep reinforcement learning techniques with specific architectural choices (like sparse state spaces) and discusses sequential decision-making theory, indicating high mathematical complexity. It demonstrates high empirical rigor by evaluating the model over a decade across multiple asset classes, accounting for transaction costs, and performing feature importance analysis.
```mermaid
flowchart TD
A["Research Goal: Enhance DRL for Intraday Trading"] --> B["Methodology: DRL Model with Positional State"]
B --> C["Data: Multi-Asset History<br/>with Transaction Costs"]
C --> D["Process: Sparse State Space Construction<br/>(Price + Position Context)"]
D --> E["Process: Deep Reinforcement Learning<br/>Agent Training & Decision Making"]
E --> F["Key Findings: High Profitability &<br/>Risk-Adjusted Returns"]
F --> G["Outcome: Validated Positional<br/>Feature Importance & Trading Patterns"]
```