Optimal Execution Using Reinforcement Learning
ArXiv ID: 2306.17178
Authors: Unknown
Abstract
This work addresses optimal order execution, in which a large order is split into several smaller orders so as to minimize the implementation shortfall. Motivated by the diversity of cryptocurrency exchanges, we attempt, for the first time, to extract cross-exchange signals by aligning data from multiple exchanges. Unlike most previous studies, which rely on single-exchange information, we examine how cross-exchange signals affect the agent's decision-making in the optimal execution problem. Experimental results show that cross-exchange signals provide additional information that improves the execution of cryptocurrency orders.
Keywords: Optimal Execution, Cross-Exchange Signals, Reinforcement Learning, Implementation Shortfall, High-Frequency Data, Cryptocurrency
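For concreteness, implementation shortfall for a sell order measures how far the realized proceeds of the executed child orders fall short of liquidating the same quantity at the arrival price; the agent's objective is to minimize it. The sketch below is a minimal illustration of that benchmark comparison under a simple per-fill formulation; the function and variable names are assumptions for this summary and are not taken from the paper.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Fill:
    """One executed child order."""
    quantity: float  # units filled
    price: float     # execution price

def implementation_shortfall(arrival_price: float, fills: List[Fill],
                             side: str = "sell") -> float:
    """Implementation shortfall of a parent order against the arrival-price benchmark.

    For a sell order: IS = (arrival value of the filled quantity) - (realized proceeds).
    For a buy order the sign flips. A positive value means the execution
    underperformed the arrival-price benchmark; the goal is to minimize it.
    """
    filled_qty = sum(f.quantity for f in fills)
    realized = sum(f.quantity * f.price for f in fills)
    benchmark = filled_qty * arrival_price
    if side == "sell":
        return benchmark - realized
    return realized - benchmark

# Example: selling 3 units after arriving when the mid price was 100.0
fills = [Fill(1.0, 99.8), Fill(1.0, 99.5), Fill(1.0, 99.9)]
print(implementation_shortfall(100.0, fills, side="sell"))  # 0.8
```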
Complexity vs Empirical Score
- Math Complexity: 7.5/10
- Empirical Rigor: 4.0/10
- Quadrant: Lab Rats
- Why: The paper employs advanced reinforcement learning methods and stochastic control theory, indicating high mathematical complexity. However, it relies on backtesting on cryptocurrency data without detailed implementation code or robust statistical validation, placing it in the Lab Rats quadrant.
flowchart TD
A["Research Goal:<br/>Optimal Execution with<br/>Cross-Exchange Signals"] --> B["Methodology:<br/>Align High-Freq Data<br/>from Multiple Exchanges"]
B --> C["Inputs:<br/>Multi-Exchange<br/>Order Book Data"]
C --> D["Computation:<br/>Reinforcement Learning Agent<br/>for Order Splitting"]
D --> E["Process:<br/>Execute Orders to Maximize<br/>Implementation Shortfall"]
E --> F["Outcome:<br/>Cross-Exchange Signals<br/>Enhance Execution Performance"]