Option Market Making via Reinforcement Learning
ArXiv ID: 2307.01814
Authors: Unknown
Abstract
Market making of options with different maturities and strikes is a challenging problem due to its high-dimensional nature. In this paper, we propose a novel approach that combines a stochastic policy with reinforcement-learning-inspired techniques to determine the optimal policy for posting bid-ask spreads for an options market maker who trades options with different maturities and strikes.
Keywords: Options market making, High-dimensional stochastic control, Reinforcement learning, Bid-ask spread optimization, Options (Derivatives)
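To make the setup in the abstract concrete, the sketch below shows one way a stochastic quoting policy over a grid of strikes and maturities could look: a small network maps the market maker's state to a Gaussian distribution over bid/ask half-spreads, one pair per quoted option. The names, dimensions, and state encoding are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (assumed architecture, not the paper's): a stochastic policy
# that maps the market maker's state (per-option inventories and mark prices,
# plus underlying price and time) to bid/ask half-spreads for every option.
import torch
import torch.nn as nn

N_STRIKES, N_MATURITIES = 5, 3            # assumed size of the quoted option grid
N_OPTIONS = N_STRIKES * N_MATURITIES
STATE_DIM = 2 * N_OPTIONS + 2             # inventories + mark prices + (S_t, t)

class SpreadPolicy(nn.Module):
    """Stochastic policy: state -> Normal distribution over half-spreads."""
    def __init__(self, hidden=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(STATE_DIM, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
        )
        # one bid and one ask half-spread per quoted option
        self.mean_head = nn.Linear(hidden, 2 * N_OPTIONS)
        self.log_std = nn.Parameter(torch.zeros(2 * N_OPTIONS))

    def forward(self, state):
        mean = self.mean_head(self.backbone(state))
        return torch.distributions.Normal(mean, self.log_std.exp())

policy = SpreadPolicy()
state = torch.randn(STATE_DIM)            # placeholder market/inventory state
dist = policy(state)
raw = dist.sample()
log_prob = dist.log_prob(raw).sum()       # would feed a policy-gradient update
half_spreads = raw.clamp(min=0.0)         # quoted half-spreads must be non-negative
```

Sampling the spreads (rather than outputting them deterministically) is what makes the policy stochastic, which is what allows policy-gradient-style reinforcement learning updates.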
Complexity vs Empirical Score
- Math Complexity: 8.5/10
- Empirical Rigor: 3.0/10
- Quadrant: Lab Rats
- Why: The paper develops a sophisticated mathematical framework involving stochastic control, HJB equations, and convergence proofs, indicating high mathematical complexity, but it lacks empirical validation such as backtesting or implementation on market data, placing it firmly in the ‘Lab Rats’ quadrant.
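For context on the stochastic-control side mentioned above, a generic single-asset market-making HJB equation (in the spirit of Avellaneda–Stoikov and Guéant–Lehalle–Fernandez-Tapia) is written out below. This is standard background, not necessarily the exact equation of the paper; the paper's multi-option problem is a higher-dimensional analogue, which is what makes direct numerical solution hard and motivates the RL-inspired approach.

```latex
% Background: single-asset market-making HJB with value function v(t, x, s, q)
% in time, cash, mid-price, and inventory, and fill intensities lambda^{b/a}(delta)
% that decay in the posted half-spreads delta^{b/a}.
\[
0 = \partial_t v + \tfrac{1}{2}\sigma^{2}\,\partial_{ss} v
  + \sup_{\delta^{b}} \lambda^{b}(\delta^{b})
      \bigl[\, v(t, x - s + \delta^{b}, s, q + 1) - v(t, x, s, q) \,\bigr]
  + \sup_{\delta^{a}} \lambda^{a}(\delta^{a})
      \bigl[\, v(t, x + s + \delta^{a}, s, q - 1) - v(t, x, s, q) \,\bigr]
\]
\[
v(T, x, s, q) = x + q\,s - \ell(q)
\]
```

Here the terminal condition values the final cash and marked-to-market inventory minus a liquidation penalty \(\ell(q)\); with many strikes and maturities the inventory \(q\) becomes a vector and the equation's dimensionality grows accordingly.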
```mermaid
flowchart TD
    A["Research Goal: Optimal Options Market Making via RL"] --> B["Input: High-Dimensional Options Data<br>Multiple Maturities & Strikes"]
    B --> C["Methodology: Stochastic Policy &<br>Reinforcement Learning Framework"]
    C --> D["Computational Process:<br>Deep RL for Bid-Ask Spread Optimization"]
    D --> E["Outcome: Efficient Policy for<br>Multi-Dimensional Options Market Making"]
    E --> F["Key Finding: RL successfully handles<br>high-dimensional price volatility"]
```
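The "Deep RL for Bid-Ask Spread Optimization" step in the diagram can be illustrated with a self-contained toy: REINFORCE on a single quoted option with a Brownian mid-price and exponential fill intensities. The dynamics, parameters, and reward shaping below are common market-making modelling assumptions chosen for illustration, not the paper's calibrated setup or algorithm.

```python
# Toy sketch: policy-gradient (REINFORCE) training of a quoting policy on a
# simulated single-option market. All model parameters are assumptions.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
DT, T_STEPS, SIGMA = 0.01, 100, 1.0      # time step, horizon, mid-price volatility
A, K = 5.0, 1.5                          # exponential fill-intensity parameters

policy = nn.Sequential(nn.Linear(3, 32), nn.Tanh(), nn.Linear(32, 2))
log_std = nn.Parameter(torch.zeros(2))
opt = torch.optim.Adam(list(policy.parameters()) + [log_std], lr=1e-3)

for episode in range(200):
    s, q = 100.0, 0.0                                # mid-price, inventory
    log_probs, rewards = [], []
    for t in range(T_STEPS):
        state = torch.tensor([q, s - 100.0, 1.0 - t / T_STEPS])
        dist = torch.distributions.Normal(policy(state), log_std.exp())
        raw = dist.sample()
        log_probs.append(dist.log_prob(raw).sum())
        delta_b, delta_a = F.softplus(raw).tolist()  # half-spreads >= 0
        # Poisson fills with intensity lambda(delta) = A * exp(-K * delta)
        fill_b = torch.rand(1).item() < A * math.exp(-K * delta_b) * DT
        fill_a = torch.rand(1).item() < A * math.exp(-K * delta_a) * DT
        pnl = 0.0
        if fill_b:                                   # buy one unit at s - delta_b
            q += 1
            pnl += delta_b
        if fill_a:                                   # sell one unit at s + delta_a
            q -= 1
            pnl += delta_a
        s += SIGMA * math.sqrt(DT) * torch.randn(1).item()   # Brownian mid-price move
        rewards.append(pnl - 0.1 * q * q * DT)       # spread edge minus inventory penalty
    episode_return = sum(rewards)
    loss = -torch.stack(log_probs).sum() * episode_return   # REINFORCE on episode return
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In the multi-option setting of the paper, the same loop structure applies but the action vector contains one bid/ask pair per strike and maturity, which is where the high-dimensionality challenge enters.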