
FinFlowRL: An Imitation-Reinforcement Learning Framework for Adaptive Stochastic Control in Finance

ArXiv ID: 2509.17964 · View on arXiv
Authors: Yang Li, Zhi Chen, Steve Y. Yang, Ruixun Zhang
Abstract: Traditional stochastic control methods in finance rely on simplifying assumptions that often fail in real-world markets. While these methods work well in specific, well-defined scenarios, they underperform when market conditions change. We introduce FinFlowRL, a novel framework for financial stochastic control that combines imitation learning with reinforcement learning. The framework first pretrains an adaptive meta-policy by learning from multiple expert strategies, then fine-tunes it through reinforcement learning in the noise space to optimize the generation process. By employing action chunking, that is, generating sequences of actions rather than single decisions, it addresses the non-Markovian nature of financial markets. FinFlowRL consistently outperforms individually optimized experts across diverse market conditions. ...
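The abstract describes a two-stage recipe: imitation pretraining of a meta-policy on action chunks from several experts, followed by fine-tuning in the noise space that conditions the generator. The sketch below illustrates that idea under stated assumptions only; the names (`MetaPolicy`, chunk horizon `H`, `finetune_noise`) and network sizes are illustrative, not the paper's implementation.

```python
# Hypothetical sketch of the FinFlowRL-style two-stage recipe:
# (1) behavior-clone a meta-policy on action chunks pooled from several experts,
# (2) fine-tune only the conditioning noise to maximize a reward (policy frozen).
import torch
import torch.nn as nn

STATE_DIM, ACT_DIM, H, NOISE_DIM = 8, 1, 5, 4   # H = action-chunk horizon

class MetaPolicy(nn.Module):
    """Maps (state, noise) to a chunk of H future actions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + NOISE_DIM, 64), nn.ReLU(),
            nn.Linear(64, H * ACT_DIM),
        )
    def forward(self, state, noise):
        return self.net(torch.cat([state, noise], dim=-1)).view(-1, H, ACT_DIM)

def pretrain(policy, expert_states, expert_chunks, steps=200):
    """Stage 1: imitation. Regress action chunks gathered from multiple experts."""
    opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
    for _ in range(steps):
        noise = torch.randn(expert_states.size(0), NOISE_DIM)
        loss = nn.functional.mse_loss(policy(expert_states, noise), expert_chunks)
        opt.zero_grad(); loss.backward(); opt.step()
    return policy

def finetune_noise(policy, state, reward_fn, steps=100):
    """Stage 2: RL-style fine-tuning in noise space. The policy weights stay frozen;
    only the conditioning noise is optimized to maximize a (toy) chunk reward."""
    noise = torch.randn(1, NOISE_DIM, requires_grad=True)
    opt = torch.optim.Adam([noise], lr=1e-2)
    for _ in range(steps):
        chunk = policy(state, noise)
        loss = -reward_fn(chunk)          # gradient ascent on the reward
        opt.zero_grad(); loss.backward(); opt.step()
    return noise.detach()

# Toy usage with synthetic "expert" data and a placeholder reward.
states = torch.randn(256, STATE_DIM)
chunks = torch.randn(256, H, ACT_DIM)
pi = pretrain(MetaPolicy(), states, chunks)
z = finetune_noise(pi, torch.randn(1, STATE_DIM), lambda a: -a.pow(2).mean())
```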

September 22, 2025 · 2 min · Research Team

FlowOE: Imitation Learning with Flow Policy from Ensemble RL Experts for Optimal Execution under Heston Volatility and Concave Market Impacts

ArXiv ID: 2506.05755 · View on arXiv
Authors: Yang Li, Zhi Chen
Abstract: Optimal execution in financial markets refers to the process of strategically transacting a large volume of assets over a period to achieve the best possible outcome by balancing the trade-off between market impact costs and timing or volatility risks. Traditional optimal execution strategies, such as static Almgren-Chriss models, often prove suboptimal in dynamic financial markets. This paper proposes FlowOE, a novel imitation learning framework based on flow matching models, to address these limitations. FlowOE learns from a diverse set of expert traditional strategies and adaptively selects the most suitable expert behavior for prevailing market conditions. A key innovation is the incorporation of a refining loss function during the imitation process, enabling FlowOE not only to mimic but also to improve upon the learned expert actions. To the best of our knowledge, this work is the first to apply flow matching models to a stochastic optimal execution problem. Empirical evaluations across various market conditions demonstrate that FlowOE significantly outperforms both the specifically calibrated expert models and other traditional benchmarks, achieving higher profits with reduced risk. These results underscore the practical applicability and potential of FlowOE to enhance adaptive optimal execution. ...
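To make the "imitation via flow matching plus a refining loss" idea concrete, here is a minimal conditional flow-matching training step with a placeholder refining penalty. The exact form of the refining loss, the cost function, and the network sizes are assumptions for illustration, not taken from the paper.

```python
# Minimal conditional flow-matching sketch for imitating expert execution actions,
# plus a hedged "refining" term in the spirit of the FlowOE abstract.
import torch
import torch.nn as nn

STATE_DIM, ACT_DIM = 6, 1

class VelocityField(nn.Module):
    """v_theta(x_t, t | state): the flow that transports noise toward expert actions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(ACT_DIM + 1 + STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, ACT_DIM),
        )
    def forward(self, x_t, t, state):
        return self.net(torch.cat([x_t, t, state], dim=-1))

def flow_matching_step(model, opt, state, expert_action, refine_cost, lam=0.1):
    """One step: standard linear-path flow matching + a placeholder refining penalty."""
    x0 = torch.randn_like(expert_action)                 # noise sample
    t = torch.rand(expert_action.size(0), 1)
    x_t = (1 - t) * x0 + t * expert_action               # linear interpolation path
    target_v = expert_action - x0                        # constant target velocity
    fm_loss = nn.functional.mse_loss(model(x_t, t, state), target_v)
    # Refining term (assumption): penalize a differentiable execution cost of a
    # one-step generated action so the policy can improve on the expert.
    gen_action = x0 + model(x0, torch.zeros_like(t), state)
    loss = fm_loss + lam * refine_cost(gen_action, state).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

model = VelocityField()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
state = torch.randn(128, STATE_DIM)
expert = torch.randn(128, ACT_DIM)
toy_cost = lambda a, s: a.pow(2).sum(dim=-1)             # placeholder impact cost
for _ in range(100):
    flow_matching_step(model, opt, state, expert, toy_cost)
```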

June 6, 2025 · 2 min · Research Team

FlowHFT: Imitation Learning via Flow Matching Policy for Optimal High-Frequency Trading under Diverse Market Conditions

ArXiv ID: 2505.05784 · View on arXiv
Authors: Yang Li, Zhi Chen, Steve Yang
Abstract: High-frequency trading (HFT) is an investing strategy that continuously monitors market states and places bid and ask orders at millisecond speeds. Traditional HFT approaches fit models with historical data and assume that future market states follow similar patterns. This limits the effectiveness of any single model to the specific conditions it was trained for. Additionally, these models achieve optimal solutions only under specific market conditions, such as assumptions about the stock price's stochastic process, stable order flow, and the absence of sudden volatility. Real-world markets, however, are dynamic, diverse, and frequently volatile. To address these challenges, we propose FlowHFT, a novel imitation learning framework based on a flow matching policy. FlowHFT simultaneously learns strategies from numerous expert models, each proficient in particular market scenarios. As a result, our framework can adaptively adjust investment decisions according to the prevailing market state. Furthermore, FlowHFT incorporates a grid-search fine-tuning mechanism, which allows it to refine strategies and achieve superior performance even in complex or extreme market scenarios where expert strategies may be suboptimal. We test FlowHFT in multiple market environments. We first show that the flow matching policy is applicable in stochastic market environments, enabling FlowHFT to learn trading strategies under different market conditions. Notably, our single framework consistently achieves performance superior to the best expert for each market condition. ...
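The grid-search fine-tuning mechanism mentioned in the abstract can be pictured as a sweep over a small set of inference-time knobs for a pretrained flow policy, keeping whichever configuration scores best in a market simulator. The knob names (`num_steps`, `noise_scale`), `evaluate_policy`, and the stub simulator below are illustrative assumptions.

```python
# Hedged sketch of grid-search fine-tuning for a pretrained flow policy.
from itertools import product

def evaluate_policy(policy, num_steps, noise_scale, simulator, episodes=20):
    """Average score of the policy in the simulator under one knob setting (placeholder)."""
    total = 0.0
    for _ in range(episodes):
        total += simulator.run(policy, num_steps=num_steps, noise_scale=noise_scale)
    return total / episodes

def grid_search_finetune(policy, simulator):
    grid = {
        "num_steps": [2, 5, 10],          # flow integration steps at inference
        "noise_scale": [0.5, 1.0, 1.5],   # scale of the initial noise sample
    }
    best_score, best_cfg = float("-inf"), None
    for num_steps, noise_scale in product(grid["num_steps"], grid["noise_scale"]):
        score = evaluate_policy(policy, num_steps, noise_scale, simulator)
        if score > best_score:
            best_score = score
            best_cfg = {"num_steps": num_steps, "noise_scale": noise_scale}
    return best_cfg, best_score

# Toy usage with a stub simulator so the sketch runs end to end.
class StubSimulator:
    def run(self, policy, num_steps, noise_scale):
        return -abs(num_steps - 5) - abs(noise_scale - 1.0)  # peaks at (5, 1.0)

print(grid_search_finetune(policy=None, simulator=StubSimulator()))
```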

May 9, 2025 · 2 min · Research Team

Robot See, Robot Do: Imitation Reward for Noisy Financial Environments

ArXiv ID: 2411.08637 · View on arXiv
Authors: Unknown
Abstract: The sequential nature of decision-making in financial asset trading aligns naturally with the reinforcement learning (RL) framework, making RL a common approach in this domain. However, the low signal-to-noise ratio in financial markets results in noisy estimates of environment components, including the reward function, which hinders effective policy learning by RL agents. Given the critical importance of reward function design in RL problems, this paper introduces a novel and more robust reward function by leveraging imitation learning, where a trend labeling algorithm acts as an expert. We integrate imitation (expert) feedback with reinforcement (agent) feedback in a model-free RL algorithm, effectively embedding the imitation learning problem within the RL paradigm to handle the stochasticity of reward signals. Empirical results demonstrate that this novel approach improves financial performance metrics compared to traditional benchmarks and RL agents trained solely using reinforcement feedback. ...
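A simple way to picture the blended reward described in the abstract is a weighted sum of the agent's raw trading feedback and an agreement term against a trend-labeling expert. The labeler, the weighting `alpha`, and the function names below are assumptions for illustration, not the paper's formulation.

```python
# Illustrative blend of imitation and reinforcement feedback.
import numpy as np

def trend_labels(prices, window=5):
    """Toy expert: label +1 if the forward return over `window` steps is positive, else -1."""
    future = np.roll(prices, -window)
    labels = np.sign(future - prices)
    labels[-window:] = 0                      # no forward information at the end
    return labels

def imitation_reward(position, pnl, expert_label, alpha=0.5):
    """Reward = (1 - alpha) * reinforcement feedback + alpha * imitation feedback.
    The imitation feedback is +1 when the agent's position agrees with the expert label."""
    agreement = 1.0 if position == expert_label else -1.0
    return (1 - alpha) * pnl + alpha * agreement

prices = np.cumsum(np.random.randn(200)) + 100.0
labels = trend_labels(prices)
# Example: the agent is long (+1) at t=10 and earns a small PnL.
print(imitation_reward(position=+1, pnl=0.02, expert_label=labels[10]))
```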

November 13, 2024 · 2 min · Research Team

MOT: A Mixture of Actors Reinforcement Learning Method by Optimal Transport for Algorithmic Trading

ArXiv ID: 2407.01577 · View on arXiv
Authors: Unknown
Abstract: Algorithmic trading refers to executing buy and sell orders for specific assets based on automatically identified trading opportunities. Strategies based on reinforcement learning (RL) have demonstrated remarkable capabilities in addressing algorithmic trading problems. However, trading patterns differ across market conditions due to shifting data distributions, and ignoring these multiple patterns in the data undermines the performance of RL. In this paper, we propose MOT, which designs multiple actors with disentangled representation learning to model the different patterns of the market. Furthermore, we incorporate the Optimal Transport (OT) algorithm to allocate samples to the appropriate actor by introducing a regularization loss term. Additionally, we propose a Pretrain Module to facilitate imitation learning by aligning the outputs of the actors with an expert strategy and to better balance the exploration and exploitation of RL. Experimental results on real futures market data demonstrate that MOT exhibits excellent profit capabilities while balancing risks. Ablation studies validate the effectiveness of the components of MOT. ...
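The optimal-transport allocation idea can be sketched as an entropic OT problem that softly assigns batch samples to actors given a per-actor cost. The cost used here (squared distance to an actor "specialization" center) and the Sinkhorn settings are stand-in assumptions, not the paper's construction.

```python
# Hedged sketch: allocate samples to actors with entropic optimal transport (Sinkhorn).
import numpy as np

def sinkhorn(cost, eps=0.1, iters=200):
    """Entropic OT between uniform marginals; returns the transport (assignment) plan."""
    n, k = cost.shape
    a, b = np.full(n, 1.0 / n), np.full(k, 1.0 / k)
    K = np.exp(-cost / eps)
    u = np.ones(n)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

# Toy setup: 8 samples, 3 actors; cost = squared distance between sample features
# and each actor's "specialization" center (a stand-in for per-actor fit).
rng = np.random.default_rng(0)
samples = rng.normal(size=(8, 2))
actor_centers = rng.normal(size=(3, 2))
cost = ((samples[:, None, :] - actor_centers[None, :, :]) ** 2).sum(-1)
plan = sinkhorn(cost)
assignment = plan.argmax(axis=1)        # hard assignment of each sample to an actor
print(assignment, plan.sum())           # the plan sums to 1 (a joint distribution)
```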

June 3, 2024 · 2 min · Research Team

Curriculum Learning and Imitation Learning for Model-free Control on Financial Time-series

ArXiv ID: 2311.13326 · View on arXiv
Authors: Unknown
Abstract: Curriculum learning and imitation learning have been leveraged extensively in the robotics domain. However, minimal research has been done on leveraging these ideas for control tasks over highly stochastic time-series data. Here, we theoretically and empirically explore these approaches in a representative control task over complex time-series data. We implement the fundamental ideas of curriculum learning via data augmentation, while imitation learning is implemented via policy distillation from an oracle. Our findings reveal that curriculum learning should be considered a novel direction for improving control-task performance over complex time series. Our extensive out-of-sample experiments across random seeds and our ablation studies are highly encouraging for curriculum learning in time-series control. These findings are especially encouraging given that we tune all overlapping hyperparameters on the baseline, thereby giving an advantage to the baseline. On the other hand, we find that imitation learning should be used with caution. ...
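One plausible reading of "curriculum learning via data augmentation" on a time series is to start training on smoothed (easier) price paths and gradually reintroduce the raw noise. The schedule, the smoothing choice, and the function names below are assumptions for illustration only.

```python
# Minimal sketch of a smoothing-based curriculum over a price series.
import numpy as np

def smooth(series, strength):
    """Exponential moving average; strength in [0, 1], 1 = maximal smoothing."""
    if strength <= 0:
        return series.copy()
    alpha = 1.0 - 0.95 * strength          # smaller alpha = smoother path
    out = np.empty_like(series)
    out[0] = series[0]
    for t in range(1, len(series)):
        out[t] = alpha * series[t] + (1 - alpha) * out[t - 1]
    return out

def curriculum_batches(series, stages=5, epochs_per_stage=3):
    """Yield (stage, training series), easing from heavily smoothed to raw data."""
    for stage in range(stages):
        strength = 1.0 - stage / (stages - 1)      # 1.0 -> 0.0
        for _ in range(epochs_per_stage):
            yield stage, smooth(series, strength)

prices = np.cumsum(np.random.randn(500)) + 100.0
for stage, batch in curriculum_batches(prices):
    pass  # train the control policy on `batch` here; the final stages see raw prices
```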

November 22, 2023 · 2 min · Research Team

IMM: An Imitative Reinforcement Learning Approach with Predictive Representation Learning for Automatic Market Making

ArXiv ID: 2308.08918 · View on arXiv
Authors: Unknown
Abstract: Market making (MM) has attracted significant attention in financial trading owing to its essential function in ensuring market liquidity. With strong capabilities in sequential decision-making, reinforcement learning (RL) has achieved remarkable success in quantitative trading. Nonetheless, most existing RL-based MM methods focus on optimizing single-price-level strategies, which suffer from frequent order cancellations and loss of queue priority. Strategies involving multiple price levels align better with actual trading scenarios. However, because multi-price-level strategies entail a large and complex trading action space, effectively training profitable RL agents for MM remains challenging. Inspired by the efficient workflow of professional human market makers, we propose Imitative Market Maker (IMM), a novel RL framework that leverages both knowledge from suboptimal signal-based experts and direct policy interactions to develop multi-price-level MM strategies efficiently. The framework starts by introducing effective state and action representations adept at encoding information about multi-price-level orders. Furthermore, IMM integrates a representation learning unit capable of capturing both short- and long-term market trends to mitigate adverse selection risk. Subsequently, IMM formulates an expert strategy based on signals and trains the agent through the integration of RL and imitation learning techniques, leading to efficient learning. Extensive experimental results on four real-world market datasets demonstrate that IMM outperforms current RL-based market making strategies in terms of several financial criteria. The findings of the ablation study substantiate the effectiveness of the model components. ...
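The combination of RL and imitation learning described in the abstract can be pictured as a single training objective: an RL term on the agent's own interactions plus an imitation term pulling the multi-price-level quoting policy toward a signal-based expert. The network, the toy RL surrogate, the loss weight, and the expert quotes below are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of an IMM-style combined RL + imitation training signal.
import torch
import torch.nn as nn

STATE_DIM, N_LEVELS = 10, 3           # quote sizes at 3 bid and 3 ask price levels

class QuotePolicy(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 2 * N_LEVELS),     # [bid sizes | ask sizes]
        )
    def forward(self, state):
        return torch.sigmoid(self.net(state))   # normalized quote sizes in (0, 1)

def combined_loss(policy, state, reward, expert_quotes, lam=0.5):
    """RL surrogate (reward-weighted quote magnitude, a toy stand-in) +
    lambda * imitation loss toward the signal-based expert's quotes."""
    quotes = policy(state)
    rl_loss = -(reward.unsqueeze(-1) * quotes).mean()
    il_loss = nn.functional.mse_loss(quotes, expert_quotes)
    return rl_loss + lam * il_loss

policy = QuotePolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
state = torch.randn(64, STATE_DIM)
reward = torch.randn(64)
expert = torch.rand(64, 2 * N_LEVELS)
loss = combined_loss(policy, state, reward, expert)
opt.zero_grad(); loss.backward(); opt.step()
```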

August 17, 2023 · 2 min · Research Team