
Solving The Dynamic Volatility Fitting Problem: A Deep Reinforcement Learning Approach

ArXiv ID: 2410.11789 · View on arXiv · Authors: Unknown

Abstract: Volatility fitting is one of the core problems in the equity derivatives business. Through a set of deterministic rules, the degrees of freedom in the implied volatility surface encoding (parametrization, density, diffusion) are defined. While very effective, this approach, widespread in the industry, is not natively tailored to learn from shifts in market regimes and discover unsuspected optimal behaviors. In this paper, we change the classical paradigm and apply the latest advances in Deep Reinforcement Learning (DRL) to solve the fitting problem. In particular, we show that variants of Deep Deterministic Policy Gradient (DDPG) and Soft Actor Critic (SAC) can perform at least as well as standard fitting algorithms. Furthermore, we explain why the reinforcement learning framework is appropriate for handling complex objective functions and is natively adapted for online learning. ...
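
A minimal sketch of the paper's framing, under assumptions: casting implied-volatility fitting as a continuous-control environment that a DDPG/SAC agent could act in. The SVI parametrization and all names (`VolFitEnv`, the parameter bounds, the negative-RMSE reward) are illustrative guesses, not the authors' code.

```python
# Hypothetical sketch: volatility fitting as a continuous-control environment.
# The agent's action nudges the surface parameters; reward penalizes fit error.
import numpy as np

def svi_total_variance(k, a, b, rho, m, sigma):
    """Raw SVI parametrization: w(k) = a + b*(rho*(k-m) + sqrt((k-m)^2 + sigma^2))."""
    return a + b * (rho * (k - m) + np.sqrt((k - m) ** 2 + sigma ** 2))

class VolFitEnv:
    """State = (current params, residuals); action = bounded parameter deltas."""
    def __init__(self, strikes, market_w):
        self.k, self.market_w = strikes, market_w
        self.params = np.array([0.04, 0.1, 0.0, 0.0, 0.1])  # a, b, rho, m, sigma

    def step(self, action):
        self.params = self.params + np.clip(action, -0.01, 0.01)
        fit = svi_total_variance(self.k, *self.params)
        residuals = fit - self.market_w
        reward = -np.sqrt(np.mean(residuals ** 2))  # negative RMSE of the fit
        state = np.concatenate([self.params, residuals])
        return state, reward

# The actor network of DDPG/SAC would map state -> action; here, a zero action:
env = VolFitEnv(strikes=np.linspace(-0.5, 0.5, 11),
                market_w=0.04 + 0.02 * np.linspace(-0.5, 0.5, 11) ** 2)
state, reward = env.step(action=np.zeros(5))
print(f"initial reward (neg. RMSE): {reward:.4f}")
```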

October 15, 2024 · 2 min · Research Team

Deep Reinforcement Learning Strategies in Finance: Insights into Asset Holding, Trading Behavior, and Purchase Diversity

ArXiv ID: 2407.09557 · View on arXiv · Authors: Unknown

Abstract: Recent deep reinforcement learning (DRL) methods in finance show promising outcomes. However, there is limited research examining the behavior of these DRL algorithms. This paper aims to investigate their tendencies towards holding or trading financial assets, as well as purchase diversity. By analyzing their trading behaviors, we provide insights into the decision-making processes of DRL models in finance applications. Our findings reveal that each DRL algorithm exhibits unique trading patterns and strategies, with A2C emerging as the top performer in terms of cumulative rewards. While PPO and SAC engage in significant trades with a limited number of stocks, DDPG and TD3 adopt a more balanced approach. Furthermore, SAC and PPO tend to hold positions for shorter durations, whereas DDPG, A2C, and TD3 display a propensity to remain stationary for extended periods. ...
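
An illustrative sketch (not the authors' code) of two of the behavioral metrics the paper examines: average holding duration and purchase diversity, computed from a toy position history. The entropy-based diversity measure is an assumption about how such a metric might be defined.

```python
# Hypothetical behavioral metrics over an agent's trade log.
import numpy as np

def avg_holding_duration(positions):
    """Mean run length of unchanged positions: a proxy for how long the agent holds."""
    runs, length = [], 1
    for prev, cur in zip(positions, positions[1:]):
        if cur == prev:
            length += 1
        else:
            runs.append(length)
            length = 1
    runs.append(length)
    return float(np.mean(runs))

def purchase_diversity(buy_counts):
    """Normalized entropy over per-stock buy counts: 0 = one stock, 1 = uniform."""
    p = np.asarray(buy_counts, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log(p)).sum() / np.log(len(buy_counts)))

positions = [0, 1, 1, 1, 0, 0, -1, -1, -1, -1]   # long/flat/short over time
print(avg_holding_duration(positions))            # shorter runs ~ SAC/PPO behavior
print(purchase_diversity([40, 35, 25, 0, 0]))     # concentration on few stocks ~ PPO/SAC
```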

June 29, 2024 · 2 min · Research Team

Alpha^2: Discovering Logical Formulaic Alphas using Deep Reinforcement Learning

ArXiv ID: 2406.16505 · View on arXiv · Authors: Unknown

Abstract: Alphas are pivotal in providing signals for quantitative trading. The industry highly values the discovery of formulaic alphas for their interpretability and ease of analysis, compared with the expressive yet overfitting-prone black-box alphas. In this work, we focus on discovering formulaic alphas. Prior studies on automatically generating a collection of formulaic alphas were mostly based on genetic programming (GP), which is known to suffer from sensitivity to the initial population, convergence to local optima, and slow computation speed. Recent efforts employing deep reinforcement learning (DRL) for alpha discovery have not fully addressed key practical considerations such as alpha correlations and validity, which are crucial for their effectiveness. In this work, we propose a novel framework for alpha discovery using DRL by formulating the alpha discovery process as program construction. Our agent, $\text{Alpha}^2$, assembles an alpha program optimized for an evaluation metric. A search algorithm guided by DRL navigates the search space based on value estimates for potential alpha outcomes. The evaluation metric encourages both the performance and the diversity of alphas for a better final trading strategy. Our formulation of searching for alphas also brings the advantage of pre-calculation dimensional analysis, ensuring the logical soundness of alphas and pruning the vast search space to a large extent. Empirical experiments on real-world stock markets demonstrate $\text{Alpha}^2$'s capability to identify a diverse set of logical and effective alphas, which significantly improves the performance of the final trading strategy. The code of our method is available at https://github.com/x35f/alpha2. ...
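
A hypothetical sketch of the paper's core idea: assembling a formulaic alpha as a program while dimensional analysis prunes illegal expressions before evaluation. The `Expr` type and the unit tags are illustrative assumptions, not the $\text{Alpha}^2$ codebase.

```python
# Sketch: alpha-as-program construction with unit checks pruning the search space.
from dataclasses import dataclass

@dataclass(frozen=True)
class Expr:
    op: str
    args: tuple = ()
    unit: str = "price"   # simple unit tag: "price", "volume", "ratio"

def sub(a: Expr, b: Expr) -> Expr:
    # Subtraction is only dimensionally sound between like units.
    if a.unit != b.unit:
        raise ValueError(f"pruned: cannot subtract {b.unit} from {a.unit}")
    return Expr("sub", (a, b), unit=a.unit)

def div(a: Expr, b: Expr) -> Expr:
    # Dividing like units yields a dimensionless ratio.
    unit = "ratio" if a.unit == b.unit else f"{a.unit}/{b.unit}"
    return Expr("div", (a, b), unit=unit)

close, open_, volume = Expr("close"), Expr("open"), Expr("volume", unit="volume")
alpha = div(sub(close, open_), open_)   # (close - open) / open: a valid, unitless alpha
print(alpha.unit)                        # "ratio"
# sub(close, volume) would raise, pruning that branch of the search space.
```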

June 24, 2024 · 2 min · Research Team

Deep reinforcement learning with positional context for intraday trading

ArXiv ID: 2406.08013 · View on arXiv · Authors: Unknown

Abstract: Deep reinforcement learning (DRL) is a well-suited approach to financial decision-making, where an agent makes decisions based on its trading strategy developed from market observations. Existing DRL intraday trading strategies mainly use price-based features to construct the state space. They neglect the contextual information related to the position of the strategy, which is an important aspect given the sequential nature of intraday trading. In this study, we propose a novel DRL model for intraday trading that introduces positional features encapsulating this contextual information into its sparse state space. The model is evaluated over an extended period of almost a decade and across various assets, including commodities and foreign exchange securities, taking transaction costs into account. The results show notable performance in terms of profitability and risk-adjusted metrics. The feature-importance results show that each feature incorporating contextual information contributes to the overall performance of the model. Additionally, through an exploration of the agent's intraday trading activity, we unveil patterns that substantiate the effectiveness of our proposed model. ...
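
A minimal sketch, under assumptions, of the paper's idea: augmenting a price-based state with positional context. The specific features (current position, unrealized PnL, time in position) are plausible guesses, not the exact set the authors use.

```python
# Hypothetical state construction: price features + positional-context features.
import numpy as np

def build_state(prices, position, entry_price, bars_in_position, max_bars):
    """Concatenate price-based features with positional-context features."""
    returns = np.diff(np.log(prices))                 # recent log returns
    unrealized = position * (prices[-1] / entry_price - 1.0) if position else 0.0
    positional = np.array([
        position,                                     # -1 short, 0 flat, +1 long
        unrealized,                                   # context: open-trade PnL
        bars_in_position / max_bars,                  # context: time in position
    ])
    return np.concatenate([returns, positional])

state = build_state(prices=np.array([100.0, 100.5, 100.2, 100.8]),
                    position=1, entry_price=100.5,
                    bars_in_position=2, max_bars=390)
print(state.shape)   # (6,) = 3 returns + 3 positional features
```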

June 12, 2024 · 2 min · Research Team

Optimizing Portfolio Management and Risk Assessment in Digital Assets Using Deep Learning for Predictive Analysis

ArXiv ID: 2402.15994 · View on arXiv · Authors: Unknown

Abstract: Portfolio management has been extensively studied in the field of artificial intelligence in recent years, but existing deep learning-based quantitative trading methods leave room for improvement. First, the prediction mode is singular: often a model trains only one trading expert, and the trading decision is based solely on that model's predictions. Second, the data source used by the model is relatively simple, considering only the stock's own data and ignoring the impact of market-wide risk on the stock. In this paper, the DQN algorithm is introduced into asset management portfolios in a novel and straightforward way, and the performance greatly exceeds the benchmark, which fully demonstrates the effectiveness of the DRL algorithm in portfolio management. This also inspires us to consider the complexity of financial problems and to match algorithms carefully to the problems at hand. Finally, the strategy is implemented by selecting the assets and actions with the largest Q value. Since different assets are trained separately as environments, Q values may drift among assets (different assets have different Q-value distribution areas), which can easily lead to incorrect asset selection. Adding constraints so that the Q values of different assets share a common distribution could improve results. ...
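
An illustrative sketch (an assumption, not the paper's code) of the Q-value-drift issue the abstract raises: each asset's DQN is trained in its own environment, so raw Q scales are not comparable across assets, and a shared normalization makes the cross-asset argmax meaningful.

```python
# Q-value drift across separately trained per-asset DQNs, and a normalized fix.
import numpy as np

raw_q = {                       # per-asset Q values over actions [hold, buy, sell]
    "AAA": np.array([5.1, 5.3, 5.0]),      # this asset's Qs live around ~5
    "BBB": np.array([0.10, 0.25, 0.05]),   # this one's live around ~0.1
}

def pick_naive(q):
    # Naive rule from the abstract: argmax over all (asset, action) raw Q values.
    return max(((a, int(np.argmax(v)), v.max()) for a, v in q.items()),
               key=lambda t: t[2])

def pick_normalized(q):
    # Constrained variant: z-score each asset's Qs onto a shared distribution first.
    z = {a: (v - v.mean()) / (v.std() + 1e-8) for a, v in q.items()}
    return max(((a, int(np.argmax(v)), v.max()) for a, v in z.items()),
               key=lambda t: t[2])

print(pick_naive(raw_q))        # always picks "AAA": drift, not genuine preference
print(pick_normalized(raw_q))   # picks "BBB", whose buy signal is relatively stronger
```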

February 25, 2024 · 2 min · Research Team

Commodities Trading through Deep Policy Gradient Methods

ArXiv ID: 2309.00630 · View on arXiv · Authors: Unknown

Abstract: Algorithmic trading has gained attention due to its potential for generating superior returns. This paper investigates the effectiveness of deep reinforcement learning (DRL) methods in algorithmic commodities trading. It formulates the commodities trading problem as a continuous, discrete-time stochastic dynamical system. The proposed system employs a novel time-discretization scheme that adapts to market volatility, enhancing the statistical properties of subsampled financial time series. To optimize transaction-cost- and risk-sensitive trading agents, two policy gradient algorithms, namely actor-based and actor-critic-based approaches, are introduced. These agents utilize CNNs and LSTMs as parametric function approximators to map historical price observations to market positions. Backtesting on front-month natural gas futures demonstrates that DRL models increase the Sharpe ratio by 83% compared to the buy-and-hold baseline. Additionally, the risk profile of the agents can be customized through a hyperparameter that regulates risk sensitivity in the reward function during the optimization process. The actor-based models outperform the actor-critic-based models, while the CNN-based models show a slight performance advantage over the LSTM-based models. ...
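
A sketch, under assumptions, of the volatility-adaptive time discretization the abstract describes: sample a new observation whenever cumulative squared returns cross a threshold, so bars fall more densely in volatile periods. The threshold and the simulated regimes are illustrative, not the paper's specification.

```python
# Hypothetical "volatility clock": subsample bars by accumulated realized variance.
import numpy as np

def volatility_clock_samples(prices, var_threshold):
    """Return indices where accumulated realized variance exceeds the threshold."""
    log_ret = np.diff(np.log(prices))
    idx, acc = [0], 0.0
    for i, r in enumerate(log_ret, start=1):
        acc += r * r
        if acc >= var_threshold:      # enough variance accrued: emit a bar
            idx.append(i)
            acc = 0.0
    return np.array(idx)

rng = np.random.default_rng(0)
calm = rng.normal(0, 0.001, 500)             # low-volatility regime
wild = rng.normal(0, 0.01, 500)              # high-volatility regime
prices = 100 * np.exp(np.cumsum(np.concatenate([calm, wild])))
prices = np.concatenate([[100.0], prices])
bars = volatility_clock_samples(prices, var_threshold=2e-4)
print(len(bars), "bars; fraction in the volatile half:", (bars > 500).mean())
```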

August 10, 2023 · 2 min · Research Team

Reinforcement Learning for Financial Index Tracking

ArXiv ID: 2308.02820 · View on arXiv · Authors: Unknown

Abstract: We propose the first discrete-time infinite-horizon dynamic formulation of the financial index tracking problem under both return-based and value-based tracking error. The formulation overcomes the limitations of existing models by incorporating the intertemporal dynamics of market information variables not limited to prices, allowing exact calculation of transaction costs, accounting for the tradeoff between overall tracking error and transaction costs, and allowing effective use of data over a long time period. The formulation also allows novel decision variables of cash injection or withdrawal. We propose to solve the portfolio rebalancing equation using a Banach fixed-point iteration, which allows accurate calculation of transaction costs specified as nonlinear functions of trading volumes in practice. We propose an extension of a deep reinforcement learning (RL) method to solve the dynamic formulation. Our RL method resolves the issue of data limitation, resulting from the availability of only a single sample path of financial data, via a novel training scheme. A comprehensive empirical study based on a 17-year-long testing set demonstrates that the proposed method outperforms a benchmark method in terms of tracking accuracy and has the potential to earn extra profit through the cash withdrawal strategy. ...
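
A minimal sketch, under assumptions, of the Banach fixed-point iteration for the rebalancing equation: the wealth left after trading depends on transaction costs, which are nonlinear in trade volumes, which in turn depend on the wealth left, so post-trade wealth is found as a fixed point. The cost model and tolerance are illustrative, not the paper's exact specification.

```python
# Fixed-point iteration for post-trade wealth under nonlinear transaction costs.
import numpy as np

def cost(volume):
    """Nonlinear transaction cost: linear fee plus square-root impact (assumed form)."""
    return 0.001 * volume + 0.0005 * np.sqrt(volume)

def rebalance_fixed_point(wealth, w_old, w_new, tol=1e-10, max_iter=100):
    """Iterate until post-cost wealth stabilizes (a contraction for small cost rates)."""
    post = wealth                                        # initial guess: ignore costs
    for _ in range(max_iter):
        trades = np.abs(post * w_new - wealth * w_old)   # dollar volume per asset
        post_next = wealth - cost(trades).sum()
        if abs(post_next - post) < tol:
            return post_next
        post = post_next
    return post

post = rebalance_fixed_point(wealth=1_000_000.0,
                             w_old=np.array([0.5, 0.5]),
                             w_new=np.array([0.8, 0.2]))
print(f"post-trade wealth after costs: {post:,.2f}")
```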

August 5, 2023 · 2 min · Research Team