
Deep Reinforcement Learning for Robust Goal-Based Wealth Management

ArXiv ID: 2307.13501 · View on arXiv
Authors: Unknown

Abstract: Goal-based investing is an approach to wealth management that prioritizes achieving specific financial goals. It is naturally formulated as a sequential decision-making problem, since it requires choosing appropriate investments until a goal is achieved. Consequently, reinforcement learning, a machine learning technique suited to sequential decision-making, offers a promising path for optimizing these investment strategies. In this paper, a novel approach to robust goal-based wealth management based on deep reinforcement learning is proposed. Experimental results indicate its superiority over several goal-based wealth management benchmarks on both simulated and historical market data. ...
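As a rough illustration of the sequential formulation described in the abstract, here is a toy finite-horizon sketch. The candidate portfolios, return dynamics, goal, and heuristic policy are invented for illustration and are not the paper's model:

```python
import random

# Toy goal-based investing MDP (illustrative assumptions throughout).
# State: (current wealth, periods remaining); action: pick one of several
# candidate portfolios, each with an assumed per-period mean/volatility.
PORTFOLIOS = [(0.01, 0.02), (0.05, 0.10), (0.09, 0.20)]  # (mean, std)

def step(wealth, action, rng):
    """Apply one period of (normally distributed) portfolio returns."""
    mu, sigma = PORTFOLIOS[action]
    return wealth * (1.0 + rng.gauss(mu, sigma))

def episode(policy, goal=150.0, wealth=100.0, horizon=10, seed=0):
    """Run one episode; reward is 1 if the goal is reached at the horizon."""
    rng = random.Random(seed)
    for t in range(horizon):
        wealth = step(wealth, policy(wealth, horizon - t, goal), rng)
    return 1.0 if wealth >= goal else 0.0

def heuristic(wealth, remaining, goal):
    """Simple hand-made policy: take more risk the further wealth is from the goal."""
    shortfall = goal / wealth
    return 2 if shortfall > 1.3 else (1 if shortfall > 1.05 else 0)

success_rate = sum(episode(heuristic, seed=s) for s in range(200)) / 200
```

An RL agent would replace `heuristic` with a learned policy trained to maximize the goal-reaching probability.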

July 25, 2023 · 2 min · Research Team

Benchmarking Robustness of Deep Reinforcement Learning approaches to Online Portfolio Management

ArXiv ID: 2306.10950 · View on arXiv
Authors: Unknown

Abstract: Deep Reinforcement Learning approaches to Online Portfolio Selection have grown in popularity in recent years. The sensitive nature of training Reinforcement Learning agents demands extensive effort in market representation, behavior objectives, and training processes, which has often been lacking in previous work. We propose a training and evaluation process to assess the performance of classical DRL algorithms for portfolio management. We found that most Deep Reinforcement Learning algorithms were not robust, with strategies generalizing poorly and degrading quickly during backtesting. ...
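The abstract emphasizes a careful training and evaluation process without spelling it out; one common ingredient of such backtest protocols is a walk-forward split, sketched below with assumed window sizes (not taken from the paper):

```python
def walk_forward_splits(n, train_size, test_size):
    """Walk-forward evaluation sketch: train on a window of observations,
    backtest on the next window, then roll both windows forward.
    Window sizes here are placeholder assumptions."""
    splits, start = [], 0
    while start + train_size + test_size <= n:
        train = range(start, start + train_size)
        test = range(start + train_size, start + train_size + test_size)
        splits.append((train, test))
        start += test_size  # advance by one test window
    return splits
```

Degradation across successive test windows is one way the kind of poor generalization the authors report would show up.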

June 19, 2023 · 2 min · Research Team

Deep Reinforcement Learning for ESG financial portfolio management

ArXiv ID: 2307.09631 · View on arXiv
Authors: Unknown

Abstract: This paper investigates the application of Deep Reinforcement Learning (DRL) for Environmental, Social, and Governance (ESG) financial portfolio management, with a specific focus on the potential benefits of ESG score-based market regulation. We leveraged an Advantage Actor-Critic (A2C) agent and conducted our experiments using environments encoded within the OpenAI Gym, adapted from the FinRL platform. The study includes a comparative analysis of DRL agent performance under standard Dow Jones Industrial Average (DJIA) market conditions and a scenario where returns are regulated in line with company ESG scores. In the ESG-regulated market, grants were proportionally allotted to portfolios based on their returns and ESG scores, while taxes were assigned to portfolios below the mean ESG score of the index. The results intriguingly reveal that the DRL agent in the ESG-regulated market outperforms its counterpart in the standard DJIA market setup. Furthermore, we considered the inclusion of ESG variables in the agent state space and compared this with scenarios where such data were excluded. This comparison adds to the understanding of the role of ESG factors in portfolio management decision-making. We also analyze the behavior of the DRL agent in the IBEX 35 and NASDAQ-100 indexes. Both the A2C and Proximal Policy Optimization (PPO) algorithms were applied to these additional markets, providing a broader perspective on the generalization of our findings. This work contributes to the evolving field of ESG investing, suggesting that market regulation based on ESG scoring can potentially improve DRL-based portfolio management, with significant implications for sustainable investing strategies. ...
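The grant-and-tax mechanism described above can be sketched as a return adjustment; the grant and tax rates below are placeholder assumptions, not the paper's calibration:

```python
def esg_adjusted_return(portfolio_return, portfolio_esg, index_mean_esg,
                        grant_rate=0.10, tax_rate=0.05):
    """Sketch of an ESG-regulated market return, following the description
    above: grants scale with both return and ESG score for portfolios at or
    above the index mean; a tax applies below it. Rates are assumed."""
    adjusted = portfolio_return
    if portfolio_esg >= index_mean_esg:
        # Grant proportional to (positive) return and relative ESG score.
        adjusted += grant_rate * max(portfolio_return, 0.0) * (portfolio_esg / index_mean_esg)
    else:
        # Tax for falling below the index's mean ESG score.
        adjusted -= tax_rate * abs(portfolio_return)
    return adjusted
```

Under such a rule, an agent that treats the adjusted return as its reward is incentivized to tilt toward high-ESG holdings.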

June 19, 2023 · 2 min · Research Team

Integrating Tick-level Data and Periodical Signal for High-frequency Market Making

ArXiv ID: 2306.17179 · View on arXiv
Authors: Unknown

Abstract: We focus on the problem of market making in high-frequency trading. Market making is a critical function in financial markets that involves providing liquidity by buying and selling assets. However, the increasing complexity of financial markets and the high volume of data generated by tick-level trading make it challenging to develop effective market making strategies. To address this challenge, we propose a deep reinforcement learning approach that fuses tick-level data with periodic prediction signals to develop a more accurate and robust market making strategy. Results from market making strategies based on different deep reinforcement learning algorithms, in both simulated scenarios and real-data experiments in cryptocurrency markets, show that the proposed framework outperforms existing methods in terms of profitability and risk management. ...
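Fusing tick-level observations with a slower periodic signal ultimately requires aligning the two streams. A minimal forward-fill sketch follows; the alignment scheme is an assumption for illustration, not the paper's method:

```python
def fuse(tick_times, tick_features, signal_times, signal_values):
    """Attach the most recent periodic signal to each tick observation
    (forward fill). Both time lists are assumed sorted ascending."""
    fused, j = [], -1
    for t, feat in zip(tick_times, tick_features):
        # Advance to the latest periodic signal emitted at or before t.
        while j + 1 < len(signal_times) and signal_times[j + 1] <= t:
            j += 1
        sig = signal_values[j] if j >= 0 else None  # None before first signal
        fused.append((feat, sig))
    return fused
```

The fused (tick feature, signal) pairs would then form the state fed to the market making agent.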

June 19, 2023 · 2 min · Research Team

Deep Policy Gradient Methods in Commodity Markets

ArXiv ID: 2308.01910 · View on arXiv
Authors: Unknown

Abstract: The energy transition has increased reliance on intermittent energy sources, destabilizing energy markets and causing unprecedented volatility, culminating in the global energy crisis of 2021. In addition to harming producers and consumers, volatile energy markets may jeopardize vital decarbonization efforts. Traders play an important role in stabilizing markets by providing liquidity and reducing volatility. Several mathematical and statistical models have been proposed for forecasting future returns. However, developing such models is non-trivial due to financial markets' low signal-to-noise ratios and nonstationary dynamics. This thesis investigates the effectiveness of deep reinforcement learning methods in commodities trading. It formalizes the commodities trading problem as a continuing discrete-time stochastic dynamical system, employing a novel time-discretization scheme that is reactive and adaptive to market volatility, providing better statistical properties for the sub-sampled financial time series. Two policy gradient algorithms, one actor-based and one actor-critic-based, are proposed for optimizing a transaction-cost- and risk-sensitive trading agent. The agent maps historical price observations to market positions through parametric function approximators built on deep neural network architectures, specifically CNNs and LSTMs. On average, the deep reinforcement learning models produce an 83 percent higher Sharpe ratio than the buy-and-hold baseline when backtested on front-month natural gas futures from 2017 to 2022. The backtests demonstrate that the risk tolerance of the deep reinforcement learning agents can be adjusted using a risk-sensitivity term. The actor-based policy gradient algorithm performs significantly better than the actor-critic-based algorithm, and the CNN-based models perform slightly better than those based on the LSTM. ...
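A volatility-reactive discretization of the kind the abstract mentions can be sketched with event-based bars that close once cumulative absolute returns cross a threshold, so bars arrive faster in volatile regimes. The bar rule and threshold below are illustrative assumptions, not the thesis's scheme:

```python
def volatility_bars(prices, threshold=0.02):
    """Volatility-adaptive sub-sampling sketch: instead of fixed time
    intervals, close a bar once cumulative absolute simple returns
    reach `threshold`. Threshold value is an assumption."""
    bars, cum, start = [], 0.0, 0
    for i in range(1, len(prices)):
        cum += abs(prices[i] / prices[i - 1] - 1.0)
        if cum >= threshold:
            bars.append(prices[start:i + 1])  # bar includes its boundary ticks
            start, cum = i, 0.0
    return bars
```

In calm markets each bar spans many observations; in turbulent ones bars close quickly, which tends to equalize the information content per sample.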

June 14, 2023 · 2 min · Research Team

AlphaPortfolio: Direct Construction Through Deep Reinforcement Learning and Interpretable AI

ArXiv ID: ssrn-3554486 · View on arXiv
Authors: Unknown

Abstract: We directly optimize the objectives of portfolio management via deep reinforcement learning, an alternative to conventional supervised-learning paradigms that ...

Keywords: Deep Reinforcement Learning, Portfolio Optimization, Artificial Intelligence, Asset Allocation, Portfolio Management

Complexity vs Empirical Score
Math Complexity: 8.5/10
Empirical Rigor: 9.0/10
Quadrant: Holy Grail
Why: The paper employs advanced deep reinforcement learning (RL) with attention-based neural networks (Transformers/LSTMs) and polynomial sensitivity analysis, which involves high mathematical complexity; it also provides out-of-sample performance metrics (Sharpe ratios, alphas) and robustness checks across market conditions, indicating strong empirical backing for implementation.

```mermaid
flowchart TD
    A["Research Goal: Direct Portfolio Optimization via DRL"] --> B["Data: Historical Market Data & Indicators"]
    B --> C["Methodology: Deep Reinforcement Learning Framework"]
    C --> D["Process: Policy Network & Reward Function"]
    D --> E["Key Finding: End-to-End Optimization"]
    E --> F["Outcome: Superior Risk-Adjusted Returns"]
```
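"Direct construction" amounts to mapping per-asset scores straight to portfolio weights and evaluating them with the training objective itself. Below is a hand-rolled illustrative sketch (softmax weights plus a Sharpe-style objective), not the paper's Transformer/LSTM architecture:

```python
import math

def softmax(scores):
    """Map per-asset scores to long-only portfolio weights summing to 1."""
    m = max(scores)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def sharpe(weights, return_history):
    """Sharpe-style objective: mean over std of portfolio returns.
    A direct-construction method trains the score model end-to-end
    against such an objective; this function only evaluates it."""
    port = [sum(w * r for w, r in zip(weights, period)) for period in return_history]
    mean = sum(port) / len(port)
    var = sum((r - mean) ** 2 for r in port) / len(port)
    return mean / (math.sqrt(var) + 1e-9)  # small epsilon avoids division by zero
```

In the end-to-end setup, gradients of the objective flow back through the weight construction into the network that produces the scores.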

April 20, 2020 · 1 min · Research Team