
Advance Detection Of Bull And Bear Phases In Cryptocurrency Markets

Advance Detection Of Bull And Bear Phases In Cryptocurrency Markets ArXiv ID: 2411.13586 "View on arXiv" Authors: Unknown Abstract Cryptocurrencies are highly volatile financial instruments, with more new retail investors joining the scene each day. Bitcoin has consistently set the direction for the rest of the cryptocurrency market; as of today, it has a market dominance of close to 50 percent. Bull and bear phases in cryptocurrencies are determined by the performance of Bitcoin relative to its 50-day and 200-day moving averages. The aim of this paper is to forecast the performance of Bitcoin in the near future by employing predictive algorithms. The predicted data is then used to calculate the 50-day and 200-day moving averages, which are plotted to establish the potential bull and bear phases. ...
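The crossover rule the abstract describes is straightforward to reproduce. The sketch below, assuming daily closing prices in a pandas Series, labels each day as bull or bear from the 50-day/200-day moving-average relationship and flags the crosses where the phase flips; the paper's contribution is forecasting future prices first and then applying this labeling to the predicted series. Function and column names here are illustrative, not from the paper.

```python
import pandas as pd

def label_phases(close: pd.Series) -> pd.DataFrame:
    """Label each day as bull or bear from the 50/200-day MA relationship."""
    df = pd.DataFrame({"close": close})
    df["ma50"] = df["close"].rolling(50).mean()
    df["ma200"] = df["close"].rolling(200).mean()
    # Bull while the 50-day MA sits above the 200-day MA, bear otherwise.
    df["phase"] = (df["ma50"] > df["ma200"]).map({True: "bull", False: "bear"})
    df.loc[df["ma200"].isna(), "phase"] = None   # not enough history yet
    # Phase transitions: a golden cross starts a bull run, a death cross a bear run.
    df["cross"] = df["phase"].ne(df["phase"].shift()) & df["phase"].notna()
    return df
```

Applied to a predicted price series instead of a historical one, the `cross` flags would mark the potential phase changes the paper aims to detect in advance.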

November 18, 2024 · 2 min · Research Team

Reinforcement Learning Pair Trading: A Dynamic Scaling approach

Reinforcement Learning Pair Trading: A Dynamic Scaling approach ArXiv ID: 2407.16103 "View on arXiv" Authors: Unknown Abstract Cryptocurrency is a cryptography-based digital asset with extremely volatile prices. Around USD 70 billion worth of cryptocurrency is traded daily on exchanges. Trading cryptocurrency is difficult due to the inherent volatility of the crypto market. This study investigates whether Reinforcement Learning (RL) can enhance decision-making in cryptocurrency algorithmic trading compared to traditional methods. To address this question, we combined reinforcement learning with a statistical arbitrage technique, pair trading, which exploits the price difference between statistically correlated assets. We constructed RL environments and trained RL agents to determine when and how to trade pairs of cryptocurrencies. We developed new reward shaping and observation/action spaces for reinforcement learning. We performed experiments with the developed reinforcement learner on pairs of BTC-GBP and BTC-EUR data separated by 1-minute intervals (n=263,520). The traditional non-RL pair trading technique achieved an annualized profit of 8.33%, while the proposed RL-based pair trading technique achieved annualized profits from 9.94% to 31.53%, depending on the RL learner. Our results show that RL can significantly outperform manual and traditional pair trading techniques when applied to volatile markets such as cryptocurrencies. ...
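For context on the 8.33% non-RL baseline, here is a minimal sketch of a classic z-score pair-trading rule, assuming two aligned price series (e.g., BTC-GBP and BTC-EUR). The window and entry/exit thresholds are illustrative assumptions, not values from the paper; the point of the paper's RL agents is to learn when and how much to trade instead of relying on fixed thresholds like these.

```python
import numpy as np
import pandas as pd

def pair_trade_signals(p1: pd.Series, p2: pd.Series,
                       window: int = 60, entry_z: float = 2.0,
                       exit_z: float = 0.5) -> pd.Series:
    """Classic (non-RL) pair-trading baseline on two correlated price series.

    Returns a per-bar position in the spread: +1 (long p1 / short p2),
    -1 (short p1 / long p2), or 0 (flat).
    """
    # Log-price spread; a rolling z-score measures how stretched it is.
    spread = np.log(p1) - np.log(p2)
    z = (spread - spread.rolling(window).mean()) / spread.rolling(window).std()

    signal = pd.Series(0.0, index=spread.index)
    signal[z > entry_z] = -1.0    # spread rich: short p1, long p2
    signal[z < -entry_z] = 1.0    # spread cheap: long p1, short p2
    signal = signal.replace(0.0, np.nan)   # non-entry bars: keep prior position
    signal[z.abs() < exit_z] = 0.0         # spread reverted: go flat
    return signal.ffill().fillna(0.0)
```

An RL formulation replaces the threshold logic with a learned policy, and the study's dynamic scaling additionally lets the agent choose trade size rather than a fixed unit position.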

July 23, 2024 · 2 min · Research Team

EarnHFT: Efficient Hierarchical Reinforcement Learning for High Frequency Trading

EarnHFT: Efficient Hierarchical Reinforcement Learning for High Frequency Trading ArXiv ID: 2309.12891 "View on arXiv" Authors: Unknown Abstract High-frequency trading (HFT) uses computer algorithms to make trading decisions on short time scales (e.g., second-level), and is widely used in the Cryptocurrency (Crypto) market (e.g., Bitcoin). Reinforcement learning (RL) in financial research has shown stellar performance on many quantitative trading tasks. However, most methods focus on low-frequency trading, e.g., day-level, and cannot be directly applied to HFT because of two challenges. First, RL for HFT involves dealing with extremely long trajectories (e.g., 2.4 million steps per month), which are hard to optimize and evaluate. Second, the dramatic price fluctuations and market trend changes of Crypto make existing algorithms fail to maintain satisfactory performance. To tackle these challenges, we propose an Efficient hieArchical Reinforcement learNing method for High Frequency Trading (EarnHFT), a novel three-stage hierarchical RL framework for HFT. In stage I, we compute a Q-teacher, i.e., the optimal action value based on dynamic programming, for enhancing the performance and training efficiency of second-level RL agents. In stage II, we construct a pool of diverse RL agents for different market trends, distinguished by return rates, where hundreds of RL agents are trained with different preferences of return rates and only a tiny fraction of them is selected into the pool based on profitability. In stage III, we train a minute-level router which dynamically picks a second-level agent from the pool to achieve stable performance across different markets. Through extensive experiments across various market trends on Crypto markets in a high-fidelity simulated trading environment, we demonstrate that EarnHFT significantly outperforms 6 state-of-the-art baselines on 6 popular financial criteria, exceeding the runner-up by 30% in profitability. ...
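Stage I's Q-teacher is the most self-contained piece of the pipeline: with the full price path known in hindsight, optimal action-values can be computed by backward dynamic programming. Below is a minimal reconstruction for a toy two-position (flat/long) setting with a proportional transaction fee; the paper's actual state and action spaces are richer, so treat this as a sketch of the idea rather than the authors' implementation.

```python
import numpy as np

def q_teacher(prices: np.ndarray, fee: float = 2e-4) -> np.ndarray:
    """Perfect-foresight action-values by backward dynamic programming.

    Toy MDP: position is 0 (flat) or 1 (long); the action is the next position.
    Q[t, pos, a] is the best cumulative log-return achievable from step t
    onward if the current position is `pos` and action `a` is taken now.
    """
    T = len(prices) - 1
    Q = np.zeros((T, 2, 2))
    V = np.zeros(2)  # value at the final step for ending flat / long
    for t in range(T - 1, -1, -1):
        step = np.log(prices[t + 1] / prices[t])   # next-bar log return
        for pos in (0, 1):
            for a in (0, 1):
                cost = fee if a != pos else 0.0    # fee on position changes
                gain = step if a == 1 else 0.0     # earn the return while long
                Q[t, pos, a] = gain - cost + V[a]
        V = Q[t].max(axis=1)   # V[pos] = max over actions of Q[t, pos, a]
    return Q

# The optimal first action starting flat is Q[0, 0].argmax().
```

Such optimal values give second-level agents a dense supervision signal at every step, which is what makes training over millions of steps tractable compared with relying on sparse trading rewards alone.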

September 22, 2023 · 3 min · Research Team

An Ensemble Method of Deep Reinforcement Learning for Automated Cryptocurrency Trading

An Ensemble Method of Deep Reinforcement Learning for Automated Cryptocurrency Trading ArXiv ID: 2309.00626 "View on arXiv" Authors: Unknown Abstract We propose an ensemble method to improve the generalization performance of trading strategies trained by deep reinforcement learning algorithms in a highly stochastic environment of intraday cryptocurrency portfolio trading. We adopt a model selection method that evaluates on multiple validation periods, and propose a novel mixture distribution policy to effectively ensemble the selected models. We provide a distributional view of the out-of-sample performance on granular test periods to demonstrate the robustness of the strategies in evolving market conditions, and retrain the models periodically to address non-stationarity of financial data. Our proposed ensemble method improves the out-of-sample performance compared with the benchmarks of a deep reinforcement learning strategy and a passive investment strategy. ...
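The abstract does not spell out the mixture distribution policy, but one natural reading is to average the action distributions of the selected member policies and sample from the resulting mixture. The sketch below shows that construction for categorical actions; the policy list, weighting scheme, and function names are all illustrative assumptions, not the paper's API.

```python
import numpy as np

def mixture_policy_action(policies, obs, weights=None, rng=None):
    """Act by sampling from a weighted mixture of member policies.

    `policies`: list of callables, each mapping an observation to a
    categorical action-probability vector of the same length.
    """
    if rng is None:
        rng = np.random.default_rng()
    probs = np.stack([policy(obs) for policy in policies])  # (n_models, n_actions)
    if weights is None:
        weights = np.full(len(policies), 1.0 / len(policies))
    mixture = np.asarray(weights) @ probs                   # averaged distribution
    mixture /= mixture.sum()                                # guard against drift
    return rng.choice(len(mixture), p=mixture)
```

Sampling from the mixture, rather than from a single best model, hedges against any one model having overfit its validation periods, which is the generalization benefit the paper reports.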

July 27, 2023 · 2 min · Research Team