
Commodities Trading through Deep Policy Gradient Methods

Commodities Trading through Deep Policy Gradient Methods ArXiv ID: 2309.00630 “View on arXiv” Authors: Unknown Abstract Algorithmic trading has gained attention due to its potential for generating superior returns. This paper investigates the effectiveness of deep reinforcement learning (DRL) methods in algorithmic commodities trading. It formulates the commodities trading problem as a continuous, discrete-time stochastic dynamical system. The proposed system employs a novel time-discretization scheme that adapts to market volatility, enhancing the statistical properties of subsampled financial time series. To optimize transaction-cost- and risk-sensitive trading agents, two policy gradient algorithms, namely actor-based and actor-critic-based approaches, are introduced. These agents utilize CNNs and LSTMs as parametric function approximators to map historical price observations to market positions. Backtesting on front-month natural gas futures demonstrates that DRL models increase the Sharpe ratio by 83% compared to the buy-and-hold baseline. Additionally, the risk profile of the agents can be customized through a hyperparameter that regulates risk sensitivity in the reward function during the optimization process. The actor-based models outperform the actor-critic-based models, while the CNN-based models show a slight performance advantage over the LSTM-based models. ...
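
The reward design is the interesting part here: the agent is optimized on returns net of transaction costs, with a hyperparameter trading off return against risk. The snippet below is a minimal sketch of that idea, assuming a simple variance penalty scaled by a `risk_aversion` coefficient and a proportional `cost_rate`; the paper's exact reward and its CNN/LSTM policies are not reproduced.

```python
import numpy as np

def risk_sensitive_rewards(prices, positions, cost_rate=1e-4, risk_aversion=0.1):
    """Per-step rewards for a trading agent: PnL minus transaction costs,
    penalized by a running variance term scaled by a risk-aversion
    hyperparameter (an illustrative stand-in for the paper's reward)."""
    returns = np.diff(prices) / prices[:-1]                 # simple returns r_t
    pnl = positions[:-1] * returns                          # position held over each step
    costs = cost_rate * np.abs(np.diff(positions, prepend=0.0))[:-1]
    raw = pnl - costs
    # volatility penalty: running variance of realized rewards so far
    var = np.array([np.var(raw[: t + 1]) for t in range(len(raw))])
    return raw - risk_aversion * var

# toy usage: random-walk prices, naive long/flat positions
prices = 100 * np.exp(np.cumsum(0.01 * np.random.randn(250)))
positions = (np.random.rand(250) > 0.5).astype(float)
print(risk_sensitive_rewards(prices, positions)[:5])
```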

August 10, 2023 · 2 min · Research Team

Large Skew-t Copula Models and Asymmetric Dependence in Intraday Equity Returns

Large Skew-t Copula Models and Asymmetric Dependence in Intraday Equity Returns ArXiv ID: 2308.05564 “View on arXiv” Authors: Unknown Abstract Skew-t copula models are attractive for the modeling of financial data because they allow for asymmetric and extreme tail dependence. We show that the copula implicit in the skew-t distribution of Azzalini and Capitanio (2003) allows for a higher level of pairwise asymmetric dependence than two popular alternative skew-t copulas. Estimation of this copula in high dimensions is challenging, and we propose a fast and accurate Bayesian variational inference (VI) approach to do so. The method uses a generative representation of the skew-t distribution to define an augmented posterior that can be approximated accurately. A stochastic gradient ascent algorithm is used to solve the variational optimization. The methodology is used to estimate skew-t factor copula models with up to 15 factors for intraday returns from 2017 to 2021 on 93 U.S. equities. The copula captures substantial heterogeneity in asymmetric dependence over equity pairs, in addition to the variability in pairwise correlations. In a moving window study we show that the asymmetric dependencies also vary over time, and that intraday predictive densities from the skew-t copula are more accurate than those from benchmark copula models. Portfolio selection strategies based on the estimated pairwise asymmetric dependencies improve performance relative to the index. ...
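
The VI scheme rests on a generative (latent-variable) representation of the skew-t distribution. A univariate sketch of that representation is below, assuming the usual Azzalini-Capitanio-style parameterisation with location `xi`, scale `omega`, skewness `delta`, and degrees of freedom `nu`; the high-dimensional factor copula and the variational optimization itself are not shown.

```python
import numpy as np

def sample_skew_t(n, xi=0.0, omega=1.0, delta=0.7, nu=5.0, rng=None):
    """Draw from a univariate Azzalini-Capitanio-type skew-t via its
    latent-variable representation: a half-normal skewing variable,
    a normal shock, and a Gamma mixing variable for the heavy tails."""
    rng = np.random.default_rng(rng)
    w = rng.gamma(shape=nu / 2.0, scale=2.0 / nu, size=n)    # W ~ Gamma(nu/2, nu/2)
    z0 = np.abs(rng.standard_normal(n))                      # |Z0| induces the skew
    z1 = rng.standard_normal(n)
    return xi + omega * (delta * z0 + np.sqrt(1 - delta**2) * z1) / np.sqrt(w)

x = sample_skew_t(100_000, delta=0.9, nu=4.0, rng=1)
print(x.mean(), np.quantile(x, [0.01, 0.99]))   # right-skewed, heavy-tailed
```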

August 10, 2023 · 2 min · Research Team

Correlation-diversified portfolio construction by finding maximum independent set in large-scale market graph

Correlation-diversified portfolio construction by finding maximum independent set in large-scale market graph ArXiv ID: 2308.04769 “View on arXiv” Authors: Unknown Abstract Correlation-diversified portfolios can be constructed by finding the maximum independent sets (MISs) in market graphs with edges corresponding to correlations between two stocks. The computational complexity to find the MIS increases exponentially as the size of the market graph increases, making the MIS selection in a large-scale market graph difficult. Here we construct a diversified portfolio by solving the MIS problem for a large-scale market graph with a combinatorial optimization solver (an Ising machine) based on a quantum-inspired algorithm called simulated bifurcation (SB) and investigate the investment performance of the constructed portfolio using long-term historical market data. Comparisons using stock universes of various sizes (TOPIX 100, Nikkei 225, TOPIX 1000, and TOPIX, which includes approximately 2,000 constituents) show that the SB-based solver outperforms conventional MIS solvers in terms of computation time and solution accuracy. By using the SB-based solver, we optimized the parameters of an MIS portfolio strategy through iteration of the backcast simulation that calculates the performance of the MIS portfolio strategy based on a large-scale universe covering more than 1,700 Japanese stocks over a long period of 10 years. The best MIS portfolio strategy (Sharpe ratio = 1.16, annualized return/risk = 16.3%/14.0%) is found to outperform major indices such as TOPIX (0.66, 10.0%/15.2%) and the MSCI Japan Minimum Volatility Index (0.64, 7.7%/12.1%) for the period from 2013 to 2023. ...
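
To make the construction concrete: build a market graph with an edge wherever two stocks are strongly correlated, then pick an independent set so that no two selected stocks are connected. The sketch below uses a greedy maximal-independent-set heuristic as a stand-in for the simulated-bifurcation Ising solver, with an assumed correlation threshold; it illustrates the portfolio-construction step, not the paper's solver or its backcast.

```python
import numpy as np

def mis_portfolio(returns, corr_threshold=0.3):
    """Greedy maximal independent set on a correlation-threshold market graph.
    returns: array of shape (T, n_assets). Connect two assets if |corr| exceeds
    the threshold, then repeatedly keep the least-connected remaining asset and
    drop its neighbours."""
    corr = np.corrcoef(returns.T)
    n = corr.shape[0]
    adj = {i: {j for j in range(n) if j != i and abs(corr[i, j]) > corr_threshold}
           for i in range(n)}
    selected, remaining = [], set(range(n))
    while remaining:
        v = min(remaining, key=lambda u: len(adj[u] & remaining))
        selected.append(v)
        remaining -= {v} | adj[v]
    return sorted(selected)

rng = np.random.default_rng(0)
toy_returns = rng.standard_normal((500, 50))      # placeholder for real return data
print(mis_portfolio(toy_returns, corr_threshold=0.1))
```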

August 9, 2023 · 2 min · Research Team

Methods for Acquiring and Incorporating Knowledge into Stock Price Prediction: A Survey

Methods for Acquiring and Incorporating Knowledge into Stock Price Prediction: A Survey ArXiv ID: 2308.04947 “View on arXiv” Authors: Unknown Abstract Predicting stock prices presents a challenging research problem due to the inherent volatility and non-linear nature of the stock market. In recent years, knowledge-enhanced stock price prediction methods have shown groundbreaking results by utilizing external knowledge to understand the stock market. Despite the importance of these methods, there is a scarcity of scholarly works that systematically synthesize previous studies from the perspective of external knowledge types. Specifically, the external knowledge can be modeled in different data structures, which we group into non-graph-based formats and graph-based formats: 1) non-graph-based knowledge captures contextual information and multimedia descriptions specifically associated with an individual stock; 2) graph-based knowledge captures interconnected and interdependent information in the stock market. This survey paper aims to provide a systematic and comprehensive description of methods for acquiring external knowledge from various unstructured data sources and then incorporating it into stock price prediction models. We also explore fusion methods for combining external knowledge with historical price features. Moreover, this paper includes a compilation of relevant datasets and delves into potential future research directions in this domain. ...
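
As a concrete instance of the fusion step the survey covers, the sketch below shows simple late fusion in PyTorch: an LSTM encodes the historical price window, and a precomputed knowledge embedding (textual or graph-based, produced upstream) is concatenated before the prediction head. The layer sizes and the `knowledge_vec` input are illustrative assumptions, not a method taken from the survey.

```python
import torch
import torch.nn as nn

class LateFusionPredictor(nn.Module):
    """Minimal late-fusion sketch: LSTM-encoded price history concatenated
    with an external knowledge embedding, followed by a linear head that
    predicts the next-step return."""
    def __init__(self, price_dim=5, know_dim=64, hidden=32):
        super().__init__()
        self.encoder = nn.LSTM(price_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden + know_dim, 1)

    def forward(self, price_window, knowledge_vec):
        _, (h, _) = self.encoder(price_window)        # h: (1, batch, hidden)
        fused = torch.cat([h[-1], knowledge_vec], dim=-1)
        return self.head(fused).squeeze(-1)

model = LateFusionPredictor()
pred = model(torch.randn(8, 30, 5), torch.randn(8, 64))  # batch of 8, 30-day window
print(pred.shape)                                         # torch.Size([8])
```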

August 9, 2023 · 2 min · Research Team

SmartDCA superiority

SmartDCA superiority ArXiv ID: 2308.05200 “View on arXiv” Authors: Unknown Abstract Dollar-Cost Averaging (DCA) is a widely used technique to mitigate volatility in long-term investments of appreciating assets. However, the inefficiency of DCA arises from fixing the investment amount regardless of market conditions. In this paper, we present a more efficient approach that we name SmartDCA, which consists of adjusting asset purchases based on price levels. The simplicity of SmartDCA allows for rigorous mathematical analysis, enabling us to establish its superiority through the application of the Cauchy-Schwarz inequality and Lehmer means. We further extend our analysis to what we refer to as ρ-SmartDCA, where the invested amount is raised to the power of ρ. We demonstrate that higher values of ρ lead to enhanced performance. However, this approach may result in unbounded investments. To address this concern, we introduce a bounded version of SmartDCA, taking advantage of two novel mean definitions that we name quasi-Lehmer means. The bounded SmartDCA is specifically designed to retain its superiority over DCA. To support our claims, we provide rigorous mathematical proofs and conduct numerical analyses across various scenarios. The performance gain of the different SmartDCA alternatives is compared against DCA using data from the S&P 500 and Bitcoin. The results consistently demonstrate that all SmartDCA variations yield higher long-term investment returns compared to DCA. ...
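
A quick numerical illustration of the core claim, assuming one simple SmartDCA-style rule in which the per-period budget scales inversely with price (spend more when the asset is cheap); the paper's exact rule and its ρ-powered and bounded variants are not reproduced. Under this assumed rule the average purchase price becomes a lower Lehmer mean of the price series, so by Cauchy-Schwarz it can never exceed the plain-DCA average cost.

```python
import numpy as np

def average_cost(prices, budgets):
    """Average purchase price when investing budgets[t] at prices[t]."""
    shares = budgets / prices
    return budgets.sum() / shares.sum()

rng = np.random.default_rng(0)
prices = 100 * np.exp(np.cumsum(0.02 * rng.standard_normal(260)))   # toy price path

base = 100.0
dca_budgets = np.full_like(prices, base)          # fixed amount each period
smart_budgets = base * prices.mean() / prices     # spend more when price is low (assumed rule)

print("DCA avg cost     :", average_cost(prices, dca_budgets))
print("SmartDCA avg cost:", average_cost(prices, smart_budgets))    # never higher than DCA
```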

August 9, 2023 · 2 min · Research Team

The Rise and Fall of Cryptocurrencies: Defining the Economic and Social Values of Blockchain Technologies, assessing the Opportunities, and defining the Financial and Cybersecurity Risks of the Metaverse

The Rise and Fall of Cryptocurrencies: Defining the Economic and Social Values of Blockchain Technologies, assessing the Opportunities, and defining the Financial and Cybersecurity Risks of the Metaverse ArXiv ID: 2309.12322 “View on arXiv” Authors: Unknown Abstract This paper contextualises the common queries of “why is crypto crashing?” and “why is crypto down?”, moving beyond frequent market fluctuations to unravel how cryptocurrencies fundamentally work and the step-by-step process of creating a cryptocurrency. The study examines blockchain technologies and their pivotal role in the evolving Metaverse, shedding light on topics such as how to invest in cryptocurrency, the mechanics behind crypto mining, and strategies to effectively buy and trade cryptocurrencies. Through an interdisciplinary approach, the research transitions from the fundamental principles of fintech investment strategies to the overarching implications of blockchain within the Metaverse. Alongside exploring machine learning potentials in financial sectors and risk assessment methodologies, the study critically assesses whether developed or developing nations are poised to reap greater benefits from these technologies. Moreover, it probes into both enduring and dubious crypto projects, drawing a distinct line between genuine blockchain applications and Ponzi-like schemes. The conclusion resolutely affirms the continuing dominance of blockchain technologies, underlined by a profound exploration of their intrinsic value and a reflective commentary by the author on the potential risks confronting individual investors. ...

August 9, 2023 · 2 min · Research Team

Variations on the Reinforcement Learning performance of Blackjack

Variations on the Reinforcement Learning performance of Blackjack ArXiv ID: 2308.07329 “View on arXiv” Authors: Unknown Abstract Blackjack or “21” is a popular card-based game of chance and skill. The objective of the game is to win by obtaining a hand total higher than the dealer’s without exceeding 21. The ideal blackjack strategy will maximize financial return in the long run while avoiding gambler’s ruin. The stochastic environment and inherent reward structure of blackjack present an appealing problem for better understanding reinforcement learning agents in the presence of environment variations. Here we consider a Q-learning solution for optimal play and investigate the rate of learning convergence of the algorithm as a function of deck size. A blackjack simulator allowing for universal blackjack rules is also implemented to demonstrate the extent to which a card counter perfectly using the basic strategy and Hi-Lo system can bring the house to bankruptcy, and how environment variations impact this outcome. The novelty of our work is to place this conceptual understanding of the impact of deck size in the context of learning agent convergence. ...
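
For reference, a tabular Q-learning loop on Gymnasium's built-in `Blackjack-v1` environment is sketched below (infinite-deck rules, hit/stick only). It shows the learning mechanics the paper studies, but not the deck-size variations, card counting, or the custom simulator; hyperparameters are illustrative.

```python
import numpy as np
import gymnasium as gym
from collections import defaultdict

# Tabular Q-learning on Blackjack-v1: state = (player sum, dealer card, usable ace)
env = gym.make("Blackjack-v1")
Q = defaultdict(lambda: np.zeros(env.action_space.n))
alpha, gamma, eps = 0.05, 1.0, 0.1          # learning rate, discount, exploration

for episode in range(50_000):
    state, _ = env.reset()
    done = False
    while not done:
        # epsilon-greedy action selection: 0 = stick, 1 = hit
        action = env.action_space.sample() if np.random.rand() < eps else int(np.argmax(Q[state]))
        next_state, reward, terminated, truncated, _ = env.step(action)
        done = terminated or truncated
        target = reward + gamma * (0.0 if done else np.max(Q[next_state]))
        Q[state][action] += alpha * (target - Q[state][action])
        state = next_state

some_state = next(iter(Q))
print(some_state, "-> best action:", int(np.argmax(Q[some_state])))
```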

August 9, 2023 · 2 min · Research Team

Efficient option pricing with unary-based photonic computing chip and generative adversarial learning

Efficient option pricing with unary-based photonic computing chip and generative adversarial learning ArXiv ID: 2308.04493 “View on arXiv” Authors: Unknown Abstract In the modern financial industry, the structure of products has become increasingly complex, and the bottleneck of classical computing power already restricts the industry’s development. Here, we present a photonic chip that implements the unary approach to European option pricing, in combination with the quantum amplitude estimation algorithm, to achieve a quadratic speedup compared to classical Monte Carlo methods. The circuit consists of three modules: a module loading the distribution of asset prices, a module computing the expected payoff, and a module performing the quantum amplitude estimation algorithm to introduce speed-ups. In the distribution module, a generative adversarial network is embedded for efficient learning and loading of asset distributions, which precisely capture the market trends. This work is a step forward in the development of specialized photonic processors for applications in finance, with the potential to improve the efficiency and quality of financial services. ...
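
The quadratic speedup is measured against classical Monte Carlo, whose estimation error shrinks as O(1/√N) in the number of samples. For orientation, a minimal classical baseline for a European call under Black-Scholes dynamics is sketched below with illustrative parameters; the unary encoding, amplitude estimation, and GAN distribution loading are not reproduced here.

```python
import numpy as np

def mc_european_call(s0, k, r, sigma, t, n_paths=1_000_000, rng=None):
    """Classical Monte Carlo estimate of a European call under Black-Scholes
    dynamics -- the baseline whose O(1/sqrt(N)) error the amplitude-estimation
    approach aims to improve on quadratically."""
    rng = np.random.default_rng(rng)
    z = rng.standard_normal(n_paths)
    st = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
    payoff = np.maximum(st - k, 0.0)
    disc = np.exp(-r * t) * payoff
    return disc.mean(), disc.std(ddof=1) / np.sqrt(n_paths)

price, stderr = mc_european_call(s0=100, k=105, r=0.02, sigma=0.2, t=1.0, rng=42)
print(f"price ~ {price:.4f} +/- {stderr:.4f}")
```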

August 8, 2023 · 2 min · Research Team

Instabilities of explicit finite difference schemes with ghost points on the diffusion equation

Instabilities of explicit finite difference schemes with ghost points on the diffusion equation ArXiv ID: 2308.04629 “View on arXiv” Authors: Unknown Abstract Ghost, or fictitious, points make it possible to capture boundary conditions that are not located on the finite difference grid discretization. In this paper we explore the impact of ghost points on the stability of the explicit Euler finite difference scheme in the context of the diffusion equation. In particular, we consider the case of a one-touch option under the Black-Scholes model. The observations and results are, however, valid for a much wider range of financial contracts and models. ...
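
A minimal illustration of the mechanism: explicit Euler on the 1-D diffusion equation u_t = u_xx, with a zero-flux left boundary imposed through a ghost point (mirroring u[1] outside the grid) and the usual stability bound lam = dt/dx^2 <= 1/2. This is a toy heat-equation setup, not the paper's one-touch Black-Scholes configuration.

```python
import numpy as np

def explicit_heat_ghost(nx=101, nt=4000, t_final=0.1):
    """Explicit Euler for u_t = u_xx on [0, 1]. Zero-flux (Neumann) left
    boundary enforced via a ghost point u[-1] = u[1]; absorbing (u = 0)
    right boundary. Stability requires lam = dt/dx^2 <= 1/2."""
    dx = 1.0 / (nx - 1)
    dt = t_final / nt
    lam = dt / dx**2
    assert lam <= 0.5, f"unstable: lam = {lam:.3f} > 0.5"

    x = np.linspace(0.0, 1.0, nx)
    u = np.exp(-100 * (x - 0.3) ** 2)            # smooth initial bump
    for _ in range(nt):
        ghost = u[1]                             # mirrored value => du/dx = 0 at x = 0
        left = np.concatenate(([ghost], u[:-1]))
        right = np.concatenate((u[1:], [0.0]))
        u = u + lam * (left - 2 * u + right)
        u[-1] = 0.0                              # keep the Dirichlet value exact
    return x, u

x, u = explicit_heat_ghost()
print(u.max(), u[:3])
```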

August 8, 2023 · 1 min · Research Team

Regularity in forex returns during financial distress: Evidence from India

Regularity in forex returns during financial distress: Evidence from India ArXiv ID: 2308.04181 “View on arXiv” Authors: Unknown Abstract This paper uses the concept of entropy to study the regularity/irregularity of returns from the Indian foreign exchange (forex) markets. The Approximate Entropy and Sample Entropy statistics, which measure the level of repeatability in the data, are used to quantify the randomness in forex returns over the period 2006 to 2021. The main objective of the research is to see how the randomness of foreign exchange returns evolves over the given time period, particularly during periods of high financial instability or turbulence in the global financial market. With this objective we look at two major financial upheavals: the subprime crisis, also known as the Global Financial Crisis (GFC), during 2006-2007, and the recent Covid-19 pandemic during 2020-2021. Our empirical results overwhelmingly confirm our working hypothesis that regularity in the returns of the major Indian foreign exchange rates increases during times of financial crisis. This is evidenced by a decrease in the values of sample entropy and approximate entropy before and after/during the financial crisis period for the majority of the exchange rates. Our empirical results also show that Sample Entropy is a better measure of regularity than Approximate Entropy for the Indian forex rates, which is in agreement with theoretical predictions. ...
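
For readers who want to reproduce the regularity measure, a compact sample-entropy implementation is sketched below (Chebyshev distance, self-matches excluded, SampEn = -ln(A/B)). The defaults m = 2 and r = 0.2·std follow common practice and are an assumption, not necessarily the paper's settings.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample entropy of a 1-D series: -ln(A/B), where B counts pairs of
    length-m templates within tolerance r (Chebyshev distance, self-matches
    excluded) and A counts the same for length m+1. Lower values = more regular."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if r is None:
        r = 0.2 * x.std()

    def count_pairs(dim):
        # the same number of templates (n - m) is used for both m and m + 1
        templates = np.array([x[i:i + dim] for i in range(n - m)])
        count = 0
        for i in range(len(templates) - 1):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(dist <= r))
        return count

    b = count_pairs(m)
    a = count_pairs(m + 1)
    return np.inf if a == 0 or b == 0 else -np.log(a / b)

rng = np.random.default_rng(0)
noisy = rng.standard_normal(1000)                 # irregular series -> higher entropy
regular = np.sin(np.linspace(0, 60, 1000))        # highly regular -> lower entropy
print(sample_entropy(noisy), sample_entropy(regular))
```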

August 8, 2023 · 2 min · Research Team