
Optimizing Portfolio with Two-Sided Transactions and Lending: A Reinforcement Learning Framework

ArXiv ID: 2408.05382 · View on arXiv
Authors: Unknown

Abstract: This study presents a Reinforcement Learning (RL)-based portfolio management model tailored for high-risk environments, addressing the limitations of traditional RL models and exploiting market opportunities through two-sided transactions and lending. Our approach integrates a new environmental formulation with a Profit and Loss (PnL)-based reward function, enhancing the RL agent's ability to manage downside risk and optimize capital. We implemented the model using the Soft Actor-Critic (SAC) agent with a Convolutional Neural Network with Multi-Head Attention (CNN-MHA). This setup effectively manages a diversified portfolio of 12 crypto assets in the Binance perpetual futures market, using USDT both for granting and for receiving loans, and rebalancing every 4 hours on market data from the preceding 48 hours. Tested over two 16-month periods of varying market volatility, the model significantly outperformed benchmarks, particularly in high-volatility scenarios, achieving higher return-to-risk ratios and demonstrating robust profitability. These results confirm the model's effectiveness in leveraging market dynamics and managing risk in volatile environments such as the cryptocurrency market. ...
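The abstract does not spell out the reward itself, so the following is a minimal sketch of one plausible PnL-style step reward for a long/short portfolio with an idle-USDT lending leg; the function name `pnl_reward` and the `lend_rate`/`fee` parameters are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def pnl_reward(weights, asset_returns, lend_rate=0.0001, fee=0.0004, prev_weights=None):
    """Step PnL for a long/short portfolio with an idle-USDT lending leg (sketch).

    weights       : target exposures per asset in [-1, 1]; negatives are shorts
    asset_returns : realized returns of each asset over the rebalance step
    lend_rate     : interest earned on un-deployed USDT over the step (assumed)
    fee           : proportional transaction cost on turnover (assumed)
    """
    weights = np.asarray(weights, dtype=float)
    asset_returns = np.asarray(asset_returns, dtype=float)

    gross = float(weights @ asset_returns)              # long/short trading PnL
    idle = max(0.0, 1.0 - np.abs(weights).sum())        # capital left in USDT
    lending = idle * lend_rate                           # interest on lent USDT
    prev = prev_weights if prev_weights is not None else np.zeros_like(weights)
    cost = fee * np.abs(weights - prev).sum()            # cost of rebalancing

    return gross + lending - cost                        # reward fed to the agent

# Example: three assets, one short position, the remainder lent out
r = pnl_reward([0.4, -0.2, 0.1], [0.01, -0.02, 0.005])
```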

August 9, 2024 · 2 min · Research Team

Consumer Transactions Simulation through Generative Adversarial Networks

ArXiv ID: 2408.03655 · View on arXiv
Authors: Unknown

Abstract: In the rapidly evolving domain of large-scale retail data systems, envisioning and simulating future consumer transactions has become a crucial area of interest. It offers significant potential to fortify demand forecasting and fine-tune inventory management. This paper presents an innovative application of Generative Adversarial Networks (GANs) to generate synthetic retail transaction data, specifically focusing on a novel system architecture that combines consumer behavior modeling with stock-keeping unit (SKU) availability constraints to address real-world assortment optimization challenges. We diverge from conventional methodologies by integrating SKU data into our GAN architecture and using more sophisticated embedding methods (e.g., hypergraphs). This design choice enables our system not only to generate simulated consumer purchase behaviors but also to reflect the dynamic interplay between consumer behavior and SKU availability – an aspect often overlooked in legacy retail simulation models, in part because of data scarcity. Our GAN model generates transactions under stock constraints, pioneering a resourceful experimental system with practical implications for real-world retail operations and strategy. Preliminary results demonstrate enhanced realism in the simulated transactions, measured by comparing generated items with real ones using methods employed in earlier related studies. This underscores the potential for more accurate predictive modeling. ...
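The paper's architecture (including the hypergraph embeddings) is not reproduced here; the sketch below only illustrates, under assumed sizes `N_SKU` and `Z_DIM`, how a GAN can be conditioned on an SKU availability mask so that generated baskets respect stock constraints.

```python
import torch
import torch.nn as nn

N_SKU, Z_DIM = 500, 64  # assumed sizes, not from the paper

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(Z_DIM + N_SKU, 256), nn.ReLU(),
            nn.Linear(256, N_SKU), nn.Sigmoid())  # purchase propensity per SKU

    def forward(self, z, availability):
        # condition on the availability mask and zero out unavailable SKUs
        basket = self.net(torch.cat([z, availability], dim=-1))
        return basket * availability

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * N_SKU, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1))  # real/fake score for a (basket, availability) pair

    def forward(self, basket, availability):
        return self.net(torch.cat([basket, availability], dim=-1))

G, D = Generator(), Discriminator()
z = torch.randn(32, Z_DIM)
avail = (torch.rand(32, N_SKU) > 0.1).float()  # roughly 90% of SKUs in stock
fake_baskets = G(z, avail)
scores = D(fake_baskets, avail)
```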

August 7, 2024 · 2 min · Research Team

Forecasting High Frequency Order Flow Imbalance

ArXiv ID: 2408.03594 · View on arXiv
Authors: Unknown

Abstract: Market information events are generated intermittently and disseminated at high speeds in real time. Market participants consume this high-frequency data to build limit order books, representing the current bids and offers for a given asset. The arrival processes, or the order flow of bid and offer events, are asymmetric and possibly dependent on each other. The magnitude and direction of this asymmetry are often associated with the direction of the traded price movement. The Order Flow Imbalance (OFI) is an indicator commonly used to estimate this asymmetry. This paper uses Hawkes processes to estimate the OFI while accounting for the lagged dependence in the order flow between bids and offers. Secondly, we develop a method to forecast the near-term distribution of the OFI, which can then be used to compare models for forecasting OFI. Thirdly, we propose a method to compare the forecasts of OFI for an arbitrarily large number of models. We apply the approach developed here to tick data from the National Stock Exchange and observe that the Hawkes process modeled with a Sum of Exponentials kernel gives the best forecast among all competing models. ...
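For context, a standard best-level definition of the OFI can be computed from successive best bid/ask updates as in the sketch below; this is only the indicator itself, not the paper's Hawkes-based estimation or forecasting method, and the function name is illustrative.

```python
import numpy as np

def order_flow_imbalance(bid_px, bid_sz, ask_px, ask_sz):
    """Best-level OFI contributions per order-book update (standard definition).

    Each input is a 1-D array indexed by update n.  Returns the per-event
    contributions e_n; their sum over a window is the OFI for that window.
    """
    e = np.zeros(len(bid_px) - 1)
    for n in range(1, len(bid_px)):
        # bid side: size counted as added when the bid price does not fall,
        # and as removed when it does not rise; the ask side enters with
        # the opposite sign
        e[n - 1] = ((bid_px[n] >= bid_px[n - 1]) * bid_sz[n]
                    - (bid_px[n] <= bid_px[n - 1]) * bid_sz[n - 1]
                    - (ask_px[n] <= ask_px[n - 1]) * ask_sz[n]
                    + (ask_px[n] >= ask_px[n - 1]) * ask_sz[n - 1])
    return e
```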

August 7, 2024 · 2 min · Research Team

Comparative analysis of stationarity for Bitcoin and the S&P500

ArXiv ID: 2408.02973 · View on arXiv
Authors: Unknown

Abstract: This paper compares and contrasts stationarity between the conventional stock market and cryptocurrency. The dataset used for the analysis is the intraday price indices of the S&P500 from 1996 to 2023 and the intraday Bitcoin indices from 2019 to 2023, both in USD. We adopt the definition of 'wide sense stationary', which constrains the time independence of the first and second moments of a time series. The testing method used in this paper follows the Wiener-Khinchin Theorem, i.e., that for a wide sense stationary process, the power spectral density and the autocorrelation are a Fourier transform pair. We demonstrate that localized stationarity can be achieved by truncating the time series into segments and, for each segment, detrending and normalizing the price returns. These results show that the S&P500 price return can achieve stationarity for the full 28-year period with a detrending window of 12 months and a constrained normalization window of 10 minutes. With truncated segments, a larger normalization window can be used to establish stationarity, indicating that within a segment the data are more homogeneous. For the Bitcoin price return, the segment with higher volatility exhibits stationarity with a normalization window of 60 minutes, whereas stationarity cannot be established in the other segments. ...
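A minimal illustration of the Fourier-pair relationship the test relies on: for a normalized segment, the autocorrelation recovered from the power spectral density should match the directly computed autocorrelation. The paper's detrending windows and segmentation procedure are omitted, and the function name is an assumption.

```python
import numpy as np

def wiener_khinchin_gap(returns):
    """Compare the autocorrelation of a normalized return segment with the
    inverse Fourier transform of its power spectral density; by the
    Wiener-Khinchin theorem the two agree for a wide sense stationary segment."""
    x = np.asarray(returns, dtype=float)
    x = (x - x.mean()) / x.std()                  # normalize the segment
    n = len(x)
    xp = np.concatenate([x, np.zeros(n)])         # zero-pad to avoid circular wrap-around
    psd = np.abs(np.fft.fft(xp)) ** 2 / n         # periodogram of the padded segment
    acf_from_psd = np.real(np.fft.ifft(psd))[:n]  # autocorrelation recovered from the PSD
    acf_direct = np.correlate(x, x, mode="full")[n - 1:] / n
    return np.max(np.abs(acf_direct - acf_from_psd))

# a small gap on detrended, normalized segments is consistent with
# local wide sense stationarity
print(wiener_khinchin_gap(np.random.randn(4096)))
```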

August 6, 2024 · 2 min · Research Team

Correlation emergence in two coupled simulated limit order books

ArXiv ID: 2408.03181 · View on arXiv
Authors: Unknown

Abstract: We use random walks to simulate the fluid limit of two coupled diffusive limit order books to model correlation emergence. The model implements the arrival, cancellation and diffusion of orders, coupled by a pairs trader profiting from the mean-reversion between the two order books, in the fluid limit of a lit order book with vanishing boundary conditions and order-volume conservation. We demonstrate the recovery of an Epps effect from this setup. We discuss how various stylised facts depend on the model parameters and the numerical scheme, and we discuss the strengths and weaknesses of the approach. We demonstrate how the Epps effect depends on different choices of time and price discretisation. This shows how an Epps effect can emerge without recourse to market microstructure noise relative to a latent model, and can rather be viewed as an emergent property arising from trader interactions in a world of asynchronous events. ...
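The fluid-limit order-book model itself is not reproduced here. The toy sketch below, with assumed parameters, only shows how a mean-reverting coupling between two random-walk mid-prices (a crude stand-in for the pairs trader) makes the measured return correlation grow with the sampling interval, an Epps-like effect.

```python
import numpy as np

rng = np.random.default_rng(0)

def coupled_midprices(n_steps=200_000, sigma=0.01, kappa=0.002):
    """Two random-walk mid-prices coupled by a mean-reverting term that pushes
    the spread between the two books back towards zero (toy model)."""
    a = np.zeros(n_steps)
    b = np.zeros(n_steps)
    for t in range(1, n_steps):
        spread = a[t - 1] - b[t - 1]
        a[t] = a[t - 1] - kappa * spread + sigma * rng.standard_normal()
        b[t] = b[t - 1] + kappa * spread + sigma * rng.standard_normal()
    return a, b

def sampled_correlation(a, b, interval):
    """Correlation of returns sampled every `interval` steps."""
    ra = np.diff(a[::interval])
    rb = np.diff(b[::interval])
    return np.corrcoef(ra, rb)[0, 1]

a, b = coupled_midprices()
# correlation grows with the sampling interval: an Epps-like effect
for k in (1, 10, 100, 1000):
    print(k, round(sampled_correlation(a, b, k), 3))
```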

August 6, 2024 · 2 min · Research Team

Efficient Asymmetric Causality Tests

ArXiv ID: 2408.03137 · View on arXiv
Authors: Unknown

Abstract: Asymmetric causality tests are increasingly gaining popularity in different scientific fields. This approach corresponds better to reality, since logical reasons for asymmetric behavior exist and need to be considered in empirical investigations. Hatemi-J (2012) introduced asymmetric causality tests via partial cumulative sums of the positive and negative components of the variables operating within a vector autoregressive (VAR) model. However, since the residuals across the equations of the VAR model are not independent, the ordinary least squares method for estimating the parameters is not efficient. Additionally, asymmetric causality tests entail different causal parameters for the positive and negative components, so it is crucial to assess not only whether these causal parameters are individually statistically significant, but also whether their difference is statistically significant. Consequently, tests of the difference between estimated causal parameters should be conducted explicitly, a step neglected in the existing literature. The purpose of the current paper is to deal with these issues explicitly. An application is provided, and ten different hypotheses pertinent to the asymmetric causal interaction between the two largest financial markets worldwide are efficiently tested within a multivariate setting. ...
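The decomposition underlying these tests is simple to state. The sketch below computes the partial cumulative sums of positive and negative changes of a series, which are then fed into a VAR for the causality tests; the function name is illustrative, the handling of the initial value follows one common convention, and the paper's efficient estimation and parameter-difference tests are not shown.

```python
import numpy as np

def pos_neg_components(y):
    """Partial cumulative sums of the positive and negative changes of a series,
    the building blocks of Hatemi-J (2012) asymmetric causality tests."""
    dy = np.diff(np.asarray(y, dtype=float))
    y_plus = np.concatenate([[0.0], np.cumsum(np.maximum(dy, 0.0))])
    y_minus = np.concatenate([[0.0], np.cumsum(np.minimum(dy, 0.0))])
    return y_plus, y_minus

# causality from x to y is then tested separately on, e.g., (x_plus, y_plus)
# and (x_minus, y_minus) within a VAR fitted to the chosen components
prices = 100.0 + np.cumsum(np.random.randn(500))
p_plus, p_minus = pos_neg_components(prices)
```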

August 6, 2024 · 2 min · Research Team

Hedge Fund Portfolio Construction Using PolyModel Theory and iTransformer

ArXiv ID: 2408.03320 · View on arXiv
Authors: Unknown

Abstract: When constructing portfolios, a key problem is that much financial time series data is sparse, making it challenging to apply machine learning methods. PolyModel theory can address this issue and demonstrates advantages in portfolio construction from various perspectives. To implement PolyModel theory for constructing a hedge fund portfolio, we begin by identifying an asset pool, utilizing over 10,000 hedge funds and the past 29 years of data. PolyModel theory also involves choosing a wide-ranging set of risk factors, including various financial indices, currencies, and commodity prices; this comprehensive selection mirrors the complexities of the real-world environment. Leveraging PolyModel theory, we create quantitative measures such as Long-term Alpha, Long-term Ratio, and SVaR, and we also use more classical measures like the Sharpe ratio or Morningstar's MRAR. To enhance the performance of the constructed portfolio, we also employ recent deep learning techniques (iTransformer) to capture the upward trend while efficiently controlling the downside, using all of these features. The iTransformer model is specifically designed to address the challenges of high-dimensional time series forecasting and can substantially improve our strategies; more precisely, our strategies achieve a better Sharpe ratio and annualized return. This process enables us to create multiple portfolio strategies aiming for high returns and low risk relative to various benchmarks. ...
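A hedged sketch of the per-factor regressions at the core of PolyModel theory: each fund is regressed on each risk factor separately with a low-degree polynomial, and measures such as Long-term Alpha are then derived from these fits. The degree-4 choice, the plain polynomial basis, and the function name are assumptions for illustration, not the paper's specification.

```python
import numpy as np

def polymodel_fit(fund_returns, factor_returns, degree=4):
    """Fit a fund against a single risk factor with a low-degree polynomial,
    one of the many independent per-factor regressions used in PolyModel-style
    analysis; returns the coefficients and the R^2 of the fit."""
    coeffs = np.polyfit(factor_returns, fund_returns, deg=degree)
    fitted = np.polyval(coeffs, factor_returns)
    ss_res = np.sum((fund_returns - fitted) ** 2)
    ss_tot = np.sum((fund_returns - fund_returns.mean()) ** 2)
    return coeffs, 1.0 - ss_res / ss_tot

# one independent nonlinear regression per factor in the (large) factor set
rng = np.random.default_rng(1)
factor = 0.02 * rng.standard_normal(300)
fund = 0.5 * factor - 3.0 * factor**2 + 0.005 * rng.standard_normal(300)
coeffs, r2 = polymodel_fit(fund, factor)
```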

August 6, 2024 · 2 min · Research Team

CLVR Ordering of Transactions on AMMs

ArXiv ID: 2408.02634 · View on arXiv
Authors: Unknown

Abstract: This paper introduces a trade ordering rule that aims to reduce intra-block price volatility on Automated Market Maker (AMM) powered decentralized exchanges. The ordering rule introduced here, Clever Look-ahead Volatility Reduction (CLVR), operates under the (common) framework in decentralized finance that allows some entities to observe trade requests before they are settled, assemble them into "blocks", and order them as they like. On AMM exchanges, asset prices are continuously and transparently updated as a result of each trade, so transaction order has high financial value. CLVR aims to order transactions for traders' benefit. Our primary focus is intra-block price stability (minimizing volatility), which has two main benefits for traders: it reduces the transaction failure rate and allows traders to receive prices closer to the reference price at which they submitted their transactions. We show that CLVR constructs an ordering that approximately minimizes price volatility at a small computational cost and can be trivially verified externally. ...
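The precise CLVR rule is not restated in the abstract, so the following is a sketch of a greedy look-ahead ordering in its spirit: on a constant-product pool, always execute next the pending trade that leaves the price closest to the block's opening reference price. The fee-free pool model and all names here are assumptions, not the paper's definition.

```python
from typing import List, Tuple

def apply_trade(x: float, y: float, trade: Tuple[str, float]) -> Tuple[float, float]:
    """Execute one swap on a constant-product pool (x * y = k); a trade is
    ('sell_x', amount_of_x_in) or ('sell_y', amount_of_y_in), fees ignored."""
    side, amt = trade
    k = x * y
    if side == "sell_x":
        x += amt
        y = k / x
    else:
        y += amt
        x = k / y
    return x, y

def greedy_low_volatility_order(x: float, y: float,
                                trades: List[Tuple[str, float]]) -> List[Tuple[str, float]]:
    """Greedy look-ahead ordering in the spirit of CLVR: at each step execute
    the pending trade that leaves the pool price closest to the block's
    opening reference price, which tends to keep intra-block volatility low."""
    p_ref = y / x

    def price_after(t: Tuple[str, float]) -> float:
        nx, ny = apply_trade(x, y, t)
        return ny / nx

    ordered: List[Tuple[str, float]] = []
    pending = list(trades)
    while pending:
        best = min(pending, key=lambda t: abs(price_after(t) - p_ref))
        pending.remove(best)
        ordered.append(best)
        x, y = apply_trade(x, y, best)
    return ordered

block = [("sell_x", 10.0), ("sell_y", 2000.0), ("sell_x", 5.0)]
print(greedy_low_volatility_order(1_000.0, 200_000.0, block))
```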

August 5, 2024 · 2 min · Research Team

Consistent time travel for realistic interactions with historical data: reinforcement learning for market making

ArXiv ID: 2408.02322 · View on arXiv
Authors: Unknown

Abstract: Reinforcement learning works best when the impact of the agent's actions on its environment can be perfectly simulated or fully appraised from available data. Some systems are, however, both hard to simulate and very sensitive to small perturbations. An additional difficulty arises when an RL agent is trained offline to be part of a multi-agent system using only anonymous data, which makes it impossible to infer the state of each agent and thus to use the data directly. Typical examples are competitive systems without agent-resolved data, such as financial markets. We introduce consistent data time travel for offline RL as a remedy for these problems: instead of using historical data in a sequential way, we argue that one needs to perform time travel in historical data, i.e., to adjust the time index so that both the past state and the influence of the RL agent's action on the system coincide with real data. This alleviates the need to resort to imperfect models and consistently accounts for both the immediate and long-term reactions of the system when using anonymous historical data. We apply this idea to market making in limit order books, a notoriously difficult task for RL; it turns out that the gain of the agent is significantly higher with data time travel than with naive sequential data, which suggests that the difficulty of this task for RL may have been overestimated. ...
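A rough sketch of the time-travel idea as described in the abstract: after the agent acts, jump to a later historical index whose recorded state is consistent with the state implied by the action, rather than stepping sequentially. The matching criterion, tolerance, horizon, and function name are all assumptions, not the paper's specification.

```python
import numpy as np

def time_travel_step(history, t, post_action_state, horizon=500, tol=1e-3):
    """Consistent data time travel (sketch): rather than moving to t+1, jump to
    the later index whose recorded state is closest to the state implied by the
    agent's action, so subsequent historical data remain consistent with it.

    history           : (N, d) array of historical state snapshots
    post_action_state : (d,) state the agent's action would have produced
    """
    window = history[t + 1 : t + 1 + horizon]
    if len(window) == 0:
        return t + 1                       # end of data: fall back to a sequential step
    dists = np.linalg.norm(window - post_action_state, axis=1)
    j = int(np.argmin(dists))
    if dists[j] > tol:
        return t + 1                       # no consistent match found: sequential step
    return t + 1 + j                       # time-travel to the matching historical index
```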

August 5, 2024 · 2 min · Research Team

Existence, uniqueness and positivity of solutions to the Guyon-Lekeufack path-dependent volatility model with general kernels

ArXiv ID: 2408.02477 · View on arXiv
Authors: Unknown

Abstract: We show the existence and uniqueness of a continuous solution to a path-dependent volatility model introduced by Guyon and Lekeufack (2023) to model the price of an equity index and its spot volatility. The considered model for the trend and activity features can be written as a Stochastic Volterra Equation (SVE) with non-convolutional and unbounded kernels as well as non-Lipschitz coefficients. We first prove the existence and uniqueness of a solution to the SVE under integrability and regularity assumptions on the two kernels and under a condition on the second kernel, which weights the past squared returns, ensuring that the activity feature is bounded from below by a positive constant. Then, assuming in addition that the kernel weighting the past returns is of exponential type and that an inequality relating the logarithmic derivatives of the two kernels with respect to their second variables is satisfied, we show the positivity of the volatility process, which is obtained as a non-linear function of the SVE's solution. We show numerically that the choice of an exponential kernel for the kernel weighting the past returns has little impact on the quality of model calibration compared to other choices, and that the inequality involving the logarithmic derivatives is satisfied by the calibrated kernels. These results extend those of Nutz and Valdevenito (2023). ...
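For orientation, a commonly quoted discrete-time rendering of the Guyon-Lekeufack model, whose continuous-time SVE formulation is what the paper studies; the display below is an illustration consistent with the abstract's description of the trend and activity features (kernel-weighted past returns and past squared returns), not the paper's exact equations.

```latex
% Guyon--Lekeufack path-dependent volatility (illustrative discrete-time form):
% r_{t_i} are past returns, K_1 weights past returns (trend feature R_1),
% K_2 weights past squared returns (activity feature R_2).
\sigma_t = \beta_0 + \beta_1 R_{1,t} + \beta_2 \sqrt{R_{2,t}},
\qquad
R_{1,t} = \sum_{t_i \le t} K_1(t, t_i)\, r_{t_i},
\qquad
R_{2,t} = \sum_{t_i \le t} K_2(t, t_i)\, r_{t_i}^2 .
```

The positivity question addressed by the paper corresponds, in this rendering, to keeping the activity feature R_2 bounded away from zero so that the (typically negative) trend contribution cannot drive the volatility below zero.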

August 5, 2024 · 2 min · Research Team