
Student t-Lévy regression model in YUIMA

Student t-Lévy regression model in YUIMA ArXiv ID: 2403.12078 “View on arXiv” Authors: Unknown Abstract The aim of this paper is to discuss an estimation and a simulation method in the R package YUIMA for a linear regression model driven by a Student-$t$ Lévy process with constant scale and arbitrary degrees of freedom. This process finds applications in several fields, for example finance, physics, and biology. The model presents two main issues. The first is related to the simulation of a sample path at the high-frequency level. Indeed, only the $t$-Lévy increments defined on a unit time interval are Student-$t$ distributed. In YUIMA, we solve this problem by means of the inverse Fourier transform, simulating the increments of a Student-$t$ Lévy process defined on an interval of any length. A second problem is due to the fact that the joint estimation of trend, scale, and degrees of freedom does not seem to have been investigated yet. In YUIMA, we develop a two-step estimation procedure that efficiently deals with this issue. Numerical examples are given in order to explain the methods and classes used in the YUIMA package. ...
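
The Fourier-inversion step is easy to sketch: the Student-$t$ characteristic function raised to the power $\Delta t$ is the characteristic function of the increment over an interval of length $\Delta t$, which can be inverted numerically and sampled from. Below is a minimal Python sketch of this idea (not the YUIMA implementation; the grids, the degrees of freedom, and the inverse-CDF sampler are illustrative choices):

```python
# Minimal sketch: simulate Student-t Levy increments over a sub-unit step dt
# by Fourier-inverting phi(u)^dt.  Not the YUIMA implementation; nu, dt and
# the integration grids are illustrative assumptions.
import numpy as np
from scipy.special import kve, gammaln
from scipy.integrate import trapezoid

def t_char_fn(u, nu):
    """Characteristic function of a standard Student-t with nu degrees of freedom."""
    z = np.sqrt(nu) * np.abs(u)
    out = np.ones_like(z, dtype=float)
    pos = z > 0
    # phi(u) = K_{nu/2}(z) z^{nu/2} / (Gamma(nu/2) 2^{nu/2-1}); computed on the
    # log scale with the exponentially scaled Bessel function for stability
    log_phi = (np.log(kve(nu / 2, z[pos])) - z[pos] + (nu / 2) * np.log(z[pos])
               - gammaln(nu / 2) - (nu / 2 - 1) * np.log(2.0))
    out[pos] = np.exp(log_phi)
    return out

def sample_increments(n, nu=3.0, dt=1 / 252, rng=None):
    """Draw n increments of a Student-t Levy process over a step of length dt."""
    rng = np.random.default_rng(rng)
    u = np.linspace(0.0, 1000.0, 8192)             # frequency grid
    x = np.linspace(-10.0, 10.0, 1001)             # state grid
    phi_dt = t_char_fn(u, nu) ** dt                # cf of the dt-increment
    # inversion: f(x) = (1/pi) * int_0^inf cos(u x) phi(u)^dt du  (symmetric density)
    dens = trapezoid(np.cos(np.outer(x, u)) * phi_dt, u, axis=1) / np.pi
    cdf = np.cumsum(np.clip(dens, 0.0, None)) + 1e-12 * np.arange(len(x))
    cdf /= cdf[-1]
    return np.interp(rng.uniform(size=n), cdf, x)  # inverse-CDF sampling

print(sample_increments(5, nu=3.0, dt=1 / 252))
```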

February 26, 2024 · 2 min · Research Team

Finding Near-Optimal Portfolios With Quality-Diversity

Finding Near-Optimal Portfolios With Quality-Diversity ArXiv ID: 2402.16118 “View on arXiv” Authors: Unknown Abstract The majority of standard approaches to financial portfolio optimization (PO) are based on the mean-variance (MV) framework. Given a risk aversion coefficient, the MV procedure yields a single portfolio that represents the optimal trade-off between risk and return. However, the resulting optimal portfolio is known to be highly sensitive to the input parameters, i.e., the estimates of the return covariance matrix and the mean return vector. It has been shown that a more robust and flexible alternative lies in determining the entire region of near-optimal portfolios. In this paper, we present a novel approach for finding a diverse set of such portfolios based on quality-diversity (QD) optimization. More specifically, we employ the CVT-MAP-Elites algorithm, which is scalable to high-dimensional settings with potentially hundreds of behavioral descriptors and/or assets. The results highlight the promising features of QD as a novel tool in PO. ...
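
To make the QD mechanics concrete, here is a minimal CVT-MAP-Elites loop for long-only portfolios. It is a sketch under assumptions, not the paper's code: the fitness (expected return), the two behavioral descriptors (volatility and Herfindahl concentration), and all hyperparameters are invented for illustration:

```python
# Illustrative CVT-MAP-Elites for portfolios: k-means centroids define the
# Voronoi niches, and each niche keeps its best (elite) portfolio.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_assets, n_cells = 20, 64
mu = rng.normal(0.05, 0.03, n_assets)              # toy mean returns
A = rng.normal(size=(n_assets, n_assets))
cov = 0.01 * A @ A.T / n_assets                    # toy return covariance

def project(w):                                    # long-only, fully invested
    w = np.clip(w, 0.0, None)
    return w / w.sum() if w.sum() > 0 else np.full_like(w, 1.0 / len(w))

def descriptors(w):                                # 2-D behavior space
    return np.array([np.sqrt(w @ cov @ w),         # portfolio volatility
                     np.sum(w ** 2)])              # Herfindahl concentration

# CVT step: k-means centroids partition behavior space into Voronoi niches
probe = np.array([descriptors(project(rng.random(n_assets))) for _ in range(5000)])
centroids = KMeans(n_clusters=n_cells, n_init=4, random_state=0).fit(probe).cluster_centers_

archive = {}                                       # niche index -> (fitness, weights)
def try_insert(w):
    f = mu @ w                                     # fitness: expected return
    cell = int(np.argmin(np.linalg.norm(centroids - descriptors(w), axis=1)))
    if cell not in archive or f > archive[cell][0]:
        archive[cell] = (f, w)

for _ in range(200):                               # random initialization
    try_insert(project(rng.random(n_assets)))
for _ in range(20_000):                            # mutate randomly chosen elites
    _, w = archive[int(rng.choice(list(archive)))]
    try_insert(project(w + rng.normal(0.0, 0.05, n_assets)))

print(f"{len(archive)}/{n_cells} niches hold behaviorally distinct near-optimal portfolios")
```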

February 25, 2024 · 2 min · Research Team

Optimizing Portfolio Management and Risk Assessment in Digital Assets Using Deep Learning for Predictive Analysis

Optimizing Portfolio Management and Risk Assessment in Digital Assets Using Deep Learning for Predictive Analysis ArXiv ID: 2402.15994 “View on arXiv” Authors: Unknown Abstract Portfolio management issues have been extensively studied in the field of artificial intelligence in recent years, but existing deep learning-based quantitative trading methods leave room for improvement. First, the prediction mode is narrow: typically a model trains only one trading expert, and the trading decision is based solely on that model's predictions. Second, the data sources are relatively simple: models consider only the data of the stock itself, ignoring the impact of market-wide risk on the stock. In this paper, the DQN algorithm is introduced into asset management portfolios in a novel and straightforward way, and the performance greatly exceeds the benchmark, which fully demonstrates the effectiveness of the DRL algorithm in portfolio management. This also suggests that, given the complexity of financial problems, the choice of algorithm should be closely matched to the problem at hand. Finally, the strategy is implemented by selecting the assets and actions with the largest Q value. Since different assets are trained separately as environments, Q values may drift among assets (different assets occupy different Q value ranges), which can easily lead to incorrect asset selection. Adding constraints so that the Q values of different assets share a common distribution could improve the results. ...
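
The selection rule and the drift problem it raises can be illustrated in a few lines. The mock Q values and the per-asset standardization below are assumptions for illustration, not the paper's exact remedy:

```python
# Sketch of the selection issue: each asset has its own Q-function trained in
# its own environment, and the strategy picks the (asset, action) pair with
# the largest Q.  Values are mock outputs; per-asset standardization is one
# illustrative way to put Q values on a shared scale before comparing.
import numpy as np

rng = np.random.default_rng(1)
n_assets, n_actions = 5, 3                       # actions: e.g. sell/hold/buy
q = rng.normal(0.0, 1.0, (n_assets, n_actions))
q += rng.normal(0.0, 5.0, (n_assets, 1))         # per-asset "Q value drift"

naive = np.unravel_index(np.argmax(q), q.shape)  # drift dominates this choice

# standardize each asset's Q values so the cross-asset comparison is scale-free
z = (q - q.mean(axis=1, keepdims=True)) / q.std(axis=1, keepdims=True)
robust = np.unravel_index(np.argmax(z), z.shape)
print("naive pick:", naive, "standardized pick:", robust)
```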

February 25, 2024 · 2 min · Research Team

Optimizing Neural Networks for Bermudan Option Pricing: Convergence Acceleration, Future Exposure Evaluation and Interpolation in Counterparty Credit Risk

Optimizing Neural Networks for Bermudan Option Pricing: Convergence Acceleration, Future Exposure Evaluation and Interpolation in Counterparty Credit Risk ArXiv ID: 2402.15936 “View on arXiv” Authors: Unknown Abstract This paper presents a Monte-Carlo-based artificial neural network framework for pricing Bermudan options, offering several notable advantages. These advantages encompass the efficient static hedging of the target Bermudan option and the effective generation of exposure profiles for risk management. We also introduce a novel optimisation algorithm designed to expedite the convergence of the neural network framework proposed by Lokeshwar et al. (2022), supported by a comprehensive error convergence analysis. We conduct an extensive comparative analysis of the Present Value (PV) distribution under Markovian and no-arbitrage assumptions. We compare the proposed neural network model in conjunction with the approach initially introduced by Longstaff and Schwartz (2001) and benchmark both against the COS model, the pricing method pioneered by Fang and Oosterlee (2009), across all Bermudan exercise time points. Additionally, we evaluate exposure profiles, including Expected Exposure and Potential Future Exposure, generated by our proposed model and the Longstaff-Schwartz model, comparing them against the COS model. We also derive exposure profiles at finer, non-standard grid points or risk horizons using the proposed approach, juxtaposed with the Longstaff-Schwartz method with linear interpolation, and benchmark against the COS method. In addition, we explore the effectiveness of various interpolation schemes within the context of the Longstaff-Schwartz method for generating exposures at finer grid horizons. ...
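
As background for the regression-based machinery, here is a compact Bermudan put pricer in the Longstaff-Schwartz spirit with a small neural network (scikit-learn's MLPRegressor) as the continuation-value regressor. It is a sketch of the general technique rather than the paper's architecture; all market parameters are illustrative:

```python
# Regression-based Bermudan put pricing by backward induction: at each
# exercise date a small neural net estimates the continuation value from the
# simulated spot, and exercise happens where intrinsic value beats it.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
s0, k, r, sigma, T, n_ex, n_paths = 100.0, 100.0, 0.05, 0.2, 1.0, 10, 20_000
dt = T / n_ex

# simulate GBM paths at the Bermudan exercise dates
z = rng.standard_normal((n_paths, n_ex))
s = s0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1))
payoff = lambda x: np.maximum(k - x, 0.0)        # Bermudan put

cash = payoff(s[:, -1])                          # value at the final exercise date
for t in range(n_ex - 2, -1, -1):
    cash *= np.exp(-r * dt)                      # discount one period back
    itm = payoff(s[:, t]) > 0                    # regress on in-the-money paths only
    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
    net.fit(s[itm, t:t + 1] / k, cash[itm])      # continuation-value estimate
    exercise = payoff(s[itm, t]) > net.predict(s[itm, t:t + 1] / k)
    idx = np.where(itm)[0][exercise]
    cash[idx] = payoff(s[idx, t])                # exercise now on those paths

price = np.exp(-r * dt) * cash.mean()            # discount to time 0
print(f"Bermudan put price ~ {price:.3f}")
```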

February 24, 2024 · 2 min · Research Team

Combining Transformer based Deep Reinforcement Learning with Black-Litterman Model for Portfolio Optimization

Combining Transformer based Deep Reinforcement Learning with Black-Litterman Model for Portfolio Optimization ArXiv ID: 2402.16609 “View on arXiv” Authors: Unknown Abstract As a model-free algorithm, deep reinforcement learning (DRL) agent learns and makes decisions by interacting with the environment in an unsupervised way. In recent years, DRL algorithms have been widely applied by scholars for portfolio optimization in consecutive trading periods, since the DRL agent can dynamically adapt to market changes and does not rely on the specification of the joint dynamics across the assets. However, typical DRL agents for portfolio optimization cannot learn a policy that is aware of the dynamic correlation between portfolio asset returns. Since the dynamic correlations among portfolio assets are crucial in optimizing the portfolio, the lack of such knowledge makes it difficult for the DRL agent to maximize the return per unit of risk, especially when the target market permits short selling (e.g., the US stock market). In this research, we propose a hybrid portfolio optimization model combining the DRL agent and the Black-Litterman (BL) model to enable the DRL agent to learn the dynamic correlation between the portfolio asset returns and implement an efficacious long/short strategy based on the correlation. Essentially, the DRL agent is trained to learn the policy to apply the BL model to determine the target portfolio weights. To test our DRL agent, we construct the portfolio based on all the Dow Jones Industrial Average constituent stocks. Empirical results of the experiments conducted on real-world United States stock market data demonstrate that our DRL agent significantly outperforms various comparative portfolio choice strategies and alternative DRL frameworks by at least 42% in terms of accumulated return. In terms of the return per unit of risk, our DRL agent significantly outperforms various comparative portfolio choice strategies and alternative strategies based on other machine learning frameworks. ...
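
The BL step the agent drives can be summarized in a few lines: the agent's output plays the role of the views (P, q), and the BL posterior converts them into long/short target weights. The numbers below are illustrative, not from the paper:

```python
# Minimal Black-Litterman step: implied equilibrium returns from market
# weights, a posterior update with one relative view, and the unconstrained
# mean-variance weights (which may go short).  All inputs are illustrative.
import numpy as np

delta, tau = 2.5, 0.05                            # risk aversion, view scaling
sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])            # asset return covariance
w_mkt = np.array([0.5, 0.3, 0.2])                 # market-cap weights
pi = delta * sigma @ w_mkt                        # implied equilibrium returns

P = np.array([[1.0, -1.0, 0.0]])                  # view: asset 1 beats asset 2
q = np.array([0.02])                              # ... by 2% per year
omega = tau * P @ sigma @ P.T                     # view uncertainty (common choice)

# posterior mean: mu = [(tau S)^-1 + P' Om^-1 P]^-1 [(tau S)^-1 pi + P' Om^-1 q]
a = np.linalg.inv(tau * sigma) + P.T @ np.linalg.inv(omega) @ P
b = np.linalg.inv(tau * sigma) @ pi + P.T @ np.linalg.inv(omega) @ q
mu_bl = np.linalg.solve(a, b)

w = np.linalg.solve(delta * sigma, mu_bl)         # unconstrained MV weights
print("posterior returns:", mu_bl.round(4), "target weights:", w.round(3))
```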

February 23, 2024 · 3 min · Research Team

Enhancing Mean-Reverting Time Series Prediction with Gaussian Processes: Functional and Augmented Data Structures in Financial Forecasting

Enhancing Mean-Reverting Time Series Prediction with Gaussian Processes: Functional and Augmented Data Structures in Financial Forecasting ArXiv ID: 2403.00796 “View on arXiv” Authors: Unknown Abstract In this paper, we explore the application of Gaussian Processes (GPs) for predicting mean-reverting time series with an underlying structure, using relatively unexplored functional and augmented data structures. While many conventional forecasting methods concentrate on the short-term dynamics of time series data, GPs offer the potential to forecast not just the average prediction but the entire probability distribution over a future trajectory. This is particularly beneficial in financial contexts, where accurate predictions alone may not suffice if incorrect volatility assessments lead to capital losses. Moreover, in trade selection, GPs allow for the forecasting of multiple Sharpe ratios adjusted for transaction costs, aiding in decision-making. The functional data representation utilized in this study enables longer-term predictions by leveraging information from previous years, even as the forecast moves away from the current year’s training data. Additionally, the augmented representation enriches the training set by incorporating multiple targets for future points in time, facilitating long-term predictions. Our implementation closely aligns with the methodology outlined in prior work, which assessed effectiveness on commodity futures. However, our testing methodology differs. Instead of real data, we employ simulated data with similar characteristics. We construct a testing environment to evaluate both data representations and models under conditions of increasing noise, fat tails, and inappropriate kernels, conditions commonly encountered in practice. By simulating data, we can compare our forecast distribution over time against a full simulation of the actual distribution of our test set, thereby reducing the inherent uncertainty in testing time series models on real data. We enable feature prediction through augmentation and employ sub-sampling to ensure the feasibility of GPs. ...
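
A minimal version of the augmented representation on simulated mean-reverting (Ornstein-Uhlenbeck) data is sketched below: each training row pairs the current state with a forecast horizon h and carries the realized value h steps ahead as its target, so a single GP serves many horizons and returns a full predictive distribution. The kernel, OU parameters, and sub-sample size are assumptions:

```python
# Augmented-data GP on a simulated OU path: features are (state, horizon),
# targets are the realized values h steps ahead; sub-sampling keeps the GP
# tractable.  All choices here are illustrative.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
n, theta, sig = 600, 0.05, 0.3
x = np.zeros(n)
for t in range(1, n):                              # Ornstein-Uhlenbeck path
    x[t] = x[t - 1] - theta * x[t - 1] + sig * rng.standard_normal()

horizons = [1, 5, 10, 20]
rows, targets = [], []
for t in range(n - max(horizons)):
    for h in horizons:                             # augmentation: one row per horizon
        rows.append([x[t], h])
        targets.append(x[t + h])
X, y = np.array(rows), np.array(targets)

gp = GaussianProcessRegressor(kernel=RBF([1.0, 10.0]) + WhiteKernel(0.1), alpha=1e-6)
gp.fit(X[:1000], y[:1000])                         # sub-sample to keep the GP feasible
mean, std = gp.predict([[x[-1], 10]], return_std=True)
print(f"10-step forecast: {mean[0]:.3f} +/- {2 * std[0]:.3f}")
```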

February 23, 2024 · 3 min · Research Team

Higher order measures of risk and stochastic dominance

Higher order measures of risk and stochastic dominance ArXiv ID: 2402.15387 “View on arXiv” Authors: Unknown Abstract Higher order risk measures are stochastic optimization problems by design, and for this reason they enjoy valuable properties in optimization under uncertainties. They nicely integrate with stochastic optimization problems, as has been observed by the intriguing concept of the risk quadrangles, for example. Stochastic dominance is a binary relation for random variables to compare random outcomes. It is demonstrated that the concepts of higher order risk measures and stochastic dominance are equivalent: each can be employed to characterize the other. The paper explores these relations and connects stochastic orders, higher order risk measures and the risk quadrangle. Expectiles are employed to exemplify the relations obtained. ...
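
For concreteness, the two objects being connected can be written down in their standard forms (the paper's exact notation may differ):

```latex
% Standard definitions from this line of work.  The higher order risk measure
% of order p is itself a stochastic optimization problem,
\[
  \mathcal{R}_{p,\beta}(X)
  \;=\; \min_{t \in \mathbb{R}} \Big\{\, t + \tfrac{1}{1-\beta}\,
        \big\| (X - t)_+ \big\|_p \Big\},
  \qquad p \ge 1,\; \beta \in (0,1),
\]
% while the expectile is the minimizer of an asymmetric quadratic loss:
\[
  e_\tau(X) \;=\; \operatorname*{arg\,min}_{t \in \mathbb{R}}\;
  \mathbb{E}\big[\, \tau\,(X - t)_+^{\,2} + (1-\tau)\,(t - X)_+^{\,2} \,\big],
  \qquad \tau \in (0,1).
\]
```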

February 23, 2024 · 2 min · Research Team

Long Short-Term Memory Pattern Recognition in Currency Trading

Long Short-Term Memory Pattern Recognition in Currency Trading ArXiv ID: 2403.18839 “View on arXiv” Authors: Unknown Abstract This study delves into the analysis of financial markets through the lens of Wyckoff Phases, a framework devised by Richard D. Wyckoff in the early 20th century. Focusing on the accumulation pattern within the Wyckoff framework, the research explores the phases of trading range and secondary test, elucidating their significance in understanding market dynamics and identifying potential trading opportunities. By dissecting the intricacies of these phases, the study sheds light on the creation of liquidity through market structure, offering insights into how traders can leverage this knowledge to anticipate price movements and make informed decisions. The effective detection and analysis of Wyckoff patterns necessitate robust computational models capable of processing complex market data, with spatial data best analyzed using Convolutional Neural Networks (CNNs) and temporal data through Long Short-Term Memory (LSTM) models. The creation of training data involves the generation of swing points, representing significant market movements, and filler points, introducing noise and enhancing model generalization. Activation functions, such as the sigmoid function, play a crucial role in determining the output behavior of neural network models. The results of the study demonstrate the remarkable efficacy of deep learning models in detecting Wyckoff patterns within financial data, underscoring their potential for enhancing pattern recognition and analysis in financial markets. In conclusion, the study highlights the transformative potential of AI-driven approaches in financial analysis and trading strategies, with the integration of AI technologies shaping the future of trading and investment practices. ...
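
A toy version of the temporal branch is easy to state: an LSTM reads a fixed window of prices and a sigmoid head scores it as pattern versus noise. In the PyTorch sketch below, synthetic drifted walks stand in for swing points and pure noise for filler points; every detail is an illustrative assumption rather than the study's setup:

```python
# Minimal LSTM pattern scorer: the last hidden state summarizes a price
# window and a sigmoid head maps it to a pattern probability.
import torch
import torch.nn as nn

torch.manual_seed(0)
win = 32
def make_batch(n):                                   # swings = drifted walks, fillers = noise
    drift = torch.randint(0, 2, (n, 1)).float()      # label: 1 = swing-like series
    steps = torch.randn(n, win) * 0.5 + drift * 0.2
    return steps.cumsum(dim=1).unsqueeze(-1), drift  # shapes (n, win, 1), (n, 1)

class PatternLSTM(nn.Module):
    def __init__(self, hidden=16):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
    def forward(self, x):
        _, (h, _) = self.lstm(x)                     # last hidden state of the window
        return torch.sigmoid(self.head(h[-1]))      # sigmoid squashes to a score in (0, 1)

model = PatternLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCELoss()
for step in range(200):
    x, y = make_batch(64)
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
print(f"final training loss: {loss.item():.3f}")
```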

February 23, 2024 · 2 min · Research Team

Sizing the bets in a focused portfolio

Sizing the bets in a focused portfolio ArXiv ID: 2402.15588 “View on arXiv” Authors: Unknown Abstract The paper provides a mathematical model and a tool for the focused investing strategy as advocated by Buffett, Munger, and others from this investment community. The approach presented here assumes that the investor’s role is to think about probabilities of different outcomes for a set of businesses. Based on these assumptions, the tool calculates the optimal allocation of capital for each of the investment candidates. The model is based on a generalized Kelly Criterion, with options to impose constraints that ensure no shorting, limited use of leverage, a maximum limit on the risk of permanent loss of capital, and a maximum individual allocation. The software is applied to an example portfolio from which certain observations about excessive diversification are obtained. In addition, the software is made available for public use. ...
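
The optimization itself can be sketched directly: maximize expected log wealth over investor-specified outcome scenarios subject to the listed constraints. The scenarios, probabilities, and limits below are invented for illustration, and the worst-case wealth floor stands in for the permanent-loss cap:

```python
# Generalized Kelly sizing sketch: maximize expected log wealth over discrete
# scenarios with no shorting, a leverage cap, a per-position cap, and a floor
# on worst-case wealth.  All inputs are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

# investor's probabilistic view: per-scenario gross returns of 3 businesses
returns = np.array([[1.6, 1.3, 1.1],     # good case
                    [1.1, 1.0, 1.05],    # base case
                    [0.5, 0.8, 0.95]])   # bad case
probs = np.array([0.3, 0.5, 0.2])
max_leverage, max_pos, max_loss = 1.0, 0.4, 0.3

def neg_log_growth(w):
    wealth = 1 - w.sum() + returns @ w   # uninvested cash earns nothing
    return -probs @ np.log(wealth)

cons = [{"type": "ineq", "fun": lambda w: max_leverage - w.sum()},
        {"type": "ineq", "fun": lambda w: (1 - w.sum() + returns @ w).min() - (1 - max_loss)}]
res = minimize(neg_log_growth, x0=np.full(3, 0.1), bounds=[(0, max_pos)] * 3,
               constraints=cons, method="SLSQP")
print("optimal allocation:", res.x.round(3), "cash:", round(1 - res.x.sum(), 3))
```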

February 23, 2024 · 2 min · Research Team

CaT-GNN: Enhancing Credit Card Fraud Detection via Causal Temporal Graph Neural Networks

CaT-GNN: Enhancing Credit Card Fraud Detection via Causal Temporal Graph Neural Networks ArXiv ID: 2402.14708 “View on arXiv” Authors: Unknown Abstract Credit card fraud poses a significant threat to the economy. While Graph Neural Network (GNN)-based fraud detection methods perform well, they often overlook the causal effect of a node’s local structure on predictions. This paper introduces a novel method for credit card fraud detection, the Causal Temporal Graph Neural Network (CaT-GNN), which leverages causal invariant learning to reveal inherent correlations within transaction data. By decomposing the problem into discovery and intervention phases, CaT-GNN identifies causal nodes within the transaction graph and applies a causal mixup strategy to enhance the model’s robustness and interpretability. CaT-GNN consists of two key components: Causal-Inspector and Causal-Intervener. The Causal-Inspector utilizes attention weights in the temporal attention mechanism to identify causal and environment nodes without introducing additional parameters. Subsequently, the Causal-Intervener performs a causal mixup enhancement on the identified environment nodes. Evaluated on three datasets, including a private financial dataset and two public datasets, CaT-GNN demonstrates superior performance over existing state-of-the-art methods. Our findings highlight the potential of integrating causal reasoning with graph neural networks to improve fraud detection capabilities in financial transactions. ...
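
Schematically, the two components can be mocked up as follows: the Causal-Inspector ranks nodes by attention mass to split causal from environment nodes, and the Causal-Intervener mixes environment-node features. The attention weights, the top-k threshold, and the mixup rule below are illustrative assumptions, not the paper's implementation:

```python
# Schematic Causal-Inspector / Causal-Intervener with mock attention weights.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, d = 10, 8
feats = rng.normal(size=(n_nodes, d))             # node (transaction) features
attn = rng.dirichlet(np.ones(n_nodes))            # mock attention mass per node

# Causal-Inspector: top-k attention mass -> causal nodes, the rest -> environment
k = 4
causal = np.argsort(attn)[-k:]
env = np.setdiff1d(np.arange(n_nodes), causal)

# Causal-Intervener: mixup across environment-node features
lam = rng.beta(2.0, 2.0, size=(len(env), 1))      # per-node mixing coefficients
partners = rng.choice(env, size=len(env))         # mix each env node with another
feats[env] = lam * feats[env] + (1 - lam) * feats[partners]

print("causal nodes:", sorted(causal.tolist()), "| environment nodes mixed:", len(env))
```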

February 22, 2024 · 2 min · Research Team