
Generative Meta-Learning Robust Quality-Diversity Portfolio

Generative Meta-Learning Robust Quality-Diversity Portfolio ArXiv ID: 2307.07811 “View on arXiv” Authors: Unknown Abstract This paper proposes a novel meta-learning approach to optimize a robust portfolio ensemble. The method uses a deep generative model to generate diverse and high-quality sub-portfolios that are combined to form the ensemble portfolio. The generative model consists of a convolutional layer, a stateful LSTM module, and a dense network. During training, the model takes a randomly sampled batch of Gaussian noise and outputs a population of solutions, which are then evaluated using the objective function of the problem. The weights of the model are updated using a gradient-based optimizer. The convolutional layer transforms the noise into a desired distribution in latent space, while the LSTM module adds dependence between generations. The dense network decodes the population of solutions. The proposed method balances maximizing the performance of the sub-portfolios with minimizing their maximum correlation, resulting in an ensemble portfolio that is robust against systematic shocks. The approach was effective in experiments where stochastic rewards were present. Moreover, the results (Fig. 1) demonstrated that the ensemble portfolio obtained by taking the average of the generated sub-portfolio weights was robust and generalized well. The proposed method can be applied to problems where diversity is desired among co-optimized solutions for a robust ensemble. The source code and the dataset are provided in the supplementary material. ...
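The architecture and loss described above lend themselves to a compact sketch. The following is a minimal, illustrative PyTorch version of the idea, not the authors' implementation: the layer sizes, the noise dimension, the toy return data, and the correlation-penalty weight `lambda_corr` are all assumptions.

```python
# Illustrative sketch of the generative meta-learning idea described above
# (not the authors' implementation): a noise -> conv -> LSTM -> dense generator
# emits a population of sub-portfolios, trained to maximise performance while
# penalising the maximum pairwise correlation among them.
import torch
import torch.nn as nn

n_assets, pop_size, noise_dim, lambda_corr = 16, 32, 8, 1.0   # assumed sizes

class PortfolioGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv1d(1, 4, kernel_size=3, padding=1)      # shapes the noise distribution
        self.lstm = nn.LSTM(4 * noise_dim, 32, batch_first=True)   # adds dependence between generations
        self.dense = nn.Linear(32, n_assets)                       # decodes a sub-portfolio
        self.state = None                                          # "stateful" hidden state

    def forward(self, z):                                          # z: (pop_size, noise_dim)
        h = torch.relu(self.conv(z.unsqueeze(1))).flatten(1)       # (pop_size, 4 * noise_dim)
        out, self.state = self.lstm(h.unsqueeze(1), self.state)
        self.state = tuple(s.detach() for s in self.state)         # truncate backprop between generations
        return torch.softmax(self.dense(out.squeeze(1)), dim=-1)   # long-only sub-portfolio weights

gen = PortfolioGenerator()
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
returns = torch.randn(250, n_assets) * 0.01                        # toy asset-return history

for step in range(100):
    w = gen(torch.randn(pop_size, noise_dim))                      # population of sub-portfolios
    perf = (returns @ w.T).mean(0)                                  # mean return of each sub-portfolio
    corr = torch.corrcoef(w)                                        # pairwise correlation of weight vectors
    max_corr = (corr - torch.eye(pop_size)).max()                   # worst-case off-diagonal correlation
    loss = -perf.mean() + lambda_corr * max_corr                    # quality vs. diversity trade-off
    opt.zero_grad(); loss.backward(); opt.step()

ensemble = gen(torch.randn(pop_size, noise_dim)).mean(0)            # ensemble = average of sub-portfolio weights
```

The last line mirrors the abstract's observation that the ensemble portfolio is obtained by averaging the generated sub-portfolio weights.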

July 15, 2023 · 2 min · Research Team

Supervised Dynamic PCA: Linear Dynamic Forecasting with Many Predictors

Supervised Dynamic PCA: Linear Dynamic Forecasting with Many Predictors ArXiv ID: 2307.07689 “View on arXiv” Authors: Unknown Abstract This paper proposes a novel dynamic forecasting method using a new supervised Principal Component Analysis (PCA) when a large number of predictors are available. The new supervised PCA provides an effective way to bridge the gap between the predictors and the target variable of interest by scaling and combining the predictors and their lagged values, resulting in effective dynamic forecasting. Unlike the traditional diffusion-index approach, which does not learn the relationships between the predictors and the target variable before conducting PCA, we first re-scale each predictor according to its significance in forecasting the target variable in a dynamic fashion, and PCA is then applied to the re-scaled and additive panel, which establishes a connection between the predictability of the PCA factors and the target variable. Furthermore, we propose to use penalized methods such as the LASSO approach to select the significant factors that have superior predictive power over the others. Theoretically, we show that our estimators are consistent and outperform the traditional methods in prediction under some mild conditions. We conduct extensive simulations to verify that the proposed method produces satisfactory forecasting results and outperforms most of the existing methods that use traditional PCA. A real example of predicting U.S. macroeconomic variables using a large number of predictors showcases that our method fares better than most of the existing ones in applications. The proposed method thus provides a comprehensive and effective approach for dynamic forecasting in high-dimensional data analysis. ...
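As a rough illustration of the scale-then-PCA recipe described in the abstract, the sketch below re-scales a lag-augmented panel by marginal predictive slopes, applies PCA, and lets LASSO pick the factors. The dimensions, the one-step predictive regressions used for scaling, and the toy data are assumptions, not the paper's exact estimator.

```python
# Illustrative sketch of the supervised dynamic PCA recipe (assumed details):
# each predictor and its lags are re-scaled by the slope of a predictive
# regression on the target, PCA is applied to the re-scaled panel, and LASSO
# selects the factors with predictive power.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
T, N, n_lags, n_factors = 400, 50, 2, 8                   # toy dimensions (assumed)
X = rng.standard_normal((T, N))                           # panel of predictors
y = X[:, :3].sum(axis=1) + rng.standard_normal(T)         # toy target

# Build the lag-augmented panel [x_t, x_{t-1}, ..., x_{t-n_lags}].
panels = [X[n_lags - l: T - l] for l in range(n_lags + 1)]
Z = np.hstack(panels)                                      # (T - n_lags, N * (n_lags + 1))
y_t = y[n_lags:]

# Supervision step: scale each column by its marginal predictive slope on y.
betas = np.array([np.polyfit(Z[:-1, j], y_t[1:], 1)[0] for j in range(Z.shape[1])])
Z_scaled = Z * betas                                       # re-scaled, "supervised" panel

# PCA on the re-scaled panel, then LASSO picks the factors with predictive power.
factors = PCA(n_components=n_factors).fit_transform(Z_scaled[:-1])
model = LassoCV(cv=5).fit(factors, y_t[1:])
print("selected factors:", np.flatnonzero(model.coef_))
```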

July 15, 2023 · 2 min · Research Team

Machine learning for option pricing: an empirical investigation of network architectures

Machine learning for option pricing: an empirical investigation of network architectures ArXiv ID: 2307.07657 “View on arXiv” Authors: Unknown Abstract We consider the supervised learning problem of learning the price of an option or the implied volatility given appropriate input data (model parameters) and corresponding output data (option prices or implied volatilities). The majority of articles in this literature consider a (plain) feed-forward neural network architecture to connect the neurons used for learning the function mapping inputs to outputs. In this article, motivated by methods in image classification and recent advances in machine learning methods for PDEs, we investigate empirically whether and how the choice of network architecture affects the accuracy and training time of a machine learning algorithm. We find that the generalized highway network architecture achieves the best performance, when considering the mean squared error and the training time as criteria, within the considered parameter budgets for the Black-Scholes and Heston option pricing problems. Considering the transformed implied volatility problem, a simplified DGM variant achieves the lowest error among the tested architectures. We also carry out a capacity-normalised comparison for completeness, where all architectures are evaluated with an equal number of parameters. Finally, for the implied volatility problem, we additionally include experiments using real market data. ...
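For readers unfamiliar with the highway architecture that performs best in this study, here is a minimal PyTorch sketch of a standard highway block stacked into a pricing network. It follows the classic highway design rather than the paper's generalized variant, which may differ, and the input dimension, width, and depth are illustrative assumptions.

```python
# Minimal sketch of a highway-style network for learning a pricing map
# (inputs: model parameters, output: price or implied volatility). Standard
# highway block; the paper's "generalized highway" variant may differ, and
# the sizes below are illustrative assumptions.
import torch
import torch.nn as nn

class HighwayBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.transform = nn.Linear(dim, dim)
        self.gate = nn.Linear(dim, dim)

    def forward(self, x):
        t = torch.sigmoid(self.gate(x))          # how much of the transformed signal to pass
        h = torch.relu(self.transform(x))
        return t * h + (1.0 - t) * x             # gated mix of transform and identity

class PricingNet(nn.Module):
    def __init__(self, n_inputs=5, width=64, depth=4):
        super().__init__()
        self.inp = nn.Linear(n_inputs, width)
        self.blocks = nn.ModuleList([HighwayBlock(width) for _ in range(depth)])
        self.out = nn.Linear(width, 1)

    def forward(self, x):
        x = torch.relu(self.inp(x))
        for block in self.blocks:
            x = block(x)
        return self.out(x)

# Usage: fit prices generated by a reference model (e.g. Black-Scholes) by MSE.
net = PricingNet()
params = torch.rand(1024, 5)                     # toy (S, K, T, r, sigma)-style inputs
prices = torch.rand(1024, 1)                     # placeholder targets
loss = nn.functional.mse_loss(net(params), prices)
```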

July 14, 2023 · 2 min · Research Team

Critical comparisons on deep learning approaches for foreign exchange rate prediction

Critical comparisons on deep learning approaches for foreign exchange rate prediction ArXiv ID: 2307.06600 “View on arXiv” Authors: Unknown Abstract In a real market environment, the price-prediction model needs to be updated in real time with the data received by the system to maintain prediction accuracy. To improve the user experience of the system, the price-prediction function should adopt, as its predictive model, the network that trains fastest and fits the predictions best. We study the fundamental theory of RNN, LSTM, and BP neural networks, analyse their respective characteristics, and discuss their advantages and disadvantages to provide a reference for the selection of price-prediction models. ...

July 13, 2023 · 2 min · Research Team

Exploring the Bitcoin Mesoscale

Exploring the Bitcoin Mesoscale ArXiv ID: 2307.14409 “View on arXiv” Authors: Unknown Abstract The open availability of the entire history of Bitcoin transactions opens up the possibility of studying this system at an unprecedented level of detail. This contribution is devoted to the analysis of the mesoscale structural properties of the Bitcoin User Network (BUN) across its entire history (i.e. from 2009 to 2017). What emerges from our analysis is that the BUN is characterized by a core-periphery structure, a deeper analysis of which reveals a certain degree of bow-tieness (i.e. the presence of a strongly connected component, an IN- and an OUT-component, together with some tendrils attached to the IN-component). Interestingly, the evolution of the BUN's structural organization experiences fluctuations that seem to be correlated with the presence of bubbles, i.e. periods of price surge and decline observed throughout the entire Bitcoin history: our results thus further confirm the interplay between structural quantities and price movements observed in previous analyses. ...
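The bow-tie decomposition mentioned above is straightforward to reproduce on any directed graph; the sketch below uses networkx on a random toy graph standing in for the Bitcoin User Network.

```python
# Sketch of the bow-tie decomposition described above, on a toy directed graph
# (the real analysis runs on the Bitcoin User Network): the largest strongly
# connected component (SCC), the IN-component (nodes that reach the SCC) and
# the OUT-component (nodes reachable from the SCC).
import networkx as nx

G = nx.gnp_random_graph(200, 0.02, directed=True, seed=1)   # placeholder for the BUN

scc = max(nx.strongly_connected_components(G), key=len)
anchor = next(iter(scc))
reachable_from_scc = nx.descendants(G, anchor) | {anchor}
reaching_scc = nx.ancestors(G, anchor) | {anchor}

out_component = reachable_from_scc - scc
in_component = reaching_scc - scc
other = set(G) - scc - in_component - out_component          # tendrils, tubes, disconnected nodes

print(f"SCC: {len(scc)}, IN: {len(in_component)}, OUT: {len(out_component)}, other: {len(other)}")
```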

July 13, 2023 · 2 min · Research Team

Financial Machine Learning

Financial Machine Learning ArXiv ID: ssrn-4501707 “View on arXiv” Authors: Unknown Abstract Click link for full abstract.

Keywords: Unknown

Complexity vs Empirical Score
Math Complexity: 8.0/10
Empirical Rigor: 9.0/10
Quadrant: Holy Grail
Why: The paper utilizes advanced statistical and machine learning theory (e.g., functional analysis, econometrics) combined with extensive empirical backtesting across various asset classes and datasets.

flowchart TD
    A["Research Goal: Evaluate Financial ML"] --> B["Methodology: Cross-Validation"]
    B --> C["Data: Historical Market Prices"]
    C --> D{"Model Training"}
    D --> E["Computational: Overfitting Avoidance"]
    D --> F["Computational: Feature Engineering"]
    E --> G["Outcome: Low Risk Alphas"]
    F --> G

July 13, 2023 · 1 min · Research Team

Real-time Trading System based on Selections of Potentially Profitable, Uncorrelated, and Balanced Stocks by NP-hard Combinatorial Optimization

Real-time Trading System based on Selections of Potentially Profitable, Uncorrelated, and Balanced Stocks by NP-hard Combinatorial Optimization ArXiv ID: 2307.06339 “View on arXiv” Authors: Unknown Abstract Financial portfolio construction problems are often formulated as quadratic and discrete (combinatorial) optimization problems that belong to the nondeterministic polynomial-time (NP)-hard class in computational complexity theory. Ising machines are hardware devices that work on quantum-mechanical/quantum-inspired principles for quickly solving NP-hard optimization problems, which potentially enables making trading decisions based on NP-hard optimization within the time constraints of high-speed trading strategies. Here we report a real-time stock trading system that determines long (buying) / short (selling) positions through NP-hard portfolio optimization for improving the Sharpe ratio, using an embedded Ising machine based on a quantum-inspired algorithm called simulated bifurcation. The Ising machine selects a balanced (delta-neutral) group of stocks from an $N$-stock universe according to an objective function involving maximizing instantaneous expected returns, defined as deviations from volume-weighted average prices, and minimizing the summation of statistical correlation factors (for diversification). It has been demonstrated on the Tokyo Stock Exchange that the trading strategy based on NP-hard portfolio optimization for $N$=128 is executable with an FPGA (field-programmable gate array)-based trading system with a response latency of 164 μs. ...
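To make the optimization problem concrete, the toy sketch below builds an Ising-style objective of the kind described (expected return minus a correlation penalty, plus a delta-neutrality term) and solves it by brute force. The weights lam and gamma, the stand-in correlation matrix, and the tiny universe size are assumptions, and the exhaustive search merely stands in for the simulated-bifurcation Ising machine.

```python
# Toy sketch of the Ising-style objective described above: choose long (+1) /
# short (-1) positions that trade off expected return against pairwise
# correlation, with a delta-neutrality penalty. Brute force stands in for the
# FPGA-embedded Ising machine; lam and gamma are assumed weights.
import itertools
import numpy as np

rng = np.random.default_rng(0)
N = 12                                             # toy universe (the paper uses N = 128)
mu = rng.normal(0.0, 1.0, N)                       # expected returns (e.g. VWAP deviations)
A = rng.normal(size=(N, N))
rho = np.corrcoef(A)                               # stand-in correlation matrix
lam, gamma = 0.5, 1.0                              # diversification / neutrality weights (assumed)

def energy(s):
    s = np.asarray(s)
    ret = mu @ s                                   # instantaneous expected return
    corr = s @ rho @ s - N                         # off-diagonal correlation exposure
    neutral = s.sum() ** 2                         # penalise net long/short imbalance
    return -ret + lam * corr + gamma * neutral     # lower is better

best = min(itertools.product((-1, 1), repeat=N), key=energy)
print("positions:", best, "energy:", round(energy(best), 3))
```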

July 12, 2023 · 2 min · Research Team

Stochastic Delay Differential Games: Financial Modeling and Machine Learning Algorithms

Stochastic Delay Differential Games: Financial Modeling and Machine Learning Algorithms ArXiv ID: 2307.06450 “View on arXiv” Authors: Unknown Abstract In this paper, we propose a numerical methodology for finding the closed-loop Nash equilibrium of stochastic delay differential games through deep learning. These games are prevalent in finance and economics, where multi-agent interaction and delayed effects are often desired features in a model, but are introduced at the expense of increased dimensionality of the problem. This increase in dimensionality is especially significant, as the dimensionality arising from the number of players is coupled with the potentially infinite dimensionality caused by the delay. Our approach involves parameterizing the controls of each player using distinct recurrent neural networks. These recurrent neural network-based controls are then trained using a modified version of Brown’s fictitious play, incorporating deep learning techniques. To evaluate the effectiveness of our methodology, we test it on finance-related problems with known solutions. Furthermore, we also develop new problems and derive their analytical Nash equilibrium solutions, which serve as additional benchmarks for assessing the performance of our proposed deep learning approach. ...
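Structurally, the method pairs one recurrent control per player with a fictitious-play outer loop. The sketch below shows that loop on toy delayed dynamics; the state equation, running costs, and network sizes are placeholders, not the paper's benchmark problems.

```python
# Structural sketch (toy dynamics, not the paper's benchmarks): each player's
# control is a recurrent network fed the recent state history (the delay), and
# a fictitious-play loop optimises one player at a time against the others'
# frozen controls.
import torch
import torch.nn as nn

n_players, delay, T, dt = 2, 5, 20, 0.05            # assumed sizes

controls = [nn.GRU(1, 16, batch_first=True) for _ in range(n_players)]
heads = [nn.Linear(16, 1) for _ in range(n_players)]

def simulate(batch=64):
    """Roll out a toy delayed state process under all players' current controls."""
    path = [torch.zeros(batch, 1) for _ in range(delay + 1)]      # zero pre-history
    costs = [torch.zeros(batch) for _ in range(n_players)]
    for t in range(T):
        hist = torch.stack(path[-delay:], dim=1)                  # delayed window seen by the controls
        actions = [heads[i](controls[i](hist)[0][:, -1]) for i in range(n_players)]
        drift = sum(actions) - path[-1]                           # toy interacting dynamics
        nxt = path[-1] + drift * dt + 0.1 * dt ** 0.5 * torch.randn(batch, 1)
        path.append(nxt)
        for i in range(n_players):
            costs[i] = costs[i] + (actions[i] ** 2 + nxt ** 2).squeeze(-1) * dt   # running cost
    return costs

# Fictitious play: optimise each player in turn against the others' frozen controls.
for rnd in range(3):
    for i in range(n_players):
        opt = torch.optim.Adam(list(controls[i].parameters()) + list(heads[i].parameters()), lr=1e-3)
        for step in range(10):
            opt.zero_grad()
            simulate()[i].mean().backward()
            opt.step()
```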

July 12, 2023 · 2 min · Research Team

A causal interactions indicator between two time series using extreme variations in the first eigenvalue of lagged correlation matrices

A causal interactions indicator between two time series using extreme variations in the first eigenvalue of lagged correlation matrices ArXiv ID: 2307.04953 “View on arXiv” Authors: Unknown Abstract This paper presents a method to identify causal interactions between two time series. The largest eigenvalue of a correlation matrix follows a Tracy-Widom distribution, which can be derived from a Coulomb gas model. This frames causal interactions as the pushing and pulling of the gas, measurable through the variability of the largest eigenvalue’s explanatory power. The hypothesis that this setup applies to time series interactions was validated, with causality inferred from time lags. The standard deviation of the largest eigenvalue’s explanatory power in lagged correlation matrices indicates the probability of a causal interaction between the time series. In contrast to traditional methods that rely on forecasting or window-based parametric controls, this approach offers a novel definition of causality based on dynamic monitoring of tail events. Experimental validation with controlled trials and historical data shows that this method outperforms Granger’s causality test in detecting structural changes in time series. Applications to stock returns and financial market data show the indicator’s predictive capabilities regarding average stock return and realized volatility. Further validation with brokerage data confirms its effectiveness in inferring causal relationships in liquidity flows, highlighting its potential for market and liquidity risk management. ...
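A rough sketch of how such an indicator can be computed: for each lag, form rolling correlation matrices of the two lag-shifted series, record the share of variance explained by the largest eigenvalue, and take its standard deviation across windows. The window length, lag grid, lag convention, and toy data below are assumptions, not the paper's exact procedure.

```python
# Rough sketch of the indicator described above (assumed details: rolling 2x2
# lagged correlation matrices, window length, and lag grid): for each lag,
# track the share of variance explained by the largest eigenvalue and report
# the standard deviation of that share across rolling windows.
import numpy as np

rng = np.random.default_rng(0)
T, window = 2000, 100
x = rng.standard_normal(T)
y = np.roll(x, 3) * 0.7 + 0.3 * rng.standard_normal(T)     # toy: y follows x with a 3-step delay

def explanatory_power_std(a, b, lag, window=window):
    """Std of lambda_max / trace for rolling correlation matrices of (a_t, b_{t+lag})."""
    b_shifted = np.roll(b, -lag)                            # positive lag pairs a_t with b_{t+lag}
    shares = []
    for start in range(window, len(a) - window, window):
        seg = np.vstack([a[start:start + window], b_shifted[start:start + window]])
        lam = np.linalg.eigvalsh(np.corrcoef(seg))
        shares.append(lam[-1] / lam.sum())                  # largest eigenvalue's explanatory power
    return np.std(shares)

for lag in range(-5, 6):
    print(f"lag {lag:+d}: indicator = {explanatory_power_std(x, y, lag):.4f}")
```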

July 11, 2023 · 2 min · Research Team

Portfolio Optimization: A Comparative Study

Portfolio Optimization: A Comparative Study ArXiv ID: 2307.05048 “View on arXiv” Authors: Unknown Abstract Portfolio optimization is an area that has attracted considerable attention from the financial research community. Designing a profitable portfolio is a challenging task involving precise forecasting of future stock returns and risks. This chapter presents a comparative study of three portfolio design approaches: the mean-variance portfolio (MVP), the hierarchical risk parity (HRP)-based portfolio, and the autoencoder-based portfolio. These three approaches to portfolio design are applied to the historical prices of stocks chosen from ten thematic sectors listed on the National Stock Exchange (NSE) of India. The portfolios are designed using the stock price data from January 1, 2018, to December 31, 2021, and their performances are tested on the out-of-sample data from January 1, 2022, to December 31, 2022. The performance of the portfolios is analyzed extensively. It is observed that the MVP portfolio performs best on the out-of-sample data in terms of risk-adjusted returns. However, the autoencoder portfolios outperform their counterparts in terms of annual returns. ...
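Of the three approaches compared, the mean-variance portfolio has a simple closed form; the sketch below computes minimum-variance and maximum-Sharpe weights on toy returns. The data are synthetic stand-ins for the NSE price history, and the HRP and autoencoder portfolios are not reproduced here.

```python
# Small sketch of the first of the three compared approaches, the mean-variance
# portfolio, on toy return data (the study itself uses NSE stock prices,
# 2018-2021 in-sample and 2022 out-of-sample).
import numpy as np

rng = np.random.default_rng(0)
returns = rng.normal(0.0005, 0.02, size=(750, 10))       # toy daily returns, 10 stocks

mu = returns.mean(axis=0)                                 # expected daily returns
Sigma = np.cov(returns, rowvar=False)                     # covariance of returns
inv = np.linalg.inv(Sigma)
ones = np.ones(len(mu))

w_minvar = inv @ ones / (ones @ inv @ ones)               # global minimum-variance weights
w_tangency = inv @ mu / (ones @ inv @ mu)                 # max-Sharpe (tangency) weights, rf = 0

sharpe = (returns @ w_tangency).mean() / (returns @ w_tangency).std() * np.sqrt(252)
print("tangency weights:", np.round(w_tangency, 3), "annualised Sharpe:", round(sharpe, 2))
```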

July 11, 2023 · 2 min · Research Team