
Generalized Exponentiated Gradient Algorithms and Their Application to On-Line Portfolio Selection

Generalized Exponentiated Gradient Algorithms and Their Application to On-Line Portfolio Selection ArXiv ID: 2406.00655 “View on arXiv” Authors: Unknown Abstract This paper introduces a novel family of generalized exponentiated gradient (EG) updates derived from an Alpha-Beta divergence regularization function. Collectively referred to as EGAB, the proposed updates belong to the category of multiplicative gradient algorithms for positive data and demonstrate considerable flexibility by controlling iteration behavior and performance through three hyperparameters: $α$, $β$, and the learning rate $η$. To enforce a unit $l_1$ norm constraint for nonnegative weight vectors within generalized EGAB algorithms, we develop two slightly distinct approaches. One method exploits scale-invariant loss functions, while the other relies on gradient projections onto the feasible domain. As an illustration of their applicability, we evaluate the proposed updates in addressing the online portfolio selection problem (OLPS) using gradient-based methods. Here, they not only offer a unified perspective on the search directions of various OLPS algorithms (including the standard exponentiated gradient and diverse mean-reversion strategies), but also facilitate smooth interpolation and extension of these updates due to the flexibility in hyperparameter selection. Simulation results confirm that the adaptability of these generalized gradient updates can effectively enhance the performance for some portfolios, particularly in scenarios involving transaction costs. ...
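
For readers unfamiliar with the baseline these updates generalize, here is a minimal sketch of the classical exponentiated gradient step for on-line portfolio selection, assuming the usual per-period log-wealth loss; the Alpha-Beta-divergence (EGAB) form and its $α$, $β$, $η$ hyperparameters are specific to the paper and not reproduced here.

```python
import numpy as np

def eg_update(w, x, eta=0.05):
    """Classical exponentiated-gradient step for on-line portfolio selection.

    w   : current portfolio weights (nonnegative, summing to 1)
    x   : price-relative vector for the period (today's price / yesterday's price)
    eta : learning rate
    """
    # Gradient of the per-period log-wealth loss -log(w . x)
    grad = -x / np.dot(w, x)
    # Multiplicative update followed by renormalisation, which enforces
    # the unit l1-norm constraint on the weight vector
    w_new = w * np.exp(-eta * grad)
    return w_new / w_new.sum()

# Toy usage: three assets, one trading period
w = np.ones(3) / 3
x = np.array([1.01, 0.98, 1.03])   # price relatives
w = eg_update(w, x)
print(w)                           # weights tilt toward the best-performing asset
```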

June 2, 2024 · 2 min · Research Team

Portfolio Optimization with Robust Covariance and Conditional Value-at-Risk Constraints

Portfolio Optimization with Robust Covariance and Conditional Value-at-Risk Constraints ArXiv ID: 2406.00610 “View on arXiv” Authors: Unknown Abstract The measure of portfolio risk is an important input of the Markowitz framework. In this study, we explored various methods to obtain robust covariance estimators that are less susceptible to financial data noise. We evaluated the performance of a large-cap portfolio using various forms of the Ledoit shrinkage covariance and the robust Gerber covariance matrix over the period 2012 to 2022. Out-of-sample performance indicates that robust covariance estimators can outperform the market capitalization-weighted benchmark portfolio, particularly during bull markets. The Gerber covariance with Mean Absolute Deviation (MAD) emerged as the top performer. However, robust estimators do not manage tail risk well under extreme market conditions, for example during the Covid-19 period. When we aim to control for tail risk, we should add a constraint on Conditional Value-at-Risk (CVaR) to make more conservative decisions on risk exposure. Additionally, we incorporated the unsupervised clustering algorithm K-means into the optimization algorithm (i.e., Nested Clustering Optimization, NCO). This not only helps mitigate the numerical instability of the optimization, but also contributes to lower drawdowns. ...
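
As a rough illustration of the pipeline described above, the sketch below shrinks a sample covariance with scikit-learn's LedoitWolf estimator, forms closed-form minimum-variance weights, and reports an empirical CVaR of the resulting portfolio. The Gerber covariance, the CVaR constraint, and the NCO clustering step from the paper are not implemented here, and the data are synthetic.

```python
import numpy as np
from sklearn.covariance import LedoitWolf

def min_variance_weights(returns):
    """Unconstrained minimum-variance weights from a shrunk covariance.

    returns : (T, N) array of asset returns; weights may be negative.
    """
    sigma = LedoitWolf().fit(returns).covariance_   # Ledoit-Wolf shrinkage estimator
    inv = np.linalg.inv(sigma)
    ones = np.ones(sigma.shape[0])
    return inv @ ones / (ones @ inv @ ones)         # closed-form min-variance solution

def historical_cvar(returns, w, alpha=0.95):
    """Empirical CVaR (expected shortfall) of the portfolio at level alpha."""
    pnl = returns @ w
    var = np.quantile(pnl, 1 - alpha)               # loss threshold (VaR)
    return -pnl[pnl <= var].mean()

rng = np.random.default_rng(0)
R = rng.normal(0.0005, 0.01, size=(500, 5))         # synthetic daily returns
w = min_variance_weights(R)
print(w.round(3), historical_cvar(R, w))
```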

June 2, 2024 · 2 min · Research Team

Estimation of tail risk measures in finance: Approaches to extreme value mixture modeling

Estimation of tail risk measures in finance: Approaches to extreme value mixture modeling ArXiv ID: 2407.05933 “View on arXiv” Authors: Unknown Abstract This thesis evaluates most of the extreme value mixture models and methods that have appeared in the literature and implements them in the context of finance and insurance. The paper also reviews and studies extreme value theory, time series, volatility clustering, and risk measurement methods in detail. Comparing the performance of extreme value mixture models and methods on different simulated distributions shows that the method based on kernel density estimation does not deliver absolutely superior, or even close to the best, performance, especially for the estimation of the extreme upper or lower tail of the distribution. Preprocessing time series data with a generalized autoregressive conditional heteroskedasticity (GARCH) model and applying extreme value mixture models to the extracted GARCH residuals can improve the goodness of fit and the estimation of the tail distribution. ...
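
A minimal sketch of the GARCH-then-EVT recipe mentioned in the abstract, assuming the arch and scipy packages: fit a GARCH(1,1), standardize the residuals, and fit a generalized Pareto distribution to exceedances over a high threshold. The threshold choice, toy data, and the absence of any mixture model are illustrative simplifications, not the thesis's exact setup.

```python
import numpy as np
from scipy.stats import genpareto
from arch import arch_model   # assumed dependency for GARCH fitting

def tail_fit_after_garch(returns, threshold_q=0.95):
    """Fit GARCH(1,1), extract standardized residuals, fit a GPD to the upper tail.

    returns : 1-D array of percentage returns
    """
    res = arch_model(returns, vol="Garch", p=1, q=1).fit(disp="off")
    z = res.resid / res.conditional_volatility            # standardized residuals
    u = np.quantile(z, threshold_q)                       # tail threshold
    exceedances = z[z > u] - u
    xi, loc, scale = genpareto.fit(exceedances, floc=0)   # shape, location, scale
    return xi, scale, u

rng = np.random.default_rng(1)
r = rng.standard_t(df=5, size=2000)                       # heavy-tailed toy returns (in percent)
xi, scale, u = tail_fit_after_garch(r)
print(f"GPD shape={xi:.3f}, scale={scale:.3f}, threshold={u:.3f}")
```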

June 1, 2024 · 2 min · Research Team

Machine Learning Methods for Pricing Financial Derivatives

Machine Learning Methods for Pricing Financial Derivatives ArXiv ID: 2406.00459 “View on arXiv” Authors: Unknown Abstract Stochastic differential equation (SDE) models are the foundation for pricing and hedging financial derivatives. The drift and volatility functions in SDE models are typically chosen to be algebraic functions with a small number (fewer than five) of parameters that can be calibrated to market data. A more flexible approach is to use neural networks to model the drift and volatility functions, which provides more degrees of freedom to match observed market data. Training such models requires optimizing over an SDE, which is computationally challenging. For European options, we develop a fast stochastic gradient descent (SGD) algorithm for training the neural network-SDE model. Our SGD algorithm uses two independent SDE paths to obtain an unbiased estimate of the direction of steepest descent. For American options, we optimize over the corresponding Kolmogorov partial differential equation (PDE). The neural network appears as coefficient functions in the PDE. Models are trained on large datasets (many contracts), requiring either large simulations (many Monte Carlo samples for the stock price paths) or large numbers of PDEs (a PDE must be solved for each contract). Numerical results are presented for real market data, including S&P 500 index options, S&P 100 index options, and single-stock American options. The neural-network-based SDE models are compared against the Black-Scholes model, Dupire’s local volatility model, and the Heston model. Models are evaluated in terms of how accurately they price out-of-sample financial derivatives, which is a core task in derivative pricing at financial institutions. ...
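
The two-path idea can be made concrete with a small PyTorch sketch: one independent batch of simulated paths estimates the pricing residual, a second independent batch carries the gradient, so their product is an unbiased estimate of the gradient of the squared pricing error (a single shared batch would bias the product of the two factors). The volatility network, the single-call setup, and the Euler discretization below are hypothetical illustrations, not the authors' architecture.

```python
import torch
import torch.nn as nn

# Tiny network for an assumed state-dependent volatility sigma_theta(t, S)
vol_net = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 1), nn.Softplus())
opt = torch.optim.Adam(vol_net.parameters(), lr=1e-3)

def simulate_terminal(s0, r, T, n_paths, n_steps=50):
    """Euler-Maruyama simulation of dS = r S dt + sigma_theta(t, S) S dW."""
    dt = T / n_steps
    s = torch.full((n_paths, 1), s0)
    for i in range(n_steps):
        t = torch.full((n_paths, 1), i * dt)
        sigma = vol_net(torch.cat([t, s], dim=1))
        dw = torch.randn(n_paths, 1) * dt ** 0.5
        s = s + r * s * dt + sigma * s * dw
    return s.squeeze(1)

def two_path_step(s0, K, r, T, market_price, n_paths=4096):
    """One SGD step on (E[payoff] - market_price)^2 using two independent path sets."""
    disc = torch.exp(torch.tensor(-r * T))
    with torch.no_grad():                                   # path set A: residual estimate only
        payoff_a = disc * torch.relu(simulate_terminal(s0, r, T, n_paths) - K)
    residual = payoff_a.mean() - market_price
    payoff_b = disc * torch.relu(simulate_terminal(s0, r, T, n_paths) - K)   # path set B: carries the gradient
    surrogate = 2.0 * residual * payoff_b.mean()            # gradient matches d/dtheta of the squared error
    opt.zero_grad()
    surrogate.backward()
    opt.step()
    return float(residual)

print(two_path_step(s0=100.0, K=100.0, r=0.02, T=1.0, market_price=8.0))
```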

June 1, 2024 · 2 min · Research Team

Modelling financial volume curves with hierarchical Poisson processes

Modelling financial volume curves with hierarchical Poisson processes ArXiv ID: 2406.19402 “View on arXiv” Authors: Unknown Abstract Modeling the trading volume curves of financial instruments throughout the day is of key interest in financial trading applications. Predictions of these so-called volume profiles guide trade execution strategies, for example, a common strategy is to trade a desired quantity across many orders in line with the expected volume curve throughout the day so as not to impact the price of the instrument. The volume curves (for each day) are naturally grouped by stock and can be further gathered into higher-level groupings, such as by industry. In order to model such admixtures of volume curves, we introduce a hierarchical Poisson process model for the intensity functions of admixtures of inhomogeneous Poisson processes, which represent the trading times of the stock throughout the day. The model is based on the hierarchical Dirichlet process, and an efficient Markov Chain Monte Carlo (MCMC) algorithm is derived following the slice sampling framework for Bayesian nonparametric mixture models. We demonstrate the method on datasets of different stocks from the Trade and Quote repository maintained by Wharton Research Data Services, including the most liquid stock on the NASDAQ stock exchange, Apple, demonstrating the scalability of the approach. ...
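
The basic building block here, an inhomogeneous Poisson process for intraday trade times, can be simulated by Lewis-Shedler thinning; a minimal sketch follows with a hypothetical U-shaped intraday intensity. The hierarchical Dirichlet process and the slice-sampling MCMC of the paper are well beyond this toy.

```python
import numpy as np

def simulate_inhomogeneous_poisson(intensity, t_max, lam_max, rng):
    """Lewis-Shedler thinning: event times on [0, t_max] from intensity(t) <= lam_max."""
    t, events = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)          # candidate from homogeneous PP(lam_max)
        if t > t_max:
            return np.array(events)
        if rng.uniform() < intensity(t) / lam_max:   # accept with prob intensity(t)/lam_max
            events.append(t)

# Hypothetical U-shaped intensity (trades per minute) over a 390-minute session
intensity = lambda t: 5.0 + 8.0 * ((t - 195.0) / 195.0) ** 2
rng = np.random.default_rng(0)
trade_times = simulate_inhomogeneous_poisson(intensity, t_max=390.0, lam_max=13.0, rng=rng)
print(len(trade_times), "simulated trades; binning these counts gives one day's volume curve")
```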

June 1, 2024 · 2 min · Research Team

A First Look at Financial Data Analysis Using ChatGPT-4o

A First Look at Financial Data Analysis Using ChatGPT-4o ArXiv ID: ssrn-4849578 “View on arXiv” Authors: Unknown Abstract OpenAI’s new flagship model, ChatGPT-4o, released on May 13, 2024, offers enhanced natural language understanding and more coherent responses. In this paper, we ...

Keywords: Large Language Models (LLMs), Natural Language Processing, Generative AI, AI Evaluation, Model Performance, Technology/AI

Complexity vs Empirical Score: Math Complexity 4.0/10 · Empirical Rigor 6.5/10 · Quadrant: Street Traders. Why: The paper involves implementing and comparing specific financial models such as ARMA-GARCH, indicating moderate-to-high implementation complexity, but the core mathematics is largely descriptive and comparative rather than novel. Empirical rigor is high due to the use of real datasets (CRSP, Fama-French) and direct backtesting comparisons against Stata.

Paper flow: Research Goal (Evaluate ChatGPT-4o for Financial Data Analysis) → Methodology (Zero-shot vs. Chain-of-Thought) → Input (Financial Statements & Market Data) → Process (Text Generation & Sentiment Analysis) → Output (Financial Predictions & Explanations) → Key Findings (High Accuracy in NLP Tasks) → Outcome (Strong Potential but Limited Numerical Reasoning)

May 31, 2024 · 1 min · Research Team

Loss-Versus-Fair: Efficiency of Dutch Auctions on Blockchains

Loss-Versus-Fair: Efficiency of Dutch Auctions on Blockchains ArXiv ID: 2406.00113 “View on arXiv” Authors: Unknown Abstract Milionis et al. (2023) studied the rate at which automated market makers leak value to arbitrageurs when block times are discrete and follow a Poisson process, and where the risky asset price follows a geometric Brownian motion. We extend their model to analyze another popular mechanism in decentralized finance for on-chain trading: Dutch auctions. We compute the expected losses that a seller incurs to arbitrageurs and the expected time-to-fill for Dutch auctions as a function of starting price, volatility, decay rate, and average interblock time. We also extend the analysis to gradual Dutch auctions, a variation on Dutch auctions for selling tokens over time at a continuous rate. We use these models to explore the tradeoff between speed of execution and quality of execution, which could help inform practitioners in setting parameters for starting price and decay rate on Dutch auctions, or help platform designers determine performance parameters such as block times. ...
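
The paper derives closed-form expressions; as a back-of-the-envelope companion, here is a Monte Carlo sketch under simplified assumptions (exponential auction price decay, driftless GBM fair price, Poisson block arrivals) estimating the seller's loss-versus-fair and the time-to-fill. The parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def dutch_auction_mc(p0, s0, sigma, decay, mean_block_time, n_sims=2000, rng=None):
    """Monte Carlo estimate of seller loss-versus-fair and time-to-fill.

    The auction asks p0 * exp(-decay * t); the fair price follows a driftless GBM;
    blocks arrive at Poisson times, and the first block at which the ask is at or
    below the fair price is filled by an arbitrageur.
    """
    rng = rng or np.random.default_rng(0)
    losses, fill_times = [], []
    for _ in range(n_sims):
        t, s = 0.0, s0
        while True:
            dt = rng.exponential(mean_block_time)
            t += dt
            s *= np.exp(-0.5 * sigma**2 * dt + sigma * np.sqrt(dt) * rng.standard_normal())
            ask = p0 * np.exp(-decay * t)
            if ask <= s:                       # arbitrageur fills at this block
                losses.append(s - ask)
                fill_times.append(t)
                break
    return np.mean(losses), np.mean(fill_times)

# Illustrative parameters with time in days: 10% starting premium, 12-second blocks
loss, ttf = dutch_auction_mc(p0=1.10, s0=1.00, sigma=0.05, decay=2.0, mean_block_time=12 / 86400)
print(f"expected loss-versus-fair: {loss:.5f}, expected time-to-fill: {ttf * 24:.2f} hours")
```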

May 31, 2024 · 2 min · Research Team

Transforming Japan Real Estate

Transforming Japan Real Estate ArXiv ID: 2405.20715 “View on arXiv” Authors: Unknown Abstract The Japanese real estate market, valued at over 35 trillion USD, offers significant investment opportunities. Accurate rent and price forecasting could provide a substantial competitive edge. This paper explores using alternative data variables to predict real estate performance in 1100 Japanese municipalities. A comprehensive house price index was created, covering all municipalities from 2005 to the present, using a dataset of over 5 million transactions. This core dataset was enriched with economic factors spanning decades, allowing for price trajectory predictions. The findings show that alternative data variables can indeed forecast real estate performance effectively. Investment signals based on these variables yielded notable returns with low volatility. For example, the net migration ratio delivered an annualized return of 4.6% with a Sharpe ratio of 1.5. Taxable income growth and the new dwellings ratio also performed well, with annualized returns of 4.1% (Sharpe ratio of 1.3) and 3.3% (Sharpe ratio of 0.9), respectively. When combined with transformer models to predict risk-adjusted returns 4 years in advance, the model achieved an R-squared score of 0.28, explaining nearly 30% of the variation in future municipality prices. These results highlight the potential of alternative data variables in real estate investment and underscore the need for further research to identify more predictive factors. Nonetheless, the evidence suggests that such data can provide valuable insights into real estate price drivers, enabling more informed investment decisions in the Japanese market. ...
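
A hedged sketch of how signal results of this kind are typically computed: rank municipalities by an alternative-data variable each year, hold the top quantile, and measure the annualized return and Sharpe ratio of that portfolio. The data below are synthetic stand-ins, not the paper's 5-million-transaction dataset, and the quantile rule is an assumption rather than the authors' exact signal construction.

```python
import numpy as np
import pandas as pd

def quantile_signal_backtest(signal, fwd_return, top_q=0.2):
    """Annualized return and Sharpe of a long-only top-quantile signal portfolio.

    signal     : DataFrame (year x municipality) of the alternative-data variable
    fwd_return : DataFrame (year x municipality) of next-year price-index returns
    """
    yearly = []
    for year in signal.index:
        ranks = signal.loc[year].rank(pct=True)
        longs = ranks[ranks >= 1 - top_q].index           # top quantile by signal
        yearly.append(fwd_return.loc[year, longs].mean()) # equal-weighted basket return
    yearly = pd.Series(yearly, index=signal.index)
    return yearly.mean(), yearly.mean() / yearly.std()    # annual return, Sharpe

# Synthetic example: 1100 municipalities, a weakly predictive signal
rng = np.random.default_rng(0)
years, n = range(2005, 2020), 1100
sig = pd.DataFrame(rng.normal(size=(len(years), n)), index=years)
ret = pd.DataFrame(0.01 + 0.02 * sig.values + rng.normal(0, 0.05, size=(len(years), n)), index=years)
print(quantile_signal_backtest(sig, ret))
```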

May 31, 2024 · 2 min · Research Team

Low-dimensional approximations of the conditional law of Volterra processes: a non-positive curvature approach

Low-dimensional approximations of the conditional law of Volterra processes: a non-positive curvature approach ArXiv ID: 2405.20094 “View on arXiv” Authors: Unknown Abstract Predicting the conditional evolution of Volterra processes with stochastic volatility is a crucial challenge in mathematical finance. While deep neural network models offer promise in approximating the conditional law of such processes, their effectiveness is hindered by the curse of dimensionality caused by the infinite dimensionality and non-smooth nature of these problems. To address this, we propose a two-step solution. First, we develop a stable dimension-reduction technique, projecting the law of a reasonably broad class of Volterra processes onto a low-dimensional statistical manifold of non-positive sectional curvature. Next, we introduce a sequential deep learning model tailored to the manifold’s geometry, which we show can approximate the projected conditional law of the Volterra process. Our model leverages an auxiliary hypernetwork to dynamically update its internal parameters, allowing it to encode non-stationary dynamics of the Volterra process, and it can be interpreted as a gating mechanism in a mixture-of-experts model where each expert is specialized at a specific point in time. Our hypernetwork further allows us to achieve approximation rates that would seemingly only be possible with very large networks. ...

May 30, 2024 · 2 min · Research Team

A Tick-by-Tick Solution for Concentrated Liquidity Provisioning

A Tick-by-Tick Solution for Concentrated Liquidity Provisioning ArXiv ID: 2405.18728 “View on arXiv” Authors: Unknown Abstract Automated market makers with concentrated liquidity capabilities are programmable at the tick level. The maximization of earned fees, plus depreciated reserves, is a convex optimization problem whose vector solution gives the best provision of liquidity at each tick under a given set of parameter estimates for swap volume and price volatility. Surprisingly, early results show that concentrating liquidity around the current price is usually not the best strategy. ...
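
In the same spirit, a toy convex program allocating liquidity across ticks can be written with cvxpy, assuming a concave fee-share term (the provider's fraction of each tick's fee volume) minus a linear depreciation cost. The volume, pool-liquidity, and depreciation inputs below are hypothetical, and the formulation is a simplification rather than the paper's exact objective.

```python
import numpy as np
import cvxpy as cp

# Hypothetical per-tick inputs: expected fee volume, existing pool liquidity,
# and expected depreciation of reserves parked at each tick.
n_ticks = 21
volume = np.exp(-0.5 * ((np.arange(n_ticks) - 10) / 4.0) ** 2)    # fees concentrated near the current price
pool_liq = np.full(n_ticks, 5.0)
depreciation = 0.02 + 0.002 * np.abs(np.arange(n_ticks) - 10)

x = cp.Variable(n_ticks, nonneg=True)                             # liquidity we provide at each tick
fee_share = 1 - cp.multiply(pool_liq, cp.inv_pos(x + pool_liq))   # our fraction of each tick's fees (concave in x)
objective = cp.Maximize(volume @ fee_share - depreciation @ x)
problem = cp.Problem(objective, [cp.sum(x) <= 10.0])
problem.solve()
print(np.round(x.value, 3))                                       # optimal per-tick allocation for the toy inputs
```

Whether the optimum actually peaks at the current price depends on the interplay of the concave fee share, the competing pool liquidity, and the depreciation term, which is the tradeoff the abstract's result turns on.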

May 29, 2024 · 1 min · Research Team