
Optimal retirement in presence of stochastic labor income: a free boundary approach in an incomplete market

ArXiv ID: 2407.19190 · View on arXiv · Authors: Unknown · Abstract: In this work, we address the optimal retirement problem in the presence of a stochastic wage, formulated as a free boundary problem. Specifically, we explore an incomplete market setting where the wage cannot be perfectly hedged through investments in the risk-free and risky assets that characterize the financial market. ...

July 27, 2024 · 1 min · Research Team

Contrastive Learning of Asset Embeddings from Financial Time Series

ArXiv ID: 2407.18645 · View on arXiv · Authors: Unknown · Abstract: Representation learning has emerged as a powerful paradigm for extracting valuable latent features from complex, high-dimensional data. In financial domains, learning informative representations for assets can be used for tasks like sector classification and risk management. However, the complex and stochastic nature of financial markets poses unique challenges. We propose a novel contrastive learning framework to generate asset embeddings from financial time series data. Our approach leverages the similarity of asset returns over many subwindows to generate informative positive and negative samples, using a statistical sampling strategy based on hypothesis testing to address the noisy nature of financial data. We explore various contrastive loss functions that capture the relationships between assets in different ways to learn a discriminative representation space. Experiments on real-world datasets demonstrate the effectiveness of the learned asset embeddings on benchmark industry classification and portfolio optimization tasks. In each case, our novel approaches significantly outperform existing baselines, highlighting the potential for contrastive learning to capture meaningful and actionable relationships in financial data. ...
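The sampling idea described above — testing whether a pair's subwindow return correlation is statistically significant before treating it as a positive pair — can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the `t_crit` threshold and the non-overlapping windows are assumptions:

```python
import numpy as np

def significant_positive_pairs(returns, window=60, t_crit=3.46):
    """Flag asset pairs whose return correlation over non-overlapping
    subwindows is significantly positive.  A pair qualifies when the
    t-statistic of its Pearson correlation exceeds `t_crit`
    (3.46 is roughly the two-sided 0.1% level for ~60 observations).
    `returns`: array of shape (n_assets, n_steps)."""
    n_assets, n_steps = returns.shape
    positives = set()
    for start in range(0, n_steps - window + 1, window):
        corr = np.corrcoef(returns[:, start:start + window])
        for i in range(n_assets):
            for j in range(i + 1, n_assets):
                r = float(corr[i, j])
                # t-statistic for H0: rho = 0 with `window` observations;
                # t > t_crit > 0 also forces the correlation to be positive
                t = r * np.sqrt((window - 2) / max(1e-12, 1.0 - r * r))
                if t > t_crit:
                    positives.add((i, j))
    return positives
```

Negative samples could then be drawn from pairs that fail the test, which is one way to keep noisy, spurious correlations out of the training signal.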

July 26, 2024 · 2 min · Research Team

CVA Sensitivities, Hedging and Risk

ArXiv ID: 2407.18583 · View on arXiv · Authors: Unknown · Abstract: We present a unified framework for computing CVA sensitivities, hedging the CVA, and assessing CVA risk, using probabilistic machine learning meant as refined regression tools on simulated data, validatable by low-cost companion Monte Carlo procedures. Various notions of sensitivities are introduced and benchmarked numerically. We identify the sensitivities representing the best practical trade-offs in downstream tasks including CVA hedging and risk assessment. ...
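As a point of reference for what a "low-cost companion Monte Carlo" baseline might look like, here is a minimal bump-and-revalue CVA sensitivity on a toy exposure. The flat hazard rate, the single-forward portfolio, and all parameter names are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def cva_forward(spot, strike, vol, r, hazard, T, n_steps=50, n_paths=20000,
                rec=0.4, seed=0):
    """Toy unilateral CVA of a long forward under GBM exposure and a flat
    hazard rate:  CVA = (1-R) * sum_i DF(t_i) * dPD_i * EPE(t_i).
    The fixed seed gives common random numbers across revaluations."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    times = np.linspace(dt, T, n_steps)
    z = rng.standard_normal((n_paths, n_steps))
    log_s = np.log(spot) + np.cumsum(
        (r - 0.5 * vol ** 2) * dt + vol * np.sqrt(dt) * z, axis=1)
    fwd_val = np.exp(log_s) - strike * np.exp(-r * (T - times))
    epe = np.maximum(fwd_val, 0.0).mean(axis=0)     # expected positive exposure
    disc = np.exp(-r * times)                       # risk-free discount factors
    surv = np.exp(-hazard * np.concatenate(([0.0], times)))
    dpd = surv[:-1] - surv[1:]                      # default prob. per bucket
    return (1.0 - rec) * float(np.sum(disc * dpd * epe))

def cva_spot_delta(spot, rel_bump=0.01, **kw):
    """Central bump-and-revalue sensitivity of the CVA to the spot."""
    up = cva_forward(spot * (1 + rel_bump), **kw)
    dn = cva_forward(spot * (1 - rel_bump), **kw)
    return (up - dn) / (2.0 * spot * rel_bump)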

July 26, 2024 · 1 min · Research Team

Large Language Model Agent in Financial Trading: A Survey

ArXiv ID: 2408.06361 · View on arXiv · Authors: Unknown · Abstract: Trading is a highly competitive task that requires a combination of strategy, knowledge, and psychological fortitude. With the recent success of large language models (LLMs), it is appealing to apply the emerging intelligence of LLM agents in this competitive arena and to understand whether they can outperform professional traders. In this survey, we provide a comprehensive review of the current research on using LLMs as agents in financial trading. We summarize the common architectures used in these agents, the data inputs, and the performance of LLM trading agents in backtesting, as well as the challenges presented in this research. This survey aims to provide insights into the current state of LLM-based financial trading agents and outline future research directions in this field. ...

July 26, 2024 · 2 min · Research Team

Multilevel Monte Carlo in Sample Average Approximation: Convergence, Complexity and Application

ArXiv ID: 2407.18504 · View on arXiv · Authors: Unknown · Abstract: In this paper, we examine the Sample Average Approximation (SAA) procedure within a framework where the Monte Carlo estimator of the expectation is biased. We also introduce Multilevel Monte Carlo (MLMC) in the SAA setup to enhance the computational efficiency of solving optimization problems. In this context, we conduct a thorough analysis, exploiting Cramér’s large deviation theory, to establish uniform convergence, quantify the convergence rate, and determine the sample complexity for both standard Monte Carlo and MLMC paradigms. Additionally, we perform a root-mean-squared error analysis utilizing tools from empirical process theory to derive sample complexity without relying on the finite moment condition typically required for uniform convergence results. Finally, we validate our findings and demonstrate the advantages of the MLMC estimator through numerical examples, estimating Conditional Value-at-Risk (CVaR) in the Geometric Brownian Motion and nested expectation framework. ...
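A minimal sketch of the MLMC telescoping estimator this line of work builds on, applied to a discounted call payoff under geometric Brownian motion. The fixed sample-size schedule is a naive assumption, not the paper's allocation:

```python
import numpy as np

def mlmc_gbm_call(s0, k, r, sigma, T, L=4, n0=40000, seed=0):
    """Multilevel Monte Carlo estimate of E[e^{-rT} max(S_T - K, 0)] under
    geometric Brownian motion, using Euler schemes with 2^l steps at level l.
    Levels are coupled by summing pairs of fine Brownian increments, and the
    telescoping sum  E[P_0] + sum_{l>=1} E[P_l - P_{l-1}]  is estimated with
    a geometrically decreasing (fixed, non-optimal) sample-size schedule."""
    rng = np.random.default_rng(seed)
    disc = np.exp(-r * T)
    est = 0.0
    for level in range(L + 1):
        n = max(1000, n0 // 2 ** level)       # samples at this level
        nf = 2 ** level                       # fine-grid steps
        dt = T / nf
        dw = np.sqrt(dt) * rng.standard_normal((n, nf))
        s_f = np.full(n, float(s0))
        for i in range(nf):                   # Euler on the fine grid
            s_f = s_f * (1.0 + r * dt + sigma * dw[:, i])
        pf = disc * np.maximum(s_f - k, 0.0)
        if level == 0:
            est += pf.mean()                  # base level: plain MC of P_0
        else:
            dwc = dw[:, 0::2] + dw[:, 1::2]   # coarse increments from fine ones
            s_c = np.full(n, float(s0))
            for i in range(nf // 2):          # Euler on the coarse grid
                s_c = s_c * (1.0 + r * (2.0 * dt) + sigma * dwc[:, i])
            pc = disc * np.maximum(s_c - k, 0.0)
            est += (pf - pc).mean()           # correction term E[P_l - P_{l-1}]
    return float(est)
```

The point of the telescoping sum is that the correction terms have small variance (the coupled paths share the same Brownian motion), so most samples can be spent on the cheap coarse level.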

July 26, 2024 · 2 min · Research Team

TCGPN: Temporal-Correlation Graph Pre-trained Network for Stock Forecasting

ArXiv ID: 2407.18519 · View on arXiv · Authors: Unknown · Abstract: Recently, incorporating both temporal features and the correlation across time series has become an effective approach in time series prediction. Spatio-Temporal Graph Neural Networks (STGNNs) perform well on many temporal-correlation forecasting problems. However, when applied to tasks lacking periodicity, such as stock data prediction, the effectiveness and robustness of STGNNs are found to be unsatisfactory. STGNNs are also limited by memory constraints, so they cannot handle problems with a large number of nodes. In this paper, we propose a novel approach called the Temporal-Correlation Graph Pre-trained Network (TCGPN) to address these limitations. TCGPN uses a temporal-correlation fusion encoder to obtain a mixed representation, together with a pre-training method built on carefully designed temporal and correlation pre-training tasks. The entire structure is independent of the number and order of nodes, so better results can be obtained through various data augmentations, and memory consumption during training can be significantly reduced through multiple sampling. Experiments are conducted on the real stock market datasets CSI300 and CSI500, which exhibit minimal periodicity. We fine-tune a simple MLP in downstream tasks and achieve state-of-the-art results, validating the capability to capture more robust temporal-correlation patterns. ...

July 26, 2024 · 2 min · Research Team

Financial Statement Analysis with Large Language Models

ArXiv ID: 2407.17866 · View on arXiv · Authors: Unknown · Abstract: We investigate whether large language models (LLMs) can successfully perform financial statement analysis in a way similar to a professional human analyst. We provide standardized and anonymous financial statements to GPT-4 and instruct the model to analyze them to determine the direction of firms’ future earnings. Even without narrative or industry-specific information, the LLM outperforms financial analysts in its ability to predict earnings changes directionally. The LLM exhibits a relative advantage over human analysts in situations where the analysts tend to struggle. Furthermore, we find that the prediction accuracy of the LLM is on par with a narrowly trained state-of-the-art ML model. LLM prediction does not stem from its training memory. Instead, we find that the LLM generates useful narrative insights about a company’s future performance. Lastly, our trading strategies based on GPT’s predictions yield higher Sharpe ratios and alphas than strategies based on other models. Our results suggest that LLMs may take a central role in analysis and decision-making. ...

July 25, 2024 · 2 min · Research Team

Fine-Tuning Large Language Models for Stock Return Prediction Using Newsflow

ArXiv ID: 2407.18103 · View on arXiv · Authors: Unknown · Abstract: Large language models (LLMs) and their fine-tuning techniques have demonstrated superior performance in various language understanding and generation tasks. This paper explores fine-tuning LLMs for stock return forecasting with financial newsflow. In quantitative investing, return forecasting is fundamental for subsequent tasks like stock picking, portfolio optimization, etc. We formulate the model to include text representation and forecasting modules. We propose to compare the encoder-only and decoder-only LLMs, considering that they generate text representations in distinct ways. The impact of these different representations on forecasting performance remains an open question. Meanwhile, we compare two simple methods of integrating LLMs’ token-level representations into the forecasting module. The experiments on real news and investment universes reveal that: (1) aggregated representations from LLMs’ token-level embeddings generally produce return predictions that enhance the performance of long-only and long-short portfolios; (2) in the relatively large investment universe, the decoder-LLM-based prediction model leads to stronger portfolios, whereas in small universes there are no consistent winners. Among the three LLMs studied (DeBERTa, Mistral, Llama), Mistral performs more robustly across different universes; (3) return predictions derived from LLMs’ text representations are a strong signal for portfolio construction, outperforming conventional sentiment scores. ...
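One plausible shape for "integrating LLMs' token-level representations into the forecasting module" is masked mean pooling followed by a linear head. The sketch below is an assumption about the setup, not the paper's actual module; the ridge penalty and all names are illustrative:

```python
import numpy as np

def pool_tokens(token_emb, mask):
    """Masked mean pooling: collapse (n_tokens, d) token-level embeddings
    from a language model into one news-level vector, ignoring padding
    tokens flagged by `mask` (1 = real token, 0 = padding)."""
    w = mask[:, None].astype(float)
    return (token_emb * w).sum(axis=0) / max(1.0, w.sum())

def fit_return_head(pooled, fwd_returns, ridge=1e-3):
    """Ridge-regression forecasting head on pooled text features — a
    stand-in for a learned forecasting module.  `pooled` is (n_samples, d),
    `fwd_returns` the realized forward returns to regress on."""
    X = np.asarray(pooled)
    y = np.asarray(fwd_returns)
    A = X.T @ X + ridge * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ y)
```

A decoder-only model would typically contribute its last-layer hidden states here, an encoder-only model its contextual embeddings; the pooling step is the same either way.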

July 25, 2024 · 2 min · Research Team

Estimation of bid-ask spreads in the presence of serial dependence

ArXiv ID: 2407.17401 · View on arXiv · Authors: Unknown · Abstract: Starting from a basic model in which the dynamics of transaction prices follow a geometric Brownian motion disrupted by a microstructure white noise, corresponding to the random alternation of bids and asks, we propose moment-based estimators along with their statistical properties. We then make the model more realistic by considering serial dependence: we assume a geometric fractional Brownian motion for the price, then an Ornstein-Uhlenbeck process for the microstructure noise. In both cases of serial dependence, we again propose consistent and asymptotically normal estimators. All our estimators are compared on simulated data with existing approaches, such as the Roll, Corwin-Schultz, Abdi-Ranaldo, and Ardia-Guidotti-Kroencke estimators. ...
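Among the benchmarks listed, the Roll estimator has a compact closed form: under the basic bid-ask bounce model, consecutive price changes have autocovariance -s^2/4, so the spread is recovered as s = 2*sqrt(-cov). A minimal implementation (the NaN fallback for positive autocovariance is a common convention, not part of the original formula):

```python
import numpy as np

def roll_spread(prices):
    """Roll (1984) spread estimator: with an efficient price following a
    random walk and trades bouncing between bid and ask, successive price
    changes have autocovariance -s^2/4, hence s = 2*sqrt(-cov).
    Returns NaN when the sample autocovariance is positive."""
    dp = np.diff(np.asarray(prices, dtype=float))
    cov = np.cov(dp[:-1], dp[1:])[0, 1]        # lag-1 autocovariance
    return 2.0 * np.sqrt(-cov) if cov < 0 else float('nan')
```

The serial-dependence settings of the paper are precisely the cases where this lag-1 moment condition breaks down, which motivates its alternative moment-based estimators.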

July 24, 2024 · 2 min · Research Team

High order approximations and simulation schemes for the log-Heston process

ArXiv ID: 2407.17151 · View on arXiv · Authors: Unknown · Abstract: We present weak approximation schemes of any order for the Heston model, obtained by using the method developed by Alfonsi and Bally (2021). This method consists of combining approximation schemes calculated on different random grids to increase the order of convergence. We apply this method with either the Ninomiya-Victoir scheme (2008) or a second-order scheme that samples the volatility component exactly, and we show rigorously that we can then achieve any order of convergence. We give numerical illustrations on financial examples that validate the theoretical order of convergence. We also present promising numerical results for the multifactor/rough Heston model and hint at applications to other models, including the Bates model and the double Heston model. ...
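For contrast with the high-order random-grid schemes of Alfonsi and Bally, a standard first-order baseline is the full-truncation Euler scheme on (log S, V); the parameterization below is illustrative:

```python
import numpy as np

def heston_log_euler(s0, v0, kappa, theta, xi, rho, r, T,
                     n_steps, n_paths, seed=0):
    """Full-truncation Euler scheme on (log S, V) for the Heston model —
    a first-order weak-order baseline, not the paper's high-order schemes.
    dV = kappa*(theta - V)dt + xi*sqrt(V)dW2,  corr(dW1, dW2) = rho.
    Returns the terminal prices S_T."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.full(n_paths, np.log(s0))
    v = np.full(n_paths, float(v0))
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1.0 - rho ** 2) * rng.standard_normal(n_paths)
        vp = np.maximum(v, 0.0)                 # full truncation of variance
        x += (r - 0.5 * vp) * dt + np.sqrt(vp * dt) * z1
        v += kappa * (theta - vp) * dt + xi * np.sqrt(vp * dt) * z2
    return np.exp(x)
```

The truncation `max(V, 0)` only controls the sign of the variance, not the discretization bias; that bias is exactly what the second-order scheme with exact volatility sampling, and then the random-grid combination, are designed to remove.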

July 24, 2024 · 2 min · Research Team