Explainable Automated Machine Learning for Credit Decisions: Enhancing Human Artificial Intelligence Collaboration in Financial Engineering

ArXiv ID: 2402.03806 · Authors: Unknown

Abstract: This paper explores the integration of Explainable Automated Machine Learning (AutoML) in the realm of financial engineering, specifically focusing on its application in credit decision-making. The rapid evolution of Artificial Intelligence (AI) in finance has necessitated a balance between sophisticated algorithmic decision-making and the need for transparency in these systems. The focus is on how AutoML can streamline the development of robust machine learning models for credit scoring, while Explainable AI (XAI) methods, particularly SHapley Additive exPlanations (SHAP), provide insights into the models' decision-making processes. This study demonstrates how the combination of AutoML and XAI not only enhances the efficiency and accuracy of credit decisions but also fosters trust and collaboration between humans and AI systems. The findings underscore the potential of explainable AutoML in improving the transparency and accountability of AI-driven financial decisions, aligning with regulatory requirements and ethical considerations. ...
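
The paper couples AutoML-built credit models with SHAP attributions. As a minimal sketch of that pairing, the snippet below trains a gradient-boosted scorer on synthetic applicant data and explains individual decisions with the shap library; the feature names (income, debt_ratio, age) and the model choice are illustrative assumptions, not the paper's setup.

```python
# Minimal sketch: explaining a credit-scoring model with SHAP.
# The features and model choice are illustrative assumptions, not the paper's setup.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "income": rng.lognormal(10, 0.5, 1000),
    "debt_ratio": rng.uniform(0, 1, 1000),
    "age": rng.integers(21, 70, 1000),
})
# Synthetic default label loosely tied to the debt ratio.
y = (X["debt_ratio"] + rng.normal(0, 0.2, 1000) > 0.7).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer yields per-applicant, per-feature contributions to the score.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:5])
print(shap_values)               # contribution of each feature for 5 applicants
print(explainer.expected_value)  # baseline score before feature effects
```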

February 6, 2024 · 2 min · Research Team

Exploring the Impact: How Decentralized Exchange Designs Shape Traders' Behavior on Perpetual Future Contracts

ArXiv ID: 2402.03953 · Authors: Unknown

Abstract: In this paper, we analyze traders' behavior within both centralized exchanges (CEXs) and decentralized exchanges (DEXs), focusing on the volatility of Bitcoin prices and the trading activity of investors engaged in perpetual future contracts. We categorize the architecture of perpetual future exchanges into three distinct models, each exhibiting unique patterns of trader behavior in relation to trading volume, open interest, liquidation, and leverage. Our detailed examination of DEXs, especially those utilizing the Virtual Automated Market Making (VAMM) Model, uncovers a differential impact of open interest on long versus short positions. In exchanges that operate under the Oracle Pricing Model, we find that traders primarily act as price takers, with their trading actions reflecting direct responses to price movements of the underlying assets. Furthermore, our research highlights a significant propensity among less informed traders to overreact to positive news, as demonstrated by an increase in long positions. This study contributes to the understanding of market dynamics in digital asset exchanges, offering behavioral-finance insights for future innovation in decentralized finance. ...
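
The VAMM design referenced in the abstract is typically a constant-product curve with purely virtual reserves, so opening long positions pushes the mark price up without real liquidity changing hands. The toy sketch below illustrates that mechanic under assumed reserve sizes and trade amounts; it is not any exchange's actual implementation.

```python
# Toy constant-product virtual AMM (vAMM), as used by some perpetual DEXs.
# Reserve sizes and trades are illustrative assumptions, not data from the paper.

class VirtualAMM:
    def __init__(self, base_reserve: float, quote_reserve: float):
        self.base = base_reserve    # virtual BTC
        self.quote = quote_reserve  # virtual USD
        self.k = base_reserve * quote_reserve  # invariant

    def mark_price(self) -> float:
        return self.quote / self.base

    def open_long(self, quote_in: float) -> float:
        """Trader adds virtual quote, receives virtual base; the mark price rises."""
        new_quote = self.quote + quote_in
        new_base = self.k / new_quote
        base_out = self.base - new_base
        self.quote, self.base = new_quote, new_base
        return base_out  # position size in base units

amm = VirtualAMM(base_reserve=100.0, quote_reserve=5_000_000.0)
print(amm.mark_price())        # 50_000.0 before the trade
size = amm.open_long(250_000)  # a long position worth 250k virtual USD
print(size, amm.mark_price())  # price moves up along the curve
```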

February 6, 2024 · 2 min · Research Team

Learning to Generate Explainable Stock Predictions using Self-Reflective Large Language Models

ArXiv ID: 2402.03659 · Authors: Unknown

Abstract: Explaining stock predictions is generally a difficult task for traditional non-generative deep learning models, where explanations are limited to visualizing the attention weights on important texts. Today, Large Language Models (LLMs) present a solution to this problem, given their known capabilities to generate human-readable explanations for their decision-making process. However, the task of stock prediction remains challenging for LLMs, as it requires the ability to weigh the varying impacts of chaotic social texts on stock prices. The problem becomes progressively harder with the introduction of the explanation component, which requires LLMs to explain verbally why certain factors are more important than others. On the other hand, fine-tuning LLMs for such a task would require expert-annotated explanations for every stock movement in the training set, which is expensive and impractical to scale. To tackle these issues, we propose the Summarize-Explain-Predict (SEP) framework, which utilizes a self-reflective agent and Proximal Policy Optimization (PPO) to let an LLM teach itself how to generate explainable stock predictions in a fully autonomous manner. The reflective agent learns how to explain past stock movements through self-reasoning, while the PPO trainer trains the model to generate the most likely explanations from input texts. The training samples for the PPO trainer are the responses generated during the reflective process, which eliminates the need for human annotators. Using our SEP framework, we fine-tune an LLM that outperforms both traditional deep-learning and LLM methods in prediction accuracy and Matthews correlation coefficient on the stock classification task. To demonstrate the generalization capability of our framework, we further test it on the portfolio construction task and show its effectiveness through various portfolio metrics. ...
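
The core loop the abstract describes, a reflective agent whose own explanations become the PPO training samples, can be sketched structurally as below. The call_llm and reward_from_price_move functions and the sample format are placeholders I have assumed; the paper's actual prompts, reward design, and PPO setup are not reproduced here.

```python
# Structural sketch of a self-reflective explain-then-predict loop.
# call_llm and the reward rule are stand-in placeholders, not the paper's API.
from typing import List, Tuple

def call_llm(prompt: str) -> str:
    """Placeholder for an LLM call; returns a canned explanation/prediction."""
    return "Prediction: UP. Explanation: positive earnings chatter dominates."

def reward_from_price_move(prediction: str, actual_move: str) -> float:
    """Reward correct directional calls; used later as the PPO training signal."""
    return 1.0 if actual_move in prediction else -1.0

def reflect(texts: List[str], actual_move: str, max_tries: int = 3) -> Tuple[str, float]:
    """Let the model retry its explanation until the prediction matches reality."""
    prompt = "Summarize and predict the stock move:\n" + "\n".join(texts)
    response, reward = "", -1.0
    for _ in range(max_tries):
        response = call_llm(prompt)
        reward = reward_from_price_move(response, actual_move)
        if reward > 0:
            return response, reward        # kept as a positive PPO sample
        prompt += "\nPrevious attempt was wrong; reflect and revise:\n" + response
    return response, reward                # kept as a negative sample

sample, reward = reflect(["$ACME beats earnings", "supply-chain worries"], "UP")
print(reward, sample)  # (explanation, reward) pairs would feed a PPO trainer
```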

February 6, 2024 · 3 min · Research Team

QuantAgent: Seeking Holy Grail in Trading by Self-Improving Large Language Model

ArXiv ID: 2402.03755 · Authors: Unknown

Abstract: Autonomous agents based on Large Language Models (LLMs) that devise plans and tackle real-world challenges have gained prominence. However, tailoring these agents for specialized domains like quantitative investment remains a formidable task. The core challenge involves efficiently building and integrating a domain-specific knowledge base for the agent's learning process. This paper introduces a principled framework to address this challenge, comprising a two-layer loop. In the inner loop, the agent refines its responses by drawing from its knowledge base, while in the outer loop, these responses are tested in real-world scenarios to automatically enhance the knowledge base with new insights. We demonstrate that our approach enables the agent to progressively approximate optimal behavior with provable efficiency. Furthermore, we instantiate this framework through an autonomous agent for mining trading signals named QuantAgent. Empirical results showcase QuantAgent's capability in uncovering viable financial signals and enhancing the accuracy of financial forecasts. ...
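
The two-layer loop can be sketched as an inner refinement loop over a knowledge base and an outer loop that tests the result and writes the outcome back. Everything below (retrieve, propose_signal, judge, backtest) is an illustrative stub, not QuantAgent's actual components.

```python
# Sketch of the two-layer loop: inner refinement against a knowledge base,
# outer real-world test that enriches the knowledge base. All functions are stubs.
knowledge_base = ["momentum signals decay quickly on small caps"]

def retrieve(task: str, kb: list) -> list:
    return kb[-5:]                       # placeholder retrieval

def propose_signal(task: str, context: list) -> str:
    return f"signal for '{task}' given {len(context)} notes"  # placeholder LLM step

def judge(signal: str) -> float:
    return 0.8                           # placeholder self-evaluation score

def backtest(signal: str) -> float:
    return 0.6                           # placeholder real-world test

def quant_agent(task: str, inner_steps: int = 3) -> str:
    signal = ""
    for _ in range(inner_steps):         # inner loop: refine using the KB
        signal = propose_signal(task, retrieve(task, knowledge_base))
        if judge(signal) > 0.75:
            break
    score = backtest(signal)             # outer loop: real-world feedback
    knowledge_base.append(f"{signal} -> backtest {score:.2f}")
    return signal

print(quant_agent("mine a reversal signal on CSI300"))
print(knowledge_base[-1])
```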

February 6, 2024 · 2 min · Research Team

TAC Method for Fitting Exponential Autoregressive Models and Others: Applications in Economy and Finance

ArXiv ID: 2402.04138 · Authors: Unknown

Abstract: This paper has two purposes: to study a problem of approximation with exponential functions and to show its relevance for economic science. We present results that completely solve the problem of best approximation by means of exponential functions, and we are able to determine what kind of data is suitable to be fitted. Data are approximated using TAC (implemented in the R package nlstac), a numerical algorithm designed by the authors for fitting data with exponential patterns without an initial guess. We further verify the robustness of this algorithm by successfully applying it to two very different areas of economics: demand curves and nonlinear time series. This demonstrates TAC's utility and suggests how widely the algorithm could be applied. ...
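
Fitting exponential patterns without an initial guess usually rests on a separable idea: scan the nonlinear rate on a grid and solve the linear coefficients by least squares. The sketch below illustrates that general idea on synthetic data; it is not nlstac's exact algorithm, and the data and grid are assumptions.

```python
# Rough sketch of no-initial-guess exponential fitting via a grid over the
# decay rate plus linear least squares for the coefficients. This mirrors the
# general separable idea behind such methods, not nlstac's exact algorithm.
import numpy as np

def fit_single_exponential(t, y, rate_grid):
    """Fit y ~ a * exp(b * t) + c by scanning b and solving (a, c) linearly."""
    best = None
    for b in rate_grid:
        design = np.column_stack([np.exp(b * t), np.ones_like(t)])
        coef, *_ = np.linalg.lstsq(design, y, rcond=None)
        rss = np.sum((design @ coef - y) ** 2)
        if best is None or rss < best[0]:
            best = (rss, b, coef)
    rss, b, (a, c) = best
    return a, b, c

# Synthetic demand-curve-like data (illustrative, not from the paper).
t = np.linspace(0, 5, 100)
y = 3.0 * np.exp(-1.2 * t) + 0.5 + np.random.default_rng(1).normal(0, 0.02, t.size)
a, b, c = fit_single_exponential(t, y, rate_grid=np.linspace(-5, -0.1, 200))
print(a, b, c)  # should be close to (3.0, -1.2, 0.5)
```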

February 6, 2024 · 2 min · Research Team

DiffsFormer: A Diffusion Transformer on Stock Factor Augmentation

ArXiv ID: 2402.06656 · Authors: Unknown

Abstract: Machine learning models have demonstrated remarkable efficacy and efficiency in a wide range of stock forecasting tasks. However, the inherent challenges of data scarcity, including low signal-to-noise ratio (SNR) and data homogeneity, pose significant obstacles to accurate forecasting. To address this issue, we propose a novel approach that utilizes artificial intelligence-generated samples (AIGS) to enhance the training procedure. In our work, we introduce the Diffusion Model to generate stock factors with a Transformer architecture (DiffsFormer). DiffsFormer is initially trained on a large-scale source domain, incorporating conditional guidance so as to capture the global joint distribution. When presented with a specific downstream task, we employ DiffsFormer to augment the training procedure by editing existing samples. This editing step allows us to control the strength of the editing process, determining the extent to which the generated data deviates from the target domain. To evaluate the effectiveness of DiffsFormer-augmented training, we conduct experiments on the CSI300 and CSI800 datasets, employing eight commonly used machine learning models. The proposed method achieves relative improvements of 7.2% and 27.8% in annualized return ratio for the respective datasets. Furthermore, we perform extensive experiments to gain insights into the functionality of DiffsFormer and its constituent components, elucidating how they address the challenges of data scarcity and enhance overall model performance. Our research demonstrates the efficacy of leveraging AIGS and the DiffsFormer architecture to mitigate data scarcity in stock forecasting tasks. ...
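
The "editing" step the abstract describes can be pictured as partially noising a real factor vector along a diffusion schedule and then denoising it back, with the noising depth acting as the editing strength. The sketch below shows that shape with a stub denoiser; the schedule, dimensions, and denoiser are assumptions, not the trained DiffsFormer network.

```python
# Sketch of the "edit existing samples" idea: partially noise a real factor
# vector and denoise it back, where the noising depth controls how far the
# synthetic sample drifts from the original. The denoiser is a stub.
import numpy as np

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)          # standard DDPM-style noise schedule
alphas_bar = np.cumprod(1.0 - betas)

def denoiser(x_t, t):
    """Stub for a trained diffusion model's estimate of the clean sample."""
    return x_t * 0.9                            # placeholder

def edit_sample(x0, strength=0.3, steps=50):
    """Noise x0 up to a fraction of the schedule, then denoise step by step."""
    t_start = int(strength * (len(betas) - 1))  # editing strength: how deep to noise
    a_bar = alphas_bar[t_start]
    x_t = np.sqrt(a_bar) * x0 + np.sqrt(1 - a_bar) * rng.normal(size=x0.shape)
    for t in np.linspace(t_start, 0, steps, dtype=int):
        x_t = denoiser(x_t, t)                  # crude reverse pass via the stub
    return x_t

real_factors = rng.normal(size=(4, 16))         # 4 stocks x 16 factors (made up)
augmented = edit_sample(real_factors, strength=0.3)
print(augmented.shape)
```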

February 5, 2024 · 2 min · Research Team

Neural option pricing for rough Bergomi model

ArXiv ID: 2402.02714 · Authors: Unknown

Abstract: The rough Bergomi (rBergomi) model can accurately describe the historical and implied volatilities, and has gained much attention in the past few years. However, there are many hidden unknown parameters or even functions in the model. In this work, we investigate the potential of learning the forward variance curve in the rBergomi model using a neural SDE. To construct an efficient solver for the neural SDE, we propose a novel numerical scheme for simulating the volatility process using the modified summation of exponentials. Using the Wasserstein 1-distance to define the loss function, we show that the learned forward variance curve is capable of calibrating the price process of the underlying asset and the prices of European-style options simultaneously. Several numerical tests are provided to demonstrate its performance. ...
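
For two equal-weight empirical distributions on the real line, the Wasserstein 1-distance used as the loss here reduces to the mean absolute gap between sorted samples. A minimal sketch, with made-up sample data standing in for simulated and market prices:

```python
# Sketch of a Wasserstein-1 loss between two equal-sized 1-D sample sets.
# For equal-weight empirical distributions, W1 is the mean absolute gap
# between sorted samples. The data below are made up for illustration.
import numpy as np

def wasserstein1(samples_a: np.ndarray, samples_b: np.ndarray) -> float:
    a = np.sort(samples_a)
    b = np.sort(samples_b)
    return float(np.mean(np.abs(a - b)))

rng = np.random.default_rng(0)
model_prices = rng.lognormal(mean=0.0, sigma=0.30, size=10_000)   # stand-in for neural-SDE output
market_prices = rng.lognormal(mean=0.05, sigma=0.28, size=10_000)
print(wasserstein1(model_prices, market_prices))  # loss to minimize when calibrating
```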

February 5, 2024 · 2 min · Research Team

AI in ESG for Financial Institutions: An Industrial Survey

ArXiv ID: 2403.05541 · Authors: Unknown

Abstract: The burgeoning integration of Artificial Intelligence (AI) into Environmental, Social, and Governance (ESG) initiatives within the financial sector represents a paradigm shift towards more sustainable and equitable financial practices. This paper surveys the industrial landscape to delineate the necessity and impact of AI in bolstering ESG frameworks. With the advent of stringent regulatory requirements and heightened stakeholder awareness, financial institutions (FIs) are increasingly compelled to adopt ESG criteria. AI emerges as a pivotal tool in navigating the complex interplay of financial activities and sustainability goals. Our survey categorizes AI applications across the three main pillars of ESG, illustrating how AI enhances analytical capabilities, risk assessment, customer engagement, reporting accuracy, and more. Further, we delve into the critical considerations surrounding the use of data and the development of models, underscoring the importance of data quality, privacy, and model robustness. The paper also addresses the imperative of responsible and sustainable AI, emphasizing the ethical dimensions of AI deployment in ESG-related banking processes. Conclusively, our findings suggest that while AI offers transformative potential for ESG in banking, it also poses significant challenges that necessitate careful consideration. The final part of the paper synthesizes the survey's insights, proposing a forward-looking stance on the adoption of AI in ESG practices. We conclude with recommendations and a reference architecture for future research and development, advocating for a balanced approach that leverages AI's strengths while mitigating its risks within the ESG domain. ...

February 3, 2024 · 2 min · Research Team

Learning the Market: Sentiment-Based Ensemble Trading Agents

ArXiv ID: 2402.01441 · Authors: Unknown

Abstract: We propose and study the integration of sentiment analysis and deep reinforcement learning ensemble algorithms for stock trading by evaluating strategies capable of dynamically altering their active agent in response to the concurrent market environment. In particular, we design a simple yet effective method for extracting financial sentiment and combine this with improvements on existing trading agents, resulting in a strategy that effectively considers both qualitative market factors and quantitative stock data. We show that our approach results in a strategy that is profitable, robust, and risk-minimal, outperforming the traditional ensemble strategy as well as single-agent algorithms and market benchmarks. Our findings suggest that the conventional practice of switching and re-evaluating ensemble agents every fixed number of months is sub-optimal, and that a dynamic sentiment-based framework unlocks substantial additional performance. Furthermore, because we have designed our algorithm with simplicity and efficiency in mind, we expect the transition of our method from historical evaluation to real-time trading with live data to be relatively straightforward. ...
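
The dynamic switching idea, choosing the active agent from current sentiment rather than on a fixed rotation schedule, can be sketched as below. The keyword-based sentiment scorer, the thresholds, and the mapping of agents to regimes are illustrative assumptions, not the paper's method.

```python
# Sketch of sentiment-driven agent switching: pick the active trading agent for
# the next window from current sentiment instead of a fixed rotation schedule.
# The scorer, thresholds, and agent names are illustrative assumptions.
from statistics import mean

def market_sentiment(headlines: list[str]) -> float:
    """Placeholder scorer; a real system would use an NLP sentiment model."""
    positive, negative = {"beat", "surge", "growth"}, {"miss", "fear", "crash"}
    scores = [(sum(w in h.lower() for w in positive)
               - sum(w in h.lower() for w in negative)) for h in headlines]
    return mean(scores) if scores else 0.0

def select_agent(sentiment: float) -> str:
    if sentiment > 0.2:
        return "A2C"      # agent assumed to suit bullish sentiment
    if sentiment < -0.2:
        return "PPO"      # agent assumed to suit bearish sentiment
    return "DDPG"         # neutral regime

headlines = ["Tech stocks surge on earnings beat", "Growth outlook improves"]
print(select_agent(market_sentiment(headlines)))   # -> "A2C"
```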

February 2, 2024 · 2 min · Research Team

Sparse spanning portfolios and under-diversification with second-order stochastic dominance

ArXiv ID: 2402.01951 · Authors: Unknown

Abstract: We develop and implement methods for determining whether relaxing sparsity constraints on portfolios improves the investment opportunity set for risk-averse investors. We formulate a new estimation procedure for sparse second-order stochastic spanning based on a greedy algorithm and Linear Programming. We show that the sparse solution is recovered optimally in the asymptotic limit, whether or not spanning holds. From large equity datasets, we estimate the expected utility loss due to possible under-diversification, and find that there is no benefit from expanding a sparse opportunity set beyond 45 assets. The optimal sparse portfolio invests in 10 industry sectors and cuts tail risk when compared to a sparse mean-variance portfolio. On a rolling-window basis, the number of assets shrinks to 25 in crisis periods, while standard factor models cannot explain the performance of the sparse portfolios. ...
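
The greedy side of the estimation procedure can be pictured as forward selection: repeatedly add the asset that most improves a criterion until the sparsity budget is reached. In the sketch below the criterion is a stub Sharpe-like score on synthetic returns, standing in for the paper's LP-based second-order stochastic spanning test.

```python
# Sketch of greedy forward selection for a sparse portfolio. The scoring
# function is a stub, not the paper's LP-based spanning criterion.
import numpy as np

rng = np.random.default_rng(0)
returns = rng.normal(0.0005, 0.01, size=(500, 60))   # 500 days x 60 assets (synthetic)

def score(subset: list[int]) -> float:
    """Stub criterion: Sharpe-like ratio of the equal-weight subset portfolio."""
    port = returns[:, subset].mean(axis=1)
    return port.mean() / port.std()

def greedy_select(n_assets: int, budget: int) -> list[int]:
    chosen: list[int] = []
    for _ in range(budget):
        candidates = [j for j in range(n_assets) if j not in chosen]
        best = max(candidates, key=lambda j: score(chosen + [j]))
        chosen.append(best)
    return chosen

print(greedy_select(n_assets=60, budget=10))   # a 10-asset sparse portfolio
```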

February 2, 2024 · 2 min · Research Team