
Less is more: AI Decision-Making using Dynamic Deep Neural Networks for Short-Term Stock Index Prediction

ArXiv ID: 2408.11740 · View on arXiv · Authors: Unknown

Abstract: In this paper we introduce a multi-agent deep-learning method which trades in the Futures markets based on the US S&P 500 index. The method (referred to as Model A) is an innovation founded on existing, well-established machine-learning models which sample market prices and associated derivatives in order to decide, on a day-to-day basis, whether the investment should be long, short or closed (zero exposure). We compare the predictions with some conventional machine-learning methods, namely Long Short-Term Memory, Random Forest and Gradient-Boosted Trees. Results are benchmarked against a passive model in which the Futures contracts are held (long) continuously with the same exposure (level of investment). Historical tests are based on daily daytime trading carried out over a period of six calendar years (2018-23). We find that Model A outperforms the passive investment on key performance metrics, placing it within the top-quartile performance of US Large Cap active fund managers. Model A also outperforms the three machine-learning classification comparators over this period. We observe that Model A is extremely efficient (doing less and getting more), with a market exposure of only 41.95% compared to the 100% exposure of the passive investment, and thus provides increased profitability with reduced risk. ...
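
The paper's Model A architecture is not reproduced in this summary. As a rough, hypothetical sketch of the kind of evaluation the abstract describes (a daily long/short/closed decision benchmarked against a continuously held long position, with market exposure measured as the fraction of days invested), one might write:

```python
import numpy as np
import pandas as pd

def backtest_daily_signals(prices: pd.Series, positions: pd.Series) -> dict:
    """Backtest a daily long/short/flat signal against an always-long passive benchmark.

    prices    : daily settlement prices of the index futures contract
    positions : daily decisions in {-1, 0, +1} (short, closed, long)
    """
    daily_returns = prices.pct_change().fillna(0.0)
    # Trade on the previous day's decision to avoid look-ahead bias.
    strategy_returns = positions.shift(1).fillna(0.0) * daily_returns
    passive_returns = daily_returns  # buy-and-hold, 100% market exposure

    return {
        "strategy_total_return": float((1.0 + strategy_returns).prod() - 1.0),
        "passive_total_return": float((1.0 + passive_returns).prod() - 1.0),
        "market_exposure": float((positions != 0).mean()),  # fraction of days in the market
    }

# Illustration only: synthetic prices and random decisions, not the paper's data or model.
dates = pd.bdate_range("2018-01-01", periods=250)
rng = np.random.default_rng(0)
prices = pd.Series(2700 * np.cumprod(1 + rng.normal(0.0003, 0.01, len(dates))), index=dates)
positions = pd.Series(rng.choice([-1, 0, 1], size=len(dates)), index=dates)
print(backtest_daily_signals(prices, positions))
```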

August 21, 2024 · 2 min · Research Team

When AI Meets Finance (StockAgent): Large Language Model-based Stock Trading in Simulated Real-world Environments

ArXiv ID: 2407.18957 · View on arXiv · Authors: Unknown

Abstract: Can AI Agents simulate real-world trading environments to investigate the impact of external factors on stock trading activities (e.g., macroeconomics, policy changes, company fundamentals, and global events)? These factors, which frequently influence trading behaviors, are critical elements in the quest for maximizing investors’ profits. Our work attempts to solve this problem through large language model based agents. We have developed a multi-agent AI system called StockAgent, driven by LLMs, designed to simulate investors’ trading behaviors in response to the real stock market. StockAgent allows users to evaluate the impact of different external factors on investor trading and to analyze trading behavior and profitability effects. Additionally, StockAgent avoids the test-set leakage issue present in existing trading simulation systems based on AI Agents; specifically, it prevents the model from leveraging prior knowledge it may have acquired related to the test data. We evaluate different LLMs under the StockAgent framework in a stock trading environment that closely resembles real-world conditions. The experimental results demonstrate the impact of key external factors on stock market trading, including trading behavior and stock price fluctuation rules. This research explores agents’ free trading gaps in the context of no prior knowledge related to market data. The patterns identified through StockAgent simulations provide valuable insights for LLM-based investment advice and stock recommendation. The code is available at https://github.com/MingyuJ666/Stockagent. ...
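
The sketch below is not StockAgent's actual API; the class, function, and prompt wording are assumptions made for illustration. It shows the general shape of an LLM-driven trading decision in a simulated market, with `llm` standing in for whichever model the simulation is configured to use:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class MarketState:
    ticker: str
    price: float
    events: List[str]  # external factors: macro news, policy changes, company fundamentals

def trading_decision(llm: Callable[[str], str], state: MarketState,
                     cash: float, shares: int) -> str:
    """Ask an LLM-backed agent for a BUY / SELL / HOLD decision in a simulated market.

    `llm` is any callable mapping a prompt string to a completion string.
    """
    prompt = (
        f"You are a trader in a simulated stock market.\n"
        f"Holdings: {shares} shares of {state.ticker}, cash {cash:.2f}.\n"
        f"Current price: {state.price:.2f}.\n"
        f"Recent events: {'; '.join(state.events) or 'none'}.\n"
        "Answer with exactly one word: BUY, SELL, or HOLD."
    )
    reply = llm(prompt).strip().upper()
    return reply if reply in {"BUY", "SELL", "HOLD"} else "HOLD"  # fall back to a safe default
```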

July 15, 2024 · 2 min · Research Team

Learning Not to Spoof

ArXiv ID: 2306.06087 · View on arXiv · Authors: Unknown

Abstract: As intelligent trading agents based on reinforcement learning (RL) gain prevalence, it becomes more important to ensure that RL agents obey laws, regulations, and human behavioral expectations. There is substantial literature concerning the avoidance of obvious catastrophes like crashing a helicopter or bankrupting a trading account, but little around the avoidance of subtle non-normative behavior for which there are examples, but no programmable definition. Such behavior may violate legal or regulatory, rather than physical or monetary, constraints. In this article, I consider a series of experiments in which an intelligent stock trading agent maximizes profit but may also inadvertently learn to spoof the market in which it participates. I first inject a hand-coded spoofing agent into a multi-agent market simulation and learn to recognize spoofing activity sequences. Then I replace the hand-coded spoofing trader with a simple profit-maximizing RL agent and observe that it independently discovers spoofing as the optimal strategy. Finally, I introduce a method to incorporate the recognizer as a normative guide, shaping the agent’s perceived rewards and altering its selected actions. The agent remains profitable while avoiding spoofing behaviors that would result in even higher profit. After presenting the empirical results, I conclude with some recommendations. The method should generalize to the reduction of any unwanted behavior for which a recognizer can be learned. ...
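
The article's own recognizer and agent are not shown here; the following minimal sketch, with hypothetical names and a hand-picked penalty weight, illustrates the general idea of using a learned recognizer to shape the reward the RL agent perceives:

```python
from typing import Callable, Sequence

def shaped_reward(base_reward: float,
                  recent_orders: Sequence,
                  spoof_recognizer: Callable[[Sequence], float],
                  penalty_weight: float = 10.0) -> float:
    """Combine the trading agent's raw reward with a penalty from a learned spoofing recognizer.

    base_reward      : profit-and-loss reward from the market environment
    recent_orders    : the agent's recent order/cancel sequence
    spoof_recognizer : callable returning the probability (0..1) that the sequence is spoofing
    penalty_weight   : how strongly non-normative behavior is discouraged
    """
    p_spoof = spoof_recognizer(recent_orders)
    return base_reward - penalty_weight * p_spoof  # spoof-like sequences become unattractive
```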

June 9, 2023 · 2 min · Research Team