TradingGPT: Multi-Agent System with Layered Memory and Distinct Characters for Enhanced Financial Trading Performance
ArXiv ID: 2309.03736
Authors: Unknown
Abstract
Large Language Models (LLMs), exemplified by the recent evolution of the Generative Pre-trained Transformer (GPT) series, have displayed significant prowess across various domains, such as aiding in healthcare diagnostics and curating analytical business reports. The efficacy of GPTs lies in their ability to decode human instructions, achieved by processing historical inputs as a whole within their memory system. Yet, the memory processing of GPTs does not precisely emulate the hierarchical nature of human memory, which can leave LLMs struggling to prioritize immediate and critical tasks. To bridge this gap, we introduce an LLM multi-agent framework endowed with layered memories. We assert that this framework is well-suited for stock and fund trading, where extracting highly relevant insights from hierarchical financial data is imperative to inform trading decisions. Within this framework, each agent organizes memory into three distinct layers, each governed by a custom decay mechanism, aligning more closely with human cognitive processes. Agents can also engage in inter-agent debate. In financial trading contexts, LLMs serve as the decision core for trading agents, leveraging their layered memory system to integrate multi-source historical actions and market insights. This equips them to navigate financial changes, formulate strategies, and debate with peer agents about investment decisions. Another standout feature of our approach is equipping agents with individualized trading traits, enhancing memory diversity and decision robustness. These designs boost the system's responsiveness to historical trades and real-time market signals, supporting more accurate automated trading.
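The abstract's three-layer memory with per-layer decay can be sketched as follows. This is a minimal illustration, not the paper's implementation: the layer names, half-life values, and the exponential-decay scoring rule are assumptions chosen to show how a custom decay mechanism lets recent, important events outrank stale ones.

```python
import math
from dataclasses import dataclass

# Hypothetical half-lives (in days) for the three memory layers; the paper
# does not publish exact decay constants, so these values are illustrative.
HALF_LIVES = {"short": 1.0, "mid": 7.0, "long": 30.0}

@dataclass
class MemoryEvent:
    layer: str          # "short", "mid", or "long"
    text: str           # a market insight or past trading action
    importance: float   # initial relevance score in [0, 1]
    timestamp: float    # seconds since epoch

def recency_score(event: MemoryEvent, now: float) -> float:
    """Exponential decay: each layer loses half its weight per half-life."""
    age_days = (now - event.timestamp) / 86400.0
    decay = 0.5 ** (age_days / HALF_LIVES[event.layer])
    return event.importance * decay

def top_memories(events, now, k=3):
    """Return the k most relevant memories across all layers."""
    return sorted(events, key=lambda e: recency_score(e, now), reverse=True)[:k]
```

Under this scheme a 30-day-old short-layer event is effectively forgotten, while a long-layer event of the same age retains half its weight, mirroring the hierarchical prioritization the abstract describes.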
Keywords: Large Language Models (LLMs), Multi-agent framework, Layered memory, Trading agents, Decision debate, Equities / Funds
Complexity vs Empirical Score
- Math Complexity: 1.5/10
- Empirical Rigor: 6.0/10
- Quadrant: Street Traders
- Why: The paper introduces a novel LLM multi-agent architecture with layered memory and distinct characters, but the mathematics is limited to conceptual definitions rather than dense formulas or derivations. It demonstrates high empirical rigor through the description of a comprehensive data warehouse, specific API integration (Databento, Alpaca, OpenAI), vector database usage (FAISS), and a planned backtest against ARK fund records.
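The system's FAISS usage amounts to similarity search over embedded memories. As a dependency-free sketch of what such a lookup does (FAISS itself is a compiled library; the embeddings and index layout here are invented for illustration), cosine similarity over a small in-memory index looks like this:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def nearest(query, index, k=2):
    """index: list of (memory_id, embedding) pairs.
    Returns the ids of the k memories most similar to the query vector."""
    ranked = sorted(index, key=lambda item: cosine(query, item[1]), reverse=True)
    return [mid for mid, _ in ranked[:k]]
```

In the actual system, a FAISS index would replace the linear scan in `nearest` to make retrieval scale to large memory stores.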
```mermaid
flowchart TD
    A["Research Goal: Enhance LLMs for Financial Trading"] --> B["Methodology: Multi-Agent System with Layered Memory"]
    B --> C["Inputs: Historical Market Data & Trading Actions"]
    C --> D["Layered Memory Processing with Custom Decay"]
    D --> E["Inter-Agent Decision Debate"]
    E --> F["Output: Formulated Trading Strategies"]
    F --> G["Outcome: Improved Trading Accuracy & Performance"]
```
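The inter-agent debate step in the flowchart can be sketched as a vote among agents with distinct trading traits. The trait names, thresholds, and majority-vote rule below are assumptions for illustration; the paper's agents debate via LLM-generated arguments rather than fixed thresholds.

```python
from collections import Counter

def agent_vote(trait: str, signal: float) -> str:
    """Map a market signal in [-1, 1] to an action, biased by the agent's
    risk trait: aggressive agents act on weaker signals than conservative ones."""
    threshold = {"aggressive": 0.0, "neutral": 0.2, "conservative": 0.5}[trait]
    if signal > threshold:
        return "buy"
    if signal < -threshold:
        return "sell"
    return "hold"

def debate(traits, signal):
    """Settle the group decision by simple majority over the agents' votes."""
    votes = [agent_vote(t, signal) for t in traits]
    return Counter(votes).most_common(1)[0][0]
```

Giving agents heterogeneous traits means a moderate signal can still be challenged by conservative members, which is the robustness benefit the abstract attributes to individualized trading characters.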