FinMem: A Performance-Enhanced LLM Trading Agent with Layered Memory and Character Design

arXiv ID: 2311.13743

Authors: Unknown

Abstract

Recent advancements in Large Language Models (LLMs) have exhibited notable efficacy in question-answering (QA) tasks across diverse domains. Their prowess in integrating extensive web knowledge has fueled interest in developing LLM-based autonomous agents. While LLMs are efficient in decoding human instructions and deriving solutions by holistically processing historical inputs, transitioning to purpose-driven agents requires a supplementary rational architecture to process multi-source information, establish reasoning chains, and prioritize critical tasks. Addressing this, we introduce FinMem, a novel LLM-based agent framework devised for financial decision-making. It encompasses three core modules: Profiling, to customize the agent's characteristics; Memory, with layered message processing, to aid the agent in assimilating hierarchical financial data; and Decision-making, to convert insights gained from memories into investment decisions. Notably, FinMem's memory module aligns closely with the cognitive structure of human traders, offering robust interpretability and real-time tuning. Its adjustable cognitive span allows for the retention of critical information beyond human perceptual limits, thereby enhancing trading outcomes. This framework enables the agent to self-evolve its professional knowledge, react nimbly to new investment cues, and continuously refine trading decisions in the volatile financial environment. We first compare FinMem with various algorithmic agents on a scalable real-world financial dataset, underscoring its leading trading performance in stocks. We then fine-tune the agent's perceptual span and character setting to achieve significantly enhanced trading performance. Collectively, FinMem presents a cutting-edge LLM agent framework for automated trading, boosting cumulative investment returns.

Keywords: Large Language Models (LLMs), Autonomous Agents, Algorithmic Trading, Financial Decision-Making, Memory Networks

Complexity vs Empirical Score

  • Math Complexity: 6.0/10
  • Empirical Rigor: 7.0/10
  • Quadrant: Holy Grail
  • Why: The paper introduces a novel LLM agent framework with layered memory and character design, involving complex cognitive modeling and adaptive mechanisms that align with financial decision-making. It demonstrates empirical rigor through backtesting on real-world financial datasets, comparing performance against other algorithmic agents and reporting enhanced trading results.
```mermaid
flowchart TD
  A["Research Goal:<br>Develop LLM Agent for<br>Financial Decision-Making"] --> B["Data Input:<br>Scalable Real-World<br>Financial Dataset"]
  B --> C["Methodology: FinMem Framework<br>Profiling | Memory | Decision-Making"]
  C --> D{"Computational Process:<br>Layered Memory Processing<br>Human-like Cognitive Structure"}
  D --> E["Performance Evaluation<br>vs. Algorithmic Agents"]
  E --> F["Key Findings:<br>Leading Trading Performance<br>Self-Evolving Knowledge<br>Enhanced Cumulative Returns"]
```