Your AI, Not Your View: The Bias of LLMs in Investment Analysis

ArXiv ID: 2507.20957

Authors: Hoyoung Lee, Junhyuk Seo, Suhwan Park, Junhyeong Lee, Wonbin Ahn, Chanyeol Choi, Alejandro Lopez-Lira, Yongjae Lee

Abstract

In finance, Large Language Models (LLMs) face frequent knowledge conflicts arising from discrepancies between their pre-trained parametric knowledge and real-time market data. These conflicts are especially problematic in real-world investment services, where a model's inherent biases can misalign with institutional objectives, leading to unreliable recommendations. Despite this risk, the intrinsic investment biases of LLMs remain underexplored. We propose an experimental framework to investigate emergent behaviors in such conflict scenarios, offering a quantitative analysis of bias in LLM-based investment analysis. Using hypothetical scenarios with balanced and imbalanced arguments, we extract the latent biases of models and measure their persistence. Our analysis, centered on sector, size, and momentum, reveals distinct, model-specific biases. Most models tend to prefer technology stocks, large-cap stocks, and contrarian strategies. These foundational biases often escalate into confirmation bias, causing models to cling to initial judgments even when faced with increasing counter-evidence. A public leaderboard benchmarking bias across a broader set of models is available at https://linqalpha.com/leaderboard.

Keywords: Large Language Models (LLMs), Investment Bias, Confirmation Bias, Behavioral Finance, Knowledge Conflicts, Equities

Complexity vs Empirical Score

  • Math Complexity: 4.0/10
  • Empirical Rigor: 6.0/10
  • Quadrant: Street Traders
  • Why: The paper focuses on experimental methodology and bias quantification using LLM prompting, with limited advanced mathematics, but demonstrates high empirical rigor through a structured framework, hypothetical scenario testing, and a public leaderboard benchmark.
```mermaid
flowchart TD
  A["Research Goal: Quantify LLM Investment Biases in Knowledge Conflicts"] --> B["Methodology: Hypothetical Scenario Testing"]
  B --> C{"Data: Balanced vs. Imbalanced<br/>Sector/Size/Momentum Arguments"}
  C --> D["Process: Extract & Measure<br/>Latent Bias & Persistence"]
  D --> E["Computation: Statistical Analysis<br/>of Model Responses"]
  E --> F["Key Outcomes: Distinct Model-Specific Biases"]
  F --> G["Findings: Tech, Large-Cap,<br/>Contrarian Preference + Confirmation Bias"]
```
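The bias-extraction step in the pipeline above can be illustrated with a minimal sketch. Assuming the framework tallies binary choices (e.g., tech vs. non-tech stock) from repeated balanced-argument prompts, a preference rate far from 50% signals latent bias; an exact binomial test quantifies how surprising the deviation is. The choice counts below are hypothetical, not results from the paper.

```python
import math

def preference_rate(choices, option="tech"):
    """Fraction of trials in which the model picked `option`."""
    return sum(c == option for c in choices) / len(choices)

def binomial_two_sided_p(k, n, p=0.5):
    """Exact two-sided binomial test: probability, under the null of no
    bias (p=0.5), of an outcome at least as extreme as k successes in n."""
    pmf = lambda i: math.comb(n, i) * p**i * (1 - p)**(n - i)
    threshold = pmf(k)
    return sum(pmf(i) for i in range(n + 1) if pmf(i) <= threshold + 1e-12)

# Hypothetical tally: 38 of 50 balanced prompts answered "tech"
choices = ["tech"] * 38 + ["non_tech"] * 12
rate = preference_rate(choices)          # 0.76 preference for tech
p_value = binomial_two_sided_p(38, 50)   # well below 0.05: latent bias
```

A persistence (confirmation-bias) check would repeat this while progressively tilting the arguments against the model's initial pick and observing whether the preference rate drops accordingly.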