PyFi: Toward Pyramid-like Financial Image Understanding for VLMs via Adversarial Agents

ArXiv ID: 2512.14735

Authors: Yuqun Zhang, Yuxuan Zhao, Sijia Chen

Abstract

This paper proposes PyFi, a novel framework for pyramid-like financial image understanding that enables vision language models (VLMs) to reason through question chains in a progressive, simple-to-complex manner. At the core of PyFi is PyFi-600K, a dataset comprising 600K financial question-answer pairs organized into a reasoning pyramid: questions at the base require only basic perception, while those toward the apex demand increasingly advanced financial visual understanding and expertise. The data is scalable because it is synthesized without human annotations, using PyFi-adv, a multi-agent adversarial mechanism under the Monte Carlo Tree Search (MCTS) paradigm in which, for each image, a challenger agent competes with a solver agent by generating question chains that progressively probe deeper capability levels in financial visual reasoning. Leveraging this dataset, we present fine-grained, hierarchical, and comprehensive evaluations of advanced VLMs in the financial domain. Moreover, fine-tuning Qwen2.5-VL-3B and Qwen2.5-VL-7B on the pyramid-structured question chains enables these models to answer complex financial questions by decomposing them into sub-questions with gradually increasing reasoning demands, yielding average accuracy improvements of 19.52% and 8.06%, respectively, on the dataset. All resources, including code, datasets, and models, are available at https://github.com/AgenticFinLab/PyFi.
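
To make the challenger-versus-solver idea concrete, below is a minimal Python sketch of an MCTS-style loop over question chains, assuming stub agents and a simple failure-based reward. The function names (`propose_question`, `solver_succeeds`), the reward definition, and all constants are illustrative assumptions, not the paper's actual PyFi-adv implementation, which this summary only describes at the abstract level.

```python
import math
import random

# Hedged sketch of a challenger-vs-solver loop under MCTS: nodes are partial
# question chains for one image; the challenger is rewarded when it finds a
# question the solver cannot answer, pushing chains toward harder levels.

class Node:
    """One node per partial question chain for a single financial image."""
    def __init__(self, chain, parent=None):
        self.chain = chain          # list of (question, solver_succeeded) pairs
        self.parent = parent
        self.children = []
        self.visits = 0
        self.value = 0.0            # accumulated adversarial reward


def uct(node, c=1.4):
    """Standard UCT score used during selection."""
    if node.visits == 0:
        return float("inf")
    return node.value / node.visits + c * math.sqrt(
        math.log(node.parent.visits) / node.visits)


def propose_question(image, chain):
    """Challenger stub: ask one question harder than the chain built so far."""
    return f"[level {len(chain) + 1}] question about {image}"


def solver_succeeds(chain):
    """Solver stub: deeper levels are harder, so success probability drops."""
    return random.random() > 0.2 * len(chain)


def build_chains(image, iterations=200, max_depth=5):
    root = Node(chain=[])
    for _ in range(iterations):
        # 1. Selection: descend by UCT until reaching a leaf chain.
        node = root
        while node.children:
            node = max(node.children, key=uct)
        # 2. Expansion: the challenger appends one harder question.
        if len(node.chain) < max_depth:
            question = propose_question(image, node.chain)
            solved = solver_succeeds(node.chain)
            node.children.append(Node(node.chain + [(question, solved)], parent=node))
            node = node.children[-1]
        # 3. Reward: the challenger scores when the solver fails, i.e. the newest
        #    question probes a capability level the solver cannot yet handle.
        reward = 0.0 if node.chain[-1][1] else 1.0
        # 4. Backpropagation.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    return root


def count_chains(node):
    return len(node.children) + sum(count_chains(c) for c in node.children)


root = build_chains("balance_sheet_001.png")
print("explored", count_chains(root), "candidate question chains")
```

In this toy version, deeper chains are more likely to defeat the stub solver, so the search naturally expands toward longer, harder question chains, mirroring the pyramid structure described above.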

Keywords: Vision Language Models (VLMs), Monte Carlo Tree Search (MCTS), Financial Image Understanding, Multi-Agent Systems, Reinforcement Learning

Complexity vs Empirical Score

  • Math Complexity: 3.5/10
  • Empirical Rigor: 7.0/10
  • Quadrant: Street Traders
  • Why: The paper’s mathematics is relatively light, relying on standard multi-agent frameworks and MCTS rather than advanced theory, while empirical rigor is high due to a large synthetic dataset, code/model availability, and reported accuracy improvements from fine-tuning.

```mermaid
flowchart TD
  A["Research Goal: Pyramid-like Financial Image Understanding"] --> B["Methodology: PyFi-adv<br>Multi-Agent MCTS Framework"]
  B --> C{"Input: Financial Images"}
  C --> D["Adversarial Process:<br>Challenger vs. Solver Agents"]
  D --> E["Output: PyFi-600K Dataset<br>600K Q/A Pairs in Pyramid Structure"]
  E --> F["Computational Process:<br>Fine-tune Qwen2.5-VL (3B & 7B)"]
  F --> G["Key Outcomes:<br>19.52% & 8.06% Accuracy Improvements<br>Scalable Evaluation & Code Release"]
```
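
For illustration, the sketch below shows how one pyramid-structured record and its step-by-step prompt might look when fine-tuning a model to decompose a complex question into sub-questions of increasing difficulty. The field names (`image`, `chain`, `level`) and the sample questions are hypothetical; the actual PyFi-600K schema is not described in this summary.

```python
# Hypothetical layout of one pyramid-structured question chain; field names
# and content are illustrative, not the dataset's real schema.
example_record = {
    "image": "earnings_chart_0042.png",
    "chain": [
        {  # base of the pyramid: pure perception
            "level": 1,
            "question": "What metric is plotted on the y-axis?",
            "answer": "Quarterly revenue in USD millions",
        },
        {  # middle: reading and comparing values
            "level": 2,
            "question": "Which quarter shows the largest revenue increase?",
            "answer": "Q3 2023",
        },
        {  # apex: financial reasoning over what was perceived
            "level": 3,
            "question": "Does the revenue trend support the stated guidance?",
            "answer": "Yes; growth accelerates in the last two quarters.",
        },
    ],
}

# At fine-tuning time, the easier sub-questions can scaffold the apex question,
# so the model answers the complex question step by step.
context = example_record["chain"][:-1]   # lower pyramid levels
apex = example_record["chain"][-1]       # hardest question in the chain
prompt = "\n".join(
    f"Q{item['level']}: {item['question']}\nA{item['level']}: {item['answer']}"
    for item in context
) + f"\nQ{apex['level']}: {apex['question']}"
print(prompt)
```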