The Structure of Financial Equity Research Reports – Identification of the Most Frequently Asked Questions in Financial Analyst Reports to Automate Equity Research Using Llama 3 and GPT-4

arXiv ID: 2407.18327

Authors: Unknown

Abstract

This research dissects financial equity research reports (ERRs) by mapping their content into categories. There is insufficient empirical analysis of the questions answered in ERRs. In particular, it is not understood how frequently certain information appears, what information is considered essential, and what information requires human judgment to distill into an ERR. The study analyzes 72 ERRs sentence by sentence, classifying their 4,940 sentences into 169 unique question archetypes. We did not predefine the questions but derived them solely from the statements in the ERRs. This approach provides an unbiased view of the content of the observed ERRs. Subsequently, we used public corporate reports to classify each question's potential for automation. A question was labeled "text-extractable" if its answer was accessible in corporate reports. Overall, 78.7% of the questions in ERRs can be automated. These automatable questions consist of 48.2% text-extractable questions (suited to processing by large language models, LLMs) and 30.5% database-extractable questions. Only 21.3% of questions require human judgment to answer. We empirically validate, using Llama-3-70B and GPT-4-turbo-2024-04-09, that recent advances in language generation and information extraction enable the automation of approximately 80% of the statements in ERRs. Surprisingly, the models complement each other's strengths and weaknesses well. The research confirms that the current writing process of ERRs can likely benefit from additional automation, improving quality and efficiency, and it allows us to quantify the potential impact of introducing large language models into the ERR writing process. The full question list, including the archetypes and their frequency, will be made available online after peer review.

Keywords: large language models (LLM), information extraction, equity research reports, automation, text analysis, equities
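
As a rough illustration of the sentence-level classification step described in the abstract, the sketch below prompts an LLM to name the question archetype a sentence answers and to assign one of the three automation labels. The prompt wording, the `classify_sentence` helper, and the use of the OpenAI Python client are assumptions for illustration, not the authors' actual pipeline.

```python
# Illustrative sketch (not the authors' code): classify one ERR sentence into a
# question archetype and an automation category using an OpenAI-compatible API.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

CATEGORIES = ["text-extractable", "database-extractable", "human judgment"]

PROMPT = """You are analysing a sentence from an equity research report.
1. State the underlying question the sentence answers (the question archetype).
2. Label the question as one of: {categories}.
Return the result as: archetype | category."""

def classify_sentence(sentence: str, model: str = "gpt-4-turbo-2024-04-09") -> str:
    """Return 'archetype | category' for a single ERR sentence."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,
        messages=[
            {"role": "system", "content": PROMPT.format(categories=", ".join(CATEGORIES))},
            {"role": "user", "content": sentence},
        ],
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    print(classify_sentence("Revenue grew 12% year-on-year, driven by cloud services."))
```

Running this over every sentence of a report and tallying the returned categories mirrors how the frequency of archetypes and the automation shares reported above could be derived.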

Complexity vs Empirical Score

  • Math Complexity: 2.0/10
  • Empirical Rigor: 7.5/10
  • Quadrant: Street Traders
  • Why: The paper relies on statistical counting and text classification rather than advanced mathematical modeling, scoring low on math complexity. It demonstrates high empirical rigor by manually analyzing 72 reports, deriving categories from data, and validating automation potential using specific LLMs (Llama-3-70B and GPT-4) on real financial data.

```mermaid
flowchart TD
    A["<b>Research Goal</b><br>Quantify automation potential of Equity Research Reports using LLMs"] --> B
    subgraph B ["Data & Methodology"]
        B1["Dataset<br>72 Equity Research Reports"] --> B2["Sentence Classification<br>4,940 sentences into 169 unique question archetypes"]
        B2 --> B3["Automation Assessment<br>Labeling questions as text-extractable, database-extractable, or requiring human judgment"]
    end
    B3 --> C["<b>Computational Validation</b><br>Testing with Llama-3-70B & GPT-4-turbo"]
    C --> D["<b>Key Findings</b>"]
    subgraph D ["Outcomes"]
        D1["78.7% of questions are automatable"]
        D2["48.2% text-extractable (LLM-suitable)"]
        D3["30.5% database-extractable"]
        D4["21.3% require human judgment"]
    end
```
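
For the computational-validation stage in the flowchart, a minimal sketch of how the two models could be queried side by side is shown below: each model answers the same text-extractable question grounded in a corporate-report excerpt so their outputs can be compared. The endpoint URL, the `answer` helper, and the example excerpt are assumptions; only the model identifiers (Llama-3-70B and GPT-4-turbo-2024-04-09) come from the paper.

```python
# Illustrative validation step (an assumption, not the authors' exact setup):
# pose the same text-extractable question to both models, grounded in a
# corporate-report excerpt, and collect the answers for comparison.
from openai import OpenAI

# GPT-4 via the OpenAI API; Llama-3-70B via any OpenAI-compatible endpoint
# (the base_url and api_key below are placeholders).
gpt4 = OpenAI()
llama = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

def answer(client: OpenAI, model: str, question: str, report_excerpt: str) -> str:
    """Ask a model to answer an ERR question using only the supplied excerpt."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,
        messages=[
            {"role": "system",
             "content": "Answer using only the corporate-report excerpt provided."},
            {"role": "user",
             "content": f"Excerpt:\n{report_excerpt}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content.strip()

question = "How did revenue develop compared to the prior year?"
excerpt = "Total revenue increased by 12% to EUR 4.2bn in fiscal year 2023."

print("GPT-4:  ", answer(gpt4, "gpt-4-turbo-2024-04-09", question, excerpt))
print("Llama-3:", answer(llama, "meta-llama/Meta-Llama-3-70B-Instruct", question, excerpt))
```

Comparing the two answers question by question is one way the complementary strengths and weaknesses of the models noted in the abstract could be surfaced.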