
Technology Adoption and Network Externalities in Financial Systems: A Spatial-Network Approach

ArXiv ID: 2601.04246 · Authors: Tatsuru Kikuchi

Abstract: This paper develops a unified framework for analyzing technology adoption in financial networks that incorporates spatial spillovers, network externalities, and their interaction. The framework characterizes adoption dynamics through a master equation whose solution admits a Feynman-Kac representation as expected cumulative adoption pressure along stochastic paths through spatial-network space. From this representation, I derive the Adoption Amplification Factor, a structural measure of technology leadership that captures the ratio of total system-wide adoption to initial adoption following a localized shock. A Lévy jump-diffusion extension with state-dependent jump intensity captures critical mass dynamics: below threshold, adoption evolves through gradual diffusion; above threshold, cascade dynamics accelerate adoption through discrete jumps. Applying the framework to SWIFT gpi adoption among 17 Global Systemically Important Banks, I find strong support for the two-regime characterization. Network-central banks adopt significantly earlier ($\rho = -0.69$, $p = 0.002$), and pre-threshold adopters have significantly higher amplification factors than post-threshold adopters (11.81 versus 7.83, $p = 0.010$). Founding members, representing 29 percent of banks, account for 39 percent of total system amplification, sufficient to trigger cascade dynamics. Controlling for firm size and network position, CEO age delays adoption by 11-15 days per year. ...
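The amplification ratio described above can be illustrated with a toy contagion simulation. This is a deliberately simplified sketch, not the paper's estimator: the networks, spillover weight, and adoption threshold below are invented for illustration.

```python
# Toy contagion sketch: seed one adopter, let spillover "pressure"
# accumulate on its neighbors each round, and count nodes as adopters
# once pressure crosses a threshold. The amplification factor is total
# adoption divided by the initial (seeded) adoption.
def amplification_factor(adj, seed, rounds=3, spill=0.5, threshold=1.0):
    """adj maps each node to its neighbor list; one bank is seeded."""
    pressure = {n: 0.0 for n in adj}
    adopted = {seed}
    for _ in range(rounds):
        for a in list(adopted):
            for nb in adj[a]:
                pressure[nb] += spill            # spatial-network spillover
        adopted |= {n for n, p in pressure.items() if p >= threshold}
    return len(adopted) / 1                      # initial adoption = 1 bank

# Star network: a central bank (node 0) linked to four peripheral banks.
star = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
# Chain network: the same five banks arranged in a line.
chain = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}

print(amplification_factor(star, seed=0))   # 5.0: central seed cascades
print(amplification_factor(chain, seed=0))  # 2.0: peripheral seed diffuses
```

With identical parameters, seeding the network-central bank triggers full-system adoption while the same seed in a chain barely spreads, mirroring the paper's finding that network-central early adopters carry disproportionate amplification.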

January 6, 2026 · 2 min · Research Team

Prompt-Response Semantic Divergence Metrics for Faithfulness Hallucination and Misalignment Detection in Large Language Models

ArXiv ID: 2508.10192 · Authors: Igor Halperin

Abstract: The proliferation of Large Language Models (LLMs) is challenged by hallucinations, critical failure modes where models generate non-factual, nonsensical, or unfaithful text. This paper introduces Semantic Divergence Metrics (SDM), a novel lightweight framework for detecting Faithfulness Hallucinations, events of severe deviation of LLM responses from input contexts. We focus on a specific implementation of these LLM errors, "confabulations", defined as responses that are arbitrary and semantically misaligned with the user's query. Existing methods like Semantic Entropy test for arbitrariness by measuring the diversity of answers to a single, fixed prompt. Our SDM framework improves upon this by being more prompt-aware: we test for a deeper form of arbitrariness by measuring response consistency not only across multiple answers but also across multiple, semantically equivalent paraphrases of the original prompt. Methodologically, our approach uses joint clustering on sentence embeddings to create a shared topic space for prompts and answers. A heatmap of topic co-occurrences between prompts and responses can be viewed as a quantified two-dimensional visualization of the user-machine dialogue. We then compute a suite of information-theoretic metrics to measure the semantic divergence between prompts and responses. Our practical score, $\mathcal{S}_H$, combines the Jensen-Shannon divergence and Wasserstein distance to quantify this divergence, with a high score indicating a Faithfulness hallucination. Furthermore, we identify the KL divergence KL(Answer || Prompt) as a powerful indicator of "Semantic Exploration", a key signal for distinguishing different generative behaviors. These metrics are further combined into the Semantic Box, a diagnostic framework for classifying LLM response types, including the dangerous, confident confabulation. ...
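The divergence machinery behind such a score can be sketched in pure Python. This is an illustrative stand-in: the paper derives its topic distributions from joint clustering of sentence embeddings, and the exact weighting inside $\mathcal{S}_H$ is not reproduced here; the `w=0.5` combination and the example distributions are assumptions for demonstration.

```python
import math

def _kl(p, q):
    # KL(p || q) over aligned discrete distributions; 0 * log(0/q) = 0.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js_divergence(p, q):
    # Jensen-Shannon divergence: symmetric, always finite.
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * _kl(p, m) + 0.5 * _kl(q, m)

def wasserstein_1d(p, q):
    # Earth-mover distance between histograms on an ordered topic axis:
    # the summed absolute difference of cumulative masses.
    dist, cum = 0.0, 0.0
    for pi, qi in zip(p, q):
        cum += pi - qi
        dist += abs(cum)
    return dist

def sdm_score(prompt_topics, answer_topics, w=0.5):
    # Hypothetical combination of the two divergences; a high score
    # flags a response that has drifted away from the prompt's topics.
    return (w * js_divergence(prompt_topics, answer_topics)
            + (1 - w) * wasserstein_1d(prompt_topics, answer_topics))

prompt    = [0.6, 0.3, 0.1]    # prompt topic distribution (toy)
aligned   = [0.7, 0.2, 0.1]    # answer stays on the prompt's topics
divergent = [0.05, 0.15, 0.8]  # answer drifts to an unrelated topic
print(sdm_score(prompt, aligned) < sdm_score(prompt, divergent))  # True
```

The same `_kl` helper gives KL(Answer || Prompt) directly, the quantity the abstract singles out as a Semantic Exploration signal.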

August 13, 2025 · 2 min · Research Team

AI-Powered (Finance) Scholarship

ArXiv ID: ssrn-5103553 · Authors: Unknown

Abstract: This paper describes a process for automatically generating academic finance papers using large language models (LLMs). It demonstrates the process' efficacy by ...

Keywords: Generative AI, Large Language Models (LLMs), Automated Research, Financial Modeling, NLP, Technology

Complexity vs Empirical Score: Math Complexity 1.0/10 · Empirical Rigor 0.5/10 · Quadrant: Philosophers
Why: The paper focuses on the process of using LLMs to generate academic content, lacking advanced mathematical derivations, while showing minimal evidence of backtesting or implementation-heavy data analysis.

flowchart TD
  A["Research Goal<br>Automate Finance Paper Generation"] --> B["Inputs<br>Financial Data + LLM Prompts"]
  B --> C{"Methodology<br>Multi-Step Chain-of-Thought"}
  C --> D["Computational Process<br>LLM Synthesis & Modeling"]
  D --> E{"Evaluation<br>Human Expert Review"}
  E --> F["Outcomes<br>High-Quality Finance Papers"]
  E --> G["Outcomes<br>Validation of LLM Efficacy"]
  F --> H["Final Result<br>AI-Powered Scholarship Pipeline"]
  G --> H

January 22, 2025 · 1 min · Research Team

AI-Powered (Finance) Scholarship

ArXiv ID: ssrn-5060022 · Authors: Unknown

Keywords: Generative AI, Large Language Models (LLMs), Academic Research, Natural Language Processing, Automation, Technology

Complexity vs Empirical Score: Math Complexity 1.0/10 · Empirical Rigor 2.0/10 · Quadrant: Philosophers
Why: The paper focuses on the conceptual process of using LLMs to generate academic papers, rather than presenting complex mathematical models or empirical backtesting results.

flowchart TD
  A["Research Goal<br>Automate Academic Paper Generation"] --> B{"Methodology"}
  B --> C["Data/Input<br>LLM & Financial Datasets"]
  B --> D["Data/Input<br>Research Questions"]
  C --> E["Computational Process<br>LLM Content Generation"]
  D --> E
  E --> F["Key Findings<br>Successful Paper Automation"]
  E --> G["Key Findings<br>Validated Methodology"]

January 3, 2025 · 1 min · Research Team

A First Look at Financial Data Analysis Using ChatGPT-4o

ArXiv ID: ssrn-4849578 · Authors: Unknown

Abstract: OpenAI's new flagship model, ChatGPT-4o, released on May 13, 2024, offers enhanced natural language understanding and more coherent responses. In this paper, we ...

Keywords: Large Language Models (LLMs), Natural Language Processing, Generative AI, AI Evaluation, Model Performance, Technology/AI

Complexity vs Empirical Score: Math Complexity 4.0/10 · Empirical Rigor 6.5/10 · Quadrant: Street Traders
Why: The paper involves implementing and comparing specific financial models like ARMA-GARCH, indicating moderate-to-high implementation complexity, but the core mathematics is largely descriptive and comparative rather than novel. Empirical rigor is high due to the use of real datasets (CRSP, Fama-French) and direct backtesting comparisons against Stata.

flowchart TD
  A["Research Goal: Evaluate ChatGPT-4o for Financial Data Analysis"] --> B["Methodology: Zero-shot vs. Chain-of-Thought"]
  B --> C["Input: Financial Statements & Market Data"]
  C --> D["Process: Text Generation & Sentiment Analysis"]
  D --> E["Output: Financial Predictions & Explanations"]
  E --> F["Key Findings: High Accuracy in NLP Tasks"]
  F --> G["Outcome: Strong Potential but Limited Numerical Reasoning"]

May 31, 2024 · 1 min · Research Team

FinBERT - A Large Language Model for Extracting Information from Financial Text

ArXiv ID: ssrn-3910214 · Authors: Unknown

Abstract: We develop FinBERT, a state-of-the-art large language model that adapts to the finance domain. We show that FinBERT incorporates finance knowledge and can better ...

Keywords: FinBERT, Natural Language Processing, Large Language Models, Financial Text Analysis, Technology/AI

Complexity vs Empirical Score: Math Complexity 2.0/10 · Empirical Rigor 8.0/10 · Quadrant: Street Traders
Why: The paper focuses on fine-tuning a pre-existing transformer model (FinBERT) with specific financial datasets, which is primarily an empirical, implementation-heavy task with significant data preparation and evaluation metrics, while the underlying mathematics is standard deep learning rather than novel or dense derivations.

flowchart TD
  A["Research Goal:<br>Create domain-adapted LLM for finance"] --> B["Data:<br>Financial Documents & Corpora"]
  B --> C["Preprocessing:<br>Tokenization & Formatting"]
  C --> D["Core Methodology:<br>BERT Architecture Adaptation"]
  D --> E["Training:<br>Domain-specific Fine-tuning"]
  E --> F["Evaluation:<br>Benchmark Testing"]
  F --> G["Outcome:<br>FinBERT Model"]
  F --> H["Outcome:<br>Improved Performance vs. General LLMs"]
  G --> I["Final Result:<br>State-of-the-art Financial NLP"]
  H --> I

August 27, 2021 · 1 min · Research Team

Advances in Financial Machine Learning: Lecture 2/10 (seminar slides)

ArXiv ID: ssrn-3257415 · Authors: Unknown

Abstract: Machine learning (ML) is changing virtually every aspect of our lives. Today ML algorithms accomplish tasks that until recently only expert humans could perform ...

Keywords: Machine learning, Data science, Automation, Technology

Complexity vs Empirical Score: Math Complexity 2.5/10 · Empirical Rigor 3.0/10 · Quadrant: Philosophers
Why: The excerpt introduces concepts like high-dimensional spaces and non-linear relationships but is devoid of advanced formulas, focusing instead on conceptual discussions and examples. It lacks data, backtests, code, or specific implementation metrics, making it more of a high-level overview than an empirical or technical paper.

flowchart TD
  Q["Research Goal: Applying ML to Finance"] --> D["Data: Financial Market Data"]
  D --> M["Methodology: ML Algorithms"]
  M --> C["Computational Process: Pattern Recognition"]
  C --> F["Outcome: Task Automation"]
  F --> O["Key Finding: Expert-Level Performance"]

September 30, 2018 · 1 min · Research Team