How to Choose a Threshold for an Evaluation Metric for Large Language Models
ArXiv ID: 2412.12148
Authors: Unknown
Abstract
Various evaluation metrics have been proposed in the literature to reliably monitor and assure the quality of large language models (LLMs). However, there is little research prescribing a methodology for identifying a robust threshold on these metrics, even though an incorrect choice of threshold has serious implications when LLMs are deployed. Translating the traditional model risk management (MRM) guidelines of regulated industries, such as the financial industry, we propose a step-by-step recipe for picking a threshold for a given LLM evaluation metric. We emphasize that such a methodology should start by identifying the risks of the LLM application under consideration and the risk tolerance of its stakeholders. We then propose concrete, statistically rigorous procedures for determining a threshold for a given LLM evaluation metric using available ground-truth data. As a concrete demonstration of the proposed methodology at work, we apply it to the Faithfulness metric, as implemented in various publicly available libraries, using the publicly available HaluBench dataset. We also lay a foundation for creating systematic threshold-selection approaches, not only for LLMs but for any GenAI application.
Keywords: Model Risk Management (MRM), LLM Evaluation, Threshold Selection, Faithfulness Metric, Statistical Rigor, Cross-Asset
Complexity vs Empirical Score
- Math Complexity: 4.5/10
- Empirical Rigor: 7.0/10
- Quadrant: Street Traders
- Why: The paper focuses on practical methodology (MRM guidelines, a concrete step-by-step recipe) with only moderate statistical/mathematical content, while demonstrating high empirical rigor through a concrete experiment on a public dataset (HaluBench) and an implementation of the threshold selection procedure.
```mermaid
flowchart TD
A["<b>Research Goal</b><br/>Propose a robust methodology for threshold<br/>selection for LLM evaluation metrics"] --> B["<b>Methodology Framework</b><br/>Translating MRM guidelines<br/>into a step-by-step recipe"]
B --> C["<b>Phase 1: Risk Identification</b><br/>Identify application risks &<br/>stakeholder risk tolerance"]
C --> D["<b>Phase 2: Threshold Determination</b><br/>Apply statistical procedures on<br/>ground-truth data (e.g., HaluBench)"]
D --> E["<b>Application Example</b><br/>Apply methodology to<br/>Faithfulness metric"]
E --> F["<b>Key Outcomes</b><ul><li>Systematic threshold selection process</li><li>Risk-driven decision framework</li><li>Foundation for GenAI applications</li></ul>"]
```
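The threshold-determination step (Phase 2) can be sketched as a simple risk-tolerance-constrained search. The sketch below is an illustrative assumption, not the paper's actual procedure: given ground-truth labels (1 = faithful, 0 = hallucinated) and per-example metric scores, it returns the lowest threshold whose empirical false-acceptance rate (hallucinated examples scoring at or above the threshold) stays within the stakeholders' risk tolerance. The function name, grid, and tolerance value are all hypothetical.

```python
def select_threshold(scores, labels, risk_tolerance=0.05, grid_size=101):
    """Hypothetical sketch: return the lowest threshold t in [0, 1] such
    that the fraction of hallucinated examples (label 0) with score >= t
    is at most risk_tolerance.  Not the paper's exact procedure."""
    candidates = [i / (grid_size - 1) for i in range(grid_size)]
    # Scores of the examples the threshold is meant to reject.
    negatives = [s for s, y in zip(scores, labels) if y == 0]
    if not negatives:
        raise ValueError("need at least one hallucinated example")
    for t in candidates:
        false_accepts = sum(1 for s in negatives if s >= t)
        if false_accepts / len(negatives) <= risk_tolerance:
            return t
    # No candidate meets the tolerance: fall back to the strictest cutoff.
    return 1.0


# Illustrative usage on synthetic scores (not HaluBench data):
scores = [0.9, 0.8, 0.95, 0.2, 0.4, 0.3]
labels = [1, 1, 1, 0, 0, 0]
threshold = select_threshold(scores, labels, risk_tolerance=0.0)
```

In practice one would also report the detection rate on faithful examples at the chosen threshold, and bootstrap the ground-truth sample to quantify uncertainty in the selected cutoff, in line with the statistically rigorous procedures the abstract calls for.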