Why Groups Matter: Necessity of Group Structures in Attributions

ArXiv ID: 2408.05701

Authors: Unknown

Abstract

Explainable machine learning methods have seen substantial development. Despite their success, existing approaches mostly target a general framework that incorporates no prior domain expertise. High-stakes financial sectors, by contrast, have extensive domain knowledge about their features. Hence, model explanations are expected to be consistent with this domain knowledge to ensure conceptual soundness. In this work, we study the group structures of features that naturally form in financial datasets. Our study shows the importance of considering group structures that conform to regulations. When group structures are present, direct applications of explainable machine learning methods, such as Shapley values and Integrated Gradients, may not provide consistent explanations, whereas group versions of the Shapley value can. We include detailed examples that focus on the practical perspective of our framework.
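
To make the contrast concrete, below is a minimal, self-contained sketch (not taken from the paper) that computes exact Shapley values for a three-feature toy game and then recomputes them with two related features merged into a group that acts as a single player. The value function `worth`, the feature names, and the grouping are hypothetical illustrations, not the paper's construction.

```python
# Sketch: per-feature Shapley values vs. group Shapley values, where a feature
# group is treated as one "player" in the cooperative game (hypothetical example).
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values for a small player set.

    `value` maps a set/frozenset of players to the worth of that coalition.
    """
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for k in range(len(others) + 1):
            for coalition in combinations(others, k):
                S = frozenset(coalition)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[p] += weight * (value(S | {p}) - value(S))
    return phi

# Toy value function over three features (hypothetical numbers); x1 and x2 are
# meant to be two closely related features, e.g. from the same regulatory group.
worth = {
    frozenset(): 0.0,
    frozenset({"x1"}): 4.0,
    frozenset({"x2"}): 4.0,
    frozenset({"x3"}): 2.0,
    frozenset({"x1", "x2"}): 5.0,   # strong overlap within the group
    frozenset({"x1", "x3"}): 6.0,
    frozenset({"x2", "x3"}): 6.0,
    frozenset({"x1", "x2", "x3"}): 7.0,
}
v = lambda S: worth[frozenset(S)]

# Per-feature Shapley values: x1 and x2 each get 2.5, x3 gets 2.0.
print(shapley_values(["x1", "x2", "x3"], v))

# Group Shapley: treat G = {x1, x2} as one player; a coalition of groups is
# valued by the union of its member features. G gets 5.0, x3 gets 2.0.
groups = {"G": {"x1", "x2"}, "x3": {"x3"}}
v_group = lambda S: v(set().union(*(groups[g] for g in S)) if S else set())
print(shapley_values(list(groups), v_group))
```

In this toy game the two grouped features each receive 2.5 individually, while the group as a single player receives 5.0; reporting at the group level sidesteps the question of how credit is split inside the group, which is the kind of within-group ambiguity the paper argues against exposing to downstream users.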

Keywords: Explainable AI (XAI), Shapley Values, Feature Grouping, Domain Knowledge, Model Interpretability

Complexity vs Empirical Score

  • Math Complexity: 8.5/10
  • Empirical Rigor: 2.0/10
  • Quadrant: Lab Rats
  • Why: The paper is mathematically dense, featuring formal axioms, theorems, and proofs related to Shapley values and group structures. However, it lacks any empirical validation such as backtests, datasets, or code implementation, focusing instead on theoretical frameworks and conceptual examples.

```mermaid
flowchart TD
  A["Research Goal: Reconcile XAI with financial domain knowledge?"] --> B{"Methodology: Analyze Financial Dataset"}
  B --> C["Key Finding: Group structures exist & align with regulations"]
  C --> D["Problem: Standard XAI<br>Shapley/IG inconsistent"]
  D --> E["Solution: Group-based XAI<br>Group-Shapley"]
  E --> F["Outcome: Consistent &<br>domain-conformant explanations"]
```
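For reference, the per-feature Shapley value that the flowchart contrasts with the group-based alternative has the standard form below; the group version is written in the textbook groups-as-players style, which we take as the spirit of the paper's group Shapley value rather than its exact definition.

```latex
% Standard Shapley value of feature i for a value function v over feature set N.
\phi_i(v) \;=\; \sum_{S \subseteq N \setminus \{i\}}
  \frac{|S|!\,(|N| - |S| - 1)!}{|N|!}\,
  \bigl( v(S \cup \{i\}) - v(S) \bigr)

% Group version: the groups in a partition \mathcal{P} of N act as the players,
% and a coalition of groups is valued by the union of its member features.
\Phi_G(v) \;=\; \sum_{\mathcal{T} \subseteq \mathcal{P} \setminus \{G\}}
  \frac{|\mathcal{T}|!\,(|\mathcal{P}| - |\mathcal{T}| - 1)!}{|\mathcal{P}|!}\,
  \Bigl( v\bigl(\textstyle\bigcup\nolimits_{H \in \mathcal{T}} H \,\cup\, G\bigr)
       - v\bigl(\textstyle\bigcup\nolimits_{H \in \mathcal{T}} H\bigr) \Bigr)
```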