Can I Trust the Explanations? Investigating Explainable Machine Learning Methods for Monotonic Models
ArXiv ID: 2309.13246
Authors: Unknown
Abstract
In recent years, explainable machine learning methods have been highly successful. Despite this success, most of them are applied to black-box models without any domain knowledge. By incorporating domain knowledge, science-informed machine learning models have demonstrated better generalization and interpretation. But do we obtain consistent scientific explanations if we apply explainable machine learning methods to science-informed machine learning models? We address this question in the context of monotonic models that exhibit three different types of monotonicity, and we propose three axioms to formalize them. Accordingly, this study shows that when only individual monotonicity is involved, the Baseline Shapley value provides good explanations; however, when strong pairwise monotonicity is involved, the Integrated Gradients method provides reasonable explanations on average.
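Since the abstract hinges on the contrast between Baseline Shapley and Integrated Gradients, a minimal NumPy sketch may help fix ideas. This is not the paper's code: the toy model `f`, the zero baseline, and the step counts are illustrative assumptions. Baseline Shapley is computed exactly for two features, and Integrated Gradients via a midpoint Riemann sum with central-difference gradients.

```python
import numpy as np

def f(x):
    """Toy model, individually monotonic (nondecreasing) in each
    feature for x >= 0. Illustrative only; not the paper's model."""
    return np.log1p(x[0]) + np.tanh(x[1])

def baseline_shapley_2d(f, x, b):
    """Exact Baseline Shapley for two features: average each feature's
    marginal contribution over the two insertion orderings."""
    phi1 = 0.5 * ((f([x[0], b[1]]) - f(b)) + (f(x) - f([b[0], x[1]])))
    phi2 = 0.5 * ((f([b[0], x[1]]) - f(b)) + (f(x) - f([x[0], b[1]])))
    return np.array([phi1, phi2])

def integrated_gradients(f, x, b, steps=256, eps=1e-5):
    """Midpoint Riemann-sum approximation of Integrated Gradients along
    the straight line from baseline b to input x."""
    x, b = np.asarray(x, float), np.asarray(b, float)
    alphas = (np.arange(steps) + 0.5) / steps
    total = np.zeros_like(x)
    for a in alphas:
        p = b + a * (x - b)
        for i in range(len(x)):
            hi, lo = p.copy(), p.copy()
            hi[i] += eps
            lo[i] -= eps
            total[i] += (f(hi) - f(lo)) / (2 * eps)  # central difference
    return (x - b) * total / steps

x, b = np.array([2.0, 4.0]), np.array([0.0, 0.0])
print("Baseline Shapley:    ", baseline_shapley_2d(f, x, b))
print("Integrated Gradients:", integrated_gradients(f, x, b))
print("f(x) - f(b):         ", f(x) - f(b))  # both satisfy completeness
```

For this nondecreasing model and a baseline at or below the input, both attribution vectors come out nonnegative, which is the kind of consistency the paper's axioms are designed to test.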
Keywords: Explainable Machine Learning, Shapley Value, Integrated Gradients, Monotonic Models, Science-informed Machine Learning, General (Machine Learning in Finance)
Complexity vs Empirical Score
- Math Complexity: 8.5/10
- Empirical Rigor: 2.0/10
- Quadrant: Lab Rats
- Why: The paper introduces formal axioms and theoretical comparisons between Integrated Gradients and Baseline Shapley, with derivations and proofs that give it high mathematical density. However, the available excerpt and summary focus on theoretical properties, with no discussion of backtesting, specific datasets, code implementations, or statistical metrics, indicating low empirical rigor.
```mermaid
flowchart TD
    A["Research Goal<br/>How do XAI methods perform<br/>on monotonic science-informed models?"] --> B{"Methodology"}
    B --> C["Define 3 Monotonicity Axioms"]
    C --> D["Data: Synthetic & Financial Datasets"]
    D --> E["Models: Monotonicity-enforced<br/>ML (e.g., Neural Networks)"]
    E --> F["Process: Apply & Compare<br/>Shapley Values vs. Integrated Gradients"]
    F --> G{"Key Findings"}
    G --> H["Individual Monotonicity:<br/>Shapley Values perform well"]
    G --> I["Strong Pairwise Monotonicity:<br/>Integrated Gradients perform better"]
```
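The flowchart's first step ("Define 3 Monotonicity Axioms") presupposes a way to probe monotonicity of a fitted model. The axioms themselves are not reproduced in this summary, so the following is a hypothetical helper: a sampling-based check for individual monotonicity, the weakest of the three notions. The function name, box bounds, grid sizes, and tolerance are all illustrative assumptions.

```python
import numpy as np

def check_individual_monotonicity(f, lo, hi, feature,
                                  n_points=200, n_samples=200, seed=0):
    """Numerically probe whether f is nondecreasing in one feature over
    the box [lo, hi]: sweep that feature on a grid while holding random
    draws of the other features fixed. A heuristic check, not a proof."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    grid = np.linspace(lo[feature], hi[feature], n_points)
    for _ in range(n_samples):
        x = lo + rng.random(lo.shape) * (hi - lo)  # random context point
        vals = []
        for g in grid:
            x[feature] = g
            vals.append(f(x))
        if np.any(np.diff(vals) < -1e-9):  # tolerate float noise
            return False
    return True

# Toy model assumed monotone in both features (illustrative only)
f = lambda x: np.log1p(x[0]) + np.tanh(x[1])
print(check_individual_monotonicity(f, lo=[0, 0], hi=[5, 5], feature=0))
print(check_individual_monotonicity(f, lo=[0, 0], hi=[5, 5], feature=1))
```

Analogous checks for weak and strong pairwise monotonicity would compare responses (or attributions) across pairs of features rather than sweeping a single one.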