
Attribution Methods in Asset Pricing: Do They Account for Risk?

ArXiv ID: 2407.08953 · View on arXiv · Authors: Unknown

Abstract: Over the past few decades, machine learning models have been extremely successful. Axiomatic attribution methods have made it possible to explain feature contributions more clearly and rigorously. Few studies, however, have examined domain knowledge in conjunction with these axioms. In this study, we examine asset pricing in finance, a field closely tied to risk management, so when applying machine learning models we must ensure that the attribution methods reflect the underlying risks accurately. We present and study several axioms derived from asset pricing domain knowledge. We show that while the Shapley value and Integrated Gradients preserve most of these axioms, neither can satisfy all of them. Using extensive analytical and empirical examples, we demonstrate how attribution methods can reflect risks and when they should not be used. ...
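The abstract contrasts two standard attribution methods, the Shapley value and Integrated Gradients. As a rough illustration of the latter (not the paper's implementation), here is a minimal NumPy sketch of Integrated Gradients for a differentiable model; the gradient function `grad_f`, the linear two-factor "pricing model", and the chosen baseline are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def integrated_gradients(f_grad, x, baseline, steps=64):
    """Approximate Integrated Gradients attributions for a differentiable model.

    f_grad   : callable returning the gradient of the model output w.r.t. inputs
    x        : input point (1-D array)
    baseline : reference point x' (1-D array), e.g. all zeros
    steps    : number of points in the Riemann approximation of the path integral
    """
    alphas = np.linspace(0.0, 1.0, steps)
    # Average the gradient along the straight-line path from the baseline to x.
    avg_grad = np.mean(
        [f_grad(baseline + a * (x - baseline)) for a in alphas], axis=0
    )
    # Scale by the input difference; attributions sum (approximately) to f(x) - f(baseline).
    return (x - baseline) * avg_grad

# Hypothetical example: a linear two-factor model f(x) = w @ x.
w = np.array([0.8, 0.2])
grad_f = lambda x: w                      # the gradient of a linear model is constant
x = np.array([1.5, -0.5])
baseline = np.zeros_like(x)
print(integrated_gradients(grad_f, x, baseline))  # -> [ 1.2 -0.1], i.e. w * x for this linear case
```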

July 12, 2024 · 2 min · Research Team

Can I Trust the Explanations? Investigating Explainable Machine Learning Methods for Monotonic Models

ArXiv ID: 2309.13246 · View on arXiv · Authors: Unknown

Abstract: In recent years, explainable machine learning methods have been very successful. Despite this success, most explainable machine learning methods are applied to black-box models without any domain knowledge. By incorporating domain knowledge, science-informed machine learning models have demonstrated better generalization and interpretation. But do we obtain consistent scientific explanations if we apply explainable machine learning methods to science-informed machine learning models? We address this question in the context of monotonic models that exhibit three different types of monotonicity, and we propose three axioms to characterize them. We show that when only individual monotonicity is involved, the baseline Shapley value provides good explanations; when strong pairwise monotonicity is involved, the Integrated Gradients method provides reasonable explanations on average. ...
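The abstract highlights the baseline Shapley value as the method that behaves well under individual monotonicity. As a rough sketch of that method only (the paper's axioms and experiments are not reproduced here), the following computes exact baseline Shapley values for a small, hypothetical monotone toy model; the model `f`, the input, and the all-zeros baseline are assumptions for illustration.

```python
import numpy as np
from itertools import combinations
from math import factorial

def baseline_shapley(f, x, baseline):
    """Exact baseline Shapley values for a model with few features.

    A coalition S is valued by evaluating f with features in S taken from x
    and all remaining features taken from the baseline.
    """
    n = len(x)

    def v(S):
        z = baseline.copy()
        z[list(S)] = x[list(S)]
        return f(z)

    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Standard Shapley weight for a coalition of size k out of n players.
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += weight * (v(S + (i,)) - v(S))
    return phi

# Hypothetical monotone toy model: the prediction increases in both features.
f = lambda z: 2.0 * z[0] + np.exp(z[1])
x = np.array([1.0, 1.0])
baseline = np.zeros_like(x)
print(baseline_shapley(f, x, baseline))  # non-negative attributions for this monotone model
```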

September 23, 2023 · 2 min · Research Team