
DeFi TrustBoost: Blockchain and AI for Trustworthy Decentralized Financial Decisions

DeFi TrustBoost: Blockchain and AI for Trustworthy Decentralized Financial Decisions ArXiv ID: 2512.00142 “View on arXiv” Authors: Swati Sachan, Dale S. Fickett Abstract This research introduces the Decentralized Finance (DeFi) TrustBoost Framework, which combines blockchain technology and Explainable AI to address challenges faced by lenders underwriting small business loan applications from low-wealth households. The framework is designed with a strong emphasis on fulfilling four crucial requirements of blockchain and AI systems: confidentiality, compliance with data protection laws, resistance to adversarial attacks, and compliance with regulatory audits. It presents a technique for tamper-proof auditing of automated AI decisions and a strategy for on-chain (inside-blockchain) and off-chain data storage to facilitate collaboration within and across financial organizations. ...
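
The abstract does not spell out the auditing mechanism, but a minimal sketch of the general on-chain/off-chain pattern it alludes to is shown below, assuming a hash-anchored audit trail: the confidential decision record stays off-chain while only its SHA-256 fingerprint is appended to the ledger. The store, ledger, and record fields are illustrative stand-ins, not the TrustBoost implementation.

```python
import hashlib
import json
import time

# Hypothetical stand-ins: a dict as the off-chain document store and a list as the
# append-only on-chain ledger. A real deployment would use a database/IPFS and a
# smart contract, respectively.
off_chain_store = {}
on_chain_ledger = []

def record_ai_decision(application_id: str, decision: dict) -> str:
    """Store the full decision record off-chain and anchor its hash on-chain."""
    record = {
        "application_id": application_id,
        "decision": decision,          # e.g. model output plus its explanation
        "timestamp": time.time(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    digest = hashlib.sha256(payload).hexdigest()
    off_chain_store[digest] = record   # confidential data stays off-chain
    on_chain_ledger.append(digest)     # only the fingerprint is shared
    return digest

def audit(digest: str) -> bool:
    """An auditor recomputes the hash of the off-chain record and checks the ledger."""
    payload = json.dumps(off_chain_store[digest], sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest() == digest and digest in on_chain_ledger

# Example: log a loan decision and later verify it has not been altered.
d = record_ai_decision("app-001", {"approve": True, "top_feature": "cash_flow"})
assert audit(d)
```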

November 28, 2025 · 2 min · Research Team

Why Bonds Fail Differently? Explainable Multimodal Learning for Multi-Class Default Prediction

Why Bonds Fail Differently? Explainable Multimodal Learning for Multi-Class Default Prediction ArXiv ID: 2509.10802 “View on arXiv” Authors: Yi Lu, Aifan Ling, Chaoqun Wang, Yaxin Xu Abstract In recent years, China’s bond market has seen a surge in defaults amid regulatory reforms and macroeconomic volatility. Traditional machine learning models struggle to capture financial data’s irregularity and temporal dependencies, while most deep learning models lack the interpretability that is critical for financial decision-making. To tackle these issues, we propose EMDLOT (Explainable Multimodal Deep Learning for Time-series), a novel framework for multi-class bond default prediction. EMDLOT integrates numerical time-series (financial/macroeconomic indicators) and unstructured textual data (bond prospectuses), uses a Time-Aware LSTM to handle irregular sequences, and adopts soft clustering and multi-level attention to boost interpretability. Experiments on 1,994 Chinese firms (2015-2024) show EMDLOT outperforms traditional (e.g., XGBoost) and deep learning (e.g., LSTM) benchmarks in recall, F1-score, and mAP, especially in identifying default/extended firms. Ablation studies validate each component’s value, and attention analyses reveal economically intuitive default drivers. This work provides a practical tool and a trustworthy framework for transparent financial risk modeling. ...
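
A minimal sketch of a time-aware LSTM cell in the spirit of the one the abstract mentions, assuming the common T-LSTM design in which the short-term part of the cell memory is decayed by the elapsed time between observations; the layer sizes, decay function, and example inputs are illustrative, not EMDLOT's exact architecture.

```python
import math
import torch
import torch.nn as nn

class TimeAwareLSTMCell(nn.Module):
    """Minimal T-LSTM-style cell: the short-term component of the cell memory is
    discounted by the time gap since the previous observation, so irregularly
    sampled financial indicators are handled explicitly."""

    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        self.gates = nn.Linear(input_size + hidden_size, 4 * hidden_size)
        self.decomp = nn.Linear(hidden_size, hidden_size)  # extracts short-term memory

    def forward(self, x, h, c, delta_t):
        # delta_t: elapsed time since the previous observation, shape (batch, 1)
        c_short = torch.tanh(self.decomp(c))
        c_long = c - c_short
        decay = 1.0 / torch.log(math.e + delta_t)           # monotone decay in delta_t
        c_adj = c_long + decay * c_short                     # older info is down-weighted

        i, f, o, g = self.gates(torch.cat([x, h], dim=-1)).chunk(4, dim=-1)
        c_new = torch.sigmoid(f) * c_adj + torch.sigmoid(i) * torch.tanh(g)
        h_new = torch.sigmoid(o) * torch.tanh(c_new)
        return h_new, c_new

# Example: one step on a batch of 8 firms with 12 quarterly indicators,
# observed 2 quarters after the previous report.
cell = TimeAwareLSTMCell(input_size=12, hidden_size=32)
x = torch.randn(8, 12); h = torch.zeros(8, 32); c = torch.zeros(8, 32)
h, c = cell(x, h, c, delta_t=torch.full((8, 1), 2.0))
```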

September 13, 2025 · 2 min · Research Team

Bridging Human Cognition and AI: A Framework for Explainable Decision-Making Systems

Bridging Human Cognition and AI: A Framework for Explainable Decision-Making Systems ArXiv ID: 2509.02388 “View on arXiv” Authors: N. Jean, G. Le Pera Abstract Explainability in AI and ML models is critical for fostering trust, ensuring accountability, and enabling informed decision-making in high-stakes domains. Yet this objective is often unmet in practice. This paper proposes a general-purpose framework that bridges state-of-the-art explainability techniques with Malle’s five-category model of behavior explanation: Knowledge Structures, Simulation/Projection, Covariation, Direct Recall, and Rationalization. The framework is designed to be applicable across AI-assisted decision-making systems, with the goal of enhancing transparency, interpretability, and user trust. We demonstrate its practical relevance through real-world case studies, including credit risk assessment and regulatory analysis powered by large language models (LLMs). By aligning technical explanations with human cognitive mechanisms, the framework lays the groundwork for more comprehensible, responsible, and ethical AI systems. ...

September 2, 2025 · 2 min · Research Team

Explainable-AI powered stock price prediction using time series transformers: A Case Study on BIST100

Explainable-AI powered stock price prediction using time series transformers: A Case Study on BIST100 ArXiv ID: 2506.06345 “View on arXiv” Authors: Sukru Selim Calik, Andac Akyuz, Zeynep Hilal Kilimci, Kerem Colak Abstract Financial literacy is increasingly dependent on the ability to interpret complex financial data and utilize advanced forecasting tools. In this context, this study proposes a novel approach that combines transformer-based time series models with explainable artificial intelligence (XAI) to enhance the interpretability and accuracy of stock price predictions. The analysis focuses on the daily stock prices of the five highest-volume banks listed in the BIST100 index, along with XBANK and XU100 indices, covering the period from January 2015 to March 2025. Models including DLinear, LSTNet, Vanilla Transformer, and Time Series Transformer are employed, with input features enriched by technical indicators. SHAP and LIME techniques are used to provide transparency into the influence of individual features on model outputs. The results demonstrate the strong predictive capabilities of transformer models and highlight the potential of interpretable machine learning to empower individuals in making informed investment decisions and actively engaging in financial markets. ...
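
A minimal, model-agnostic sketch of the SHAP step described above, assuming any fitted forecaster with a predict() method; the stand-in model and synthetic features below are illustrative, not the paper's BIST100 data or transformer models.

```python
import numpy as np
import shap  # pip install shap
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical setup: X holds lagged prices and technical indicators (e.g. RSI,
# MACD) for one bank; y is the next-day closing price. Any fitted forecaster
# with a predict() method can replace the stand-in model below.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
y = 0.8 * X[:, 0] + 0.3 * X[:, 3] + rng.normal(scale=0.1, size=500)
model = GradientBoostingRegressor().fit(X, y)

# Model-agnostic SHAP: attributions are computed relative to a background sample.
background = shap.sample(X, 100)
explainer = shap.KernelExplainer(model.predict, background)
shap_values = explainer.shap_values(X[:5])   # per-feature attributions, shape (5, 6)
print(shap_values)
```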

June 1, 2025 · 2 min · Research Team

Case-based Explainability for Random Forest: Prototypes, Critics, Counter-factuals and Semi-factuals

Case-based Explainability for Random Forest: Prototypes, Critics, Counter-factuals and Semi-factuals ArXiv ID: 2408.06679 “View on arXiv” Authors: Unknown Abstract The explainability of black-box machine learning algorithms, commonly known as Explainable Artificial Intelligence (XAI), has become crucial for financial and other regulated industrial applications due to regulatory requirements and the need for transparency in business practices. Among the various paradigms of XAI, Explainable Case-Based Reasoning (XCBR) stands out as a pragmatic approach that elucidates the output of a model by referencing actual examples from the data used to train or test the model. Despite its potential, XCBR has been relatively underexplored for many algorithms such as tree-based models until recently. We start by observing that most XCBR methods are defined based on the distance metric learned by the algorithm. By utilizing a recently proposed technique to extract the distance metric learned by Random Forests (RFs), which is both geometry- and accuracy-preserving, we investigate various XCBR methods. These methods amount to identifying special points in the training dataset, such as prototypes, critics, counter-factuals, and semi-factuals, to explain the RF’s prediction for a given query. We evaluate these special points using various evaluation metrics to assess their explanatory power and effectiveness. ...
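
A simplified sketch of the case-based idea, assuming the classical leaf-co-occurrence (Breiman-style) proximity rather than the geometry- and accuracy-preserving metric the paper uses; the selection rules for the prototype, semi-factual, and counter-factual below are crude illustrative proxies.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)        # stand-in tabular dataset
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

train_leaves = rf.apply(X_tr)                      # (n_train, n_trees) leaf indices

def proximity(query):
    """Share of trees in which the query lands in the same leaf as each training case."""
    return (train_leaves == rf.apply(query.reshape(1, -1))).mean(axis=1)

query = X_te[0]
pred = rf.predict(query.reshape(1, -1))[0]
prox = proximity(query)

same, other = np.where(y_tr == pred)[0], np.where(y_tr != pred)[0]
prototype      = same[np.argmax(prox[same])]       # most similar case with the same outcome
semi_factual   = same[np.argmin(prox[same])]       # same outcome despite low similarity (crude proxy)
counterfactual = other[np.argmax(prox[other])]     # most similar case with the opposite outcome
print(prototype, semi_factual, counterfactual)
```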

August 13, 2024 · 2 min · Research Team

Why Groups Matter: Necessity of Group Structures in Attributions

Why Groups Matter: Necessity of Group Structures in Attributions ArXiv ID: 2408.05701 “View on arXiv” Authors: Unknown Abstract Explainable machine learning methods have seen substantial development. Despite their success, existing approaches focus on general-purpose frameworks that incorporate no prior domain expertise. High-stakes financial sectors have extensive domain knowledge of their features, so model explanations are expected to be consistent with that knowledge to ensure conceptual soundness. In this work, we study the group structures of features that arise naturally in financial datasets. Our study shows the importance of considering group structures that conform to regulations. When group structures are present, direct application of explainable machine learning methods such as Shapley values and Integrated Gradients may not provide consistent explanations; group versions of the Shapley value, by contrast, can. We include detailed examples that focus on the practical perspective of our framework. ...
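
A minimal sketch of a group Shapley value, assuming each domain-defined feature group is treated as one player and out-of-coalition groups are replaced by background means; the group names, toy model, and value function are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from itertools import combinations
from math import factorial

def group_shapley(predict, x, background, groups):
    """Exact Shapley values over feature groups for a single instance x."""
    means = background.mean(axis=0)
    names = list(groups)
    n = len(names)

    def value(coalition):
        z = means.copy()
        for g in coalition:
            z[groups[g]] = x[groups[g]]    # coalition groups keep their true values
        return predict(z.reshape(1, -1))[0]

    phi = {}
    for g in names:
        others = [o for o in names if o != g]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (value(S + (g,)) - value(S))
        phi[g] = total
    return phi

# Example with a toy linear "credit" model and three regulatory feature groups.
groups = {"leverage": [0, 1], "liquidity": [2, 3], "macro": [4]}
rng = np.random.default_rng(0)
background = rng.normal(size=(200, 5))
predict = lambda Z: Z @ np.array([0.5, 0.5, -0.3, -0.3, 0.1])
print(group_shapley(predict, rng.normal(size=5), background, groups))
```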

August 11, 2024 · 2 min · Research Team

Explainable AI in Request-for-Quote

Explainable AI in Request-for-Quote ArXiv ID: 2407.15038 “View on arXiv” Authors: Unknown Abstract In the contemporary financial landscape, accurately predicting the probability of filling a Request-For-Quote (RFQ) is crucial for improving market efficiency for less liquid asset classes. This paper explores the application of explainable AI (XAI) models to forecast the likelihood of RFQ fulfillment. By leveraging advanced algorithms including Logistic Regression, Random Forest, XGBoost and Bayesian Neural Tree, we are able to improve the accuracy of RFQ fill rate predictions and generate the most efficient quote price for market makers. XAI serves as a robust and transparent tool for market participants to navigate the complexities of RFQs with greater precision. ...
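
A minimal sketch of an RFQ fill-probability model, assuming a logistic-regression baseline on illustrative features (quoted spread, trade size, client hit ratio, time of day); the synthetic data and feature set are stand-ins, not the paper's dataset, and the paper also evaluates Random Forest, XGBoost, and Bayesian Neural Tree models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.normal(2.0, 1.0, n),     # quoted spread vs. mid (bps)
    rng.lognormal(1.0, 0.5, n),  # trade size (millions)
    rng.uniform(0, 1, n),        # historical client hit ratio
    rng.uniform(0, 8, n),        # hours since market open
])
# Synthetic ground truth: tighter quotes and higher hit ratios fill more often.
p = 1 / (1 + np.exp(0.8 * X[:, 0] - 2.5 * X[:, 2]))
y = rng.binomial(1, p)

model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)
fill_prob = model.predict_proba(X[:3])[:, 1]   # probability that each RFQ is filled
print(fill_prob.round(3))
```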

July 21, 2024 · 2 min · Research Team

Explainable Automated Machine Learning for Credit Decisions: Enhancing Human Artificial Intelligence Collaboration in Financial Engineering

Explainable Automated Machine Learning for Credit Decisions: Enhancing Human Artificial Intelligence Collaboration in Financial Engineering ArXiv ID: 2402.03806 “View on arXiv” Authors: Unknown Abstract This paper explores the integration of Explainable Automated Machine Learning (AutoML) in the realm of financial engineering, specifically focusing on its application in credit decision-making. The rapid evolution of Artificial Intelligence (AI) in finance has necessitated a balance between sophisticated algorithmic decision-making and the need for transparency in these systems. The focus is on how AutoML can streamline the development of robust machine learning models for credit scoring, while Explainable AI (XAI) methods, particularly SHapley Additive exPlanations (SHAP), provide insights into the models’ decision-making processes. This study demonstrates how the combination of AutoML and XAI not only enhances the efficiency and accuracy of credit decisions but also fosters trust and collaboration between humans and AI systems. The findings underscore the potential of explainable AutoML in improving the transparency and accountability of AI-driven financial decisions, aligning with regulatory requirements and ethical considerations. ...
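
A minimal sketch of the pipeline the abstract describes, assuming a hyperparameter search stands in for the AutoML step (a full AutoML tool would also search across model families) and SHAP's TreeExplainer explains the selected model; the synthetic data is a stand-in for real credit applications.

```python
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

# Stand-in credit-scoring data: features would be income, utilization, history, etc.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# Automated model selection (simplified AutoML stand-in): search hyperparameters by AUC.
search = GridSearchCV(
    GradientBoostingClassifier(random_state=0),
    {"n_estimators": [100, 300], "max_depth": [2, 3], "learning_rate": [0.05, 0.1]},
    scoring="roc_auc", cv=5,
).fit(X, y)
best_model = search.best_estimator_

# SHAP attributions for individual credit decisions made by the selected model.
explainer = shap.TreeExplainer(best_model)
shap_values = explainer.shap_values(X[:5])     # per-applicant, per-feature contributions
print(search.best_params_, shap_values.shape)
```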

February 6, 2024 · 2 min · Research Team

Enhanced Local Explainability and Trust Scores with Random Forest Proximities

Enhanced Local Explainability and Trust Scores with Random Forest Proximities ArXiv ID: 2310.12428 “View on arXiv” Authors: Unknown Abstract We introduce a novel approach to explain the predictions and out-of-sample performance of random forest (RF) regression and classification models by exploiting the fact that any RF can be mathematically formulated as an adaptive weighted K nearest-neighbors model. Specifically, we employ a recent result that, for both regression and classification tasks, any RF prediction can be rewritten exactly as a weighted sum of the training targets, where the weights are RF proximities between the corresponding pairs of data points. We show that this linearity facilitates a local notion of explainability of RF predictions that generates attributions for any model prediction across observations in the training set, and thereby complements established feature-based methods like SHAP, which generate attributions for a model prediction across input features. We show how this proximity-based approach to explainability can be used in conjunction with SHAP to explain not just the model predictions, but also out-of-sample performance, in the sense that proximities furnish a novel means of assessing when a given model prediction is more or less likely to be correct. We demonstrate this approach in the modeling of US corporate bond prices and returns in both regression and classification cases. ...
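
A minimal sketch of the linearity the abstract describes; for simplicity bootstrapping is disabled so that a classical leaf-co-occurrence proximity reproduces the regression predictions exactly, whereas the proximities referenced in the abstract establish the identity without this restriction. The data and model settings are illustrative.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=300, n_features=8, noise=5.0, random_state=0)
X_test = X[:5]

rf = RandomForestRegressor(n_estimators=100, bootstrap=False, random_state=0).fit(X, y)

train_leaves = rf.apply(X)        # (n_train, n_trees) leaf indices
test_leaves = rf.apply(X_test)    # (n_test,  n_trees)

n_test, n_trees = test_leaves.shape
weights = np.zeros((n_test, len(y)))
for t in range(n_trees):
    same_leaf = train_leaves[:, t][None, :] == test_leaves[:, t][:, None]   # (n_test, n_train)
    weights += same_leaf / same_leaf.sum(axis=1, keepdims=True)             # normalize per leaf
weights /= n_trees

# The proximity-weighted sum of training targets reproduces the forest's predictions.
assert np.allclose(weights @ y, rf.predict(X_test))
# Each row of `weights` attributes a prediction to individual training cases.
print(np.argsort(weights[0])[::-1][:5])   # the 5 most influential training observations
```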

October 19, 2023 · 2 min · Research Team

Explaining AI in Finance: Past, Present, Prospects

Explaining AI in Finance: Past, Present, Prospects ArXiv ID: 2306.02773 “View on arXiv” Authors: Unknown Abstract This paper explores the journey of AI in finance, with a particular focus on the crucial role and potential of Explainable AI (XAI). We trace AI’s evolution from early statistical methods to sophisticated machine learning, highlighting XAI’s role in popular financial applications. The paper underscores the superior interpretability of methods like Shapley values compared to traditional linear regression in complex financial scenarios. It emphasizes the necessity of further XAI research, given forthcoming EU regulations. The paper demonstrates, through simulations, that XAI enhances trust in AI systems, fostering more responsible decision-making within finance. ...

June 5, 2023 · 2 min · Research Team