
Responsible LLM Deployment for High-Stake Decisions by Decentralized Technologies and Human-AI Interactions

ArXiv ID: 2512.04108
View on arXiv
Authors: Swati Sachan, Theo Miller, Mai Phuong Nguyen

Abstract: High-stakes decision domains are increasingly exploring the potential of Large Language Models (LLMs) for complex decision-making tasks. However, LLM deployment in real-world settings presents challenges in data security, in evaluating model capabilities outside controlled environments, and in attributing accountability in the event of adversarial decisions. This paper proposes a framework for the responsible deployment of LLM-based decision-support systems through active human involvement. It integrates interactive collaboration between human experts and developers over multiple pre-deployment iterations to assess uncertain samples and judge the stability of explanations produced by post-hoc XAI techniques. Local LLM deployment within organizations, combined with decentralized technologies such as Blockchain and IPFS, is proposed to create immutable records of LLM activities for automated auditing, enhancing security and enabling accountability to be traced. The framework was tested on BERT-large-uncased, Mistral, and LLaMA 2 and 3 models to assess its capability to support responsible financial decisions on business lending. ...
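
The audit-trail idea can be sketched concretely. Below is a minimal, hypothetical Python illustration of how blockchain-style hash chaining and IPFS-style content addressing could make LLM decision records tamper-evident; the `AuditChain` class, its methods, and the record schema are assumptions for demonstration, not the paper's implementation.

```python
import hashlib
import json
import time

# Hypothetical sketch: content addressing (as IPFS uses) is approximated with
# SHA-256 digests, and blockchain-style immutability with a hash chain.
# All names and record fields are illustrative, not the paper's schema.

def content_address(record: dict) -> str:
    """Deterministic SHA-256 digest of a record (stand-in for an IPFS CID)."""
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

class AuditChain:
    """Append-only chain of LLM activity records; edits break later hashes."""

    def __init__(self):
        self.blocks = []

    def append(self, model: str, prompt: str, decision: str) -> str:
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        record = {
            "timestamp": time.time(),
            "model": model,
            "prompt": prompt,
            "decision": decision,
            "prev_hash": prev_hash,
        }
        record["hash"] = content_address(record)
        self.blocks.append(record)
        return record["hash"]

    def verify(self) -> bool:
        """Recompute every digest; any edited block invalidates the chain."""
        prev_hash = "0" * 64
        for block in self.blocks:
            body = {k: v for k, v in block.items() if k != "hash"}
            if block["prev_hash"] != prev_hash or content_address(body) != block["hash"]:
                return False
            prev_hash = block["hash"]
        return True

chain = AuditChain()
chain.append("llama-3", "Assess business loan application", "approve")
assert chain.verify()
```

The design point is that any retroactive edit to a logged decision changes its digest and invalidates every subsequent block, which is what makes such a trail usable for automated auditing and accountability tracing.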

November 28, 2025 · 2 min · Research Team

Bridging Human Cognition and AI: A Framework for Explainable Decision-Making Systems

ArXiv ID: 2509.02388
View on arXiv
Authors: N. Jean, G. Le Pera

Abstract: Explainability in AI and ML models is critical for fostering trust, ensuring accountability, and enabling informed decision-making in high-stakes domains. Yet this objective is often unmet in practice. This paper proposes a general-purpose framework that bridges state-of-the-art explainability techniques with Malle's five-category model of behavior explanation: Knowledge Structures, Simulation/Projection, Covariation, Direct Recall, and Rationalization. The framework is designed to be applicable across AI-assisted decision-making systems, with the goal of enhancing transparency, interpretability, and user trust. We demonstrate its practical relevance through real-world case studies, including credit risk assessment and regulatory analysis powered by large language models (LLMs). By aligning technical explanations with human cognitive mechanisms, the framework lays the groundwork for more comprehensible, responsible, and ethical AI systems. ...
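
To make the alignment concrete, here is a small, hypothetical Python sketch pairing each of Malle's five categories with a commonly used post-hoc XAI technique. The specific pairings are illustrative assumptions, since the paper's own mapping is not reproduced in this summary.

```python
# Illustrative mapping only: each Malle category of behavior explanation is
# paired with a candidate XAI technique. The pairings are assumptions for
# demonstration, not the authors' framework verbatim.

MALLE_CATEGORY_TO_TECHNIQUE = {
    "Knowledge Structures": "global surrogate models / rule extraction",
    "Simulation/Projection": "counterfactual explanations",
    "Covariation": "feature attribution (e.g., SHAP, LIME)",
    "Direct Recall": "example-based explanations (nearest prototypes)",
    "Rationalization": "LLM-generated natural-language justifications",
}

def explain(category: str) -> str:
    """Look up a candidate technique for a requested explanation style."""
    technique = MALLE_CATEGORY_TO_TECHNIQUE.get(category)
    if technique is None:
        raise KeyError(f"Unknown Malle category: {category!r}")
    return f"{category} -> {technique}"

if __name__ == "__main__":
    for category in MALLE_CATEGORY_TO_TECHNIQUE:
        print(explain(category))
```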

September 2, 2025 · 2 min · Research Team