Numerous novel explanation techniques have been developed for explainable AI (XAI). How can developers choose which technique to implement for different users and use cases? Which explanations are more suitable for specific user goals? We present the XAI Framework of Reasoned Explanations to guide the choice of XAI features based on user goals and on human reasoning methods and biases.
The framework describes how people reason rationally ① and heuristically, subject to cognitive biases ③, and how XAI facilities support specific rational reasoning processes ② and can be designed to target decision errors ④. It identifies pathways between human reasoning and XAI facilities that help organize explanations and reveal gaps where new explanations can be developed for unmet reasoning needs.
The conceptual XAI Framework of Reasoned Explanations describes how human reasoning processes (left) inform XAI techniques (right).
Each bullet point describes a different construct: theories of reasoning, XAI techniques, and strategies for designing XAI. Its nested items list the specific elements of that construct.
Arrows indicate pathway connections: red arrows for how theories of human reasoning inform XAI features, and grey arrows for inter-relations between different reasoning processes and associations between XAI features.
For example, hypothetico-deductive reasoning can be disrupted by System 1 thinking, leading to confirmation bias (grey arrow). Confirmation bias can be mitigated (follow the red arrow) by presenting information about the prior probability or input attributions. In turn, input attributions can be implemented as lists and visualized with tornado plots (follow the grey arrow).
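To make these pathways concrete, here is a minimal sketch (not from the paper) of how a developer might encode them as simple lookup tables in Python. The table and function names are assumptions, and the entries cover only the confirmation-bias example above.

```python
# Hypothetical encoding of the framework's pathways (illustration only).
# Red arrows: reasoning errors -> XAI facilities that can mitigate them.
MITIGATIONS = {
    "confirmation_bias": ["prior_probability", "input_attributions"],
}

# Grey arrows: XAI facilities -> presentation formats they can be implemented with.
PRESENTATIONS = {
    "input_attributions": ["list", "tornado_plot"],
}

def explanations_for(bias):
    """Return candidate XAI facilities for a reasoning error, with presentation options."""
    return {facility: PRESENTATIONS.get(facility, [])
            for facility in MITIGATIONS.get(bias, [])}

print(explanations_for("confirmation_bias"))
# {'prior_probability': [], 'input_attributions': ['list', 'tornado_plot']}
```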
Click on an XAI feature in the chart (below) and/or a GUI module to see which explanation is implemented with which application feature.
Screenshot of the AI-driven medical diagnosis tool with explanation sketches showing a patient with high predicted risk of acute myocardial infarction (AMI), heart disease, diabetes with complications, shock, etc.
Interpretation: for example, the explanations suggest that the AI predicts shock because the patient has low oxygen saturation and low blood pressure.
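As a hedged sketch of how the tornado-plot implementation mentioned earlier could render this explanation, the snippet below plots attribution values with matplotlib. The feature names and numbers are invented for illustration; in practice the values would come from the model's attribution method.

```python
import matplotlib.pyplot as plt

# Hypothetical input attributions for the "shock" prediction (illustrative values only).
attributions = {
    "Oxygen saturation (low)": 0.42,   # pushes the prediction toward shock
    "Blood pressure (low)":    0.35,   # pushes the prediction toward shock
    "Heart rate":              0.10,
    "Age":                    -0.05,   # pushes the prediction away from shock
}

# Sort by absolute contribution so the largest bars appear at the top (tornado shape).
features, values = zip(*sorted(attributions.items(), key=lambda kv: abs(kv[1])))

plt.barh(features, values,
         color=["tab:red" if v > 0 else "tab:blue" for v in values])
plt.axvline(0, color="black", linewidth=0.8)
plt.xlabel("Attribution toward predicted risk of shock")
plt.title("Input attributions as a tornado plot (illustrative)")
plt.tight_layout()
plt.show()
```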
XAI application developers can use the framework as follows: identify the user's reasoning goals and the biases likely to arise, follow the red arrows to the XAI facilities that support that reasoning or mitigate those biases, and follow the grey arrows to choose how the selected explanations can be implemented and visualized.
Furthermore, XAI researchers can extend the framework by incorporating additional theories of reasoning, reasoning errors, and XAI techniques. For example, Informal Logic could be integrated into the reasoning theories ① and informal fallacies into the reasoning errors ③.
Wang, D., Yang, Q., Abdul, A., and Lim, B. Y. 2019. Designing Theory-Driven User-Centric Explainable AI. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI '19).
Lim, B. Y., Yang, Q., Abdul, A., and Wang, D. 2019. Why these Explanations? Selecting Intelligibility Types for Explanation Goals. In IUI 2019 Second Workshop on Explainable Smart Systems (ExSS 2019).
Lim, B. Y. 2019. Handout for XAI Framework of Reasoned Explanations. In IUI 2019 Second Workshop on Explainable Smart Systems (ExSS 2019).