Designing Theory-Driven User-Centric Explainable AI: Interactive Tutorial

Numerous novel explanation techniques have been developed for explainable AI (XAI). How can developers choose which technique to implement for various users and use cases? Which explanations are more suitable for specific user goals? We present the XAI Framework of Reasoned Explanations to guide the choice of XAI features based on user goals and on human reasoning methods and biases.

The framework describes how people reason rationally ① and heuristically, subject to cognitive biases ③, and how XAI facilities support specific rational reasoning processes ② and can be designed to target decision errors ④. The framework identifies pathways between human reasoning and XAI facilities that help organize explanations and identify gaps where new explanations can be developed for unmet reasoning needs.

ActiVis [Kahng], Bayesian Rule Lists [Letham 2015], GA2M [Caruana 2015], Grad-CAM [Selvaraju 2016], LIME [Ribeiro 2016], LRP [Lapuschkin 2016], Influence Functions [Koh 2017], Integrated Gradients [Sundararajan 2017],
Intelligibility Question Types [Lim 2009, Lim 2010], Interpretable Decision Sets [Lakkaraju], MMD-Critic [Kim], SHAP [Lundberg 2017], TCAV [Kim 2018], etc.
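
As an illustration of one family of techniques from this list, the following is a minimal, self-contained sketch of a perturbation-based input attribution in the spirit of LIME and SHAP (it does not use those libraries' actual APIs); the synthetic data, classifier, and baseline values are placeholders.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

def perturbation_attributions(predict_proba, x, baseline, target_class):
    """Attribute a prediction to each feature by replacing one feature at a time
    with a baseline value and measuring the drop in the predicted probability of
    the target class (a crude, LIME/SHAP-inspired stand-in)."""
    base_score = predict_proba(x.reshape(1, -1))[0, target_class]
    attributions = np.zeros(len(x))
    for i in range(len(x)):
        x_perturbed = np.array(x, dtype=float)
        x_perturbed[i] = baseline[i]                      # occlude feature i
        score = predict_proba(x_perturbed.reshape(1, -1))[0, target_class]
        attributions[i] = base_score - score              # contribution of feature i
    return attributions

# Illustrative usage on synthetic data with a scikit-learn classifier.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)
attrs = perturbation_attributions(model.predict_proba, X[0],
                                  baseline=X.mean(axis=0), target_class=1)
print(np.round(attrs, 3))
```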

The conceptual XAI Framework of Reasoned Explanations describes how human reasoning processes (left) inform XAI techniques (right).

Each top-level bullet describes a different construct: theories of reasoning, XAI techniques, or strategies for designing XAI. The nested items list the specific elements of each construct.

**Click on a construct or element word to reveal pathways connecting elements between quadrants.**

This will reveal pathways between framework quadrants and also show how the selected explanation is implemented in the example visualization.

Arrows indicate pathway connections: red arrows for how theories of human reasoning inform XAI features, and grey arrows for inter-relations between different reasoning processes and associations between XAI features.
For example, hypothetico-deductive reasoning can be interfered with by System 1 thinking, leading to confirmation bias (grey arrow). Confirmation bias can be mitigated (follow the red arrow) by presenting information about the prior probability or input attributions. Next, we can see that input attributions can be implemented as lists and visualized using tornado plots (follow the grey arrow).
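
To make the end of that example pathway concrete, the following is a minimal matplotlib sketch of a tornado plot presenting a list of input attributions; the feature names and attribution values are invented for illustration.

```python
import matplotlib.pyplot as plt

# Invented input attributions for one prediction: positive values push towards
# the predicted diagnosis, negative values push against it.
features = ["oxygen saturation", "blood pressure", "heart rate", "age", "temperature"]
attributions = [0.42, 0.31, -0.12, 0.08, -0.05]

# Sort by absolute magnitude so the largest contributions appear at the top,
# giving the characteristic tornado shape.
order = sorted(range(len(features)), key=lambda i: abs(attributions[i]), reverse=True)
labels = [features[i] for i in order]
values = [attributions[i] for i in order]

fig, ax = plt.subplots()
ax.barh(labels, values)
ax.invert_yaxis()             # largest bar at the top
ax.axvline(0, linewidth=1)    # zero-contribution reference line
ax.set_xlabel("attribution to predicted risk")
ax.set_title("Tornado plot of input attributions (illustrative)")
plt.tight_layout()
plt.show()
```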

XAI Framework of Reasoned Explanations

Example Application: Medical Diagnosis

Understanding Users informs Explaining AI

How People should Reason and Explain

  • Explanation goals
    • filter causes
    • generalize and learn
    • predict and control
    • moderate trust
  • Inquiry and reasoning
    • induction
    • analogy
    • deduction
    • abduction
    • hypothetico-deductive model
  • Causality
    • contrastive
    • counterfactual
    • attribution
  • Rational choice decisions
    • probability
    • risk
    • expected utility

How People actually Reason with Errors

  • Dual process model
    • system 1 thinking (fast, heuristic)
    • system 2 thinking (slow, rational)
  • System 1 heuristic biases
    • representativeness
    • availability
    • anchoring
    • confirmation
  • System 2 weaknesses
    • lack of knowledge
    • misattributed trust

How XAI Generates Explanations

  • Bayesian probability
    • prior
    • conditional
    • posterior
  • Similarity modeling
    • clustering
    • classification
    • rule boundaries
    • dimensionality reduction
  • Intelligibility queries
    • what
    • inputs
    • outputs
    • certainty
    • why
    • why not
    • what if
    • how to
  • XAI elements
    • attribution
    • name
    • value
    • clause
    • instance
  • Data structures
    • lists
    • rules
    • trees
    • graphs
    • objects
  • Visualizations
    • tornado plot
    • saliency heatmap
    • partial dependence plot

How XAI Mitigates Reasoning Errors

  • Mitigate representativeness bias
    • similar prototype
    • input attributions
    • contrastive
  • Mitigate availability bias
    • prior probability
  • Mitigate anchoring bias
    • input attributions
    • contrastive
  • Mitigate confirmation bias
    • prior probability
    • input attributions
  • Moderate trust
    • transparency
    • posterior certainty
    • scrutable contrasts
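
To illustrate the prior and posterior probability elements referenced above (under Bayesian probability, availability bias, and confirmation bias), here is a small worked Bayes' theorem sketch; the disease prevalence and test characteristics are invented numbers, not clinical figures.

```python
# Bayes' theorem: P(disease | positive) = P(positive | disease) * P(disease) / P(positive)
prior = 0.02           # P(disease): base rate in the patient population (invented)
sensitivity = 0.90     # P(positive | disease) (invented)
false_positive = 0.10  # P(positive | no disease) (invented)

evidence = sensitivity * prior + false_positive * (1 - prior)   # P(positive)
posterior = sensitivity * prior / evidence                      # P(disease | positive)

print(f"prior     P(disease)            = {prior:.2f}")
print(f"posterior P(disease | positive) = {posterior:.2f}")    # ~0.16
# Presenting the low prior alongside the posterior counters the tendency to
# over-weight a single salient, positive test result.
```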

Click on an XAI feature in the chart (below) and/or a GUI module to see which explanation is implemented with which application feature.

Screenshot of the AI-driven medical diagnosis tool with explanation sketches showing a patient with high predicted risk of acute myocardial infarction (AMI), heart disease, diabetes with complications, shock, etc.

Selected Pathways

Medical Example

Explanations include:

Interpretation: e.g., the explanations suggest that the AI thinks the patient is in shock because of low oxygen saturation and low blood pressure.
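
As a sketch of how such an interpretation could be rendered in a user interface, the snippet below templates a "why" explanation from the top input attributions; the diagnosis, feature names, attribution values, and wording are hypothetical.

```python
# Hypothetical attributions for the "shock" prediction, largest first:
# (feature, qualitative direction, attribution weight)
top_attributions = [("oxygen saturation", "low", 0.42),
                    ("blood pressure", "low", 0.31)]

def why_explanation(diagnosis, attributions, k=2):
    """Template a natural-language 'why' explanation from the k largest attributions."""
    reasons = [f"{direction} {feature}" for feature, direction, _ in attributions[:k]]
    return f"The AI predicts {diagnosis} mainly because of {' and '.join(reasons)}."

print(why_explanation("shock", top_attributions))
# -> The AI predicts shock mainly because of low oxygen saturation and low blood pressure.
```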

Procedure to use the Framework

XAI application developers can use the framework as follows:

  1. Consider the users' reasoning goals ① and biases ③ for the target application. These can be identified through literature review, ethnography, participatory design, etc.
  2. Next, identify which explanations help reasoning goals ② or reduce cognitive biases ④ using pathways in the framework (Figure 2, red arrows).
  3. Finally, integrate these XAI facilities to create explainable UIs.

Furthermore, XAI researchers can extend the framework by

  1. Examining new XAI facilities ② to understand how they are inspired by, depend on, or are built from reasoning theories ①, and
  2. Identifying common biases and reasoning errors ③ related to reasoning theories ①, and then identifying appropriate mitigation strategies ④ to select specific XAI facilities ②.

For example, Informal Logic could be integrated into reasoning theories ① and informal fallacies into reasoning errors ③.

Glossary (some terms)

Further reading

Wang, D., Yang, Q., Abdul, A., and Lim, B. Y. 2019. Designing Theory-Driven User-Centric Explainable AI. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19).

Lim, B. Y., Yang, Q., Abdul, A. and Wang, D. 2019. Why these Explanations? Selecting Intelligibility Types for Explanation Goals. In IUI 2019 Second Workshop on Explainable Smart Systems (ExSS 2019).

Lim, B. Y. 2019. Handout for XAI Framework of Reasoned Explanations. In IUI 2019 Second Workshop on Explainable Smart Systems (ExSS 2019).