As AI becomes more sophisticated, it is poised to play a growing role in healthcare by helping doctors make faster diagnoses. However, before clinicians can trust these automated decision-making systems, they need to understand how the underlying models reach their decisions. We are working with clinical collaborators to develop Explainable AI for medical decision making that is grounded in how clinicians actually reason. We draw on the literature on medical decision making, conduct user elicitation studies, and evaluate with trained clinicians to derive user requirements for automatically generated explanations.

Relevant papers:

  • Wang, D., Yang, Q., Abdul, A., and Lim, B. Y. 2019. Designing Theory-Driven User-Centric Explainable AI. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19).
  • Lim, B. Y., Wang, D., Loh, T. P., and Ngiam, K. Y. 2018. Interpreting Intelligibility under Uncertain Data Imputation. In ACM IUI 2018 Workshop on Explainable Smart Systems (ExSS 2018).