Explainable AI through Rule-based Interactive Conversation

Christian Werner
Christian.Werner@viadee.de
ABSTRACT
This work-in-progress paper proposes ERIC, a rule-based, interactive and conversational agent for explainable AI (XAI). It draws on research from XAI, human-computer interaction and social science to provide selected, personalized and interactive explanations.

KEYWORDS
explainable, artificial intelligence, conversational, agent

1 INTRODUCTION
Artificial intelligence (AI) now has a ubiquitous impact on our lives. This involves product recommendations, risk assessment and systems that are essential for people's survival, such as medical diagnosis systems. Especially when such critical decisions are made by a system, the question arises why and how it arrived at a specific decision [3]. The problem is that many of the underlying algorithms of such systems appear as a black box to the user and therefore lack transparency [1]. This is the driver for the research field of explainable AI (XAI), which provides a set of methods that can be used to describe the behaviour of a machine learning (ML) model and thereby provide a certain degree of transparency [1]. Current research focuses on the development of new and mostly isolated XAI methods, such as Surrogate Models, Partial Dependence Plots, or Accumulated Local Effects, rather than on what makes up a good overall approach to explaining a model's behaviour to the user [10]. The research question is how the results of such methods can be used to answer the questions humans have about ML decision making. This work-in-progress paper introduces a new XAI system called ERIC, a Rule-based, Interactive and Conversational agent for Explainable AI. ERIC applies popular XAI methods to an ML model to extract knowledge that is stored within a rule-based system. A user can communicate with ERIC through a chat-like conversational interface and receive appropriate explanations of the ML model's reasoning behaviour. The system is specifically targeted at domain experts and seeks to provide everyday explanations. It combines insights from the research fields of AI, human-computer interaction and social science [12]. Unlike existing related conversational systems (e.g. the Iris agent for performing data science tasks [2] or the LAKSA agent for explaining context-aware applications [8]), ERIC focuses on explaining ML models.

2 METHODOLOGY
The research proposed in this paper follows a Design Science Research (DSR) approach that aims to iteratively elaborate requirements, implement them and test them with real users. Requirements are drawn from theoretical investigations in the literature, existing solution approaches and findings from user experiments.

3 PRELIMINARY RESULTS
System goals. Trust is essential when humans communicate with a system and is a key driver for XAI [12]. However, to generate trust, an XAI system must first and foremost provide transparency regarding its decision-making process [13]. Furthermore, the system must present information in an understandable manner and avoid inconsistencies within the information it presents [15].

Intelligibility types. Intelligibility types describe a set of intelligible elements which form a query paradigm derived from questions that users of intelligent systems often ask [6]. Results from various experiments hint that these question-answer constructs help users build mental models of a system and thereby develop a certain level of trust in the system's reasoning [9], [5]. Among others, ERIC implements the following intelligibility types: Why, Why-not, What-if and How-to. Suitable explanations such as rule-based explanations, feature attributions and counterfactual explanations are used as output.

Provide selected explanations. Selecting the right explanation for a given context is one of the major challenges for an XAI agent. Not every explanation type is suitable for answering a user-issued question, and not every XAI method is applicable in every situation [7]. Thus, ERIC includes specific domain knowledge about when to present which type of explanation, based on contextual factors.

Provide personalized explanations. Explanations provided to a user must be tailored to the specific needs and interests of that user. This involves the complexity of the explanations (number of elements), the prioritization of information (which elements are important for the user) and the presentation format (textual vs. visual) [14]. ERIC seeks to personalize explanations by extracting preferences from user actions and by direct information elicitation.

Provide interactive explanations. One of the main insights about explanations from social science is that an explanation naturally happens in an interactive conversation [11]. Hence, a user should be able to actively explore the underlying ML model as a continuous process. By doing so, the user can develop trust in the system step by step [4]. ERIC implements a dialogue model that enables the user to iteratively query different types of information. The presentation of an explanation is never an end point and always allows for further inquiries.
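To make the combination of intelligibility types and selected, personalized explanations described above more concrete, the following Python sketch shows one way such a selection could look. It is a hypothetical illustration only, not ERIC's actual code: the names (Context, SELECTION_RULES, explain) and the particular question-to-explanation mapping are assumptions made for this example.

```python
# Hypothetical sketch: mapping intelligibility types to explanation
# strategies, shaped by a few personalization factors. Not ERIC's code.
from dataclasses import dataclass


@dataclass
class Context:
    """Contextual and personalization factors a selection rule may consider."""
    question_type: str            # "why", "why-not", "what-if" or "how-to"
    prefers_visual: bool = False  # presentation format (textual vs. visual)
    max_elements: int = 3         # explanation complexity (number of elements)


def rule_based(ctx: Context) -> str:
    return f"The decision follows a rule with at most {ctx.max_elements} conditions."


def feature_attribution(ctx: Context) -> str:
    fmt = "bar chart" if ctx.prefers_visual else "ranked list"
    return f"The top {ctx.max_elements} contributing features, shown as a {fmt}."


def counterfactual(ctx: Context) -> str:
    return "The smallest feature change that would flip the prediction."


# Illustrative domain knowledge: which explanation answers which question type.
SELECTION_RULES = {
    "why": rule_based,
    "why-not": counterfactual,
    "what-if": feature_attribution,
    "how-to": counterfactual,
}


def explain(ctx: Context) -> str:
    """Select and render an explanation for a user-issued question."""
    strategy = SELECTION_RULES.get(ctx.question_type)
    if strategy is None:
        return "I cannot answer that type of question yet."
    return strategy(ctx)


print(explain(Context(question_type="why-not", prefers_visual=True)))
```

Because the result of an explanation is never an end point, a real dialogue model would feed the user's follow-up question back into a new Context and repeat the selection.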
4 STATE AND FUTURE DIRECTIONS
A first prototype of ERIC has been implemented using the rule-based programming language CLIPS and a Python interface, with promising results. The prototype allows for a basic interaction about a Python-based ML model using the proposed intelligibility types. Further requirements need to be elaborated and implemented to specify ERIC's capabilities in more detail. User testing is essential to validate the effectiveness of ERIC and is still pending. A publicly available online prototype is planned.
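As a minimal illustration of how such a CLIPS rule base can be driven from Python, the sketch below uses the open-source clipspy binding (installed via `pip install clipspy`, imported as `clips`). It is an assumed setup for illustration only, not the actual ERIC prototype; the templates, rule and slot names are hypothetical.

```python
# Hypothetical sketch of a Python-to-CLIPS round trip using clipspy.
import clips

env = clips.Environment()

# Templates for user questions and agent answers (hypothetical names).
env.build("(deftemplate question (slot type))")
env.build("(deftemplate answer (slot explanation))")

# A toy piece of explanation-selection knowledge expressed as a CLIPS rule:
# a why-not question is answered with a counterfactual explanation.
env.build("""
(defrule select-counterfactual
  (question (type why-not))
  =>
  (assert (answer (explanation counterfactual))))
""")

# Python side: assert the parsed user question, run the rule engine,
# then read back the derived answer facts.
env.assert_string("(question (type why-not))")
env.run()

for fact in env.facts():
    if fact.template.name == "answer":
        print(fact)  # e.g. (answer (explanation counterfactual))
```

In this split, Python would host the ML model and the XAI methods, while the CLIPS side holds the extracted knowledge and the dialogue rules that decide which explanation to present next.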
© 2020 Copyright for this paper by its author(s). Published in the Workshop Proceedings of the EDBT/ICDT 2020 Joint Conference (March 30-April 2, 2020, Copenhagen, Denmark) on CEUR-WS.org. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

REFERENCES
 [1] A. Adadi and M. Berrada. 2018. Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI). IEEE Access 6 (2018), 52138–52160. https://doi.org/10.1109/ACCESS.2018.2870052
 [2] Ethan Fast, Binbin Chen, Julia Mendelsohn, Jonathan Bassen, and Michael S
     Bernstein. 2018. Iris: A conversational agent for complex tasks. In Proceedings
     of the 2018 CHI Conference on Human Factors in Computing Systems. ACM,
     473.
 [3] Leilani Gilpin, David Bau, Ben Z. Yuan, Ayesha Bajwa, Michael Specter, and
     Lalana Kagal. 2018. Explaining Explanations: An Overview of Interpretability
     of Machine Learning. In 2018 IEEE 5th International Conference on Data Science
     and Advanced Analytics (DSAA). 80–89. https://doi.org/10.1109/DSAA.2018.00018
 [4] Robert R Hoffman, Gary Klein, and Shane T Mueller. 2018. Explaining Explana-
     tion For “Explainable AI”. In Proceedings of the Human Factors and Ergonomics
     Society Annual Meeting, Vol. 62. SAGE Publications Sage CA: Los Angeles, CA,
     197–201.
 [5] Todd Kulesza, Simone Stumpf, Margaret Burnett, Sherry Yang, Irwin Kwan,
     and Weng-Keen Wong. 2013. Too Much, Too Little, or Just Right? Ways
     Explanations Impact End Users’ Mental Models. Proceedings of IEEE Sympo-
     sium on Visual Languages and Human-Centric Computing, VL/HCC. https:
     //doi.org/10.1109/VLHCC.2013.6645235
 [6] Brian Y Lim. 2012. Improving understanding and trust with intelligibility in
     context-aware applications. Ph.D. Dissertation. figshare.
 [7] Brian Y Lim and Anind K Dey. 2010. Toolkit to support intelligibility in context-
     aware applications. In Proceedings of the 12th ACM international conference on
     Ubiquitous computing. ACM, 13–22.
 [8] Brian Y Lim and Anind K Dey. 2011. Design of an intelligible mobile context-
     aware application. In Proceedings of the 13th international conference on human
     computer interaction with mobile devices and services. ACM, 157–166.
 [9] Brian Y Lim and Anind K Dey. 2013. Evaluating intelligibility usage and use-
     fulness in a context-aware application. In International Conference on Human-
     Computer Interaction. Springer, 92–101.
[10] Prashan Madumal, Tim Miller, Liz Sonenberg, and Frank Vetere. 2019. A
     Grounded Interaction Protocol for Explainable Artificial Intelligence. CoRR
     abs/1903.02409 (2019). arXiv:1903.02409 http://arxiv.org/abs/1903.02409
[11] Prashan Madumal, Tim Miller, Liz Sonenberg, and Frank Vetere. 2019. A
     Grounded Interaction Protocol for Explainable Artificial Intelligence. In Pro-
     ceedings of the 18th International Conference on Autonomous Agents and Multi-
     Agent Systems. International Foundation for Autonomous Agents and Multia-
     gent Systems, 1033–1041.
[12] Tim Miller. 2019. Explanation in artificial intelligence: Insights from the social
     sciences. Artificial Intelligence 267 (2019), 1–38.
[13] Ingrid Nunes and Dietmar Jannach. 2017. A systematic review and taxonomy
     of explanations in decision support and recommender systems. User Modeling
     and User-Adapted Interaction 27, 3-5 (2017), 393–444.
[14] Johannes Schneider and Joshua Peter Handali. 2019. Personalized explanation
     for machine learning: A conceptualization. (2019).
[15] Andreas Theodorou, Robert H. Wortham, and Joanna J. Bryson. 2016. Why is
     my robot behaving like that? Designing transparency for real time inspection
     of autonomous robots.