=Paper=
{{Paper
|id=Vol-2068/exss7
|storemode=property
|title=Normative vs. Pragmatic: Two Perspectives on the Design of Explanations in Intelligent Systems
|pdfUrl=https://ceur-ws.org/Vol-2068/exss7.pdf
|volume=Vol-2068
|authors=Malin Eiband,Hanna Schneider,Daniel Buschek
|dblpUrl=https://dblp.org/rec/conf/iui/EibandSB18
}}
==Normative vs. Pragmatic: Two Perspectives on the Design of Explanations in Intelligent Systems==
Malin Eiband, Hanna Schneider, Daniel Buschek
LMU Munich, Munich, Germany
{malin.eiband, hanna.schneider, daniel.buschek}@ifi.lmu.de

© 2018. Copyright for the individual papers remains with the authors. Copying permitted for private and academic purposes. ExSS ’18, March 11, Tokyo, Japan.

ABSTRACT

This paper compares two main perspectives on explanations in intelligent systems: 1) A normative view, based on recent legislation and ethical considerations, which motivates detailed and comprehensive explanations of algorithms in intelligent systems. 2) A pragmatic view, motivated by benefits for usability and efficient use, achieved through better understanding of the system. We introduce and discuss design dimensions for explanations in intelligent systems and their desired realizations as motivated by these two perspectives. We conclude that while the normative view ensures a minimal standard as a “right to explanation”, the pragmatic view is likely the more challenging perspective and will benefit the most from knowledge and research in HCI, both to ensure a usable integration of explanations into intelligent systems and to work out best practices for doing so.

ACM Classification Keywords

H.5.m. Information Interfaces and Presentation (e.g. HCI): Miscellaneous

Author Keywords

Explanations; Intelligent Systems; Transparency.

INTRODUCTION

Explaining how a system works and thus making its underlying reasoning transparent can contribute positively to user satisfaction and perceived control [8, 9, 14] as well as to overall trust in the system [11] and in its decisions and recommendations [3, 13]. The legal obligation to make intelligent systems transparent – as enforced by the European Union’s General Data Protection Regulation¹ (GDPR) from May 2018 – is nevertheless strongly disputed. Integrating transparency is a complex challenge, and there are no agreed-upon methods and best practices for doing so. Critics argue that such regulations will lead to a deceleration of technical innovation (as many useful machine learning algorithms are not, or not entirely, explainable [16]) and a deterioration of user experience (as explanatory information can quickly clutter the interface or overwhelm users [7]).

We often trust human decision making without completely understanding the rationale behind it. Why do we not invest the same trust in AI calculations that consistently yield good results? In this position paper we analyze two arguments for transparency: a normative one emphasizing the right to receive explanations, and a pragmatic one viewing transparency as a precondition for effective use. We illustrate how both perspectives differ and how they affect the design of explanations in intelligent systems.

¹ ec.europa.eu/justice/data-protection/; accessed 27 September 2017.
THE NORMATIVE VIEW: A RIGHT TO EXPLANATION

“[Algorithmic] decisions that seriously affect individuals’ capabilities must be constructed in ways that are comprehensible as well as contestable. If that is not possible, or, as long as this is not possible, such decisions are unlawful [...]” [6]

A normative view on algorithmic transparency implies that intelligent systems may only be used if their underlying reasoning can be (adequately) explained to users. Following Hildebrandt’s argumentation above, this would also concern cases in which intelligent systems might yield better results than non-intelligent ones – transparency is to be favored over efficiency and effectiveness for ethical and legal reasons. This view can also be found in the GDPR in Articles 13 to 15 which, together with Articles 21 and 22, express what has been called a “right to explanation” [5], granting access to “meaningful information about the logic involved, as well as the significance and the envisaged consequences of [automated decision-making] for the data subject”².

But what does “meaningful information” signify, and what are the consequences of this perspective when we want to design intelligent systems? Most of us do not fully understand even the workings of non-intelligent systems we interact with in everyday life, including some that may have a serious impact on our safety and well-being, such as cars or other means of transportation. Do we apply double standards, or are there unique properties of intelligent systems that justify this scepticism? One possible answer is that with non-intelligent systems, no matter how complex they may be, we theoretically have the option to inform ourselves about their workings, in particular in cases in which the system does not react as expected. This option is currently not available in most intelligent systems, which brings up several interesting questions: Is the mere option to obtain an explanation about a system’s workings more important than the actual design of this explanation (i.e., what is explained and how)? Does having this option alone already strengthen trust in a system? This would imply that an explanation does not necessarily have to be usable or seamlessly integrated into the interface or the workflow – most importantly, it should be available to users, and it should reflect the underlying algorithmic processing in detail and as comprehensively as possible.

² http://eur-lex.europa.eu/legal-content/EN/TXT/, accessed 15 December 2017.

THE PRAGMATIC VIEW: FOCUS ON USABILITY

“No, no! The adventures first, explanations take such a dreadful time.” [1]

From a pragmatic perspective, the current lack of transparency in intelligent systems hampers usability, since users might not be able to comprehend algorithmic decision-making, resulting in misuse or even disuse of the system [12]. Explanations thus serve as a means to foster efficient and effective use of an intelligent system, and should be deployed wherever necessary to support users and their understanding of the system’s workings. The mere option for explanations, or the right to explanation, would not suffice in this case, since a pragmatic solution also asks for a minimum of cognitive load and a seamless integration of explanations into the interface and the workflow – excessive explanations would additionally hinder usability and interfere with the user experience. This perspective is challenging in practice, since designers have to find the sweet spot between several different requirements: What kind of information, and in what detail, is actually interesting and helpful to users in a particular situation or during a particular interaction? How can it be presented to the user without hampering usability – as text or as a visualization? And which wording or what kind of visualization is appropriate to not overwhelm users but still adequately reflect the complexity of the algorithm? To approach the design of explanations in intelligent systems from a pragmatic point of view, HCI research has brought forth exemplary prototypes [7, 17] one may consider for guidance, as well as design guidelines, such as Lim and Dey’s intelligibility types [10]. However, best practices are still missing to date.
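To make the idea of such guidelines slightly more concrete, one could treat intelligibility question types in the spirit of Lim and Dey [10] – for example “why”, “why not”, “what if” and “how to” – as a checklist of explanation queries an intelligent system might support. The following TypeScript sketch is purely illustrative: the query kinds are one possible reading of such types, and all interfaces and names are hypothetical assumptions rather than an API proposed in the paper.

```typescript
// Illustrative sketch only: the query kinds below are one reading of Lim and
// Dey's intelligibility types [10]; the interfaces and names are hypothetical.

type IntelligibilityQuery =
  | { kind: "why"; decisionId: string }                                   // why did the system decide this?
  | { kind: "why-not"; decisionId: string; alternative: string }          // why not something else?
  | { kind: "what-if"; decisionId: string; changedInput: Record<string, unknown> }
  | { kind: "how-to"; desiredOutcome: string }
  | { kind: "certainty"; decisionId: string };

interface ExplanationProvider {
  // Returns a short, user-facing explanation for the given query,
  // or null if this provider cannot answer that query type.
  explain(query: IntelligibilityQuery): string | null;
}

// A trivial provider for a hypothetical recommender, showing the shape of the idea.
const demoProvider: ExplanationProvider = {
  explain(query) {
    switch (query.kind) {
      case "why":
        return `Item ${query.decisionId} was recommended because it matches your recent activity.`;
      case "certainty":
        return `The system is moderately confident about ${query.decisionId}.`;
      default:
        return null; // other query types not supported in this sketch
    }
  },
};

console.log(demoProvider.explain({ kind: "why", decisionId: "rec-42" }));
```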
DESIGN DIMENSIONS

We describe several design dimensions to characterize possible explanations that might arise from either one of the two presented perspectives. Some of these dimensions, such as Spatial Embedding or Temporal Embedding, have been similarly presented in prior work, e.g., on system intelligibility [15] or meta user interfaces [2]. Table 1 presents an overview of these dimensions. The following sections introduce them in more detail, also pointing out connections between them.

Table 1. Design dimensions for explanations in intelligent systems and their desired realizations as motivated by the two perspectives.

Dimension | Normative Realization | Pragmatic Realization
Goal | understanding, background trust | usability, effective use, foreground trust
Foundation | expert mental model | symbiosis of expert and user mental models
Presentation | videos, plots, interactive exploration, contact/help options | markers, details-on-demand, UI elements and annotations
Level of Detail | high, comprehensive | overview, efficient
Spatial Embedding | separate view, “help page” | directly integrated into UI
Temporal Embedding | accessed before/after main tasks | interleaved with main tasks
Reference | underlying algorithms in general | specific content, e.g., a specific recommendation

Goal

The main Goal of the explanation summarizes the different motivations for the two perspectives:

The normative view aims to achieve a comprehensive and detailed understanding on the user’s part – even if this takes a lot of time and effort (see Level of Detail). At the same time, it is not necessary that users go through explanations in order to use the system; the mere presence of the option for explanation might be enough for many users. In that sense, the normative view also uses explanations with the goal of creating a general “background trust”.

In contrast, the pragmatic view employs explanations to achieve a (possibly limited, non-comprehensive) level of understanding that facilitates usability and effective use of the system (see Presentation). Thus, it is necessary that users encounter explanations at some point before or during their main tasks with the system (see Temporal Embedding). To ensure this, systems may want to integrate explanations more closely (see Spatial Embedding) to achieve what we might call “foreground trust”.

Foundation

The Foundation informs the content of the explanation (i.e., what to explain?).

The normative view may take an expert’s mental model as a “gold standard” to cover all details of the underlying algorithm in a comprehensive, but still human-readable form.

In contrast, the pragmatic view additionally puts emphasis on the users’ mental models, for example to tailor explanations so that they assess and address incorrect or incomplete aspects of these models [4].

Presentation

The Presentation dimension covers how the explanation is presented to the user.

To achieve a comprehensive, detailed understanding, the normative view could employ almost any format, including videos, plots, interactive exploration and dedicated contact/help options, possibly even a “hotline” service.

In contrast, the pragmatic view aims for a presentation that strikes a balance between the explanation and the actual main UI elements. This might be achieved, for example, with markers/icons, details-on-demand techniques, textual or pictorial annotations, or modifications of layout and UI elements.
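As an illustration of this pragmatic presentation style, the following sketch shows how a details-on-demand explanation marker could be attached to a recommendation element in a web UI. It is a minimal, hypothetical example (the CSS class name, the helper function and the explanation text are assumptions, not part of the paper); it only demonstrates the interaction pattern of a small marker that reveals a short explanation on demand.

```typescript
// Minimal, hypothetical sketch of a details-on-demand explanation marker.
// Assumes a web UI with one element per recommendation; names are illustrative.

function attachExplanationMarker(itemEl: HTMLElement, explanation: string): void {
  // Small, unobtrusive marker next to the recommended item ("pragmatic" presentation).
  const marker = document.createElement("button");
  marker.textContent = "?";
  marker.setAttribute("aria-label", "Why am I seeing this?");

  // The explanation is hidden by default and only revealed on demand.
  const details = document.createElement("span");
  details.textContent = explanation;
  details.hidden = true;

  marker.addEventListener("click", () => {
    details.hidden = !details.hidden; // toggle details-on-demand
  });

  itemEl.append(marker, details);
}

// Usage: annotate every recommendation element with a short, item-specific explanation.
document.querySelectorAll<HTMLElement>(".recommendation").forEach((el) => {
  attachExplanationMarker(el, "Recommended because it is similar to items you rated highly.");
});
```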
Level of Detail

The desired Level of Detail of the explanation also varies between the two perspectives:

The normative view favors a highly detailed explanation with the goal of a comprehensive understanding of the intelligent system’s underlying algorithms.

In contrast, the pragmatic view may favor a less detailed overview to facilitate a basic understanding. To do so efficiently, this view may focus on certain aspects and neglect others deemed less important. This focus could be informed by a user-centred design process (see Foundation).

Spatial Embedding

The Spatial Embedding describes how the explanation is integrated into the system’s GUI overall.

The normative view motivates a detailed explanation which might thus not be embedded into the main GUI at all. Instead, systems could add a separate view, such as a “help page”.

In contrast, the pragmatic view is motivated to embed explanations directly into the GUIs used for the main tasks of the system. This dimension is thus strongly linked to the presentation choices (see Presentation).

Temporal Embedding

The Temporal Embedding describes how the explanation is integrated into the temporal workflow with the system.

The normative view motivates a detailed explanation which might thus not be embedded into the main task workflow at all. Instead, the user might optionally access it before or after the main task (e.g., on a separate page, see Spatial Embedding). Hence, once accessed, the full explanation is revealed at once.

In contrast, the pragmatic view is motivated to embed explanations directly into the workflow, for example using annotations or other details-on-demand elements within the main GUI views. This implies that the explanation is revealed gradually over the course of the user’s main tasks with the system.

Reference

The Reference dimension describes which elements the explanations primarily relate to.

The normative view aims to reveal the underlying algorithms, yet may not be interested in doing so for the specific cases that users encounter during their individual workflow.

In contrast, by integrating explanations more directly, the pragmatic view’s references for explanations are the specific cases encountered by the individual user during their interactions.
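To summarize how these dimensions could inform an implementation, the sketch below encodes the realizations from Table 1 as a configuration object for a hypothetical explanation component. This is a sketch under the assumption of a web-based system; the type and field names are illustrative, not a proposal from the paper, but they show how a design team might make the normative/pragmatic trade-off explicit in code.

```typescript
// Hypothetical encoding of the design dimensions from Table 1 as configuration.
// Field names and preset values are illustrative assumptions, not part of the paper.

interface ExplanationDesign {
  goal: "background-trust" | "foreground-trust";
  foundation: "expert-model" | "expert-and-user-models";
  presentation: Array<"video" | "plot" | "interactive-exploration" | "help-contact"
    | "marker" | "details-on-demand" | "annotation">;
  levelOfDetail: "comprehensive" | "overview";
  spatialEmbedding: "separate-view" | "integrated-in-ui";
  temporalEmbedding: "before-or-after-task" | "interleaved-with-task";
  reference: "algorithm-in-general" | "specific-content";
}

// Normative realization: detailed, comprehensive, accessible as a separate "help page".
const normativePreset: ExplanationDesign = {
  goal: "background-trust",
  foundation: "expert-model",
  presentation: ["video", "plot", "interactive-exploration", "help-contact"],
  levelOfDetail: "comprehensive",
  spatialEmbedding: "separate-view",
  temporalEmbedding: "before-or-after-task",
  reference: "algorithm-in-general",
};

// Pragmatic realization: lightweight, integrated, tied to specific recommendations.
const pragmaticPreset: ExplanationDesign = {
  goal: "foreground-trust",
  foundation: "expert-and-user-models",
  presentation: ["marker", "details-on-demand", "annotation"],
  levelOfDetail: "overview",
  spatialEmbedding: "integrated-in-ui",
  temporalEmbedding: "interleaved-with-task",
  reference: "specific-content",
};

console.log(normativePreset.levelOfDetail, pragmaticPreset.levelOfDetail);
```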
CONCLUSION

In this paper, we sketched out two perspectives on transparency in intelligent systems – a normative and a pragmatic view. The distinction between the two allows us to discuss different approaches to designing explanations. If one takes a normative standpoint, the mere option to receive explanations about an algorithm is critical and sufficient. Explanations need to be detailed enough to satisfy users’ needs for information. To avoid cluttering the interface, these detailed, holistic explanations might be separated from the main interface, e.g., in a help function. If one takes a pragmatic standpoint, explanations detached from the interface and workflow are unlikely to be effective, as one can expect that very few users will make use of this option. The goal of the pragmatic approach is rather to integrate small bites of explanation into the interface to increase users’ understanding of the system slowly and effortlessly over time. It is in the design of such well-thought-through interface concepts, which reveal the system’s functioning during the interaction, that HCI knowledge and research will be most needed and impactful.

That said, the two perspectives are not to be regarded as mutually exclusive but can likely be combined appropriately. The normative perspective can then be regarded as a “must have” and the right to receive explanations as a minimal standard, even if explanations are not integrated in a user-friendly fashion. Integrating explanations elegantly where they are interesting and useful for users will then be the challenge to work on, and we invite HCI researchers to join us in working on it already now.
REFERENCES

1. Lewis Carroll. 2011. Alice’s Adventures in Wonderland. Broadview Press.
2. Joëlle Coutaz. 2006. Meta-User Interfaces for Ambient Spaces. In International Workshop on Task Models and Diagrams for User Interface Design. Springer, 1–15.
3. Henriette Cramer, Vanessa Evers, Satyan Ramlal, Maarten van Someren, Lloyd Rutledge, Natalia Stash, Lora Aroyo, and Bob Wielinga. 2008. The Effects of Transparency on Trust in and Acceptance of a Content-based Art Recommender. User Modeling and User-Adapted Interaction 18, 5 (20 Aug 2008), 455. DOI: http://dx.doi.org/10.1007/s11257-008-9051-3
4. Malin Eiband, Hanna Schneider, Mark Bilandzic, Julian Fazekas-Con, Mareike Haug, and Heinrich Hussmann. 2018. Bringing Transparency Design into Practice. To appear in Proceedings of the 23rd International Conference on Intelligent User Interfaces (IUI ’18).
5. Bryce Goodman and Seth Flaxman. 2016. European Union Regulations on Algorithmic Decision-making and a “Right to Explanation”. arXiv preprint arXiv:1606.08813 (2016).
6. Mireille Hildebrandt. 2016. The New Imbroglio: Living with Machine Algorithms. In L. Janssens (ed.), The Art of Ethics in the Information Society. Mind You, 55–60.
7. Todd Kulesza, Margaret Burnett, Weng-Keen Wong, and Simone Stumpf. 2015. Principles of Explanatory Debugging to Personalize Interactive Machine Learning. In Proceedings of the 20th International Conference on Intelligent User Interfaces (IUI ’15). ACM, New York, NY, USA, 126–137. DOI: http://dx.doi.org/10.1145/2678025.2701399
8. Todd Kulesza, Simone Stumpf, Margaret Burnett, and Irwin Kwan. 2012. Tell Me More?: The Effects of Mental Model Soundness on Personalizing an Intelligent Agent. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’12). ACM, New York, NY, USA, 1–10. DOI: http://dx.doi.org/10.1145/2207676.2207678
9. Todd Kulesza, Simone Stumpf, Margaret Burnett, Weng-Keen Wong, Yann Riche, Travis Moore, Ian Oberst, Amber Shinsel, and Kevin McIntosh. 2010. Explanatory Debugging: Supporting End-User Debugging of Machine-Learned Programs. In Proceedings of the 2010 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC ’10). IEEE Computer Society, Washington, DC, USA, 41–48. DOI: http://dx.doi.org/10.1109/VLHCC.2010.15
10. Brian Y. Lim and Anind K. Dey. 2009. Assessing Demand for Intelligibility in Context-aware Applications. In Proceedings of the 11th International Conference on Ubiquitous Computing (UbiComp ’09). ACM, New York, NY, USA, 195–204. DOI: http://dx.doi.org/10.1145/1620545.1620576
11. Joseph B. Lyons, Garrett G. Sadler, Kolina Koltai, Henri Battiste, Nhut T. Ho, Lauren C. Hoffmann, David Smith, Walter Johnson, and Robert Shively. 2017. Shaping Trust through Transparent Design: Theoretical and Experimental Guidelines. In Advances in Human Factors in Robots and Unmanned Systems. Springer, 127–136.
12. Bonnie M. Muir. 1994. Trust in Automation: Part I. Theoretical Issues in the Study of Trust and Human Intervention in Automated Systems. Ergonomics 37, 11 (1994), 1905–1922.
13. James Schaffer, Prasanna Giridhar, Debra Jones, Tobias Höllerer, Tarek Abdelzaher, and John O’Donovan. 2015. Getting the Message?: A Study of Explanation Interfaces for Microblog Data Analysis. In Proceedings of the 20th International Conference on Intelligent User Interfaces (IUI ’15). ACM, New York, NY, USA, 345–356. DOI: http://dx.doi.org/10.1145/2678025.2701406
14. Nava Tintarev and Judith Masthoff. 2007. A Survey of Explanations in Recommender Systems. In Proceedings of the 2007 IEEE 23rd International Conference on Data Engineering Workshop (ICDEW ’07). IEEE Computer Society, Washington, DC, USA, 801–810. DOI: http://dx.doi.org/10.1109/ICDEW.2007.4401070
15. Jo Vermeulen. 2014. Designing for Intelligibility and Control in Ubiquitous Computing Environments. Ph.D. Dissertation.
16. Nick Wallace. 2017. EU’s Right to Explanation: A Harmful Restriction on Artificial Intelligence. (25 January 2017). Retrieved 15 December 2017 from http://www.techzone360.com/topics/techzone/articles/2017/01/25/429101-eus-right-explanation-harmful-restriction-artificial-intelligence.htm
17. Rainer Wasinger, James Wallbank, Luiz Pizzato, Judy Kay, Bob Kummerfeld, Matthias Böhmer, and Antonio Krüger. 2013. Scrutable User Models and Personalised Item Recommendation in Mobile Lifestyle Applications. Springer Berlin Heidelberg, Berlin, Heidelberg, 77–88. DOI: http://dx.doi.org/10.1007/978-3-642-38844-6_7