ExSS 2018: Workshop on Explainable Smart Systems

Brian Lim
Department of Computer Science, School of Computing
National University of Singapore
brianlim@comp.nus.edu.sg

Alison Smith
Decisive Analytics Corporation
Arlington, VA, USA
alison.smith@dac.us

Simone Stumpf
Centre for HCI Design, School of Mathematics, Computer Science and Engineering
City, University of London
Simone.Stumpf.1@city.ac.uk

© 2018. Copyright for the individual papers remains with the authors. Copying permitted for private and academic purposes. ExSS '18, March 11, 2018, Tokyo, Japan.
ABSTRACT
Smart systems that apply complex reasoning to make decisions and plan behavior are often difficult for users to understand. While research to make systems more explainable, and therefore more intelligible and transparent, is gaining pace, numerous issues and problems regarding these systems demand further attention. The goal of this workshop is to bring academia and industry together to address these issues. The workshop includes a keynote, poster panels, and group activities aimed at developing concrete approaches to challenges in the design, development, and evaluation of explainable smart systems.

Author Keywords
Explanations; visualizations; machine learning; intelligent systems; intelligibility; transparency.

ACM Classification Keywords
H.5.m. Information interfaces and presentation (e.g., HCI): Miscellaneous.

INTRODUCTION
Smart systems that apply complex reasoning to make decisions and plan behavior, such as clinical decision support systems, personalized recommendations, home automation, machine learning classifiers, robots, and autonomous vehicles, are difficult for users to understand [1]. Textual explanations and graphical visualizations are often provided by a system to give users insight into what it is doing and why [3,7,11,13]. Previous work has stressed the importance of explaining various aspects of the decision-making process to users [8], and several intelligibility types have been used in smart systems [6,10] – for example, Confidence [5,9], which shows the probability of a diagnosis being correct, either as a percentage or a pie chart, and Why and Why Not [10], which provide the facts used in reasoning about the diagnosis.
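To make these intelligibility types concrete, here is a minimal Python sketch (ours, for illustration only; it is not code from any of the cited systems, and all function names and values are hypothetical) of how a system might phrase Confidence and Why explanations for a toy diagnosis classifier:

# Minimal illustrative sketch of two intelligibility types; all names and
# values are hypothetical, not taken from the systems cited above.

def explain_confidence(class_probs: dict[str, float], prediction: str) -> str:
    """Confidence: report the predicted class's probability as a percentage."""
    return f"Diagnosis: {prediction} ({class_probs[prediction]:.0%} confident)"

def explain_why(feature_weights: dict[str, float], top_k: int = 3) -> str:
    """Why: surface the facts (features) that contributed most to the decision."""
    top = sorted(feature_weights.items(), key=lambda kv: -abs(kv[1]))[:top_k]
    return "Why: mainly because of " + ", ".join(name for name, _ in top)

class_probs = {"flu": 0.72, "cold": 0.28}        # toy model output
feature_weights = {"fever": 1.4, "cough": 0.9}   # toy feature attributions
print(explain_confidence(class_probs, "flu"))    # Diagnosis: flu (72% confident)
print(explain_why(feature_weights))              # Why: mainly because of fever, cough

A Why Not explanation would follow the same pattern, listing the facts that ruled out an alternative diagnosis.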
MOTIVATION, TOPICS AND CONTRIBUTION
Research to make smart systems explainable is gaining pace, partly stimulated by a recent DARPA call on Explainable AI (XAI) [2], which seeks to develop more explainable models and interfaces that allow users to understand, appropriately trust, and interact with these new systems. However, numerous issues and problems regarding explainable smart systems demand further attention, such as:

•   What is an explanation? What should explanations look like?
•   Are explanations always a good idea? Can explanations “hurt” the user experience, and in what circumstances?
•   When are the optimal points at which explanations are needed for a particular system?
•   How can we measure the value of explanations, or of the way an explanation is provided? What human factors influence the value of explanations?
•   What are “more explainable” models that still offer good performance in terms of speed and accuracy?

This workshop brings together industry and academic researchers in the area of explainable smart systems to exchange perspectives, approaches, and results.

WORKSHOP OVERVIEW

Keynote Speaker
The workshop keynote will be given by David Gunning, a DARPA program manager in the Information Innovation Office (I2O) who manages the Explainable AI (XAI) [2] and Communicating with Computers (CwC) programs. Prior to these programs, he managed the Personalized Assistant that Learns (PAL) project, which produced Siri, and the Command Post of the Future (CPoF) project, which was adopted by the US Army as its command and control system for use in Iraq and Afghanistan. He has previously worked at Pacific Northwest National Laboratory (PNNL), the Palo Alto Research Center (PARC), Vulcan Inc., and the Air Force Research Laboratory.

Accepted Papers
Fifteen papers were accepted to ExSS 2018 after a peer-review process in which each paper was reviewed by three members of the Program Committee:

•   Enrico Bertini, New York University, USA
•   Maya Cakmak, University of Washington, USA
•   Fan Du, University of Maryland, USA
•   Dave Gunning, DARPA, USA
•   Judy Kay, University of Sydney, Australia
•   Bran Knowles, Lancaster University, UK
•   Todd Kulesza, Microsoft, USA
•   Mark W. Newman, University of Michigan, USA
•   Deokgun Park, University of Maryland, USA
•   Forough Poursabzi-Sangdeh, University of Colorado Boulder, USA
•   Jo Vermeulen, Aarhus University, Denmark

The papers will be presented during a themed poster panel session, which is organized into five themes (the papers are also published in this order):

•   How to glean explainable information from machine learning systems – “The design and validation of an intuitive confidence measure” (van der Waa et al.), “An Axiomatic Approach to Linear Explanations in Data Classification” (Sliwinski et al.), “Explaining Contrasting Categories” (Pazzani et al.), and “Explaining Complex Scheduling Decisions” (Ludwig et al.).
•   Explainable/semantically meaningful features – “Explainable Movie Recommendation Systems by using Story-based Similarity” (Lee and Jung) and “Labeling images by interpretation from Natural Viewing” (Guo et al.).
•   How to design and present explanations – “Normative vs. Pragmatic: Two Perspectives on the Design of Explanations in Intelligent Systems” (Eiband et al.), “Explaining Recommendations by Means of User Reviews” (Donkers et al.), “What Should Be in an XAI Explanation? What IFT Reveals” (Dodge et al.), and “Interpreting Intelligibility under Uncertain Data Imputation” (Lim et al.).
•   Explanations’ impact on user behavior and experience – “Explanation to Avert Surprise” (Gervasio et al.), “Representing Repairs in Configuration Interfaces: A Look at Industrial Practices” (Leclercq et al.), and “Explaining smart heating systems to discourage fiddling with optimized behavior” (Stumpf et al.).
•   User feedback/interactive explanations – “Working with Beliefs: AI Transparency in the Enterprise” (Chander et al.) and “The Problem of Explanations without User Feedback” (Smith and Nolan).
The main part of the workshop is structured around two hands-on activity sessions in small subgroups of 3-5 participants. The activities are grounded in example systems provided by industry participants. The first session identifies challenges and high-level approaches for the example systems, while the second session explores concrete explanation or study designs for them. Each subgroup works on the activities in parallel, and the outcomes are shared in a final presentation session.
Workshop Organizers
Dr. Brian Lim is an Assistant Professor in the Department of Computer Science at the National University of Singapore (NUS), where he researches ubiquitous computing and intelligible data analytics for healthcare and smart cities [8–10]. He is also a Principal Investigator at both the Biomedical Institute for Global Health Research & Technology (BIGHEART) and the Sensor-enhanced Social Media Centre (SeSaMe) at NUS.

Alison Smith is the Lead Engineer of the Machine Learning Visualization Lab at Decisive Analytics Corporation, where she focuses on enhancing end users’ understanding and analysis of complex data without requiring expertise in data science or machine learning. She is also a PhD student at the University of Maryland, College Park, where her research focuses on human-centered design for interactive machine learning [12].

Dr. Simone Stumpf is a Senior Lecturer (Associate Professor) at City, University of London, UK, where she researches the design of end-user interactions with intelligent systems [4,6,14]. Her current projects include designing user interfaces for smart heating systems and smart home self-care systems for people with dementia or Parkinson’s disease.

REFERENCES
1. Alyssa Glass, Deborah L. McGuinness, and Michael Wolverton. 2008. Toward establishing trust in adaptive agents. In Proceedings of the 13th International Conference on Intelligent User Interfaces (IUI ’08), 227. https://doi.org/10.1145/1378773.1378804
2. Dave Gunning. 2016. Explainable Artificial Intelligence (XAI). Retrieved December 20, 2017 from https://www.darpa.mil/program/explainable-artificial-intelligence
3. Jonathan L. Herlocker, Joseph A. Konstan, and John Riedl. 2000. Explaining collaborative filtering recommendations. In Proceedings of the 2000 ACM Conference on Computer Supported Cooperative Work (CSCW ’00), 241–250. https://doi.org/10.1145/358916.358995
4. Todd Kulesza, Margaret Burnett, Weng-Keen Wong, and Simone Stumpf. 2015. Principles of Explanatory Debugging to Personalize Interactive Machine Learning. In Proceedings of the 20th International Conference on Intelligent User Interfaces (IUI ’15), 126–137. https://doi.org/10.1145/2678025.2701399
5. Todd Kulesza, Simone Stumpf, Margaret Burnett, Weng-Keen Wong, Yann Riche, Travis Moore, Ian Oberst, Amber Shinsel, and Kevin McIntosh. 2010. Explanatory debugging: Supporting end-user debugging of machine-learned programs. In Proceedings of the 2010 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC 2010), 41–48. https://doi.org/10.1109/VLHCC.2010.15
6. Todd Kulesza, Simone Stumpf, Weng-Keen Wong, Margaret M. Burnett, Stephen Perona, Andrew Ko, and Ian Oberst. 2011. Why-oriented end-user debugging of naive Bayes text classification. ACM Transactions on Interactive Intelligent Systems 1, 1: 1–31. https://doi.org/10.1145/2030365.2030367
7. Carmen Lacave and Francisco J. Díez. 2002. A review of explanation methods for Bayesian networks. Knowledge Engineering Review 17: 107–127. https://doi.org/10.1017/S026988890200019X
8. Brian Y. Lim and Anind K. Dey. 2010. Toolkit to support intelligibility in context-aware applications. In Proceedings of the 12th ACM International Conference on Ubiquitous Computing (UbiComp ’10), 13. https://doi.org/10.1145/1864349.1864353
9. Brian Y. Lim and Anind K. Dey. 2011. Investigating intelligibility for uncertain context-aware applications. In Proceedings of the 13th International Conference on Ubiquitous Computing (UbiComp ’11), 415. https://doi.org/10.1145/2030112.2030168
10. Brian Y. Lim, Anind K. Dey, and Daniel Avrahami. 2009. Why and why not explanations improve the intelligibility of context-aware intelligent systems. In Proceedings of the 27th International Conference on Human Factors in Computing Systems (CHI ’09), 2119–2129. https://doi.org/10.1145/1518701.1519023
11. Pearl Pu and Li Chen. 2006. Trust building with explanation interfaces. In Proceedings of the 11th International Conference on Intelligent User Interfaces (IUI ’06), 93. https://doi.org/10.1145/1111449.1111475
12. Alison Smith, Varun Kumar, Jordan Boyd-Graber, Kevin Seppi, and Leah Findlater. 2018. Closing the Loop: User-Centered Design and Evaluation of a Human-in-the-Loop Topic Modeling System. In Proceedings of the 2018 International Conference on Intelligent User Interfaces (IUI ’18).
13. William Swartout, Cecile Paris, and Johanna Moore. 1991. Explanations in knowledge systems: Design for Explainable Expert Systems. IEEE Expert 6, 3: 58–64. https://doi.org/10.1109/64.87686
14. Todd Kulesza, Weng-Keen Wong, Simone Stumpf, Stephen Perona, Rachel White, Margaret M. Burnett, Ian Oberst, and Andrew J. Ko. 2009. Fixing the program my computer learned: Barriers for end users, challenges for the machine. In Proceedings of the 14th International Conference on Intelligent User Interfaces (IUI ’09).