         ExSS-ATEC: Explainable Smart Systems and Algorithmic
              Transparency in Emerging Technologies 2020
       Alison Smith-Renner                                            Styliani Kleanthous                                    Brian Lim
   Decisive Analytics Corporation                                 OUC & Rise Research Centre                     National University of Singapore
         Arlington, VA, USA                                                Nicosia, Cyprus                                   Singapore
         alison.renner@dac.us                                     styliani.kleanthous@gmail.com                     brianlim@comp.nus.edu.sg

               Tsvi Kuflik                                                 Simone Stumpf                                Jahna Otterbacher
           University of Haifa                                      City, University of London                     OUC & Rise Research Centre
               Haifa, Israel                                               London, UK                                    Nicosia, Cyprus
          tsvikak@is.haifa.ac.il                                   simone.stumpf.1@city.ac.uk                       jahna.otterbacher@me.com

             Advait Sarkar                                                 Casey Dugan                                    Avital Shulner
          Microsoft Research                                              IBM Research                                   University of Haifa
            Cambridge, UK                                               Cambridge, MA, US                                    Haifa, Israel
         advait@microsoft.com                                          cadugan@us.ibm.com                             avitalshulner@gmail.com


ABSTRACT
Smart systems that apply complex reasoning to make decisions and plan behavior, such as decision support systems and personalized recommendations, are difficult for users to understand. Algorithms allow the exploitation of rich and varied data sources to support human decision-making and/or to take direct action; however, there are increasing concerns surrounding their transparency and accountability, as these processes are typically opaque to the user. Transparency and accountability have attracted increasing interest as routes to more effective system training, better reliability, and improved usability. This workshop provides a venue for exploring issues that arise in designing, developing, and evaluating intelligent user interfaces that provide system transparency or explanations of their behavior. In addition, our goal is to focus on approaches to mitigating algorithmic biases that can be applied by researchers even without access to a given system's inner workings, such as awareness, data provenance, and validation.

CCS CONCEPTS
• Human-centered computing~Human computer interaction (HCI)~Interactive systems and tools • Computing methodologies~Machine learning • Computing methodologies~Artificial intelligence

KEYWORDS
Explanations; visualizations; machine learning; intelligent systems; intelligibility; transparency; fairness; accountability

ACM Reference format:
Alison Smith-Renner, Styliani Kleanthous, Brian Lim, Tsvi Kuflik, Simone Stumpf, Jahna Otterbacher, Advait Sarkar, Casey Dugan, and Avital Shulner. 2020. ExSS-ATEC: Explainable Smart Systems and Algorithmic Transparency in Emerging Technologies 2020. In Proceedings of the IUI workshop on Explainable Smart Systems and Algorithmic Transparency in Emerging Technologies (ExSS-ATEC'20), Cagliari, Italy, 2 pages.

ExSS-ATEC '20. March 17, 2020, Cagliari, Italy
Copyright © 2020 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0)

1 Background
Smart systems that apply complex reasoning to make decisions and plan behavior, such as clinical decision support systems, personalized recommendations, home automation, machine learning classifiers, robots, and autonomous vehicles, are difficult for a user to understand [2]. Fairness, accountability, and transparency are currently hotly discussed aspects of machine learning systems, especially deep learning systems, which are seen as very difficult to explain to users. Textual explanations and graphical visualizations are often provided by a system to give users insight into what the system is doing and why it is doing it [3–6], and work is starting to investigate how best to engage in transparency design [1]. However, numerous issues and problems regarding explanations and algorithmic transparency still demand further attention: How can we build (better) explanations or transparent systems? What should be included in an explanation, and how should it be presented? When should explanations be deployed, and when do they detract from the user experience? How can transparency expose biases in data or algorithmic processes? And how can we evaluate explanations or system transparency, especially from a user perspective?
     The ExSS-ATEC 2020 workshop brings academia and industry together to address these issues. This workshop is a follow-on from the ExSS 2018 and 2019 workshops, in combination with the ATEC 2019 workshop, previously held at IUI. The workshop includes a keynote, paper panels, and group activities, with the goal of developing concrete approaches to handling challenges related to the design and development of explanations and system transparency. ExSS-ATEC 2020 is supported by the Cyprus Center for Algorithmic Transparency (CyCAT).

2 Workshop Overview
The workshop keynote is given by Dr. Carrie Cai, focusing on current challenges for explainable smart systems. Nine accepted papers are presented in three themed panel sessions. The accepted papers are:
•    Jung et al., "Transparency of Data and Algorithms in a Persona System: Explaining Data-Driven Personas to End Users"
•    Dodge and Burnett, "Position: We Can Measure XAI Explanations Better with 'Templates'"
•    Hepenstal et al., "What Are You Thinking? Explaining Conversational Agent Responses for Criminal Investigations"
•    Stockdill et al., "Cross-Domain Correspondences for Explainable Recommendations"
•    Lindvall and Molin, "Verification Staircase: A Design Strategy for Actionable Explanations"
•    Larasati et al., "The Effects of Explanation Styles on Users' Trust"
•    Ferreira and Monteiro, "Do ML Experts Discuss Explainability for AI Systems? A Discussion Case in the Industry for a Domain-Specific Solution"
•    Zürn et al., "What If? Interaction with Recommendations"
•    Chromik and Schuessler, "A Taxonomy for Human Subject Evaluation of Black-Box Explanations in XAI"
The second part of the workshop is structured around hands-on activity sessions in small subgroups of 3–5 participants.

3 Key People

3.1 Keynote Speaker
Dr. Carrie Cai is a senior research scientist at Google Brain and PAIR (Google's People+AI Research Initiative). Her research aims to make human-AI interactions more productive and enjoyable for end-users, ranging from novel tools that help doctors steer AI cancer-diagnostic systems in real time, to frameworks for effectively onboarding end-users to AI assistants. Her work has been published in HCI venues such as CHI, IUI, CSCW, and VL/HCC, receiving four best paper / honorable mention awards, and has been profiled on TechCrunch and in the Boston Globe. Before joining Google, Carrie completed her PhD in computer science at MIT, where she created intelligent wait-learning systems to help people accomplish long-term goals in short chunks while waiting. Carrie first learned to program at age 24, after completing undergraduate degrees in human biology and education at Stanford. She feels that it's never too late to learn machine learning, and that some of the world's best AI innovations come from the humanities powered by computing.

3.2 Workshop Committee
The workshop committee includes Gagan Bansal (UW), Veronika Bogina (Haifa University), Robin Burke (UC Boulder), Jonathan Dodge (OSU), Fan Du (Adobe), Malin Eiband (LMU), Michael Ekstrand (Boise State), Melinda Gervasio (SRI), Fausto Giunchiglia (U Trento), Alan Hartman (Afeka), Judy Kay (U Sydney), Bran Knowles (U Lancaster), Todd Kulesza (Google), Tak Lee (Adobe), Loizos Michael (Cyprus), Shabnam Najafian (Delft), Alicja Piotrkowicz (U Leeds), Forough Poursabzi-Sangdeh (Microsoft), Gonzalo Ramos (MSR), Stephanie Rosenthal (CMU), Martin Schuessler (TU Berlin), Ramya Srinivasan (Fujitsu), Mike Terry (Google), Sarah Völkel (U Munich), and Jürgen Ziegler (U Duisburg).

3.3 Workshop Organizers
The workshop organizing committee includes: Alison Smith-Renner, Director of the Machine Learning Visualization Lab at Decisive Analytics Corporation and PhD candidate at the University of Maryland, College Park; Dr. Styliani Kleanthous, senior researcher in the Faculty of Pure and Applied Sciences at the Open University of Cyprus and the RISE Research Centre, Cyprus; Dr. Brian Lim, Assistant Professor in the Department of Computer Science at the National University of Singapore; Prof. Tsvi Kuflik, professor and former head of the Information Systems Department at the University of Haifa, Israel; Dr. Simone Stumpf, Senior Lecturer at City, University of London; Dr. Jahna Otterbacher, founder of the Behavioral & Language Traces research lab, housed in the Faculty of Pure and Applied Sciences, Open University of Cyprus; Dr. Advait Sarkar, senior researcher at Microsoft Research in Cambridge (UK); Casey Dugan, manager of the AI Experience Team at IBM Research Cambridge (MA, USA); and Avital Shulner, PhD student in the Information Systems Department at the University of Haifa, Israel.

REFERENCES
[1]   Malin Eiband, Hanna Schneider, Mark Bilandzic, Julian Fazekas-Con, Mareike Haug, and Heinrich Hussmann. 2018. Bringing Transparency Design into Practice. In Proceedings of the 2018 Conference on Human Information Interaction & Retrieval. https://doi.org/10.1145/3172944.3172961
[2]   Alyssa Glass, Deborah L. McGuinness, and Michael Wolverton. 2008. Toward establishing trust in adaptive agents. In Proceedings of the 13th International Conference on Intelligent User Interfaces (IUI '08), 227. https://doi.org/10.1145/1378773.1378804
[3]   Jonathan L. Herlocker, Joseph A. Konstan, and John Riedl. 2000. Explaining collaborative filtering recommendations. In ACM Conference on Computer Supported Cooperative Work (CSCW), 241–250. https://doi.org/10.1145/358916.358995
[4]   Carmen Lacave and Francisco J. Díez. 2002. A review of explanation methods for Bayesian networks. Knowledge Engineering Review 17, 107–127. https://doi.org/10.1017/S026988890200019X
[5]   Pearl Pu and Li Chen. 2006. Trust building with explanation interfaces. In International Conference on Intelligent User Interfaces (IUI), 93. https://doi.org/10.1145/1111449.1111475
[6]   William Swartout, Cecile Paris, and Johanna Moore. 1991. Explanations in knowledge systems: Design for Explainable Expert Systems. IEEE Expert 6, 3: 58–64. https://doi.org/10.1109/64.87686