From User Control and Explainability in
Recommendation Interfaces to Visual XAI
Denis Parra1,2,3,4
1 Pontificia Universidad Católica de Chile, Santiago, Chile
2 National Research Center for Artificial Intelligence, FB210017 (CENIA), Santiago, Chile
3 Millennium Institute for Healthcare Engineering, ICN2021_004 (iHealth), Santiago, Chile
4 Millennium Institute for Data Fundamentals, ICN17_002 (IMFD), Santiago, Chile


Abstract
Transparency and explainability have been studied for more than two decades in the area of recommender systems, due to their impact on the user experience of personalized systems. Interestingly, only in recent years have these topics gained prominence within Artificial Intelligence (AI) as a whole, under the umbrella of the term XAI (eXplainable AI). Some authors have shown that advances in XAI from different fields (computer science, design, HCI, IR, AI, etc.) have not been properly integrated into a common body of knowledge, due to a lack of connection among these communities. This talk takes a small step toward bridging this gap by showing how work on explainability, transparency, visualization, user interfaces, and user control in recommender systems is significantly related to XAI and can inspire new research directions in visual XAI.




1. Introduction
Transparency and explainability in artificial intelligence are very important topics, especially
considering how quickly AI is permeating critical domains such as medicine, law, finance,
and defense [1]. Since the early days of recommender systems (RecSys), transparency and
explainability have also been topics of high relevance in the area [2, 3], due to their impact on
the user experience of personalized information filtering systems. Despite this relevance, when
the topic of eXplainable AI (XAI) was introduced by David Gunning in a DARPA challenge
starting in 2017 [1], related research from RecSys was not frequently cited outside its own
community. Although this could be seen as a specific disconnection between applied research
on recommender systems and the more theoretical work of artificial intelligence researchers,
Abdul et al. [4] found that it is indeed common for closely related research on intelligent
systems to be rarely cited across different communities, such as computer-human interaction
(CHI), machine learning (AAAI, NeurIPS, ICML), and applied AI areas such as recommender
systems (RecSys), information retrieval (SIGIR), intelligent user interfaces (IUI), or natural
language processing (ACL, NAACL, EMNLP), to name just

IntRS’22: Joint Workshop on Interfaces and Human Decision Making for Recommender Systems, September 23rd, 2022,
Seattle, WA
Email: dparras@uc.cl (D. Parra)
Web: https://dparra.sitios.ing.puc.cl (D. Parra)
ORCID: 0000-0001-9878-8761 (D. Parra)
                                       © 2022 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
CEUR Workshop Proceedings (CEUR-WS.org), http://ceur-ws.org, ISSN 1613-0073
a few. Abdul et al. [4] point toward the need to create bridges among these communities
doing similar research, in order to make deeper and broader progress on transparency and
explainability in AI and applied domains.
   In this talk, Prof. Denis Parra takes a small step toward bridging this gap. He surveys several
works on explainability, transparency, visualization, user interfaces, and user control in RecSys
[5], and then shows how they inspire and contribute to current research on XAI. He also presents
emerging research on visual XAI that builds on lessons from these diverse fields.


2. Bio
Denis Parra is an Associate Professor in the Department of Computer Science, School of
Engineering, at Pontificia Universidad Católica de Chile. He is also a principal researcher at
the excellence research centers CENIA (National Center for Research in Artificial Intelligence
in Chile) and iHealth (Millennium Institute for Intelligent Healthcare Engineering), and an
adjunct researcher at the IMFD (Millennium Institute for Research on Data Fundamentals). He
earned a Fulbright scholarship to pursue his PhD studies between 2008 and 2013 at the University
of Pittsburgh, USA. Prof. Parra has published numerous articles in prestigious journals such as
ACM TiiS, ACM CSUR, IJHCS, ESWA, and PLoS ONE, as well as in conferences such as ACM IUI,
ACM RecSys, UMAP, ECIR, and EuroVis, among others. He received a student best paper award
at the UMAP 2011 conference, as well as best paper award nominations twice at ACM IUI, in
2018 and 2019, for his research on intelligent user interfaces for recommender systems and on
AI medical applications. Prof. Parra has served as a senior PC chair for conferences such as IUI,
RecSys, UMAP, SIGIR, The Web Conference, and WSDM. His research interests are recommender
systems, intelligent user interfaces, applications of machine learning (healthcare, creative AI),
and information visualization. He currently leads the Human-centered AI and Visualization
(HAIVis) research group and co-leads the CreativAI Lab with professor Rodrigo Cádiz. He is
also a faculty member of the PUC Artificial Intelligence Laboratory, IA Lab.


References
[1] D. Gunning, Explainable Artificial Intelligence (XAI), Defense Advanced Research Projects
    Agency (DARPA), nd Web 2 (2017) 1.
[2] N. Tintarev, J. Masthoff, Effective explanations of recommendations: user-centered design,
    in: Proc. of the 2007 ACM RecSys Conference, RecSys ’07, ACM, 2007, pp. 153–156.
[3] J. L. Herlocker, J. A. Konstan, J. Riedl, Explaining collaborative filtering recommendations,
    in: Proc. of the 2000 ACM CSCW conference, CSCW ’00, ACM, 2000, pp. 241–250.
[4] A. Abdul, J. Vermeulen, D. Wang, B. Y. Lim, M. Kankanhalli, Trends and trajectories for
    explainable, accountable and intelligible systems: An HCI research agenda, in: Proc. of the
    2018 CHI conference, 2018, pp. 1–18.
[5] C. He, D. Parra, K. Verbert, Interactive recommender systems: A survey of the state of the
    art and future research challenges and opportunities, Expert Systems with Applications 56
    (2016) 9–27.