Kaisa Väänänen, Ashley Colley and Jonna Häkkilä
Joint Proceedings of the ACM IUI Workshops 2024, March 18-21, 2024, Greenville, South Carolina, USA
kaisa.vaananen@tuni.fi (K. Väänänen); ashley.colley@ulapland.fi (A. Colley); jonna.hakkila@ulapland.fi (J. Häkkilä)

Abstract
AI systems are increasingly integrated into people's daily lives, yet their widespread adoption may be hindered by the opaque nature of their functionality. Explainable AI (XAI) seeks to demystify AI decisions for users, thereby building trust. While existing XAI research has largely focused on digital interfaces offering numerical, textual, or visual explanations, the growing integration of AI into physical devices calls for tangible interfaces that explain AI's decisions. To address this, we have introduced a preliminary conceptual framework for Tangible Explainable AI (TangXAI). This approach explores communicating XAI via physical objects, drawing on data physicalization and tangible human-AI interaction. In this workshop paper we present ways in which tangible AI user interfaces might offer solutions for diverse users and situations by adapting AI's explanations to suit user needs and contexts. This approach may increase AI users' curiosity and engagement, and advance the acceptance of future AI systems.

Keywords
Explainable AI, Tangible User Interfaces

1. Introduction

Artificial Intelligence (AI) has rapidly grown to be a major theme in the research and development of interactive systems. AI is expected to be integrated into virtually all application domains across different life sectors, and will affect people on both individual and societal levels. The characteristics of AI will drive a shift from reactive information tools to proactive agents and invisible actors, and set new challenges for human-centered design [1].

AI systems are easily perceived as black boxes by the people interacting with or affected by them, and transparency is a key quality criterion of human-AI interaction. Explainability is associated with the notion of explanation as an interface between humans and a decision maker that is both an accurate proxy of the decision maker and comprehensible to humans [2]. Explainable AI (XAI) helps users understand the algorithms and decisions of AI, e.g. by giving a reason for a particular decision [3]. Explainability can be considered a bridge to avoid unwanted or even unethical use of algorithmic outputs. From a social viewpoint, explainability can be seen as the capacity to reach and guarantee fairness in AI [4].

To date, research into XAI has primarily focused on graphical user interfaces, presenting explanations in numeric, textual or graphical format, e.g. [5]. However, the penetration of AI into physical systems – such as smart devices and embedded systems – is increasing, and hence the need for explainability in physical or tangible user interfaces (TUIs) is also becoming apparent. Research on tangible interfaces for explainable AI – which we refer to as Tangible XAI (TangXAI) – is only just beginning to emerge. In our earlier paper [6], we presented an initial conceptual framework highlighting how the fields of XAI and TUI can be brought together to create intuitive interfaces for a variety of future smart devices. The framework was constructed by merging concepts from existing
XAI and TUI frameworks found in the literature, specifically the XAI framework by Belle & Papantonis [7] and the TUI framework by Hornecker & Buur [8]. This initial Tangible XAI (TangXAI) framework can be used to conceptualize and design different kinds of tangible interactions that help explain AI's decisions to users. We have also conducted an initial user study of two TangXAI concepts to explore the viability of the approach [9]; it showed promise for some of the tangible interactions, but also revealed that users had difficulty distinguishing between the AI's function and its explanations.

In this workshop paper, we discuss how tangible explainable AI may support diverse groups of users by adapting AI's explanations to suit the context and user needs through data physicalization and tangible user interfaces. We brainstormed ideas with ChatGPT (chat.openai.com) and present five ways in which tangible interactions could help adapt XAI.

2. The TangXAI conceptual framework

Our previous paper [6] introduced a framework combining XAI and tangible user interfaces (TUI), aiming to create interfaces for smart devices that make AI decisions transparent. This framework, which we call TangXAI, integrates insights from the XAI framework by Belle & Papantonis [7] and the TUI framework by Hornecker & Buur [8], providing an initial guide for tangible XAI design and research. See Figure 1.

Figure 1: TangXAI conceptual framework combining explainable AI approaches (from Belle & Papantonis [7]) to be communicated by tangible interaction themes (extracted from Hornecker & Buur [8]). (Figure from our earlier paper [6].)

Hornecker and Buur [8] present a tangible interaction framework with the following themes:

• Expressive representation focuses on the potential to convey expressive meaning through the material qualities and digital representations of a tangible interaction system.
• Tangible manipulation focuses on a user's tactile interaction with physical objects that are coupled to computational systems. Hornecker and Buur highlight grabbing and moving interface elements, rapid feedback during interaction, and the importance of the metaphor between an interaction and its effect.
• Spatial interaction builds on humans' natural understanding of the spatial relationships of objects and on our ability to move within space, configuring the space around us.
• Embodied facilitation highlights how the placement and movement of objects in space influence our social interactions.

The explainable AI approaches proposed by Belle & Papantonis [7] are:

• Feature relevance refers to assessing the impact of each input parameter on a model's output, with higher scores indicating greater importance. Shapley values are a common method here. However, this approach can overlook interactions between parameters, and often only the most influential parameters are highlighted in explanations.
• Local explanations provide justifications for individual decisions made by the AI, focusing on data points near the decision. Local explanations are useful for responding to queries about why the AI made a particular decision.
• Simplified rule extraction involves creating a simplified model from a complex one to interpret its decisions, typically balancing simplicity against accuracy. Such models could be made interactive and tangible for users to explore the decision-making process.
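As a minimal, concrete illustration of two of these approaches, the Python sketch below is our own addition rather than code from Belle & Papantonis [7]; the scikit-learn utilities, dataset and model are assumptions chosen purely for illustration. It scores feature relevance with permutation importance (Shapley values, e.g. via the shap library, would be an alternative) and then extracts a simplified surrogate decision tree from a black-box model.

# Illustrative sketch: feature relevance and simplified rule extraction.
# Dataset and model choices are arbitrary examples (scikit-learn assumed).
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.tree import DecisionTreeRegressor, export_text

X, y = load_diabetes(return_X_y=True, as_frame=True)
black_box = RandomForestRegressor(random_state=0).fit(X, y)

# Feature relevance: score each input parameter's impact on the output.
result = permutation_importance(black_box, X, y, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda r: -r[1])
for name, score in ranked[:2]:  # e.g. the two most relevant parameters
    print(f"{name}: {score:.3f}")

# Simplified rule extraction: fit a shallow, human-readable surrogate tree
# to the black-box model's predictions (trading accuracy for simplicity).
surrogate = DecisionTreeRegressor(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))
print(export_text(surrogate, feature_names=list(X.columns)))

A tangible interface could then, for instance, map the top-ranked scores to physical bar heights, or let users walk through the surrogate tree's branching rules with physical tokens.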
Visual explanations use graphical plots to illustrate how AI model decisions change across different input values. Techniques such as Individual Conditional Expectation (ICE) and Partial Dependence Plots (PDP) help simplify these visualizations by focusing on one parameter at a time. Tangible interfaces, such as physical bar charts, could be used to make the information more accessible.

Combining TUI approaches with XAI approaches can provide grounding for designing novel human-AI interaction concepts. The scope of TUIs that may provide novel interfaces when coupled with AI is broad, stretching from materiality, texture and shape change to spatial interaction [8]. Furthermore, data physicalization can be used to take data beyond visual representation on paper or screens and give it a physical form [10]. In the context of XAI, data physicalization can be leveraged to provide an intuitive means for users to interact with a physical proxy representing the complex data in the AI system model [11].

3. Summary of the TangXAI user study

This section summarizes our earlier TangXAI user study [9]. To assess the potential of tangible XAI interfaces, we created mock-up interfaces for two AI use cases: a cooking recipe recommendation that used the feature relevance XAI approach, and the selection of a jogging route that used the local explanations XAI approach (Figure 2). The mock-ups were used to demonstrate how the XAI approaches could make AI decisions understandable. As the primary focus was on the user experience, a Wizard of Oz study approach was used [12], with the test moderator simulating the AI system according to a set of predefined rules. We ran five user study sessions, each with two participants. At the start of each session, the test moderator introduced the general concept of AI; after this, each of the cases was presented and explored in turn. Participants thought aloud during the sessions, which were audio recorded for later analysis. [9]

Figure 2: Left: Recipe recommendation tangible XAI interface. The tangible Lego XAI interface presents the two most relevant parameters used in the AI's decision (XAI approach: feature relevance). Right: Jogging route recommendation tangible XAI interface (XAI approach: local explanations). The position of the puck can be used to explore the effect of the two input parameters of the AI recommender. [9]

Misunderstanding XAI. Some users initially misunderstood the purpose of XAI, thinking they were merely using a tool to filter options such as recipes or jogging routes rather than gaining insight into AI decision-making. For example, one test participant thought the XAI was just a refined search tool for narrowing down dinner recipes by price and preparation time. To clarify, a visual cue in the form of an Amazon Alexa image was added to emphasize that the recommendation was not controlled by the XAI interface.

Training Data and Trust. Participants' discussions of trust in AI often centered on the accuracy of the training dataset and on the parameters presented by the XAI interface. Instead of the AI's actual performance, trust was reflected upon by comparing different apps for tasks such as route planning. The purpose of an XAI interface is to calibrate users' trust in the AI, which can vary depending on the context and may lead to more or less trust after interaction. The XAI may also stimulate users' curiosity and critical evaluation of the AI model and its outputs, which is vital for XAI effectiveness.

The Role of Tangibility.
Participants were unclear about the difference between inputting data into the AI and using XAI to understand and trust AI decisions. Feature relevance as an XAI approach was more comprehensible, yet it was also occasionally misunderstood: participants liked the tangible representation of data such as the time and cost parameters using Lego blocks, but mistook it for a simple selection interface. The local explanations interface was less clear, indicating a need for improved design. Tangible XAI interfaces were noted for their slow-paced interactions, which prompted deeper thought from users and may lead to a better understanding of the AI model. However, tangible XAI may not be ideal for applications where quick interaction is key.

In summary, our preliminary user study suggests that tangible human-AI interaction may support people's reflection on AI's explanations, and hence help them gain trust in, or question, the functionality of the AI. However, practical concept designs require careful consideration and thorough user evaluation to highlight the essential aspects of AI explanations, not just tangible AI interactions.

4. Adaptive tangible XAI

Adaptivity refers to a system's capability to accommodate the physical and mental abilities of the user as well as the situation of use and platform capabilities [13]. User interfaces can be adaptive in terms of their presentation, their navigation, or both. Adaptivity has great potential to improve the usability and personalized user experience of a system, and to increase the inclusion of diverse user groups [14]. While adaptivity has traditionally been concerned primarily with software, recent work on shape-changing materials and interfaces [15], as well as data physicalization [16], may offer methods for embodied interaction with AI-driven cyber-physical systems.

In the following, we present potential ways in which tangible interactions could support AI system adaptivity, especially in terms of XAI. We used ChatGPT Plus to brainstorm ideas for this topic. While ChatGPT is well known for its occasional hallucinatory traits, Schmidt et al. [17] have argued that it may provide useful information to support – but not replace – human-centered design, especially in the early requirement definition phase. Initial ideas for adaptive tangible XAI were generated with ChatGPT Plus (January 3-12, 2024) with the following queries:

1. How could tangible explainable AI adapt its behaviour to the needs of diverse user groups?
2. How could tangible explainable AI adapt its behaviour to the needs of diverse user groups and usage situations?
3. Please use the following text to provide the list of potential uses. [Gave ChatGPT the text of the MUM'22 paper by Colley et al. [6], which presents the initial TangXAI framework.]
4. Add more insights for the tangible aspects of Human-AI interactions (interfaces).

These four queries resulted in 29 purposes/themes suggested by ChatGPT for adaptive TangXAI, many of them overlapping. We used our expertise to merge and rephrase ideas from these themes, and omitted topics that were not specific to tangible interaction (such as generic context-awareness). As a result, five prominent themes were synthesized by the first author of this paper; they are presented in the following subsections.
4.1. Accessibility and inclusion

Tangible interaction with XAI may ensure that users with different needs and requirements, such as disability-related needs, can better understand AI's explanations. Multi-sensory feedback can advance such inclusion, for instance by offering tactile interfaces for visually impaired users and simplified interaction mechanisms for those with motor impairments. Different physical objects, shapes and feedback types may also support people with cognitive impairments. Customizability of the physical elements of the XAI user interface can enable users or system designers to reconfigure it according to different people's skills and capabilities for interacting with AI explanations.

4.2. Target group appropriateness

As a baseline for human-centered AI design, the system should match explanations to the intended users' needs and preferences.

• Cultural sensitivity: Tangible XAI can incorporate cultural norms and practices into its design, ensuring that interactions are intuitive and respectful of cultural differences. This could involve using tangible interaction methods – forms, shapes, physical feedback types, objects – that are familiar within different cultural contexts.
• Age groups: Different age groups may have varying levels of technological proficiency and cognitive ability. Tangible XAI could adapt by offering different levels of explanation complexity and interactivity, from simple and engaging shapes and objects for children to more detailed and technical ones for adults.
• Educational background: Tangible XAI could tailor its interactions and explanations to the user's level of education or familiarity with AI concepts, avoiding technical output for laypersons while providing in-depth data for experts.

4.3. Context adaptation

The AI system could adjust its behavior based on the context in which it is used. Tangible XAI could change its interaction behavior depending on the physical, social and task context, such as home use, public spaces or classroom settings. Tangible AI interfaces could provide feedback suited to the natural interaction modalities of different contexts, for example using vibration in noisy areas. As another example, in a professional setting tangible XAI might offer more detailed explanations than in a casual, everyday scenario. Furthermore, a mobile tangible AI device with explanation capabilities could utilize shape-changing materials to adapt to the requirements of the context.

4.4. Embodied interaction

Physical interfaces can be designed to recognize and respond to users' bodily movements and positioning, enabling more natural and intuitive interaction. Utilizing shapes and movements in the AI-interfacing device that correspond to familiar actions may allow users to understand the AI's processes through instinctive bodily knowledge. Incorporating dynamic elements that change in real time, such as shape-shifting materials or responsive surfaces, can provide immediate feedback in AI interactions. For example, an increase in the importance of a specific AI decision parameter could be demonstrated in a 3D bar chart, with the bar providing harder resistance to the user's interaction with it. Such embodied interactions can support people's holistic bodily-cognitive processes and provide increased understanding of the decision making of the AI system; a sketch of one such mapping is given below.
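To make this example concrete, the sketch below illustrates one way such a mapping could be computed. It is purely hypothetical: the BarState fields, value ranges and linear scaling are our assumptions, and no real actuator or hardware API is implied.

# Hypothetical sketch: map feature-importance scores to bar heights and
# haptic resistance for an imagined shape-changing bar-chart display.
from dataclasses import dataclass

@dataclass
class BarState:
    feature: str
    height_mm: float     # physical bar height, driven by an actuator
    resistance_n: float  # push-back force felt when pressing the bar

def physicalize(importances: dict[str, float],
                max_height_mm: float = 80.0,
                max_resistance_n: float = 4.0) -> list[BarState]:
    """Normalize importance scores and map them to actuator targets.

    A more important parameter gets a taller bar that pushes back harder,
    so users can literally feel what the AI weighted most.
    """
    total = sum(importances.values()) or 1.0
    return [BarState(feature=name,
                     height_mm=(score / total) * max_height_mm,
                     resistance_n=(score / total) * max_resistance_n)
            for name, score in sorted(importances.items(),
                                      key=lambda kv: -kv[1])]

# Example: hypothetical feature relevance scores for the recipe use case.
for bar in physicalize({"prep_time": 0.55, "price": 0.30, "calories": 0.15}):
    print(f"{bar.feature:>9}: height {bar.height_mm:4.1f} mm, "
          f"resistance {bar.resistance_n:.2f} N")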
4.5. Collaborative interaction

Enabling multiple users to interact with the tangible system simultaneously in the same space can facilitate inquiry into the AI system's functioning in a collaborative way. Such group learning and decision-making may be especially useful in educational or professional settings. Embodied learning helps people explore AI together, ask questions, and share insights and possible concerns about the AI's decision making.

5. Discussion and conclusion

Each of the five themes presented above may help form insights into how to enhance the user experience of XAI by leveraging the natural ways humans interact with the physical world. Tangible XAI may help bridge the gap between complex AI systems and intuitive, human-centered interaction design.

This paper extends earlier work on tangible explainable AI by formulating themes for adaptive tangible XAI. Tangibility can add a new dimension to embodied and multisensory human-AI interaction, and provide the means to adapt to user expectations and behaviour, as well as to contexts of use. While the five themes presented in Section 4 are still quite high-level, they can be concretized through more exact tangible design choices. Considering Hornecker & Buur's tangible interaction themes [8] of expressive representation, tangible manipulation, spatial interaction and embodied facilitation, as well as the concepts of data physicalization [16], design researchers can create a variety of experiments for TangXAI. Benefits may include improved expressive meaning through the material qualities and digital-spatial representations of complex data, improved social sharing of human-AI interaction, and natural embodied exploration of AI models. This is in line with Hoffman et al.'s [18] argument that explainable AI's "goodness" is affected by users' matching mental models, curiosity, and trust. By incorporating tangible features, XAI systems have the potential to become more inclusive and better serve the needs of a wide array of user groups, making AI's decision-making processes clearer and more accessible to everyone. This paper aims to open the discussion on this topic.

Declaration on Generative AI

We acknowledge the use of ChatGPT Plus in this workshop paper in two ways. First, as explained at the beginning of Section 4, ChatGPT was used for brainstorming, to generate initial ideas for how the tangibility of XAI could advance adaptivity. Second, ChatGPT was used to paraphrase and shorten some text sections from our own earlier papers [6] and [9], after which we paraphrased these paragraphs further and used them in Sections 1-3 of this paper.

References

[1] Thomas Olsson and Kaisa Väänänen. 2021. How does AI challenge design practice? interactions 28, 4 (2021), 62–64.
[2] Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Franco Turini, Fosca Giannotti, and Dino Pedreschi. 2018. A survey of methods for explaining black box models. ACM Computing Surveys (CSUR) 51, 5 (2018), 1–42.
[3] Wei Xu. 2019. Toward human-centered AI: a perspective from human-computer interaction. interactions 26, 4 (2019), 42–46.
[4] Alejandro Barredo Arrieta, Natalia Díaz-Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador García, Sergio Gil-López, Daniel Molina, Richard Benjamins, et al. 2020. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion 58 (2020), 82–115.
[5] Gulsum Alicioglu and Bo Sun. 2022. A survey of visual analytics for Explainable Artificial Intelligence methods. Computers & Graphics 102 (2022), 502–520.
[6] Ashley Colley, Kaisa Väänänen, and Jonna Häkkilä. 2022. Tangible Explainable AI - an Initial Conceptual Framework. In Proceedings of MUM 2022, November 27-30, 2022, Lisbon, Portugal.
ACM, New York, NY, USA, 6 pages. https://doi.org/10.1145/3568444.3568456
[7] Vaishak Belle and Ioannis Papantonis. 2021. Principles and practice of explainable machine learning. Frontiers in Big Data (2021), 39.
[8] Eva Hornecker and Jacob Buur. 2006. Getting a grip on tangible interaction: a framework on physical space and social interaction. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 437–446.
[9] Ashley Colley, Matilda Kalving, Jonna Häkkilä, and Kaisa Väänänen. 2023. Exploring Tangible Explainable AI (TangXAI): A User Study of Two XAI Approaches. In Proceedings of OzCHI 2023, Wellington, New Zealand, December 4-6, 2023. ACM.
[10] Yvonne Jansen, Pierre Dragicevic, Petra Isenberg, Jason Alexander, Abhijit Karnik, Johan Kildal, Sriram Subramanian, and Kasper Hornbæk. 2015. Opportunities and challenges for data physicalization. In Proceedings of CHI 2015. ACM, 3227–3236.
[11] Martin Spindler, Christian Tominski, Heidrun Schumann, and Raimund Dachselt. 2010. Tangible views for information visualization. In Proceedings of the ACM International Conference on Interactive Tabletops and Surfaces. 157–166.
[12] Paul Green and Lisa Wei-Haas. 1985. The rapid development of user interfaces: Experience with the Wizard of Oz method. In Proceedings of the Human Factors Society Annual Meeting, Vol. 29. SAGE Publications, Los Angeles, CA, 470–474.
[13] Constantine Stephanidis. 2001. Adaptive Techniques for Universal Access. User Modeling and User-Adapted Interaction 11 (2001), 159–179. https://doi.org/10.1023/A:1011144232235
[14] Mahdi H. Miraz, Maaruf Ali, and Peter S. Excell. 2021. Adaptive user interfaces and universal usability through plasticity of user interface design. Computer Science Review 40 (2021). https://doi.org/10.1016/j.cosrev.2021.100363
[15] Marcelo Coelho and Jamie Zigelbaum. 2011. Shape-changing interfaces. Personal and Ubiquitous Computing 15 (2011), 161–173. https://doi.org/10.1007/s00779-010-0311-y
[16] Kim Sauvé and Steven Houben. 2022. From data to physical artifact: challenges and opportunities in designing physical data artifacts for everyday life. interactions 29, 2 (2022), 40–45.
[17] Albrecht Schmidt, Passant Elagroudy, Fiona Draxler, Frauke Kreuter, and Robin Welsch. 2024. Simulating the Human in HCD with ChatGPT: Redesigning Interaction Design with AI. interactions 31, 1 (January-February 2024), 24–31. https://doi.org/10.1145/3637436
[18] Robert R. Hoffman, Shane T. Mueller, Gary Klein, and Jordan Litman. 2023. Measures for explainable AI: Explanation goodness, user satisfaction, mental models, curiosity, trust, and human-AI performance. Frontiers in Computer Science 5 (2023). https://doi.org/10.3389/fcomp.2023.1096257