Autonomous Critical Help Provided by an Artificial Agent in the Field of Cultural Heritage

Filippo Cantucci, Rino Falcone
Institute of Cognitive Science and Technology, National Research Council of Italy (ISTC-CNR), Rome

Abstract
In this work we introduce a computational cognitive model that provides an intelligent agent (e.g. a robot or a virtual assistant) with the capability to personalize a museum visit. The personalization is based on the goals and interests of the user who intends to visit the museum, and it also considers the goals and interests of the museum curators who designed the exhibition. We introduce and evaluate a special type of help provided by the agent, called Critical Help, which can lead to a change of the user's request, with the goal of taking into account needs that the user herself cannot assess or has not been able to assess. The computational model has been implemented with the well-known agent-oriented programming framework Jason. We recruited 26 participants for a preliminary robotic experiment conducted in order to test the computational model. Each participant interacted with the humanoid robot Nao, widely used in Human-Robot Interaction (HRI) scenarios.

Keywords
Human-Agent Interaction, User Modelling, Adjustable Social Autonomy, Cultural Heritage

1. Introduction
In recent years, the purpose of several cultural heritage venues (e.g. museums) has shifted from providing static information about the resources they handle (e.g. collections of artworks) to providing a much wider user experience that sometimes goes beyond the on-site visit [1, 2, 3]. Different Human-Machine Interaction (HMI) approaches have been proposed in order to design intelligent systems able to properly interact with users in museums [4, 5, 6]. One of the most challenging results that cultural heritage venues try to achieve is the personalization of the user experience [7]. Recent developments in digital technologies for the cultural heritage domain have driven technological trends in comfortable and convenient traveling, by offering interactive and personalized user experiences. The emergence of big data analytics [8], recommender systems [9] and personalization techniques has created a smart research field, augmenting the cultural heritage visitor's experience [10]. Most of the existing approaches are strongly oriented to giving users the most suitable experience every time they get in touch with cultural heritage venues, by leveraging complex user models. However, besides the ability to consider the user's needs, an intelligent system should also take into account the interests, goals and plans of those who manage the cultural heritage and allow its usage. These plans, goals and interests are in general implicit in the restrictions and mandatory choices that the museum makes available to the users. However, they can be adapted and personalized for each user.
In practice, on the basis of the mental attitudes attributed to the user and of the constraints or needs attributable to the museum curators, a mediation system between these two parties (users and museum curators) can customize the museum visit so as to best satisfy both. The result of this kind of mediation can be a substantial change of the user's request, with the objective of taking into account needs that the user herself cannot assess or has not been able to assess.

In this work we propose a computational cognitive model that provides an intelligent agent (e.g. a robot or a virtual assistant) with the capability to personalize a museum tour with respect to the goals and interests of the users who intend to visit the museum, while also taking into account the goals and interests of the museum curators who designed the exhibition. In particular, the computational model confers on the agent the capabilities to:
• investigate the artistic interests of the user and model the user with respect to those interests, by attributing to her specific mental states (beliefs, goals, plans);
• model the beliefs, goals and plans of the museum curators;
• select the most suitable museum tour as the result of a negotiation, internal to the agent, between the represented mental states of the user and the represented mental states of the exhibition curators;
• investigate different dimensions of the user's satisfaction with the tour proposed by the intelligent agent.

We provide the results of a preliminary Human-Robot Interaction experiment that we designed in order to test the capabilities of the computational model. We recruited 26 participants who interacted with the humanoid robot Nao, widely used in Human-Robot Interaction scenarios [11, 12]. The robot plays the role of a museum assistant in a virtual museum and has the goal of providing the user with a museum exhibition to visit. At the end of each interaction the robot proposes a short survey to the user, with the aim of investigating different dimensions of her satisfaction with the presented exhibition. The computational model has been implemented with the agent-oriented programming (AOP) framework Jason [13].

2. Background
The human capability to attribute mental representations and states to AI agents becomes crucial in the context of Human-Agent Cooperation [14], where it is desirable that such agents act not as passive executors but as active collaborators. Consider a collaborative scenario in which a human X and an artificial agent Y share the same plan. In this context, X relies on Y for realizing some part of their common plan, or of X's own plan (task delegation); on its side, Y decides to help X achieve some of her goals by taking over some role/action of X's plan and achieving some goal (task adoption). Now, in order to do something for X, Y has to understand X's goals and beliefs, for example X's expectations about Y's behavior. Cooperation, and consequently task delegation/adoption, implies more than simple obedience to orders or the execution of a prescribed action. From the artificial agent's point of view, delegation and adoption distinguish a collaborator from a simple tool, and presuppose intelligence and autonomy [15]. In their fullest sense, cooperation and help are not just order/task execution; they require more autonomy and even initiative.
Let us focus on a deep level of cooperation, where the agent Y can adopt a task delegated by X at different levels of effective help. Following [14], the different levels of adoption can be distinguished as follows (a minimal encoding of these levels is sketched at the end of this section):
• Sub help: the agent Y satisfies a sub-part of the delegated world-state (thus satisfying just a sub-goal of the agent X);
• Literal help: the agent Y adopts exactly what has been delegated by the agent X;
• Over help: the agent Y goes beyond what has been delegated by the agent X without changing X's plan (but including it within a hierarchically superior plan);
• Critical-Over help: the agent Y realizes an over help and, in addition, also modifies the original plan/action (included in the new meta-plan);
• Critical help: the agent Y satisfies the relevant results of the requested plan/action (the goal), but modifies that plan/action;
• Critical-Sub help: the agent Y realizes a sub help and, in addition, modifies the (sub) plan/action.

The theory of delegation and adoption and, more generally, the theory of adjustable social autonomy [15] represent the core theoretical background underlying the design of the computational cognitive model proposed in this work. Two additional theoretical tools, applied in the field of HAI, have supported the design process: theory of mind (ToM) and BDI agent modelling.

Theory of mind [16] can be defined as the ability of an agent (human or artificial) to ascribe specific mental states to other agents and to take them into account when making decisions. Modelling other agents is one of the most important abilities learned by humans when they cooperate with each other. Humans have a strong predisposition to anthropomorphise anything surrounding them and to evaluate or predict the behavior of other humans on the basis of a strong ToM of their interlocutors, which fosters intelligent collaboration. However, the recent, though increasing, introduction of intelligent systems in society has not yet allowed people (mainly non-specialists) to form a ToM of these systems based on correct assumptions. Providing artificial agents with the capability to build complex models of their interlocutors' mental states, and to adapt their decisions on the basis of these models, is a crucial point for promoting an intelligent and trustworthy collaboration.

BDI agent modelling [17] is one of the most popular models in agent theory [18]. Originally inspired by the theory of human practical reasoning developed by Michael Bratman [19], the BDI model focuses on the role of intentions in reasoning and allows agents to be characterized from a human-like point of view. Very briefly, in the BDI model the agent has beliefs, information representing what it perceives in the environment and what it communicates with other agents, and desires, states of the world that the agent might want to accomplish. The agent deliberates on its desires and decides to commit to some of them: committed desires become intentions. To satisfy its intentions, it executes plans in the form of courses of action or sub-goals to achieve. The behaviour of the agent is thus described or predicted by what it has committed to carry out. An important feature of BDI agents is their ability to react to changes in their environment as soon as possible while keeping their pro-active behaviour.
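As a compact illustration of the taxonomy above, the levels of adoption can be encoded as in the following minimal Python sketch; the names and the encoding are purely illustrative and are not part of the Jason implementation described later.

```python
from enum import Enum

# Illustrative encoding of the levels of adoption; only an example,
# not part of the Jason implementation used in the experiment.
class HelpLevel(Enum):
    SUB = "only a sub-goal of the delegated task is satisfied"
    LITERAL = "exactly what was delegated is adopted"
    OVER = "the delegated plan is embedded in a hierarchically superior plan"
    CRITICAL_OVER = "over help, and the original plan/action is also modified"
    CRITICAL = "the requested goal is satisfied, but the plan/action is modified"
    CRITICAL_SUB = "sub help with a modified (sub) plan/action"

# In the experiment described below only two of these levels occur:
# LITERAL (the recommended tour matches the requested artistic period) and
# CRITICAL (the recommended tour differs from the requested artistic period).
```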
3. A description of the computational cognitive model
The proposed computational model provides an artificial cognitive agent (with its own beliefs, goals, intentions and so on) with the capability to personalize a museum tour on the basis of the mental states of the user who intends to visit the museum, while also taking into account the mental states of the museum curators who designed the exhibition. The final tour to recommend is the result of an internal negotiation process, within the agent, between the mental states of the user and those of the museum curators.

The mental states of the agent are stored in the Beliefs Base, a database collecting: i) the current state of the environment, excluding the agents involved in the scenario; ii) the mental states of the user, that is, the beliefs, goals and plans that the agent attributes to the user thanks to its capability to build a ToM of the user; iii) the mental states of other agents involved in the scenario, in this case the museum curators, that is, those who designed, realized and maintain the museum exhibition; iv) general beliefs, which correspond to the agent's background knowledge.

The computational model provides the agent with the tools to interact with the user, in order to map into its Beliefs Base the information it considers relevant for adapting the museum visit to the user. The agent establishes an initial interaction with the user, with the goal of profiling her by investigating her artistic interests. Through voice interaction, supported by interactive tools (e.g. a Graphical User Interface), the agent extracts information and collects it into a user profile P_U = ⟨p_F, P_D, Acc_u⟩, defined as a tuple of features encoding: i) the user's favourite artistic period, ii) the artistic periods in which the user has no interest, iii) the level of accuracy with which the user intends to view the material proposed during the visit to the museum (please refer to Section 4.1 for a more detailed definition of accuracy). A minimal encoding of the profile and of the Beliefs Base is sketched at the end of this section.

In addition to the user, the cognitive model allows the agent to model the mental states of other agents involved in the scenario. In this case, the agent is able to represent in its Beliefs Base some beliefs, goals and plans ascribed to the museum curators who designed the entire exhibition. Unlike the user model, which is created at run time on the basis of her profile, the model of the exhibition curators is described in advance in the agent's Beliefs Base. While it represents a priori knowledge of the agent, this model can be modified by the agent itself through interaction with the museum curators.

After investigating the user's artistic interests, profiling her and attributing to her mental states consistent with the profile created, the agent has to select a museum visit to propose to the user. The cognitive model defines multiple heuristics that the agent can exploit to identify the most suitable museum tour; these heuristics implement different internal negotiation processes that the agent triggers with the aim of mediating the choice of the museum visit, considering the mental states of the user and those of the curators of the exhibition. The selection of the most suitable heuristic depends on the mental states modeled in the agent's Beliefs Base. In Section 4 we describe the heuristic exploited by the agent in the preliminary experiment.
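As a concrete, purely illustrative reading of the structures just described, the user profile P_U and the four-part Beliefs Base could be encoded as follows. This is a Python sketch with invented field names; the actual system represents this information as Jason beliefs.

```python
from __future__ import annotations
from dataclasses import dataclass, field

# Illustrative encoding of the user profile P_U = <p_F, P_D, Acc_u>;
# field names are invented for readability.
@dataclass
class UserProfile:
    favourite_period: str          # p_F: the artistic period the user prefers
    discarded_periods: list[str]   # P_D: artistic periods of no interest
    accuracy: str                  # Acc_u: "high" or "low" level of description detail

# Minimal stand-in for the agent's Beliefs Base, grouping the four kinds
# of beliefs listed above.
@dataclass
class BeliefsBase:
    environment: dict = field(default_factory=dict)     # i) current state of the environment
    user_model: UserProfile | None = None                # ii) mental states ascribed to the user
    curators_model: dict = field(default_factory=dict)   # iii) mental states ascribed to the curators
    general: dict = field(default_factory=dict)          # iv) the agent's general knowledge
```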
4. The experiment
This section describes a preliminary Human-Robot Interaction (HRI) experiment that we designed in order to test the capabilities of the computational model. We recruited 26 participants who interacted with the Nao robot. The robot plays the role of a museum assistant in a virtual museum and has the goal of providing the user with a museum exhibition to visit. During the interaction the robot collects information for profiling the user and then acts as an assistant for the visit, offering the possibility to listen to the descriptions of the artworks, read by the robot itself. At the end of each tour, the robot proposes a short survey to the user, with the aim of investigating different dimensions of her satisfaction with the recommended exhibition.

The robot helps users to visit the part of the museum that is the most appropriate for their artistic interests and that represents a mediation between these interests and those of the museum curators. The museum tour resulting from the mediation process can match the artistic interests explicitly declared by the user, or it can be slightly different from the declared interests. In the first case the robot provides literal help to the user; in the second case it provides critical help. When the robot provides critical help, it tries to satisfy the user by leveraging implicit assumptions grounded in the artistic interests the user has explicitly declared.

4.1. Experimental Design
The museum that the user explores is organized in multiple thematic tours, each containing artworks that belong to the same artistic period (e.g. Impressionism, Surrealism, Baroque, Greek Art and so on). The museum is designed in such a way that it covers the entire history of art. The final model that the agent attributes to the user is a collection of beliefs, goals, etc., that the agent infers on the basis of the features perceived during the profiling phase. Each thematic tour is described by three attributes: Relevance, Accuracy and Category (see the sketch at the end of this subsection).
• The Relevance of an artistic period is defined on the basis of the originality of the artworks that compose it and of the impact they had in the field of art history.
• The Accuracy, on the other hand, specifies the level of detail in the description of each artwork present in a thematic room.
• Each thematic tour (artistic period) belongs to a Category, which groups different artistic periods; for example, the "Impressionism" tour belongs to the same category as the "Surrealism" and "Cubism" tours, all falling in the more general class named "modern art". This holds for every artistic period. The categorization of the art history periods allows us to formulate plausible assumptions about the users: artistic periods belonging to the same category are more homogeneous and therefore closer in the preferences of the users than artistic periods of other categories. For example, a user who indicates "Impressionism" as her preferred artistic period will probably be more inclined towards "modern art" than towards "ancient art".

For the experiment we defined three levels of relevance (high, medium, low) and two levels of accuracy (high, low). The user can explore the museum room by choosing the artworks she wishes to see, and she can leave the museum at any time.
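To make the representation concrete, a thematic tour can be encoded, purely for illustration, as in the sketch below; the catalogue entries are invented examples and do not reflect the actual virtual museum.

```python
from dataclasses import dataclass

# Illustrative encoding of a thematic tour; relevance takes the values
# {"high", "medium", "low"} and accuracy the values {"high", "low"}
# used in the experiment.
@dataclass(frozen=True)
class Tour:
    period: str       # artistic period, e.g. "Impressionism"
    relevance: str    # originality/impact of the artworks, as judged by the curators
    accuracy: str     # level of detail of the artwork descriptions
    category: str     # broader class, e.g. "modern art" or "ancient art"

# A toy catalogue (invented data, only to make the later examples concrete).
MUSEUM = [
    Tour("Impressionism", "high", "high", "modern art"),
    Tour("Surrealism", "high", "low", "modern art"),
    Tour("Cubism", "medium", "high", "modern art"),
    Tour("Greek Art", "high", "high", "ancient art"),
]
```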
4.2. The heuristic for the tour selection
Here we describe the heuristic exploited by the agent in order to select the most appropriate tour to visit. The algorithm takes as input the user's preferred artistic period, the periods of non-interest and the level of accuracy chosen by the user for visiting the exhibition. After obtaining the values of relevance and accuracy and the category of the tour corresponding to the user's preferred artistic period, the algorithm checks multiple conditions (an illustrative implementation is sketched below).

The first condition (Condition C1) verifies whether the artistic period requested by the user has maximum relevance from the museum curators' point of view and, if so, whether the accuracy of its description corresponds to that chosen by the user. If both conditions hold, the robot recommends the visit of the corresponding tour. If only the accuracy condition is not satisfied, the algorithm still chooses the period requested by the user, but presents it with a level of accuracy different from the one indicated; the accuracy will be the one believed appropriate by the museum curators (Condition C2). If Condition C2 is not verified either, the algorithm examines the tours corresponding to the artistic periods that the user has not discarded. If there is a tour with a high level of relevance, belonging to the same category as the user's preferred artistic period and requiring a level of accuracy equal to that chosen by the user, the robot recommends the visit of that tour (Condition C3). If this condition cannot be satisfied either, the algorithm tries to select a tour with a high level of relevance belonging to the same category as the user's preferred artistic period, regardless of the level of accuracy it requires; the accuracy will be the one believed appropriate by the museum curators (Condition C4). Condition C5 instead applies when, no tour having been found in the same category as the requested artistic period, there is a tour corresponding to an artistic period belonging to the category adjacent (next or previous) to that of the user's preferred artistic period, with a level of relevance immediately following that of the user's preferred artistic period. Finally, if not even C5 can be satisfied, the algorithm selects a random tour among those corresponding to the artistic periods not discarded by the user (Condition C6).
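The following Python sketch gives one possible reading of conditions C1–C6, using the illustrative Tour encoding introduced in Section 4.1. It is a reconstruction from the description above, not the Jason code actually run by the robot, and Condition C5 in particular is only approximated.

```python
import random

def select_tour(tours, preferred, discarded, accuracy):
    """Illustrative reading of conditions C1-C6; `tours` is a list of Tour objects."""
    candidates = [t for t in tours if t.period not in discarded]
    wanted = next((t for t in candidates if t.period == preferred), None)

    # C1: the requested period has maximum relevance and the requested accuracy.
    if wanted and wanted.relevance == "high" and wanted.accuracy == accuracy:
        return wanted                                   # literal help
    # C2: the requested period has maximum relevance, but the accuracy is
    #     the one believed appropriate by the curators.
    if wanted and wanted.relevance == "high":
        return wanted                                   # literal help, different accuracy
    same_category = [t for t in candidates
                     if wanted and t.category == wanted.category and t.period != preferred]
    # C3: a highly relevant tour in the same category, with the requested accuracy.
    for t in same_category:
        if t.relevance == "high" and t.accuracy == accuracy:
            return t                                    # critical help
    # C4: a highly relevant tour in the same category, whatever its accuracy.
    for t in same_category:
        if t.relevance == "high":
            return t                                    # critical help
    # C5 (approximated): the paper additionally looks at the adjacent category
    # with the next level of relevance; that neighbourhood relation is not coded here.
    # C6: otherwise, a random tour among the non-discarded periods.
    return random.choice(candidates) if candidates else None
```

For instance, with the toy catalogue of Section 4.1, select_tour(MUSEUM, "Cubism", [], "high") would return the "Impressionism" tour via C3, i.e. a critical help, because in that invented catalogue "Cubism" does not have maximum relevance.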
4.3. Experimental procedure
26 participants, aged between 25 and 75, were recruited for this pilot study. Each participant carried out an entire interaction with the robot (a trial). Each participant aims to visit a tour of the virtual museum corresponding to a specific artistic period, and is aware that the tour to visit will be chosen by the robot managing the virtual museum, which will select the most suitable tour for her. Each trial develops in the following phases:
1. Starting interaction: the robot introduces itself to the user, describing its role and the virtual museum it manages.
2. User artistic profiling: the robot asks the user a series of questions aimed at investigating her artistic interests, in terms of her favorite artistic periods and the artistic periods of no interest. In this phase the interaction is supported by a GUI through which the user can express her artistic preferences and the robot can collect useful data to profile the user. In addition to defining the artistic periods of interest and non-interest, the robot asks the user with what degree of accuracy she intends to visit the section.
3. Tour visit: once the user profile has been established, the robot exploits the heuristic defined in Section 4.2 to select the tour on behalf of the user. Once the selection has been made, the robot activates the corresponding tour in the virtual museum and leaves the control to the user, who can visit the room and select the artworks inside.
4. End of the museum tour: the user can leave the recommended tour and therefore the museum. Once this happens, the robot returns to interact with the user, asking her the questions of a short survey used to investigate how satisfied the user was after the visit. We adopted a five-level scale to encode the user responses, where 1 is the worst value and 5 is the best one. In particular, the user had to answer the following questions:
• Q1: How satisfied were you with the duration of the visit?
• Q2: How satisfied were you with the quality of the artworks?
• Q3: How satisfied were you with the number of the artworks?
• Q4: How surprised were you by the artistic period recommended by the robot, compared to the artistic period initially chosen by you?
• Q5: How satisfied are you with the robot's recommendation, given the artistic period initially chosen by you?

5. Results

User | Preferred Artistic Period | User Accuracy | Recommended Tour | Tour Accuracy | Q1 | Q2 | Q3 | Q4 (Surprise) | Q5 (Satisfaction)
1 | Baroque | Medium | Caravaggio | High | 4 | 5 | 4 | 5 | 5
3 | Baroque | High | Caravaggio | High | 4 | 5 | 5 | 4 | 4
4 | Impressionism | Medium | Romanticism | Medium | 5 | 4 | 4 | 5 | 3
4 | Cubism | High | Expressionism | Medium | 3 | 2 | 3 | 5 | 1
5 | 700’ Sculpture | High | 700’ Painting | High | 5 | 5 | 4 | 3 | 4
8 | Cubism | High | Neoclassicism | High | 5 | 4 | 4 | 4 | 3
10 | Impressionism | High | Expressionism | Low | 4 | 1 | 3 | 5 | 1
12 | Impressionism | High | Surrealism | Medium | 3 | 3 | 4 | 3 | 4
15 | Art Nouveau | Medium | Romanticism | Medium | 5 | 5 | 5 | 4 | 3
17 | Art Nouveau | Medium | Neoclassicism | High | 5 | 5 | 4 | 3 | 4
20 | Futurism | Medium | Romanticism | Low | 4 | 2 | 4 | 5 | 1
21 | Cubism | Medium | Surrealism | Medium | 5 | 4 | 3 | 3 | 3
22 | Baroque | High | 400’ Painting | High | 2 | 5 | 3 | 4 | 1
23 | Romanticism | Medium | Symbolism | High | 1 | 5 | 4 | 3 | 4
24 | Cubism | Medium | Surrealism | Medium | 5 | 3 | 5 | 5 | 4
Table 1: Answers to the survey questions proposed by the robot after it provided critical help to the user. In these cases the robot recommended a tour slightly different from the artistic period the user had indicated as her preferred one.

The pilot study has been designed with the goal of answering the following research questions:
• RQ1: How risky/acceptable is critical help compared to literal help? Does the proposed heuristic help to make critical help more acceptable?
• RQ2: Given the risks that critical help in any case entails, in what situations and to what extent can critical help be useful?

User | Preferred Artistic Period | User Accuracy | Recommended Tour | Tour Accuracy | Q1 | Q2 | Q3 | Q4 (Surprise) | Q5 (Satisfaction)
2 | 500’ Italian Painting | High | 500’ Italian Painting | High | 4 | 5 | 4 | 1 | 5
5 | 500’ Italian Painting | High | 500’ Italian Painting | High | 5 | 5 | 5 | 2 | 5
6 | Greek Art | Medium | Greek Art | Medium | 5 | 5 | 5 | 1 | 5
7 | Gothic | Medium | Gothic | Medium | 5 | 5 | 5 | 1 | 4
9 | 500’ Italian Painting | Medium | 500’ Italian Painting | High | 5 | 4 | 4 | 2 | 3
11 | Caravaggio | Low | Caravaggio | High | 5 | 4 | 4 | 1 | 4
13 | Gothic | Low | Gothic | Low | 1 | 1 | 1 | 1 | 2
14 | Contemporary Art | Medium | Contemporary Art | Medium | 5 | 3 | 3 | 1 | 5
16 | 700’ Painting | High | 700’ Painting | Medium | 4 | 4 | 5 | 2 | 5
18 | 500’ Italian Painting | High | 500’ Italian Painting | High | 5 | 3 | 4 | 1 | 5
19 | Contemporary Art | High | Contemporary Art | Low | 5 | 4 | 4 | 1 | 5
Table 2: Answers to the survey questions proposed by the robot after it provided literal help to the user. In these cases the robot recommended a tour corresponding to the artistic period the user had indicated as her preferred one.

Here we report the results obtained in the pilot study. We divided the participants into users who received critical help from the agent (the preferred artistic period chosen by the user does not match the tour recommended by the robot), summarized in Table 1, and users who received literal help (the preferred artistic period selected by the user coincides with the tour recommended by the robot), summarized in Table 2. We can observe that 15 users received critical help, while 11 received literal help.

We are interested in investigating the answers to questions Q4 and Q5: these questions are designed to understand the impact of the robot's ability to propose to the user a tour different from what she expected. In this experiment the questions Q1, Q2 and Q3 are not analyzed in depth; they were asked to contextualize the experience and to make sure that the user focused on the contents of the tour, and not only on the quality of the interaction with the robot (and on the mode of critical or literal help), before answering questions Q4 and Q5.

In order to answer research question RQ1, we ran an independent samples t-test. From the parametric analysis of the answers to question Q5, reported in Table 3, we observe that users who received a tour recommendation consistent with the initial choice of their preferred artistic period (literal help) show a level of satisfaction on average higher than that of the users to whom the robot proposed a tour referring to an artistic period different from the one initially chosen. This shows how critical help implies the risk of leaving the user, at least partially, dissatisfied, because the robot expressly violated her request. However, although the difference between the means of the two groups is significant (D = 1.36), the mean satisfaction for critical help (M = 3.00) shows that this type of help does not result in a low level of satisfaction, especially if we consider that the robot provided no justification for its behavior when departing from the user's request. Indeed, the value 3 on the scale used for the survey corresponds to a medium level of satisfaction.
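For reference, the comparison reported in Table 3 can be reproduced from the Q5 columns of Tables 1 and 2 with a standard independent samples t-test. The following sketch, assuming a two-tailed Student's t-test with pooled variance (consistent with the reported p-value), uses SciPy and is provided only as a check of the analysis.

```python
from statistics import mean, stdev
from scipy.stats import ttest_ind

# Q5 (satisfaction) answers taken from Table 1 (critical help) and Table 2 (literal help).
critical_q5 = [5, 4, 3, 1, 4, 3, 1, 4, 3, 4, 1, 3, 1, 4, 4]
literal_q5 = [5, 5, 5, 4, 3, 4, 2, 5, 5, 5, 5]

print(f"literal:  M = {mean(literal_q5):.2f}, SD = {stdev(literal_q5):.2f}, N = {len(literal_q5)}")
print(f"critical: M = {mean(critical_q5):.2f}, SD = {stdev(critical_q5):.2f}, N = {len(critical_q5)}")

# Student's t-test with pooled variance, two-tailed.
t, p = ttest_ind(literal_q5, critical_q5, equal_var=True)
print(f"t = {t:.2f}, p = {p:.4f}")   # p is approximately 0.0103
```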
We recall that the heuristic for the tour selection is designed so that, if the robot does not find a tour that matches the artistic period chosen by the user, it tries to recommend a tour corresponding to an artistic period belonging to the same category as the one selected by the user (e.g. Impressionism, Surrealism and Romanticism all belong to modern art).

Group | Literal Help | Critical Help
Mean | 4.36 | 3.00
SD | 1.03 | 1.36
N | 11 | 15
Table 3: Independent samples t-test conducted in order to answer RQ1. Note that there is a significant difference between the means of the two groups (p = 0.0103).

Much more interesting is the analysis regarding the critical help. To answer research question RQ2 we focus only on the group of users who received critical help (Table 1).

Figure 1: Histogram of the level of user satisfaction investigated through question Q5, in the case of robot critical help.

Figure 1 shows that, among the 15 participants who received critical help, only 4 evaluated the tour recommended by the robot as unsatisfactory, while 4 users gave it a medium satisfaction value, 6 users evaluated it as satisfying and 1 as strongly satisfying. Thus 73.3% of the participants who received critical help evaluated it positively. This result is particularly relevant if we consider that the visitors had no prior notice of the possibility that their request could be changed by modifying the artistic period they had chosen. As we have seen, this change is justified by the will of the museum curators to offer visits to highly relevant artistic periods (given the collection owned by the museum, the artistic period offered has greater relevance than the one chosen by the visitor) and thus, ultimately, to favor the goals of the user. But this information is not communicated to the visitor, and it is difficult for her to deduce it directly. Nevertheless the user satisfaction is particularly high. The surprise effect (encoded by the answers to question Q4) confirms that the robot's choice was unexpected in the face of an explicitly different request. If this change could be explained, the number of those who gave a negative judgment about the robot's suggestion would probably decrease further. Certainly, much also depends on the collections owned by the museum, on their value and presentation, and on the artistic flair of the user. In any case, the awareness that the robot made its choice to better serve the user's artistic taste would certainly play a positive role in the final satisfaction of the user.

6. Conclusions
In this paper we presented a computational cognitive model that provides an artificial agent with the capability to personalize a museum tour with respect to the goals and interests of the users who intend to visit the museum. The model does not consider only the mental states of the user related to her artistic interests, but also takes into account the constraints and goals of the curators who designed the exhibitions hosted by the museum. In this way, the artificial agent assumes the role of mediator between the user and the curators, with the goal of offering an experience that is as satisfying as possible for the user.
The negotiation process between the user's mental states and the constraints/goals of the exhibition curators can lead the artificial agent to suggest, in some cases, a museum tour that is very close to the user's artistic interests (literal help); in other cases the agent can suggest a tour that diverges from the user's more explicit interests, but that still tries to satisfy interests/goals which, although not explicitly declared, may be attributable to her (critical help). This form of help, based on the consolidated theory of Adjustable Social Autonomy [15], has the main goal of keeping the level of user satisfaction high, making a choice that is as suitable as possible for the user while also taking into account constraints that could otherwise determine low levels of user satisfaction. Naturally, changing the user's request without negotiation involves the risk of dissatisfying the user. But it remains useful to evaluate how an alternative choice made by the robot, yet made in the interest of the user's goals, can be accepted by the user.

We conducted a Human-Robot Interaction pilot study with 26 participants, in order to investigate the potential of the cognitive model. The participants interacted with the humanoid robot Nao, which played the role of a museum assistant in a virtual museum and had the goal of providing the user with a museum exhibition to visit. At the end of each interaction the robot proposed a short survey to the user, with the aim of investigating different dimensions of her satisfaction with the presented exhibition. The exploratory study has shown promising results. In fact, although literal help turns out to be more satisfactory for users than critical help, in most cases where users received critical help they evaluated the museum tour recommended by the robot positively. This result is particularly relevant given that the users did not know the reasons that led to a choice different from the one they expected. Despite this, even though they were surprised by the recommended tour, they maintained high levels of satisfaction after the visit of the exhibition.

7. Future works
First of all, our goal is to follow up on this pilot study, in order to systematize the preliminary results obtained and to consolidate the research questions we investigate. Another relevant future work will focus on explainability. In particular, we want to design further experiments to evaluate different dimensions of user satisfaction when the robot provides an explanation of the reasons that led it to recommend that specific tour to the user. We are convinced that explaining the reasons that led the robot, for example, to suggest a museum tour different from the one the user expected has a decisive impact on the user's acceptance of critical help, which tries to satisfy the results requested by the user while adapting the request to a context that may be unfavorable compared to the initial request. Finally, we intend to extend the computational model by integrating other levels of help, as provided by the theory of delegation and adoption, and to test their impact through further HRI experiments in real Cultural Heritage scenarios.

References
[1] Y. Wang, N. Stash, R. Sambeek, Y. Schuurmans, L. Aroyo, G. Schreiber, P. Gorgels, Cultivating personalized museum tours online and on-site, Interdisciplinary Science Reviews 34 (2009) 139–153.
[2] A. J. Wecker, Personalized cultural heritage experience outside the museum: Connecting the museum experience to the outside world, in: International Conference on User Modeling, Adaptation, and Personalization, Springer, 2014, pp. 496–501.
[3] T. Kuflik, A. J. Wecker, J. Lanir, O. Stock, An integrative framework for extending the boundaries of the museum visit experience: linking the pre, during and post visit phases, Information Technology & Tourism 15 (2015) 17–47.
[4] M. Bennewitz, F. Faber, D. Joho, M. Schreiber, S. Behnke, Towards a humanoid museum guide robot that interacts with multiple persons, in: 5th IEEE-RAS International Conference on Humanoid Robots, IEEE, 2005, pp. 418–423.
[5] A. Chianese, F. Piccialli, I. Valente, Smart environments and cultural heritage: a novel approach to create intelligent cultural spaces, Journal of Location Based Services 9 (2015) 209–234.
[6] A. Tavčar, J. Zupančič, M. Gams, Virtual assistants for the cultural heritage domain, in: International Conference on VR Technologies in Cultural Heritage, Springer, 2018, pp. 234–244.
[7] L. Ardissono, T. Kuflik, D. Petrelli, Personalization in cultural heritage: the road travelled and the one ahead, User Modeling and User-Adapted Interaction 22 (2012) 73–99.
[8] F. Amato, V. Moscato, A. Picariello, F. Colace, M. D. Santo, F. A. Schreiber, L. Tanca, Big data meets digital cultural heritage: Design and implementation of scrabs, a smart context-aware browsing assistant for cultural environments, Journal on Computing and Cultural Heritage (JOCCH) 10 (2017) 1–23.
[9] G. Pavlidis, Recommender systems, cultural heritage applications, and the way forward, Journal of Cultural Heritage 35 (2019) 183–196.
[10] A. Tavčar, A. Csaba, E. V. Butila, Recommender system for virtual assistant supported museum tours, Informatica 40 (2016).
[11] R. Andreasson, B. Alenljung, E. Billing, R. Lowe, Affective touch in human–robot interaction: conveying emotion to the Nao robot, International Journal of Social Robotics 10 (2018) 473–491.
[12] A. Quach, H. Dhanens, C. Chen, D. Vaughan, M. Harker, Human-robot interaction using the Nao robot (2019).
[13] R. H. Bordini, J. F. Hübner, M. Wooldridge, Programming Multi-Agent Systems in AgentSpeak Using Jason, John Wiley & Sons, 2007.
[14] C. Castelfranchi, R. Falcone, Towards a theory of delegation for agent-based systems, Robotics and Autonomous Systems 24 (1998) 141–157.
[15] R. Falcone, C. Castelfranchi, The human in the loop of a delegated agent: The theory of adjustable social autonomy, IEEE Transactions on Systems, Man, and Cybernetics – Part A: Systems and Humans 31 (2001) 406–418.
[16] B. M. Scassellati, Foundations for a Theory of Mind for a Humanoid Robot, Ph.D. thesis, Massachusetts Institute of Technology, 2001.
[17] A. S. Rao, M. P. Georgeff, BDI agents: from theory to practice, in: ICMAS, volume 95, 1995, pp. 312–319.
[18] M. Wooldridge, N. R. Jennings, Agent theories, architectures, and languages: a survey, in: International Workshop on Agent Theories, Architectures, and Languages, Springer, 1994, pp. 1–39.
[19] M. Bratman, Intention, Plans, and Practical Reason, volume 10, Harvard University Press, Cambridge, MA, 1987.