<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Workshop on sociAL roboTs for peRsonalized, continUous and adaptIve aSsisTance,
Workshop on Behavior Adaptation and Learning for Assistive Robotics, Workshop on Trust, Acceptance and Social Cues in
Human-Robot Interaction, and Workshop on Weighing the benefits of Autonomous Robot persoNalisation. August</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Personalising Explanations and Explaining Personalisation - Extended Abstract</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Tamlin Love</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Antonio Andriella</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Guillem Alenyà</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Artificial Intelligence Research Institute (IIIA-CSIC)</institution>
          ,
          <addr-line>Campus de la UAB, 08193 Bellaterra, Barcelona</addr-line>
          ,
          <country country="ES">Spain</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Institut de Robòtica i Informàtica Industrial, CSIC-UPC</institution>
          ,
          <addr-line>Llorens i Artigas 4-6, 08028, Barcelona</addr-line>
          ,
          <country country="ES">Spain</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <volume>26</volume>
      <issue>2024</issue>
      <fpage>0000</fpage>
      <lpage>0001</lpage>
      <abstract>
        <p>Both personalisation and explainability have become popular research topics in social robotics, each capable of improving human-robot interactions. However, challenges have been identified in both fields, from issues of transparency, bias and privacy in personalisation to issues of identifying and communicating relevant explanations in explainability. In this work, we examine the intersection of these two fields - using personalisation to improve explanations and explainability to improve personalisation - and identify a number of research directions that could be of benefit to both communities.</p>
      </abstract>
      <kwd-group>
        <kwd>Explainability</kwd>
        <kwd>Interpretability</kwd>
        <kwd>Personalisation</kwd>
        <kwd>Personalization</kwd>
        <kwd>Social Robot</kwd>
        <kwd>Human-Robot Interaction</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The personalisation of social robots - that is, adapting their behaviour to the needs and preferences
of individual users - has been shown to improve perceptions of competence and trust [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], interaction
quality, engagement and motivation [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], and can improve public acceptance of these technologies [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
For example, consider a robot placed in a domestic environment to assist an elderly patient in daily
living. Personalisation can be employed to adapt the robot’s behaviours, such as offering personalised
reminders for meals and medication, or factoring in preferences when guiding the patient through
cognitive or physical exercises. In this way, the robot could improve the interaction quality for the
patient and foster trust and acceptance for both the patient and caregiver.
      </p>
      <p>
        However, drawbacks have been identified for personalisation, such as introducing biases in the
robot’s behaviour [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], lack of transparency [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], and the privacy concerns surrounding the collection
of personal information [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. To ensure transparency and traceability, and to instil confidence and trust
in the robot, all stakeholders should be able to understand exactly how a robot’s behaviour is impacted
by personalisation.
      </p>
      <p>
        Explainability seeks to improve a user’s understanding of a decision-making system by explaining
the reasons for its decisions, and has seen a recent surge in popularity, for both machine learning [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]
and robotics [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. Robots that can explain their decisions have the potential to address the identified
drawbacks of personalisation, by exposing biases and communicating how personal information is
used in decision-making. However, explainability has its own challenges. In a review of explanations
in the social sciences, Miller [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] argues that explanations are contrastive (i.e. that “Why P?" questions
are better understood as “Why P and not Q?" questions, even if the contrast is implicit) and selected
(i.e. that explanations should focus on a few, relevant causes rather than overloading recipients with
all possible causes). Automatically identifying the most appropriate contrast and selecting the most
relevant causes remain open challenges for explainability. Additionally, in Human-Robot Interaction
(HRI) settings, it can be challenging to resolve the communication ambiguities in expressing and
interpreting queries and explanations [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. Personalisation could prove useful in addressing these
challenges by considering the unique needs and preferences of individual users.
      </p>
      <p>Clearly, HRI practitioners working in personalisation and explainability stand to gain from combining
approaches in both fields. Thus, the aim of this work is to identify existing work at the intersection of these
fields and propose research directions towards realising interpretable and user-aware social robots.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Explaining Personalisation</title>
      <p>To improve transparency, the factors used by the personalisation system (such as the user’s needs
and preferences) can be incorporated into the state used by explainability algorithms to identify the
reasons for the robot’s decisions. While such systems have not been employed for HRI scenarios, they
have been utilised in intelligent tutoring systems [13], recommendation systems [14] and robot mission
planning [15].</p>
      <p>
        Given the contrastive nature of explanations [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], an intuitive way of explaining the impact of
personalisation is through counterfactual explanations, which examine how the robot’s behaviour
would change given a change to its input [16]. Using counterfactual explanations, a user could pose
a query to the system (e.g. “Why did you recommend I exercise now and not in the evening?") and
an explanation can be generated that identifies both a reason and how a different set of preferences
or needs could result in a different decision (e.g. “Because I think you prefer exercising in the morning.
If not, I would have suggested exercising in the evening."). Given that preferences are often obtained
from user data, we can in turn explain the robot’s beliefs about preferences (e.g. “I think you prefer
exercising in the morning because you seem happier when exercising in the morning versus in the
evening."), thus fostering transparency over multiple levels of decision-making.
      </p>
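      <p>
        As an illustrative sketch only, and not the authors' implementation, the contrastive exchange above can be produced by flipping a believed preference and re-running a toy decision policy. All function and variable names here are hypothetical:
      </p>
      <preformat>
```python
# Toy counterfactual explainer over a hypothetical preference model.
# "recommend" stands in for the robot's decision-making and is an
# assumption for illustration, not a real system.

def recommend(prefs):
    # Toy policy: schedule exercise according to the believed preference.
    if prefs["prefers_morning_exercise"]:
        return "exercise in the morning"
    return "exercise in the evening"

def counterfactual_explanation(prefs, flipped_key):
    actual = recommend(prefs)
    alt_prefs = dict(prefs)
    alt_prefs[flipped_key] = not alt_prefs[flipped_key]  # the counterfactual input
    alternative = recommend(alt_prefs)
    if alternative == actual:
        return f"My belief about {flipped_key} did not affect my suggestion."
    return (f"I suggested you {actual} because I believe "
            f"{flipped_key} is {prefs[flipped_key]}. If not, "
            f"I would have suggested you {alternative}.")

print(counterfactual_explanation(
    {"prefers_morning_exercise": True}, "prefers_morning_exercise"))
```
      </preformat>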
      <p>Explainability for bias detection has seen some attention [17, 18], and this can extend to detecting
biases introduced by personalisation. Explanations of behaviour can be generated and assessed by
various stakeholders (e.g. patients and caregivers) to determine if the reasons for these decisions align
with their expectations and values [19]. Likewise, explainability can improve transparency surrounding
personal information, allowing users to understand how their data is used to make decisions [20],
though precautions should be taken to ensure that privacy is preserved during explanations.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Personalising Explanations</title>
      <p>
        Just as explainability can address some drawbacks of personalisation, so too can personalisation address
challenges in explainability. To this end, we present a framework for explainability in HRI settings
inspired by Anjomshoae et al. [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] and Matarese et al. [12], depicted in Fig. 1. For each component of this
framework over which the robot has control, we identify opportunities for personalising explanations.
      </p>
      <p>The process typically begins with the user requesting an explanation from the robot, mediated
by a query interface that determines the communication channels between the human and the
robot. The robot could also generate explanations unprompted through self-query. After receiving a
query, the robot must interpret it (query interpretation), transforming it into a formal query that can
impose conditions on the search for explanations. Such a process involves resolving any ambiguities
in the query, such as those created by implicit contrasts in a “why (not)" question. Once the query
has been interpreted, one or more suitable explanations must be generated that match the query
(explanation generation). Finally, once one or more suitable explanations have been found, they must
be communicated in a human-understandable format (explanation communication).</p>
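      <p>
        The four stages just described can be sketched end-to-end as follows. Each stage is a deliberately trivial placeholder; the function names and the fixed fact/foil pair are assumptions for illustration, not the framework's actual components:
      </p>
      <preformat>
```python
# Minimal sketch of the pipeline: query interface, query interpretation,
# explanation generation, explanation communication.

def query_interface(raw_input):
    # Receive the user's question through some modality (here: text).
    return raw_input.strip()

def interpret_query(question):
    # Toy resolver: returns a fixed fact/foil pair; a real system
    # would parse the question and resolve its implicit contrast.
    return {"fact": "exercise now", "foil": "exercise in the evening"}

def generate_explanations(formal_query):
    # Toy generator: a real system would search for causes that
    # distinguish the fact from the foil in the formal query.
    return ["I believe you prefer exercising in the morning."]

def communicate(explanations):
    # Render in a human-understandable format (here: plain text).
    return " ".join(explanations)

def explain(raw_input):
    question = query_interface(raw_input)
    formal_query = interpret_query(question)
    explanations = generate_explanations(formal_query)
    return communicate(explanations)

print(explain("Why did you recommend I exercise now?"))
```
      </preformat>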
      <sec id="sec-3-1">
        <title>Query Interface</title>
        <p>The query interface can be personalised in several ways. Firstly, the interface may support multiple
question-asking modalities (e.g. natural language, a GUI, etc.), and the use of one over the other
can reflect user roles or preferences. The exact presentation of the interface could also be adapted,
leveraging personalisation of user interfaces [21].</p>
        <p>The types of query supported could be adapted to the role of the user. For example, a programmer
might be able to ask the robot a range of technical questions for debugging purposes that are not
presented to lay users. Similarly, the decisions available for querying could be restricted to certain users.</p>
        <p>If the robot is capable of self-query then the frequency of explanations, situations in which
explanations are warranted, and the question the robot asks itself could all be personalised [22].</p>
      </sec>
      <sec id="sec-3-2">
        <title>Query Interpretation</title>
        <p>Personalisation can be used to infer the context implicit in user queries. Herbold et al. [23] address
the problem of automatically detecting implicit contrasts in the context of rule-based systems. Among
other things, they factor the user’s relationship to the triggered rule (e.g. as the creator of the rule)
in whether or not a user is likely to expect that rule to be fired. Such personalisation could be extended,
for example, to consider the user’s history of interactions with the robot, their role (e.g. a caregiver
might have different expectations than a patient) and preferences (e.g. a user might want to know
why their preferred behaviour wasn’t triggered).</p>
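        <p>
          A minimal sketch of this idea, with a made-up expectation heuristic and rule format rather than Herbold et al.'s actual method: score how strongly the user is likely to expect each rule, and take the most-expected unfired rule as the implicit foil of a “why not" query:
        </p>
        <preformat>
```python
# Sketch of contrast inference from user context. The scoring
# weights and rule fields are illustrative assumptions.

def likely_expected(rule, user):
    # A user is more likely to expect a rule they created, or one
    # that fired often in their interaction history.
    score = 0
    if rule["creator"] == user["name"]:
        score += 2
    score += user["history"].count(rule["id"])
    return score

def implicit_foil(rules, user):
    # The most-expected rule that did NOT fire is the likely contrast.
    unfired = [r for r in rules if not r["fired"]]
    return max(unfired, key=lambda r: likely_expected(r, user))

rules = [
    {"id": "remind_meds", "creator": "caregiver", "fired": True},
    {"id": "suggest_walk", "creator": "alice", "fired": False},
    {"id": "play_music", "creator": "caregiver", "fired": False},
]
user = {"name": "alice", "history": ["suggest_walk", "remind_meds"]}
print(implicit_foil(rules, user)["id"])
```
        </preformat>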
      </sec>
      <sec id="sec-3-3">
        <title>Explanation Generation</title>
        <p>Within this component, there are several opportunities for personalisation. Firstly, the choice of
state variables featured in the explanation can be affected by user needs or preferences. For example,
to preserve privacy, explanations involving protected variables might be restricted to certain users.
If a layered system is used, with features ranging from low-level to high-level, the “depth" of the
explanation could be personalised (e.g. a patient might be interested in the high-level human activity
the robot detected, while a programmer might be interested in the individual keypoints of the detected
skeleton). However, care should be taken not to “over-personalise" explanations using unimportant
variables [24]. The length (in terms of number of variables), diversity and number of explanations
could also be personalised based on user roles and preferences [25].</p>
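        <p>
          For instance, depth personalisation by role could be sketched as below. The layered state, role table, and feature names are hypothetical, chosen only to mirror the patient/programmer example above:
        </p>
        <preformat>
```python
# Sketch: selecting explanation "depth" by user role, assuming a
# layered state with high-level and low-level features.

LAYERED_STATE = {
    "high": {"detected_activity": "making tea"},
    "low": {"skeleton_keypoints": [(0.1, 0.4), (0.2, 0.5)]},
}

# Patients see only high-level features; programmers see all layers.
ROLE_DEPTH = {"patient": ["high"], "programmer": ["high", "low"]}

def personalised_explanation(role):
    layers = ROLE_DEPTH.get(role, ["high"])  # default to high-level only
    parts = []
    for layer in layers:
        for name, value in LAYERED_STATE[layer].items():
            parts.append(f"{name} = {value}")
    return "; ".join(parts)

print(personalised_explanation("patient"))
print(personalised_explanation("programmer"))
```
        </preformat>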
      </sec>
      <sec id="sec-3-4">
        <title>Explanation Communication</title>
        <p>Explanations can be communicated in a number of modalities (e.g. text, images, embodied actions,
etc.), and the choice of modality could be informed by user needs and preferences. After a modality
is chosen, the format of the explanation can be personalised, adapting the explanation template [26],
choice of words [15], types of graphical elements [27] or the presentation of a user interface [28]. If
multiple explanations are provided, the order in which they are presented could be personalised [25].</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusion</title>
      <p>In conclusion, there is room for personalisation’s drawbacks to be addressed by explainability, while
similarly, explanations can potentially be improved through personalisation. This work represents
a step towards bringing together the two fields, identifying research directions at their intersection.
Our intention is that each of these directions can be explored in future work, especially in HRI, to
facilitate the development of interpretable, user-aware social robots.</p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgments</title>
      <p>This work was supported by Horizon Europe under the MSCA grant agreement No 101072488 (TRAIL);
by the “European Union NextGenerationEU/PRTR" project CHLOE-GRAPH PID2020-118649RB-I00
funded by MCIN/AEI/10.13039/501100011033; by the EU-funded project grant agreement No
101070930 (VALAWAI); and by the Research Council of Norway under the project SECUROPS
(INTNO/0875).</p>
      <p>[12] M. Matarese, F. Rea, A. Sciutti, A user-centred framework for explainable artificial intelligence in human-robot interaction, arXiv preprint arXiv:2109.12912 (2021).</p>
      <p>[13] C. Conati, O. Barral, V. Putnam, L. Rieger, Toward personalized XAI: A case study in intelligent tutoring systems, Artificial Intelligence 298 (2021) 103503.</p>
      <p>[14] S. Arnórsson, F. Abeillon, I. Al-Hazwani, J. Bernard, H. Hauptmann, M. El-Assady, Why am I reading this? Explaining personalized news recommender systems, in: EuroVis Workshop on Visual Analytics (EuroVA), The Eurographics Association, 2023, pp. 67–72.</p>
      <p>[15] R. Wohlrab, M. Vierhauser, E. Nilsson, What impact do my preferences have? A framework for explanation-based elicitation of quality objectives for robotic mission planning, in: International Working Conference on Requirements Engineering: Foundation for Software Quality, Springer, 2024, pp. 111–128.</p>
      <p>[16] R. Guidotti, Counterfactual explanations and how to find them: literature review and benchmarking, Data Mining and Knowledge Discovery (2022) 1–55.</p>
      <p>[17] A. Mikołajczyk, M. Grochowski, A. Kwasigroch, Towards explainable classifiers using the counterfactual approach: global explanations for discovering bias in data, Journal of Artificial Intelligence and Soft Computing Research 11 (2021) 51–67.</p>
      <p>[18] K. Alikhademi, B. Richardson, E. Drobina, J. E. Gilbert, Can explainable AI explain unfairness? A framework for evaluating explainable AI, arXiv preprint arXiv:2106.07483 (2021).</p>
      <p>[19] A. B. Arrieta, N. Díaz-Rodríguez, J. Del Ser, A. Bennetot, S. Tabik, A. Barbado, S. García, S. Gil-López, D. Molina, R. Benjamins, et al., Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI, Information Fusion 58 (2020) 82–115.</p>
      <p>[20] C. Meske, E. Bunde, J. Schneider, M. Gersch, Explainable artificial intelligence: objectives, stakeholders, and future research opportunities, Information Systems Management 39 (2022) 53–63.</p>
      <p>[21] H. Al-Samarraie, S. M. Sarsam, H. Guesgen, Predicting user preferences of environment design: a perceptual mechanism of user interface customisation, Behaviour &amp; Information Technology 35 (2016) 644–653.</p>
      <p>[22] Z. Gong, Y. Zhang, Behavior explanation as intention signaling in human-robot teaming, in: Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), IEEE, 2018, pp. 1005–1011.</p>
      <p>[23] L. Herbold, M. Sadeghi, A. Vogelsang, Generating context-aware contrastive explanations in rule-based systems, arXiv preprint arXiv:2402.13000 (2024).</p>
      <p>[24] R. Nimmo, M. Constantinides, K. Zhou, D. Quercia, S. Stumpf, User characteristics in explainable AI: The rabbit hole of personalization?, in: Proceedings of the CHI Conference on Human Factors in Computing Systems, 2024, pp. 1–13.</p>
      <p>[25] M. Naiseh, N. Jiang, J. Ma, R. Ali, Personalising explainable recommendations: literature and conceptualisation, in: Trends and Innovations in Information Systems and Technologies, Springer, 2020, pp. 518–533.</p>
      <p>[26] M. Sadeghi, L. Herbold, M. Unterbusch, A. Vogelsang, SmartEx: A framework for generating user-centric explanations in smart environments, in: Proceedings of the International Conference on Pervasive Computing and Communications (PerCom), IEEE, 2024, pp. 106–113.</p>
      <p>[27] S. Schömbs, S. Pareek, J. Goncalves, W. Johal, Robot-assisted decision-making: Unveiling the role of uncertainty visualisation and embodiment, in: Proceedings of the CHI Conference on Human Factors in Computing Systems, 2024, pp. 1–16.</p>
      <p>[28] J. Schneider, J. Handali, Personalized explanation in machine learning: A conceptualization, arXiv preprint arXiv:1901.00770 (2019).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>S.</given-names>
            <surname>Schneider</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Kummert</surname>
          </string-name>
          ,
          <article-title>Comparing robot and human guided personalization: adaptive exercise robots are perceived as more competent and trustworthy</article-title>
          ,
          <source>International Journal of Social Robotics</source>
          <volume>13</volume>
          (
          <year>2021</year>
          )
          <fpage>169</fpage>
          -
          <lpage>185</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>B.</given-names>
            <surname>Irfan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Céspedes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Casas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Senft</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. F.</given-names>
            <surname>Gutiérrez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Rincon-Roncancio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. A.</given-names>
            <surname>Cifuentes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Belpaeme</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Múnera</surname>
          </string-name>
          ,
          <article-title>Personalised socially assistive robot for cardiac rehabilitation: Critical reflections on long-term interactions in the real world</article-title>
          ,
          <source>User Modeling and User-Adapted Interaction</source>
          <volume>33</volume>
          (
          <year>2023</year>
          )
          <fpage>497</fpage>
          -
          <lpage>544</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>K.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Tao</surname>
          </string-name>
          ,
          <article-title>The roles of trust, personalization, loss of privacy, and anthropomorphism in public acceptance of smart healthcare services</article-title>
          ,
          <source>Computers in Human Behavior</source>
          <volume>127</volume>
          (
          <year>2022</year>
          )
          <fpage>107026</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>A.</given-names>
            <surname>Kubota</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Pourebadi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Banh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Riek</surname>
          </string-name>
          ,
          <article-title>Somebody that I used to know: The risks of personalizing robots for dementia care</article-title>
          ,
          <source>Proceedings of We Robot</source>
          (
          <year>2021</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>N.</given-names>
            <surname>Fronemann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Pollmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Loh</surname>
          </string-name>
          ,
          <article-title>Should my robot know what's best for me? Human-robot interaction between user experience and ethical design</article-title>
          ,
          <source>AI &amp; SOCIETY</source>
          <volume>37</volume>
          (
          <year>2022</year>
          )
          <fpage>517</fpage>
          -
          <lpage>533</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>B. A.</given-names>
            <surname>Yilma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Naudet</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Panetto</surname>
          </string-name>
          ,
          <article-title>Introduction to personalisation in cyber-physical-social systems</article-title>
          ,
          <source>in: On the Move to Meaningful Internet Systems: OTM 2018 Workshops</source>
          , Springer,
          <year>2019</year>
          , pp.
          <fpage>25</fpage>
          -
          <lpage>35</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>W.</given-names>
            <surname>Saeed</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Omlin</surname>
          </string-name>
          ,
          <article-title>Explainable AI (XAI): A systematic meta-survey of current challenges and future opportunities</article-title>
          ,
          <source>Knowledge-Based Systems</source>
          <volume>263</volume>
          (
          <year>2023</year>
          )
          <fpage>110273</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>F.</given-names>
            <surname>Sado</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. K.</given-names>
            <surname>Loo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W. S.</given-names>
            <surname>Liew</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kerzel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Wermter</surname>
          </string-name>
          ,
          <article-title>Explainable goal-driven agents and robots-a comprehensive review</article-title>
          ,
          <source>ACM Computing Surveys</source>
          <volume>55</volume>
          (
          <year>2023</year>
          )
          <fpage>1</fpage>
          -
          <lpage>41</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>T.</given-names>
            <surname>Miller</surname>
          </string-name>
          ,
          <article-title>Explanation in artificial intelligence: Insights from the social sciences</article-title>
          ,
          <source>Artificial intelligence 267</source>
          (
          <year>2019</year>
          )
          <fpage>1</fpage>
          -
          <lpage>38</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>J.</given-names>
            <surname>Leusmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Gienger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Schmidt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Mayer</surname>
          </string-name>
          ,
          <article-title>Understanding the uncertainty loop of human-robot interaction</article-title>
          ,
          <source>arXiv preprint arXiv:2303.07889</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>S.</given-names>
            <surname>Anjomshoae</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Najjar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Calvaresi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Främling</surname>
          </string-name>
          ,
          <article-title>Explainable agents and robots: Results from a systematic literature review</article-title>
          , in:
          <source>Proceedings of the International Conference on Autonomous Agents and Multiagent Systems (AAMAS)</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>1078</fpage>
          -
          <lpage>1088</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>