<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Daniel Majonica</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Nardie Fanchamps</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Deniz Iren</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Roland Klemke</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Cologne Game Lab, Technische Hochschule Köln</institution>
          ,
          <addr-line>Cologne</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Open Universiteit</institution>
          ,
          <addr-line>Heerlen</addr-line>
          ,
          <country country="NL">Netherlands</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <fpage>16</fpage>
      <lpage>20</lpage>
      <abstract>
        <p>We are transitioning, step by step, into a world where humans and robots collaborate and coexist in the same physical space. Both will need to learn how to communicate for successful collaboration and cooperation. To facilitate this interaction, it is essential for humans to gain a realistic understanding of their robot partner's capabilities and limitations and to establish a certain degree of trust. Robots are increasingly present in a variety of settings beyond the industrial sector. They are used for many purposes, such as elderly care [1], robotic pills in the medical domain [2], or delivery robots on the streets [3]. Our focus, however, is on robots in educational settings, where they can function as teachers [4] or learning companions [5]. Such educational robots can enhance the learning process by providing valuable input and feedback to users. Interaction with educational robots should be bidirectional, involving multiple layers of interaction and feedback between the learner and the robot.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>This study investigates the impact of immersive learning environments (ILEs) on human-robot interaction (HRI) and their potential to improve the overall learning experience.</p>
      <p>
        Trust is vital for human-robot collaboration. Trust increases when trustees successfully perform a task and when expectations are met [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. Trust can also be measured at different points in time. In this study, we look at post-interaction trust [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. This means that, after the interaction, participants assess their trust level towards robots. This, combined with their recorded decisions during the interaction, can be analyzed to better understand how broad the spectrum of trust in human-robot collaboration is.
Participants were asked to sit down in front of a LEGO Mindstorms EV3 robot and put on an AR device, in this case the HoloLens 2. On the table, between the participant and the robot, was a set of special cards (Figure 1). All instructions were then given through the AR device.
      </p>
      <p>
        So far, this study has been conducted in two sessions with the same setup but different participants. The participants in both sessions formed a diverse group aged 18 and above, from various academic fields related to technology-enhanced learning. The study was conducted with eight individuals. The participants joined voluntarily and were selected at random. They were not paid or otherwise reimbursed. The data was collected using an anonymized post-study survey with closed and open-ended questions. Additionally, each action with the AR device was recorded so that the path taken could be reconstructed. Each session was structured to allow individual participants to interact privately with a robotic counterpart. The questionnaire included a System Usability Scale, which was analyzed in a previous study [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], as well as three specific open-ended questions. The three relevant questions were:
      </p>
      <p>What perceived types of ILE components presented in this study (e.g., video pop-ups) affected the HRI, both positively and negatively?</p>
      <p>To what extent do you trust the robot in terms of collaboration?</p>
      <p>How confident do you feel about the decisions of the robot and about your own decisions?</p>
    </sec>
    <sec id="sec-2">
      <title>Game design</title>
      <p>• The game introduces special ’robot rooms’, where decision-making is jointly conducted by the participant and the robot.
• A maximum of 15 turns is allotted to locate the keycard and reach the engine room. If the number of turns exceeds 15, time has run out and the spaceship is deemed to explode.</p>
      <p>In the robot rooms (the playing card in the middle of Figure 2), the decision of the robot was scripted via an algorithm. The robot had two states, dependent on the number of moves left until the end of the game. The switch happened when the estimated perfect path through the maze from the player’s current position came within a critical range of no longer being completable within the given 15 moves. Before this switch, the robot would always agree with the decision of the player, even if that meant going in the wrong direction. In the second state, the robot would always point in the correct direction, which is where disagreement could occur. Additionally, the last rooms were set up to have two valid paths while the robot was programmed to always disagree with the player’s choice. This forced disagreement made it possible to see whether the participant chose to trust the robot or take the other path. Participants were not informed afterward whether the other path would have worked as well; if they chose to disagree with the robot, they only knew that their chosen path was valid and might have assumed that the path the robot chose was invalid. This assumption emerged from observations during the study and from questions participants asked the conducting researcher afterward.</p>
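      <p>The two-state suggestion logic described above can be sketched as follows. This is an illustrative sketch only; the function and constant names (robot_suggestion, TOTAL_TURNS, shortest_path_len) and the exact critical-range check are assumptions, not the study's published implementation.</p>

```python
# Illustrative sketch of the robot's two-state suggestion logic.
# Names and the critical-range condition are assumptions.

TOTAL_TURNS = 15  # turns allowed before the spaceship "explodes"

def robot_suggestion(player_choice, correct_choice, turns_used,
                     shortest_path_len, is_final_room=False):
    """Return the room the robot points to in a robot room.

    State 1 (relaxed): agree with the player, even if wrong.
    State 2 (critical): point in the correct direction.
    Final rooms: two valid paths remain and the robot always picks
    the one the player did not choose (forced disagreement).
    """
    if is_final_room:
        valid_paths = {"left", "right"}              # both paths reach the goal
        return (valid_paths - {player_choice}).pop()
    turns_left = TOTAL_TURNS - turns_used
    if shortest_path_len >= turns_left:              # perfect path barely fits
        return correct_choice                        # state 2: always correct
    return player_choice                             # state 1: always agree
```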
      <p>
        To play the game, participants first had to select the virtual card in the AR environment by clicking it. Then, participants had to flip over the corresponding physical card. This way, they could navigate the maze. In the special robot rooms, the movement changed slightly. First, participants selected the virtual card; the system then waited for the robot’s algorithm to create a suggestion as described above. After that, participants had full control over which room they would go to. They could select 1) their first choice, 2) the robot algorithm’s suggestion if it differed, or 3) another room if they decided to go somewhere else. The turn ended with participants flipping over the corresponding physical card.
The interaction with the robot happened on different levels [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] with different interaction directions [16]. The first interaction was human-led. In this interaction, the robot first had to be reassembled after it broke down. The participants first saw a video on the AR device on how to effectively repair the robot. Then, the participants could assemble the missing parts of the robot, leading the interaction. Possible mistakes or missing parts were reported as feedback through the AR device using pop-ups. The participants interpreted this feedback correctly and assembled the robot after some time. This interaction partly served to give participants time to familiarize themselves with the robot and examine it more closely.
      </p>
      <p>After the first interaction was done, the game described above began. Participants selected different rooms until they entered a robot room. In these robot rooms, the interaction was as follows: first, participants had to decide which room to go to next. Then they had to wait for the robot’s algorithm to give a suggestion. These suggestions were displayed on the physical robot. Then, participants had to make their final decision based on their own first choice and the robot’s suggestion. This interaction was designed to create a human-led approach. Previous tests with this setup showed that if the robot gave its suggestion before the participants made a decision, the participants would always follow the robot. To avoid this, we changed the interaction to this two-step process, which made the interaction more collaborative in nature. The planned disagreement of the algorithm at the last decision had the effect that people who reported higher trust in the robot decided against their own first choice.</p>
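      <p>The two-step turn described above can be sketched as a small sequence. The key design point from the study is the order of operations: the participant commits to a first choice before the robot's suggestion is revealed. All function names here are illustrative assumptions, not the study's software.</p>

```python
# Hypothetical sketch of one turn in a robot room. The ordering enforces
# the study's design: commit first, see the robot's suggestion second.

def robot_room_turn(first_choice_fn, suggestion_fn, final_choice_fn):
    """Run one robot-room turn; return (first, suggestion, final)."""
    first = first_choice_fn()                   # step 1: participant commits
    suggestion = suggestion_fn(first)           # step 2: robot's room is shown
    final = final_choice_fn(first, suggestion)  # step 3: final decision
    return first, suggestion, final

# Example: a participant who defers to the robot when it disagrees.
turn = robot_room_turn(
    first_choice_fn=lambda: "north",
    suggestion_fn=lambda first: "east",         # robot disagrees
    final_choice_fn=lambda first, sug: sug,     # participant follows robot
)
```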
      <p>Another crucial interaction with the robot happened when participants found the keycard. The keycard was stored in a special box which, within the constraints of the game, could only be opened by the robot. However, to do this correctly, the participants first had to hand over the box to the robot in the correct manner. This interaction was first demonstrated as a video on the AR device, and mistakes were signaled through feedback pop-ups. The interaction ended with the participants pressing a button on the robot. The robot then opened the special box so the participants could obtain the keycard.</p>
    </sec>
    <sec id="sec-3">
      <title>Findings</title>
      <p>The following observations can be made:</p>
      <p>Regarding trust, the data shows a wide spectrum of answers. To the question "To what extent do you trust the robot in terms of collaboration?", the answers received ranged from "I trusted it [the robot] fully and did not doubt it", over "[...] sometimes I trusted him [the robot] &amp; sometimes not", to "I didn’t trust the robot at all [...]". Participants also varied in behaviour while playing the game. While some participants followed the robot at every step, others refused to take the robot’s advice. However, within the study group, nobody always disagreed with the robot. This might be because, in the beginning, the robot’s algorithm was programmed to always agree with the participant.</p>
      <p>The data from the reconstructed path, combined with the participants’ questionnaire responses,
indicated that those who reported fully trusting the robot consistently followed the robot’s
decisions. Conversely, none of the participants who expressed distrust or difficulty in trusting
the robot consistently went against the robot’s decisions. This may suggest that they ignored
the robot’s decisions rather than attributing malicious intent to the robot.</p>
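      <p>The combination of reconstructed paths and questionnaire responses reduces to a simple per-participant agreement rate over the recorded robot-room decisions. The sketch below is illustrative; the record layout (suggestion, final choice) is an assumption, since the study's raw logs are not published.</p>

```python
# Illustrative sketch: fraction of robot-room decisions in which the
# final choice followed the robot's suggestion. Data layout is assumed.

def agreement_rate(decisions):
    """decisions: list of (robot_suggestion, final_choice) tuples."""
    if not decisions:
        return 0.0
    followed = sum(1 for suggestion, final in decisions if final == suggestion)
    return followed / len(decisions)

# A fully trusting participant follows every suggestion:
full_trust = [("east", "east"), ("north", "north"), ("west", "west")]
# A skeptical participant deviates after the forced disagreement:
skeptical = [("east", "east"), ("north", "south"), ("west", "north")]
```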
      <p>According to the game rules, participants first needed to find the keycard to access the engine room. However, some participants, confused by the objective, headed straight for the exit, unaware of their mistake. When the robot suggested they should return, participants assumed it was an error and ignored it. The robot, limited in its communication abilities, could not explain the reason for its suggestion. This limitation was not anticipated or programmed into the robot’s behavior during the study’s design. Due to the currently limited data, the findings can hardly be generalized. Further research and data collection are needed in this area.</p>
      <p>The study aimed to examine different types of HRI. Participants first engaged in a task where they repaired robot parts using an instructional video on the AR device. This activity helped them closely examine the robot and created a scenario in which the robot needed assistance. This dynamic was then reversed when the robot was consulted in the respective robot rooms, where the human received assistance from the robot.</p>
      <p>The results showed significant variation in trust levels among participants, even though they experienced the same environment and received the same interaction knowledge. Some saw the robot as infallible, while others were skeptical. This suggests that trust in the robot’s abilities is highly individualized and influenced by personal perceptions, especially when no prior information about the robot’s intent is given.</p>
      <p>According to the literature, the terms confidence and trust share similarities [17] and are closely related concepts. Even though they are often used interchangeably, these terms might not mean exactly the same to every participant. While confidence refers to a specific referent, trust can be broader [18].</p>
      <p>By chance, we found a technical limitation of the device that hindered one data collection. One participant was visually impaired, which created an incompatibility with the built-in eye tracking of the HoloLens 2 and impaired overall interactability through the device. This participant’s data was excluded from the presented results.</p>
      <p>In future work, we will explore how trust is related to the mental model in HRI. We will run the same experiment but expand the questionnaire to use validated instruments [19] and check whether the current findings persist with more participants.</p>
      <p>Another way researchers could extend this study is by using different robots. The robot used was semi-humanoid, meaning it had human-like features while ranking very low on the human-likeness scale. Other robot designs, especially humanoid robots, could result in higher levels of trust, as suggested by the literature [20].</p>
      <p>This study is about HRI in an educational context, focusing on trust and collaborative decision-making facilitated by AR technology.</p>
      <p>Although the data is currently limited and not yet
generalizable, several key insights emerged. Participants exhibited a broad spectrum of trust
levels towards the robot, ranging from full trust to complete skepticism. These varying perceptions
were influenced by individual interactions and experiences with robots, suggesting that trust in
autonomous robots might be highly subjective and personal. Behavioral patterns also varied,
with some participants consistently following the robot’s guidance while others occasionally
disregarded it. Initial agreement from the robot likely contributed to this variability, as it may
have influenced early interpersonal dynamics.</p>
      <p>Ultimately, this research underscores the complexity of developing trust in human-robot collaborations and points to the potential of personalized approaches to enhance HRI through means like generative AI. Further investigations with a more extensive dataset will be crucial for validating these preliminary findings.</p>
      <p>This research was funded by the BMBF, the German Federal Ministry of Education and Research, under the MILKI-PSY project (German abbreviation for multimodal immersive learning with artificial intelligence for psychomotor training).</p>
      <p>We would also like to acknowledge the dedication and hard work of the development team of the software running on the AR device, namely Patrick Handwerk and Suhana Biswas.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>R.</given-names>
            <surname>Bemelmans</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. J.</given-names>
            <surname>Gelderblom</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Jonker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>De Witte</surname>
          </string-name>
          ,
          <article-title>Socially assistive robots in elderly care: a systematic review into effects and effectiveness</article-title>
          ,
          <source>Journal of the American Medical Directors Association</source>
          <volume>13</volume>
          (
          <year>2012</year>
          )
          <fpage>114</fpage>
          -
          <lpage>120</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>R.</given-names>
            <surname>Mundaca-Uribe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Askarinam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. H.</given-names>
            <surname>Fang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <article-title>Towards multifunctional robotic pills</article-title>
          ,
          <source>Nature Biomedical Engineering</source>
          (
          <year>2023</year>
          )
          <fpage>1</fpage>
          -
          <lpage>13</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>A.</given-names>
            <surname>Gujarathi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kulkarni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>U.</given-names>
            <surname>Patil</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Phalak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Deotalu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Jain</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Panchi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Dhabale</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Chiddarwar</surname>
          </string-name>
          ,
          <article-title>Design and development of autonomous delivery robot</article-title>
          ,
          <source>arXiv preprint arXiv:2103.09229</source>
          (
          <year>2021</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>A.</given-names>
            <surname>Edwards</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Edwards</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. R.</given-names>
            <surname>Spence</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Harris</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gambino</surname>
          </string-name>
          ,
          <article-title>Robots in the classroom: Differences in students’ perceptions of credibility and learning between teacher as robot and robot as teacher</article-title>
          ,
          <source>Computers in Human Behavior</source>
          <volume>65</volume>
          (
          <year>2016</year>
          )
          <fpage>627</fpage>
          -
          <lpage>634</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>C.-W.</given-names>
            <surname>Wei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Hung</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.-S.</given-names>
            <surname>Chen</surname>
          </string-name>
          , et al.,
          <article-title>A joyful classroom learning system with robot learning companion for children to learn mathematics multiplication</article-title>
          .,
          <source>Turkish Online Journal of Educational Technology-TOJET</source>
          <volume>10</volume>
          (
          <year>2011</year>
          )
          <fpage>11</fpage>
          -
          <lpage>23</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>R. R.</given-names>
            <surname>Galin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. V.</given-names>
            <surname>Meshcheryakov</surname>
          </string-name>
          ,
          <article-title>Human-robot interaction efficiency and human-robot collaboration</article-title>
          ,
          <source>in: Robotics: Industry 4.0 issues &amp; new intelligent control paradigms</source>
          , Springer,
          <year>2020</year>
          , pp.
          <fpage>55</fpage>
          -
          <lpage>63</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>F.</given-names>
            <surname>Barravecchia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Bartolomei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Mastrogiacomo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Franceschini</surname>
          </string-name>
          ,
          <article-title>Redefining humanrobot symbiosis: a bio-inspired approach to collaborative assembly</article-title>
          ,
          <source>The International Journal of Advanced Manufacturing Technology</source>
          <volume>128</volume>
          (
          <year>2023</year>
          )
          <fpage>2043</fpage>
          -
          <lpage>2058</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>T.</given-names>
            <surname>Keller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Majonica</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Richert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Klemke</surname>
          </string-name>
          ,
          <article-title>Prerequisite knowledge of learning environments in human-robot collaboration for dyadic teams</article-title>
          ,
          <source>CEUR Workshop Proceedings, ISSN 1613-0073</source>
          (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>C.</given-names>
            <surname>Esterwood</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. P.</given-names>
            <surname>Robert</surname>
          </string-name>
          ,
          <article-title>The theory of mind and human-robot trust repair</article-title>
          ,
          <source>Scientific Reports</source>
          <volume>13</volume>
          (
          <year>2023</year>
          )
          <fpage>9877</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>K.</given-names>
            <surname>Schaefer</surname>
          </string-name>
          ,
          <article-title>The perception and measurement of human-robot trust</article-title>
          (
          <year>2013</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>K. A. M.</given-names>
            <surname>Sanusi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Majonica</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Handwerk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Biswas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Klemke</surname>
          </string-name>
          ,
          <article-title>Evaluating an immersive learning toolkit for training psychomotor skills in the fields of human-robot interaction and dance</article-title>
          , in: MILeS@EC-TEL,
          <year>2023</year>
          , pp.
          <fpage>70</fpage>
          -
          <lpage>78</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>K. A. M.</given-names>
            <surname>Sanusi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Majonica</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Künz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Klemke</surname>
          </string-name>
          ,
          <article-title>Immersive training environments for psychomotor skills development: A student driven prototype development approach</article-title>
          ,
          <source>in: Multimodal Immersive Learning Systems 2021</source>
          , CEUR,
          <year>2021</year>
          , pp.
          <fpage>53</fpage>
          -
          <lpage>58</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>K. A. M.</given-names>
            <surname>Sanusi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Slupczynski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Geisen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Iren</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Klamma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Klatt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Klemke</surname>
          </string-name>
          ,
          <article-title>Impect-sports: Using an immersive learning system to facilitate the psychomotor skills acquisition process</article-title>
          , in: MILeS@EC-TEL,
          <year>2022</year>
          , pp.
          <fpage>34</fpage>
          -
          <lpage>39</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>A.</given-names>
            <surname>Samanta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Kotte</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Handwerk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Asyraaf Mat Sanusi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Geisen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kravcik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Duong-Trung</surname>
          </string-name>
          ,
          <article-title>Impect-pose: A complete front-end and back-end architecture for pose tracking and feedback</article-title>
          ,
          <source>in: Adjunct Proceedings of the 32nd ACM Conference on User Modeling, Adaptation and Personalization</source>
          ,
          <year>2024</year>
          , pp.
          <fpage>142</fpage>
          -
          <lpage>147</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>R.</given-names>
            <surname>Schulz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Kratzer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Toussaint</surname>
          </string-name>
          ,
          <article-title>Preferred interaction styles for human-robot collaboration vary over tasks with different action types</article-title>
          ,
          <source>Frontiers in Neurorobotics</source>
          <volume>12</volume>
          (
          <year>2018</year>
          )
          <fpage>36</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16] D. Majonica, N. Fanchamps, D. Iren, R. Klemke, Exploring immersive learning environments in human-robot interaction use cases, in: International Conference on Games and Learning Alliance, Springer, 2023, pp. 267-276.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17] M. Lupoi, Trust and confidence, Sweet &amp; Maxwell, 2009.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18] B. D. Adams, Trust vs. confidence, Defence Research and Development Canada-Toronto, 2005.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19] T. Nomura, T. Suzuki, T. Kanda, K. Kato, Measurement of negative attitudes toward robots, Interaction Studies. Social Behaviour and Communication in Biological and Artificial Systems 7 (2006) 437-454.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20] J. Pinney, F. Carroll, P. Newbury, Human-robot interaction: the impact of robotic aesthetics on anticipated human trust, PeerJ Computer Science 8 (2022) e837.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>