<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>A facial imitation framework for the simultaneous face control of a virtual avatar and a humanoid robot</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Mattia Bruscia</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Graziano A. Manduzio</string-name>
          <email>grazianoalfredo.manduzio@phd.unipi.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Lorenzo Cominelli</string-name>
          <email>lorenzo.cominelli@unipi.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Enzo Pasquale Scilingo</string-name>
          <email>enzo.scilingo@unipi.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Pisa</institution>
          ,
          <addr-line>Pisa</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Facial expression imitation (FEI) for humanoid robots is an active research field in the context of human-robot interaction (HRI). Virtual avatars can enhance and simplify the experimental HRI setup in terms of cost and performance, avoiding possible long-term mechanical degradation of the physical robot in use. Moreover, the presented framework makes it possible to conduct comparison studies aimed at investigating the role of embodiment in the interaction with a robot versus its digital twin, which is a critical factor in establishing a successful social bond with the robot, as in the case of numerous clinical applications.</p>
      </abstract>
      <kwd-group>
        <kwd>Human-robot interaction</kwd>
        <kwd>facial expression imitation</kwd>
        <kwd>virtual avatar</kwd>
        <kwd>Facial Action Coding System (FACS)</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>The proposed framework provides the opportunity to assess the value of embodiment in the context of emotional communication between a human being and an artificial interlocutor.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Proposed work</title>
      <p>As shown in Fig. 1, the proposed framework is composed of three main systems: the real-time acquisition of images from a camera, the analysis of the extracted images to obtain the Action Units (AUs) of the detected subject, and the sending and execution of the facial movements on the avatar and on the physical robot.</p>
      <p>In the acquisition phase, the Frame Grabber (FG, Algorithm 1), when executed, asks the user to input the desired frame acquisition rate (5 fps if not specified) and the name of the folder where the frames will be stored. The setupFolder() function is then called, creating both a main folder and a temporary one for storing the frames. If the folders already exist, the program informs the user. Next, the setupCamera() function is called, initializing the webcam and configuring its resolution (width = 640 pixels, height = 480 pixels) and frame rate. Finally, the getFrames() function starts capturing frames from the webcam. For each captured frame, a unique name is generated, including the frame number and timestamp. The frame is then saved as an image in the temporary folder and copied to the main folder. This process continues until the user interrupts the program with a keyboard interruption. When this occurs, the program disconnects the webcam, closes all OpenCV windows, and removes both the main and temporary folders. If an error occurs during the frame capture (e.g., if the webcam is unavailable), the program notifies the user that it cannot initiate a new webcam recording.</p>
      <p>The second program, i.e., the Event Handler (EH, Algorithm 2), is a file monitoring system that responds to the creation of new files by sending them to a server for processing and subsequently forwarding the results to another server. It uses the watchdog module to monitor a specified directory for new files. The program starts by defining the directory to monitor and then enters a waiting loop until the specified directory exists. Once this condition is met, it begins monitoring the directory using the OnMyWatch class, which relies on the Observer class of the watchdog module. The OnMyWatch class has a run method that initiates the observation and waits for file system events. When a file system event is detected, the on_any_event() method of the Handler class is called. This method checks whether the event corresponds to the creation of a new file. If a new file is detected, its path is passed to the function that sends the file, in binary representation, to a local server for processing and returns an XML response. The XML response is then sent to another server using the sendToAbel() and sendToAvatar() functions. This process continues until the user interrupts the program or an error occurs.</p>
      <p>Two Flask web applications [<xref ref-type="bibr" rid="ref15">15</xref>], i.e., ReceiverAvatar (RAv, Algorithm 3) and ReceiverAbel (RAb, Algorithm 4), are structured as web services that receive XML data, extract the AU values, and send them in the appropriate format to the avatar and to the robot using the sendAUsAbel() and sendAUsAvatar() functions. Using Flask applications gives the system high versatility and scalability: when a receiver gets a POST request on its write endpoint, it calls the write function, which extracts the data from the request, performs some formatting steps, sends the data to the avatar or to Abel using the relative sendAUs() function, and returns a response to the original request. Another Flask application, using the Emotiva API, is responsible for predicting facial AUs and estimated emotions from a single image. Emotiva is a Facial Expression Recognition (FER) software able to analyze human attentive and affective states [<xref ref-type="bibr" rid="ref14">14</xref>]. A POST call is made each time the event handler detects the capture of a frame in the specified folder.</p>
      <p>The virtual avatar used in this framework is based on the openFACS project, an open-source FACS-based 3D face animation system [<xref ref-type="bibr" rid="ref11">11</xref>, <xref ref-type="bibr" rid="ref12">12</xref>]. It enables the simulation of realistic facial expressions by manipulating specific AUs as defined in the FACS, and it includes an API suitable for generating real-time dynamic facial expressions for a three-dimensional character. It can be easily integrated into existing systems without requiring prior experience in computer graphics.</p>
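      <p>As a concrete illustration, the sketch below shows one way the extracted AU intensities could be forwarded to the FACS-based avatar process over a local TCP socket. It is a minimal sketch: the host, the port, and the JSON field names (AU codes plus a duration standing in for the movement speed) are assumptions made for illustration and do not necessarily match the openFACS wire format.</p>
      <p># Minimal sketch: forward AU intensities to the avatar process.
# Host, port, and message fields are illustrative assumptions,
# not the documented openFACS protocol.
import json
import socket

def send_aus_to_avatar(aus, speed=0.5, host="127.0.0.1", port=50150):
    """Send a dict of AU intensities (e.g., {"AU01": 0.4, "AU12": 0.9})
    together with a movement-speed value to the avatar over TCP."""
    message = dict(aus)
    message["duration"] = speed  # assumed field name for the movement speed
    payload = json.dumps(message).encode("utf-8")
    with socket.create_connection((host, port)) as sock:
        sock.sendall(payload)

# Example: a smile-like configuration (cheek raiser + lip corner puller).
send_aus_to_avatar({"AU06": 0.7, "AU12": 0.9})</p>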
      <p>Algorithm 1 Frame Grabber
1: function setupFolder(folder_path)
2:   if folder_path is not specified then
3:     folder_path ← ‘/correct/path/to/frame_folder’</p>
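      <p>A minimal Python sketch of this acquisition loop is given below. It is a simplified stand-in for the Frame Grabber, not the authors' exact code: folder handling and error reporting are reduced to the essentials, and only the default values reported in the text (5 fps, 640 × 480 pixels) are kept; all names are illustrative.</p>
      <p># A simplified sketch of the Frame Grabber (names and defaults illustrative).
import os
import shutil
import time
from datetime import datetime

import cv2

def setup_folder(folder_path="frame_folder"):
    tmp_path = folder_path + "_tmp"
    for path in (folder_path, tmp_path):
        if os.path.exists(path):
            print(f"Folder '{path}' already exists.")
        os.makedirs(path, exist_ok=True)
    return folder_path, tmp_path

def setup_camera(width=640, height=480):
    cam = cv2.VideoCapture(0)  # integrated webcam
    cam.set(cv2.CAP_PROP_FRAME_WIDTH, width)
    cam.set(cv2.CAP_PROP_FRAME_HEIGHT, height)
    return cam

def get_frames(cam, folder_path, tmp_path, rate_fps=5):
    count = 0
    try:
        while True:
            ok, frame = cam.read()
            if not ok:
                print("Cannot initiate a new webcam recording.")
                break
            name = f"frame_{count:06d}_{datetime.now():%Y%m%d_%H%M%S_%f}.png"
            tmp_file = os.path.join(tmp_path, name)
            cv2.imwrite(tmp_file, frame)        # save in the temporary folder
            shutil.copy(tmp_file, folder_path)  # copy to the main folder
            count += 1
            time.sleep(1.0 / rate_fps)          # throttle to the chosen rate
    except KeyboardInterrupt:
        pass
    cam.release()                               # disconnect the webcam
    cv2.destroyAllWindows()                     # close all OpenCV windows
    shutil.rmtree(folder_path)                  # remove both folders
    shutil.rmtree(tmp_path)

folder, tmp = setup_folder()
get_frames(setup_camera(), folder, tmp, rate_fps=5)</p>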
      <p>Algorithm 2 Image sender
1: function sendAUs(img)
2:   response ← content of the POST request to ‘/AUs_write_port’
3:   return response
4: function detectNewImage(event)
5:   if new image img in the folder then
6:     send img to Flask receiver server</p>
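      <p>The sketch below illustrates this event-handling pattern with the watchdog package: a newly created file is posted to the local analysis server, and the resulting XML is forwarded to the two receivers. The URLs, ports, and payload format are placeholders, not the actual Emotiva or receiver endpoints.</p>
      <p># Minimal sketch of the Event Handler (URLs and ports are placeholders).
import time

import requests
from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

ANALYSIS_URL = "http://127.0.0.1:5000/analyze"   # placeholder FER endpoint
AVATAR_URL = "http://127.0.0.1:5001/write_port"  # placeholder avatar receiver
ABEL_URL = "http://127.0.0.1:5002/write_port"    # placeholder Abel receiver

class Handler(FileSystemEventHandler):
    def on_any_event(self, event):
        # React only to the creation of new (non-directory) files.
        if event.event_type != "created" or event.is_directory:
            return
        with open(event.src_path, "rb") as f:
            xml_response = requests.post(ANALYSIS_URL, data=f.read()).text
        # Forward the XML with the extracted AUs to both receivers.
        requests.post(AVATAR_URL, data=xml_response)
        requests.post(ABEL_URL, data=xml_response)

observer = Observer()
observer.schedule(Handler(), path="frame_folder", recursive=False)
observer.start()
try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    observer.stop()
observer.join()</p>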
      <p>Algorithm 3 Avatar’s receiver
1: initialize Flask instance
2: function write(‘/write_port’, method=‘POST’)
3:   data ← content of the POST request to ‘/AUs_extraction_port’
4:   AUs ← extract list from data
5:   send AUs and movement speed to avatar
6:   return ‘Data received’</p>
      <p>Algorithm 4 Abel’s receiver
1: initialize Flask instance
2: function write(‘/write_port’, method=‘POST’)
3:   data ← content of the POST request to ‘/AUs_extraction_port’
4:   AUs ← extract list from data
5:   send AUs and movement speed to Abel
6:   return ‘Data received’</p>
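      <p>To make Algorithms 3 and 4 concrete, the following is a minimal sketch of one receiver: a Flask application that accepts the XML on its write endpoint, extracts the AU values, and forwards them together with a movement speed. The endpoint name, the assumed XML layout, and the forwarding stub are illustrative, not the exact implementation.</p>
      <p># Minimal sketch of a receiver (endpoint name and XML layout are assumptions).
import xml.etree.ElementTree as ET

from flask import Flask, request

app = Flask(__name__)

def send_aus(aus, speed=0.5):
    # Stand-in for sendAUsAvatar()/sendAUsAbel(): forward the AU values
    # and the movement speed to the avatar or to Abel.
    print("Forwarding", aus, "with speed", speed)

@app.route("/write_port", methods=["POST"])
def write():
    root = ET.fromstring(request.data)  # parse the received XML
    # Assumed layout: one AU element per action unit, with a "name"
    # attribute (e.g. AU12) and the intensity as element text.
    aus = {au.get("name"): float(au.text) for au in root.iter("AU")}
    send_aus(aus)
    return "Data received"

if __name__ == "__main__":
    app.run(port=5001)  # placeholder port</p>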
      <p>[Figure 1: Block diagram of the framework: the Frame Grabber stores frames in frame_folder; the Event Handler (Image Sender) monitors the folder through Watchdog and forwards each frame; the Avatar Receiver and the Abel Receiver deliver the extracted AUs to the Avatar and to Abel.]</p>
    </sec>
    <sec id="sec-3">
      <title>3. Experimental results</title>
      <p>The proposed framework was developed in Python on a PC platform. We ran the application with a frame rate of 5 fps using the integrated webcam, taking images of size 640 × 480 pixels. In Table 1, the AUs used for the experiment are listed. In our tests, the facial expressions assumed by the digital avatar, as well as by the robot, successfully followed the AUs extracted by the Emotiva API and, although the data were noisy and acquired with non-specific equipment, it was possible to effectively control the movement of both the virtual and physical agents simply by changing the user’s facial expression. This is shown in Fig. 2c and 2d in the case of the avatar, and in Fig. 2e and 2f in the case of the robot. The landmarks are represented as yellow dots superimposed on the two images of the subject (Fig. 2a and 2b). Additionally, a rectangle is used to identify the faces present in the field of view. The values of the AUs in both cases are also displayed in Figures 2g and 2h. To improve the quality of control, we plan, instead of using the PC camera, to use a Kinect camera directly connected to Abel for image acquisition, which has a higher resolution, and to increase the number of AUs involved in the agent control. To set up the experiment accurately, it is necessary to be in a very bright environment, preferably under direct light, to increase the contrast of the acquired image. It is also advisable to choose a sufficiently high frame rate to achieve real-time control of the robot and the avatar: selecting values that are too low can lead to latency issues in the avatar system. Additionally, during the experiment, the subject’s face should not exit the camera’s field of view or rotate more than about 30° from the central position. Processing a partial face is not supported: if this occurs, the results are deemed unreliable and the user receives an alert message. If multiple subjects are detected in the field of view, processing is performed for each visible face. In the context of the presented framework, this could generate conflicts in the control of the avatar and the robot, since no decision-making algorithm is included in this framework. This problem is solved by integrating the proposed framework with the high-level cognitive processing (i.e., the Plan block of Abel’s control architecture [<xref ref-type="bibr" rid="ref13">13</xref>]) that makes the artificial agents able to focus their attention on a specific subject according to specific attention rules [<xref ref-type="bibr" rid="ref16">16</xref>].</p>
      <p>[Figure 2: (a) Happy expression; (b) Sad expression; (c) Happy expression, avatar face; (d) Sad expression, avatar face; (e) Happy expression, Abel face; (f) Sad expression, Abel face; (g) Happy expression AUs and emotions values; (h) Sad expression AUs and emotions values.]</p>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusions and future developments</title>
      <p>
        The use of a digital avatar introduces several advantages, such as simplifying certain phases of development and testing which would normally involve the corresponding physical robot, helping to prevent the inevitable wear and tear of the robot’s electronic and mechanical components, and the high scalability and affordability of this technology. On the other hand, we are aware of the importance and the influence of a social robot’s corporeality in HRI, especially in several clinical applications (e.g.,
[
        <xref ref-type="bibr" rid="ref17 ref18 ref19 ref20">17, 18, 19, 20</xref>
        ]). The presented architecture allows exploratory interaction studies where it will be possible to compare two systems in which the perception and information processing remain identical, with the only differing variable being the representation and embodiment of the artificial agent. These studies will lead to a methodological evaluation, using standard scales (e.g., the Godspeed method [<xref ref-type="bibr" rid="ref21">21</xref>]), comparing the cases of Abel and the digital avatar. Moreover, the degrees of freedom of the avatar are limited: e.g., it cannot make asymmetric expressions, and it cannot move any part of the body but the face. To improve this aspect, the next steps of the project will also be to modify the code of the avatar at a lower level, separating the right and left parts of its face, and to build the graphical component and the control of other expressive body parts, such as the neck, arms, and hands.
      </p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgments</title>
      <p>Thanks to the developers of Emotiva (https://emotiva.it/) and openFACS (https://github.com/phuselab/openFACS). Research partly funded by PNRR - M4C2 - Investimento 1.3, Partenariato Esteso PE00000013 - ”FAIR - Future Artificial Intelligence Research” - Spoke 1 ”Human-centered AI”, funded by the European Commission under the NextGeneration EU programme.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>S.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Deng</surname>
          </string-name>
          ,
          <article-title>Deep facial expression recognition: A survey</article-title>
          ,
          <source>IEEE transactions on afective computing 13</source>
          (
          <year>2020</year>
          )
          <fpage>1195</fpage>
          -
          <lpage>1215</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>D.</given-names>
            <surname>Canedo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. J.</given-names>
            <surname>Neves</surname>
          </string-name>
          ,
          <article-title>Facial expression recognition using computer vision: A systematic review</article-title>
          ,
          <source>Applied Sciences</source>
          <volume>9</volume>
          (
          <year>2019</year>
          )
          <fpage>4678</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>P.</given-names>
            <surname>Ekman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W. V.</given-names>
            <surname>Friesen</surname>
          </string-name>
          ,
          <article-title>Facial action coding system</article-title>
          ,
          <source>Environmental Psychology &amp; Nonverbal Behavior</source>
          (
          <year>1978</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>C.</given-names>
            <surname>Breazeal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Buchsbaum</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Gray</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Gatenby</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Blumberg</surname>
          </string-name>
          ,
          <article-title>Learning from and about others: Towards using imitation to bootstrap the social understanding of others by robots</article-title>
          ,
          <source>Artificial life 11</source>
          (
          <year>2005</year>
          )
          <fpage>31</fpage>
          -
          <lpage>62</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>T.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. J.</given-names>
            <surname>Butko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Ruvulo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. S.</given-names>
            <surname>Bartlett</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. R.</given-names>
            <surname>Movellan</surname>
          </string-name>
          ,
          <article-title>Learning to make facial expressions</article-title>
          ,
          <source>in: 2009 IEEE 8th International Conference on Development and Learning</source>
          ,
          <year>2009</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>6</lpage>
          . doi:10.1109/DEVLRN.2009.5175536.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>S.</given-names>
            <surname>Boucenna</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Gaussier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Andry</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Hafemeister</surname>
          </string-name>
          ,
          <article-title>A robot learns the facial expressions recognition and face/non-face discrimination through an imitation game</article-title>
          ,
          <source>International Journal of Social Robotics</source>
          <volume>6</volume>
          (
          <year>2014</year>
          )
          <fpage>633</fpage>
          -
          <lpage>652</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>A.</given-names>
            <surname>Meghdari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. B.</given-names>
            <surname>Shouraki</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Siamy</surname>
          </string-name>
          ,
          <string-name>
            <surname>A. Shariati,</surname>
          </string-name>
          <article-title>The real-time facial imitation by a social humanoid robot</article-title>
          ,
          <source>in: 2016 4th International Conference on Robotics and Mechatronics (ICROM)</source>
          , IEEE,
          <year>2016</year>
          , pp.
          <fpage>524</fpage>
          -
          <lpage>529</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>H.</given-names>
            <surname>Kobayashi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Hara</surname>
          </string-name>
          ,
          <article-title>Facial interaction between animated 3d face robot and human beings</article-title>
          ,
          <source>in: 1997 IEEE International Conference on Systems, Man, and Cybernetics</source>
          .
          <source>Computational Cybernetics and Simulation</source>
          , volume
          <volume>4</volume>
          ,
          <year>1997</year>
          , pp.
          <fpage>3732</fpage>
          -
          <lpage>3737</lpage>
          vol.
          <volume>4</volume>
          . doi:10.1109/ICSMC.1997.633250.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>D.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Hu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Zang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <article-title>Real-time performance-driven facial animation with 3ds max and kinect</article-title>
          ,
          <source>in: 2013 3rd International Conference on Consumer Electronics, Communications and Networks</source>
          ,
          <year>2013</year>
          , pp.
          <fpage>473</fpage>
          -
          <lpage>476</lpage>
          .
          doi:10.1109/CECNet.2013.6703372.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>N.</given-names>
            <surname>Rawal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Koert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Turan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Kersting</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Peters</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Stock-Homburg</surname>
          </string-name>
          ,
          <article-title>Exgennet: Learning to generate robotic facial expression using facial expression recognition</article-title>
          ,
          <source>Frontiers in Robotics and AI</source>
          <volume>8</volume>
          (
          <year>2022</year>
          )
          <fpage>730317</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>V.</given-names>
            <surname>Cuculo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>D'Amelio</surname>
          </string-name>
          ,
          <article-title>OpenFACS: An open source FACS-based 3D face animation system</article-title>
          , in: Y. Zhao, N. Barnes, B. Chen, R. Westermann, X. Kong, C. Lin (Eds.),
          <source>Image and Graphics</source>
          , Springer International Publishing, Cham,
          <year>2019</year>
          , pp.
          <fpage>232</fpage>
          -
          <lpage>242</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12] openFACS,
          <year>2023</year>
          . URL: https://github.com/phuselab/openFACS.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>L.</given-names>
            <surname>Cominelli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Hoegen</surname>
          </string-name>
          , D. De Rossi,
          <article-title>Abel: integrating humanoid body, emotions, and time perception to investigate social interaction and human cognition</article-title>
          ,
          <source>Applied Sciences</source>
          <volume>11</volume>
          (
          <year>2021</year>
          )
          <fpage>1070</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Emotiva</surname>
          </string-name>
          ,
          <year>2023</year>
          . URL: https://emotiva.it/
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <surname>Flask</surname>
          </string-name>
          ,
          <year>2023</year>
          . URL: https://flask.palletsprojects.com/en/2.3.x/
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>L.</given-names>
            <surname>Cominelli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Mazzei</surname>
          </string-name>
          , D. E. De Rossi,
          <article-title>Seai: Social emotional artificial intelligence based on damasio's theory of mind</article-title>
          ,
          <source>Frontiers in Robotics and AI</source>
          <volume>5</volume>
          (
          <year>2018</year>
          )
          <fpage>6</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>S.</given-names>
            <surname>Shamsuddin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Yussof</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Ismail</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. A.</given-names>
            <surname>Hanapiah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Mohamed</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. A.</given-names>
            <surname>Piah</surname>
          </string-name>
          ,
          <string-name>
            <surname>N. I. Zahari</surname>
          </string-name>
          ,
          <article-title>Initial response of autistic children in human-robot interaction therapy with humanoid robot nao</article-title>
          ,
          <source>in: 2012 IEEE 8th International Colloquium on Signal Processing and its Applications</source>
          ,
          <year>2012</year>
          , pp.
          <fpage>188</fpage>
          -
          <lpage>193</lpage>
          . doi:10.1109/CSPA.2012.6194716.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>S.</given-names>
            <surname>Shamsuddin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Yussof</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. I.</given-names>
            <surname>Ismail</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Mohamed</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. A.</given-names>
            <surname>Hanapiah</surname>
          </string-name>
          ,
          <string-name>
            <surname>N. I. Zahari</surname>
          </string-name>
          ,
          <article-title>Initial response in hri-a case study on evaluation of child with autism spectrum disorders interacting with a humanoid robot nao</article-title>
          ,
          <source>Procedia Engineering</source>
          <volume>41</volume>
          (
          <year>2012</year>
          )
          <fpage>1448</fpage>
          -
          <lpage>1455</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>A.</given-names>
            <surname>Tapus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Peca</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Aly</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Pop</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Jisa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Pintea</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. S.</given-names>
            <surname>Rusu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. O.</given-names>
            <surname>David</surname>
          </string-name>
          ,
          <article-title>Children with autism social engagement in interaction with nao, an imitative robot: A series of single case experiments</article-title>
          ,
          <source>Interaction studies 13</source>
          (
          <year>2012</year>
          )
          <fpage>315</fpage>
          -
          <lpage>347</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>L. J.</given-names>
            <surname>Wood</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Zaraki</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Robins</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Dautenhahn</surname>
          </string-name>
          ,
          <article-title>Developing kaspar: a humanoid robot for children with autism</article-title>
          ,
          <source>International Journal of Social Robotics</source>
          <volume>13</volume>
          (
          <year>2021</year>
          )
          <fpage>491</fpage>
          -
          <lpage>508</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>C.</given-names>
            <surname>Bartneck</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Croft</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Kulic</surname>
          </string-name>
          ,
          <article-title>Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots</article-title>
          ,
          <source>International Journal of Social Robotics</source>
          <volume>1</volume>
          (
          <year>2009</year>
          )
          <fpage>71</fpage>
          -
          <lpage>81</lpage>
          . doi:10.1007/s12369-008-0001-3.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>