<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Facial expression recognition from NAO robot within a memory training program for individuals with mild cognitive impairment</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Berardina De Carolis</string-name>
          <email>berardina.decarolis@uniba.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Giuseppe Palestra</string-name>
          <email>giuseppepalestra@gmail.com</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Olimpia Pino</string-name>
          <email>olimpia.pino@unipr.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computer Science, University of Bari “A. Moro”</institution>
          ,
          <addr-line>Bari</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Department of Psychology, University of Parma</institution>
          ,
          <addr-line>Parma</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Research Department, HERO srl</institution>
          ,
          <addr-line>Apulia</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>SAT19: 1st Workshop on Socio-Affective Technologies: an interdisciplinary approach</institution>
          ,
          <addr-line>October 7, 2019, Bari</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2019</year>
      </pub-date>
      <abstract>
        <p>Mild Cognitive Impairment (MCI) refers to a borderline state between healthy aging and dementia. Memory training programs play a crucial role in reducing the possible conversion to dementia, and robot-mediated memory training helps overcome the limits of traditional programs. The present study addresses the effectiveness of a system that automatically recognizes facial expressions from video-recorded sessions of a robot-mediated memory training program lasting two months and involving 21 patients. The system recognizes facial expressions in group sessions, handling partially occluded faces. Findings showed that the system was able to recognize facial expressions in all participants.</p>
      </abstract>
      <kwd-group>
        <kwd>facial expression recognition</kwd>
        <kwd>social robot</kwd>
        <kwd>memory training</kwd>
        <kwd>Mild Cognitive Impairment</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>I. INTRODUCTION</title>
      <p>
        Mild Cognitive Impairment (MCI) denotes a stage between
normal aging and early dementia, marked by a cognitive deficit
characterized by below-norm scores on psychometric tests,
preserved functional abilities, and a high quality of life [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. The prevalence of MCI in individuals &gt;65 years of age
is between 10 and 20%. MCI is highly likely to convert in
dementia at a rate of about 13% per year and in the rest of the
patients the impairment persists stable or even return to
normal over time [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Dementia is estimated to be detected around the globe
at a rate of about one new patient every 7 seconds. MCI has
therefore become a relevant research topic because it could play
a critical role in distinguishing normal developmental changes in
lifespan memory from those that are real signs of the disorder.
Delaying the onset of dementia by as little as one year could
decrease the global burden of Alzheimer’s disease by 9 million
patients in 2050 [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. To maintain cognitive functions, non-pharmacological
programs have been developed. These programs involve qualified
psychologists and therapists who conduct new tasks and exercises,
monitor patients’ performance, provide helpful feedback, and
analyze patients’ performance over time [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. The psychologists’ primary objective for MCI patients
is to preserve cognitive abilities while functional capabilities
and independence are not yet compromised [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. The worldwide incidence of MCI is expected to increase
in the next few years [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]; however, space and personnel shortages are already
becoming a problem owing to an unprecedented rise in life
expectancy [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. In recent years, new tools and technologies based on
machine learning and robotics have been successfully applied to
the field of psychology and could be used in memory training
programs for people with MCI. Humanoid robots can improve mood,
emotional expressiveness, and social relationships among patients
with dementia [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]– [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] also executing many assistive functionalities
[
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]–[
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] and providing life assistance, demonstrating that the
information support provided by a robot has the potential to
improve the daily life of persons with mild dementia [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Recent advances in information and communication
technologies have enabled the development of telepresence robots
that connect a family member with a person with dementia,
enhancing communication between the two parties [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. Humanoid robots’ skills are progressively being
enhanced: they can recognize faces, call people by name, and
shape their behavior according to the mood of the people
interacting with them [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. Some robots can also reproduce emotions
[
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], making their human companions feel welcomed, and
simulating empathy [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ], [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. Kinetics technology can help
them reproduce movements [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ], while speech recognition
software allows them to respond to what people say, even in
many different languages. During human-robot interaction,
the mirror circuit responsible for social interaction has been
shown to be active [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ], suggesting that humans can consider robots real
companions with their own intentions. Many studies have employed
the NAO robot. If appropriately programmed, it can decode human
emotions, simulate emotions through the color of its eyes or the
position of its body, recognize faces, and model physical
exercise for a group of seniors [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ], and it can be equipped to measure health and
environmental parameters [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ]. Robotics could partially fill some of the identified
gaps in current health care and home care/self-care provision,
with promising applications in these fields that we expect to
play relevant roles in the near future. Emerging research
suggests that mobile robot systems can improve elderly care [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], also through the
development of a coding system aimed at measuring
engagement-related behavior across activities in people with
dementia [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ], [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ]. With the growing incidence of pathologies and
cognitive impairment associated with aging, there will be an
increasing demand for care systems and services for the elderly,
with the imperative of economic cost-effectiveness of care
provision. In our previous work [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ], NAO was evaluated as a mediator in a memory training
program for people with MCI in a center for cognitive disorders
therapy.
      </p>
      <p>The focus of the present paper is to evaluate the feasibility
and usefulness of the NAO platform in cognitive stimulation. In
human interaction, emotional and social signals expressing
additional information are essential. To this aim, particular
attention was paid to quantifying the effectiveness of the robot on
the well-being of the training recipients; this was realized by
measuring facial expressions, together with gaze, while the human
participants interacted with the synthetic agent. This study
exploits the video clips recorded during the 2-month experiment of
the previous study, analyzing the facial expressions of the
participants. The system presented is able: i) to recognize group
facial expressions thanks to a multi-face detector; and ii) to
recognize facial expressions in partially occluded faces. This paper
is organized as follows. Section 2 presents the materials and
methods used. Section 3 reports the experimental results. Finally,
conclusions are drawn.</p>
    </sec>
    <sec id="sec-2">
      <title>II. MATERIALS AND METHODS</title>
      <sec id="sec-2-1">
        <title>A. Participants</title>
        <p>The participants were selected from the population of
outpatients attending the Center for Cognitive Disorders and
Dementia of AUSL Parma (Italy). All the participants were
first evaluated by memory-disorders specialists. The
diagnosis of MCI was based on a detailed medical history,
relevant physical and neurological examinations, negative
laboratory findings, and neuroimaging studies. Subjects were
enrolled according to the following inclusion criteria: a)
diagnosis of MCI obtained through Petersen guidelines, and
full marks in the two tests measuring daily living activities (ADL
and IADL); b) both genders; c) chronological age between 45 and
85 years; and d) no pharmacological treatment. Exclusion
criteria were a diagnosis of major
neurocognitive disorder (defined using DSM 5 criteria), history
of symptomatic stroke (although silent brain infarction was not
an exclusion), history of other central nervous system
diseases, or serious medical or psychiatric illness that would
interfere with study participation, such as Parkinson’s disease,
HIV/AIDS, or other contraindications. Informed consent was
obtained from all the patients or from their legal
representatives when appropriate. 21 individuals (10 females
and 11 males) participated in the experiment with a mean age
of 73.45 years (SD = 7.71). The mean education level was 9.90
years (SD = 4.58), with a minimum of 5 years, corresponding to
the completion of elementary school, and a maximum of 18 years,
corresponding to a bachelor’s degree.</p>
      </sec>
      <sec id="sec-2-2">
        <title>B. Robot Mediated Memory Training Program</title>
        <p>
          The memory training exercises were implemented on NAO
on the basis of exercises described in literature [
          <xref ref-type="bibr" rid="ref24">24</xref>
          ] and aimed
to train: i) focused attention; ii) divided and alternate
attention; and iii) categorization and association as learning
strategies. Five tasks were implemented in NAO, considering
the characteristics of the robot:
        </p>
        <list list-type="order">
          <list-item><p>Reading stories;</p></list-item>
          <list-item><p>Questions about the story;</p></list-item>
          <list-item><p>Associated/not associated words;</p></list-item>
          <list-item><p>Associated/not associated word recall;</p></list-item>
          <list-item><p>Song-singer match.</p></list-item>
        </list>
      </sec>
      <sec id="sec-3-1">
        <title>C. Video corpus</title>
        <p>In this study, we used a corpus of 48 one-hour video clips
of memory program sessions from 24 therapeutic sessions. Each video
clip recorded three or four participants. The videos were recorded
over two months by two cameras placed in the therapy room, covering
all participants. Overall, at the end of the experiment, a total of
eight hours of video had been collected for each participant.</p>
      </sec>
      <sec id="sec-3-2">
        <title>D. Group Facial Expression Recognition System</title>
        <p>
          The group facial expression recognition system detects faces
in the corpus and then recognizes six basic emotions (anger,
disgust, fear, happiness, sadness, and surprise) plus the neutral
expression. The video analysis system is based on a previous study
aimed at recognizing six basic emotions through facial
expressions [
          <xref ref-type="bibr" rid="ref25">25</xref>
          ], but the system has been improved in order to: i) work
with groups using a multi-face detector; and ii) handle partial
occlusions of the face. In group facial expression recognition,
handling occlusions is very important to ensure a high accuracy
rate. Indeed, a participant’s face is sometimes occluded by a hand
or by an arm of another participant, as shown in Figure 1.
        </p>
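        <p>The occlusion handling described above (using the geometric features "in total or in part") can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the region names, the grouping of feature indices, and the occlusion test are all hypothetical.</p>

```python
# Sketch of occlusion-aware feature selection: when a face region is
# occluded, only the features computed from visible landmark groups are
# kept, so a classifier trained on that feature subset can be used.
# The region names and index grouping below are hypothetical.

FEATURE_GROUPS = {
    "eyebrow": [0, 1, 2],          # indices into the feature vector
    "eye":     [3, 4, 5, 6],
    "mouth":   [7, 8, 9, 10, 11],
}

def visible_features(features, occluded_regions):
    """Return (indices, values) for features whose source regions
    are not occluded."""
    indices = []
    for region, idxs in FEATURE_GROUPS.items():
        if region not in occluded_regions:
            indices.extend(idxs)
    indices.sort()
    return indices, [features[i] for i in indices]

# Example: the mouth is hidden by a hand, so mouth features are dropped.
feats = list(range(12))  # dummy 12-dimensional feature vector
idxs, vals = visible_features(feats, {"mouth"})
```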
        <p>After identifying a face, the system extracts, for each
face, facial landmarks locating 77 key points. Once the 77 points
are identified, the software tracks linear, polygonal, elliptical
and angular characteristics. Linear features measure the distance
between two points: three lines describing the left eyebrow; two
defining the left eye; one for the cheeks; one for the nose; and
eight for the mouth. The system then determines polygonal features,
calculating the area delimited by irregular polygons created using
three or more key reference points: one for the left eye; one
forming a triangle between the corners of the left eye and the left
corner of the mouth; and one for the mouth. Finally, the system
traces the elliptical characteristics, calculated as the ratio
between the major and minor axes of an ellipse; seven ellipses are
defined between the reference points: one for the left eyebrow;
three for the eye (left upper and lower eyelid); and three for the
mouth (lower and upper lips). The pipeline of the system is depicted
in Figure 2. The system has been tested on the Extended Cohn-Kanade
(CK+) dataset, a well-known facial expression image database of 123
individuals of different genders, ethnicities and ages. The system
reaches an average facial expression recognition accuracy of 95.46%
on the six basic facial expressions and an average accuracy of
94.24% on the six basic facial expressions plus neutral.</p>
        <p>Overall, the system extracts 32 geometric features that have
been used in total or in part (to handle occlusions) in order to
train a model. To recognize a facial expression, the system uses a
classification module that, through a Random Forest classifier,
analyzes the geometric feature vectors to determine the facial
expression.</p>
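        <p>The three geometric feature families above can be sketched as follows. This is a minimal illustration of the computations named in the text (distance, polygon area, axis ratio); the landmark coordinates used in the example are invented, not the paper's 77-point layout.</p>

```python
# Sketch of the linear, polygonal, and elliptical geometric features
# computed from 2-D facial landmarks. Inputs are (x, y) tuples.
import math

def linear_feature(p1, p2):
    """Linear feature: distance between two landmarks."""
    return math.dist(p1, p2)

def polygonal_feature(points):
    """Polygonal feature: area of the polygon through the given
    landmarks, via the shoelace formula."""
    n = len(points)
    s = sum(points[i][0] * points[(i + 1) % n][1]
            - points[(i + 1) % n][0] * points[i][1] for i in range(n))
    return abs(s) / 2.0

def elliptical_feature(major_axis_len, minor_axis_len):
    """Elliptical feature: ratio between the major and minor axes of
    an ellipse fitted between reference points."""
    return major_axis_len / minor_axis_len

# Classification of the resulting 32-dimensional vectors could then use
# a Random Forest, e.g. with scikit-learn (one possible implementation):
#   from sklearn.ensemble import RandomForestClassifier
#   clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
#   expression = clf.predict([feature_vector])
```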
      </sec>
    </sec>
    <sec id="sec-4">
      <title>III. RESULTS</title>
      <p>In order to test the system, the video clips of the corpus
have been down-sampled to 1 frame per second. For each participant,
28,800 frames have been analyzed (1 frame/second x 8 hours of
recording). To evaluate facial expression recognition during the
robot-mediated memory training program, two metrics have been used:
i) the number of detected faces (nFD); and ii) the number of
occurrences of each recognized facial expression (nFE), per frame
and per participant, in the video corpus. Overall, the system was
able to analyze the whole video corpus, and all the 21 faces of the
participants have been detected. With respect to nFD, a face was
detected in the corpus with a success rate of 56%. Moreover, with
respect to nFE, in each frame where a face had been detected (even
if partially occluded), the system was able to recognize a facial
expression. The three most common facial expressions were neutral,
found in 41% of the frames, happiness (17%), and sadness (15%), as
shown in Figure 3.</p>
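      <p>The computation of the two metrics can be sketched as follows. This is a minimal illustration under an assumed result format (one expression label per sampled frame, or None when no face was detected); it is not the authors' evaluation code.</p>

```python
# Sketch of the evaluation metrics: nFD (number of frames with a
# detected face) and nFE (count of each recognized expression),
# computed from per-frame recognition results at 1 fps.
from collections import Counter

def compute_metrics(frame_labels):
    """frame_labels: one entry per sampled frame, either an expression
    label or None if no face was detected in that frame."""
    detected = [lbl for lbl in frame_labels if lbl is not None]
    nFD = len(detected)            # frames with a detected face
    nFE = Counter(detected)        # per-expression counts
    detection_rate = nFD / len(frame_labels)
    return nFD, nFE, detection_rate

# Example on a toy 8-frame sequence:
labels = ["neutral", None, "happiness", "neutral", None, "sadness",
          "neutral", None]
nFD, nFE, rate = compute_metrics(labels)
```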
      <p>Table I reports the number of frames in which a face was
detected, and Table II reports the number of facial expressions
recognized for each participant, divided by expression.</p>
    </sec>
    <sec id="sec-5">
      <title>IV. CONCLUSIONS</title>
      <p>In this paper, a system for group facial expression
recognition that handles partial occlusions has been presented, as
well as its application to the field of psychology. A particular
application such as a memory training program for adults with MCI
can benefit from computer vision and machine learning technologies
to better understand how patients react to a robot-mediated memory
training program. Moreover, through automatic facial expression
recognition, psychologists can keep track of the mood of the
individuals involved during the training sessions. In future work,
further analyses of the video corpus will be carried out to
understand engagement, how the participants were involved during
the memory training program, and how to better involve individuals
with MCI in new memory training programs mediated by a social
robot.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>O.</given-names>
            <surname>Pino</surname>
          </string-name>
          , “
          <article-title>Memory impairments and rehabilitation: Evidence-based effects of approaches and training programs,” The Open Rehabilitation Journal</article-title>
          , vol.
          <volume>8</volume>
          , no.
          <issue>1</issue>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>S.</given-names>
            <surname>Gauthier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Reisberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Zaudig</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. C.</given-names>
            <surname>Petersen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Ritchie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Broich</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Belleville</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Brodaty</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Bennett</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Chertkow</surname>
          </string-name>
          et al., “
          <article-title>Mild cognitive impairment,” The Lancet</article-title>
          , vol.
          <volume>367</volume>
          , no.
          <issue>9518</issue>
          , pp.
          <fpage>1262</fpage>
          -
          <lpage>1270</lpage>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>T. G.</given-names>
            <surname>Ton</surname>
          </string-name>
          , T. DeLeire, S. G. May,
          <string-name>
            <given-names>N.</given-names>
            <surname>Hou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. G.</given-names>
            <surname>Tebeka</surname>
          </string-name>
          , E. Chen, and
          <string-name>
            <given-names>J.</given-names>
            <surname>Chodosh</surname>
          </string-name>
          , “
          <article-title>The financial burden and health care utilization patterns associated with amnestic mild cognitive impairment,” Alzheimer's &amp; Dementia</article-title>
          , vol.
          <volume>13</volume>
          , no.
          <issue>3</issue>
          , pp.
          <fpage>217</fpage>
          -
          <lpage>224</lpage>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>O.</given-names>
            <surname>Pino</surname>
          </string-name>
          ,
          <article-title>Ricucire i ricordi: la memoria, i suoi disturbi, le evidenze di efficacia dei trattamenti riabilitativi</article-title>
          .
          <source>Mondadori universita</source>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>S.</given-names>
            <surname>Sabanovic</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. C.</given-names>
            <surname>Bennett</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.-L.</given-names>
            <surname>Chang</surname>
          </string-name>
          , and
          <string-name>
            <given-names>L.</given-names>
            <surname>Huber</surname>
          </string-name>
          , “
          <article-title>Paro robot affects diverse interaction modalities in group sensory therapy for older adults with dementia,” in Rehabilitation Robotics (ICORR</article-title>
          ),
          <source>2013 IEEE International Conference on. IEEE</source>
          ,
          <year>2013</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>6</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>H.-M. Gross</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          <string-name>
            <surname>Schroeter</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <string-name>
            <surname>Mueller</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <string-name>
            <surname>Volkhardt</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          <string-name>
            <surname>Einhorn</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Bley</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          <string-name>
            <surname>Langner</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          <string-name>
            <surname>Martin</surname>
            ,
            <given-names>and M.</given-names>
          </string-name>
          <string-name>
            <surname>Merten</surname>
          </string-name>
          , “
          <article-title>I'll keep an eye on you: Home robot companion for elderly people with cognitive impairment,” in Systems</article-title>
          , Man, and
          <string-name>
            <surname>Cybernetics</surname>
          </string-name>
          (SMC),
          <source>2011 IEEE International Conference on. IEEE</source>
          ,
          <year>2011</year>
          , pp.
          <fpage>2481</fpage>
          -
          <lpage>2488</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>F.</given-names>
            <surname>Mart</surname>
          </string-name>
          ín, C. Agüero, J. M. Cañas, G. Abella, R. Benítez, S. Rivero,
          <string-name>
            <given-names>M.</given-names>
            <surname>Valenti</surname>
          </string-name>
          , and
          <string-name>
            <given-names>P.</given-names>
            <surname>Mart</surname>
          </string-name>
          <article-title>ínez-Martín, “Robots in therapy for dementia patients</article-title>
          ,
          <source>” Journal of Physical Agents</source>
          , vol.
          <volume>7</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>48</fpage>
          -
          <lpage>55</lpage>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>C. D.</given-names>
            <surname>Kidd</surname>
          </string-name>
          and
          <string-name>
            <given-names>C.</given-names>
            <surname>Breazeal</surname>
          </string-name>
          , “
          <article-title>Robots at home: Understanding longterm human-robot interaction</article-title>
          ,
          <source>” in Intelligent Robots and Systems</source>
          ,
          <year>2008</year>
          .
          <article-title>IROS 2008</article-title>
          . IEEE/RSJ International Conference on. IEEE,
          <year>2008</year>
          , pp.
          <fpage>3230</fpage>
          -
          <lpage>3235</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>B.</given-names>
            <surname>Graf</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Parlitz</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Hagele</surname>
          </string-name>
          , “
          <article-title>Robotic home assistant care-o-bot®¨ 3 product vision</article-title>
          and innovation platform,” in International Conference on Human-Computer Interaction. Springer,
          <year>2009</year>
          , pp.
          <fpage>312</fpage>
          -
          <lpage>320</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>H.-M. Gross</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          <string-name>
            <surname>Schroeter</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <string-name>
            <surname>Mueller</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <string-name>
            <surname>Volkhardt</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          <string-name>
            <surname>Einhorn</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Bley</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          <string-name>
            <surname>Langner</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <string-name>
            <surname>Merten</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          <string-name>
            <surname>Huijnen</surname>
          </string-name>
          , H. van den Heuvel et al., “
          <article-title>Further progress towards a home robot companion for people with mild cognitive impairment,” in Systems</article-title>
          , Man, and
          <string-name>
            <surname>Cybernetics</surname>
          </string-name>
          (SMC),
          <source>2012 IEEE International Conference on. IEEE</source>
          ,
          <year>2012</year>
          , pp.
          <fpage>637</fpage>
          -
          <lpage>644</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>W.</given-names>
            <surname>Moyle</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Jones</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Cooke</surname>
          </string-name>
          , S. O’Dwyer,
          <string-name>
            <given-names>B.</given-names>
            <surname>Sung</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Drummond</surname>
          </string-name>
          , “
          <article-title>Connecting the person with dementia and family: a feasibility study of a telepresence robot,” BMC geriatrics</article-title>
          , vol.
          <volume>14</volume>
          , no.
          <issue>1</issue>
          , p.
          <fpage>7</fpage>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>B. De Carolis</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <string-name>
            <surname>Ferilli</surname>
          </string-name>
          , and G. Palestra, “
          <article-title>Simulating empathic behavior in a social assistive robot,” Multimedia Tools and Applications</article-title>
          , vol.
          <volume>76</volume>
          , no.
          <issue>4</issue>
          , pp.
          <fpage>5073</fpage>
          -
          <lpage>5094</lpage>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>B.</given-names>
            <surname>De Carolis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Macchiarulo</surname>
          </string-name>
          , and
          <string-name>
            <given-names>G.</given-names>
            <surname>Palestra</surname>
          </string-name>
          , “
          <article-title>Soft biometrics for social adaptive robots</article-title>
          ,” in
          <source>International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems</source>
          . Springer,
          <year>2019</year>
          , pp.
          <fpage>687</fpage>
          -
          <lpage>699</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>B. N.</given-names>
            <surname>De Carolis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ferilli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Palestra</surname>
          </string-name>
          , and
          <string-name>
            <given-names>V.</given-names>
            <surname>Carofiglio</surname>
          </string-name>
          , “
          <article-title>Towards an empathic social robot for ambient assisted living</article-title>
          ,” in
          <source>ESSEM@AAMAS</source>
          , pp.
          <fpage>19</fpage>
          -
          <lpage>34</lpage>
          . [Online]. Available: https://dl.acm.org/citation.cfm?id=3054107
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>V.</given-names>
            <surname>Gallese</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Keysers</surname>
          </string-name>
          , and
          <string-name>
            <given-names>G.</given-names>
            <surname>Rizzolatti</surname>
          </string-name>
          , “
          <article-title>A unifying view of the basis of social cognition</article-title>
          ,”
          <source>Trends in Cognitive Sciences</source>
          , vol.
          <volume>8</volume>
          , no.
          <issue>9</issue>
          , pp.
          <fpage>396</fpage>
          -
          <lpage>403</lpage>
          ,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>B.</given-names>
            <surname>De Carolis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ferilli</surname>
          </string-name>
          , and
          <string-name>
            <given-names>G.</given-names>
            <surname>Palestra</surname>
          </string-name>
          , “
          <article-title>Improving speech-based human robot interaction with emotion recognition</article-title>
          ,” in
          <source>International Symposium on Methodologies for Intelligent Systems</source>
          . Springer,
          <year>2015</year>
          , pp.
          <fpage>273</fpage>
          -
          <lpage>279</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>B. S.</given-names>
            <surname>Ertugrul</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Gurpinar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Kivrak</surname>
          </string-name>
          , and
          <string-name>
            <given-names>H.</given-names>
            <surname>Köse</surname>
          </string-name>
          , “
          <article-title>Gesture recognition for humanoid assisted interactive sign language tutoring</article-title>
          ,”
          in
          <source>2013 21st Signal Processing and Communications Applications Conference (SIU)</source>
          . IEEE,
          <year>2013</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>4</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>V.</given-names>
            <surname>Gazzola</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Rizzolatti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Wicker</surname>
          </string-name>
          , and
          <string-name>
            <given-names>C.</given-names>
            <surname>Keysers</surname>
          </string-name>
          , “
          <article-title>The anthropomorphic brain: the mirror neuron system responds to human and robotic actions</article-title>
          ,”
          <source>NeuroImage</source>
          , vol.
          <volume>35</volume>
          , no.
          <issue>4</issue>
          , pp.
          <fpage>1674</fpage>
          -
          <lpage>1684</lpage>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>I.</given-names>
            <surname>Bäck</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Mäkelä</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Kallio</surname>
          </string-name>
          , “
          <article-title>Robot-guided exercise program for the rehabilitation of older nursing home residents</article-title>
          ,”
          <source>Annals of Long Term Care</source>
          , vol.
          <volume>21</volume>
          , no.
          <issue>6</issue>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>J. P.</given-names>
            <surname>Vital</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. S.</given-names>
            <surname>Couceiro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. M.</given-names>
            <surname>Rodrigues</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. M.</given-names>
            <surname>Figueiredo</surname>
          </string-name>
          , and
          <string-name>
            <given-names>N. M.</given-names>
            <surname>Ferreira</surname>
          </string-name>
          , “
          <article-title>Fostering the nao platform as an elderly care robot</article-title>
          ,” in
          <source>2013 IEEE 2nd International Conference on Serious Games and Applications for Health (SeGAH)</source>
          . IEEE,
          <year>2013</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>5</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>G.</given-names>
            <surname>Perugia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>van Berkel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Díaz-Boladeras</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Català-Mallofré</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Rauterberg</surname>
          </string-name>
          , and
          <string-name>
            <given-names>E.</given-names>
            <surname>Barakova</surname>
          </string-name>
          , “
          <article-title>Understanding engagement in dementia through behavior. The ethographic and Laban-inspired coding system of engagement (ELICSE) and the evidence-based model of engagement-related behavior (EMODEB)</article-title>
          ,”
          <source>Frontiers in Psychology</source>
          , vol.
          <volume>9</volume>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>P.</given-names>
            <surname>Boissy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Corriveau</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Michaud</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Labonté</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.-P.</given-names>
            <surname>Royer</surname>
          </string-name>
          , “
          <article-title>A qualitative study of in-home robotic telepresence for home care of community-living elderly subjects</article-title>
          ,”
          <source>Journal of Telemedicine and Telecare</source>
          , vol.
          <volume>13</volume>
          , no.
          <issue>2</issue>
          , pp.
          <fpage>79</fpage>
          -
          <lpage>84</lpage>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>O.</given-names>
            <surname>Pino</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Palestra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Trevino</surname>
          </string-name>
          , and
          <string-name>
            <given-names>B.</given-names>
            <surname>De Carolis</surname>
          </string-name>
          , “
          <article-title>The humanoid robot nao as trainer in a memory program for elderly people with mild cognitive impairment</article-title>
          ,”
          <source>International Journal of Social Robotics</source>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>13</lpage>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>D.</given-names>
            <surname>Gollin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ferrari</surname>
          </string-name>
          , and
          <string-name>
            <given-names>A.</given-names>
            <surname>Peruzzi</surname>
          </string-name>
          ,
          <article-title>Una palestra per la mente. Stimolazione cognitiva per l'invecchiamento cerebrale e le demenze</article-title>
          .
          <source>Edizioni Erickson</source>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>G.</given-names>
            <surname>Palestra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Pettinicchio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Del Coco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Carcagni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Leo</surname>
          </string-name>
          , and
          <string-name>
            <given-names>C.</given-names>
            <surname>Distante</surname>
          </string-name>
          , “
          <article-title>Improved performance in facial expression recognition using 32 geometric features</article-title>
          ,” in
          <source>International Conference on Image Analysis and Processing</source>
          . Springer,
          <year>2015</year>
          , pp.
          <fpage>518</fpage>
          -
          <lpage>528</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>