<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>IUI Workshops’19</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Dialogue Design and Management for Multi-Session Casual Conversation with Older Adults</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>S. Zahra Razavi</string-name>
          <email>srazavi@cs.rochester.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mohammad Rafayet Ali</string-name>
          <email>mali7@cs.rochester.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Lenhart K. Schubert</string-name>
          <email>schubert@cs.rochester.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Kimberly A. Van Orden</string-name>
          <email>kimberly_vanorden@urmc.rochester.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Benjamin Kane</string-name>
          <email>bkane2@u.rochester.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Tianyi Ma</string-name>
          <email>tma8@u.rochester.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Rochester</institution>
          ,
          <addr-line>Rochester, NY</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <kwd-group>
        <kwd>Spoken Dialogue Agents</kwd>
        <kwd>Older Users</kwd>
        <kwd>Dialogue Management</kwd>
        <kwd>Casual Conversation</kwd>
        <kwd>Social Skills Practice</kwd>
      </kwd-group>
      <pub-date>
        <year>2019</year>
      </pub-date>
      <volume>20</volume>
      <issue>2019</issue>
      <abstract>
        <p>We address the problem of designing a conversational avatar capable of a sequence of casual conversations with older adults. Users at risk of loneliness, social anxiety or a sense of ennui may benefit from practicing such conversations in private, at their convenience. We describe an automatic spoken dialogue manager for LISSA, an on-screen virtual agent that can keep older users involved in conversations over several sessions, each lasting 10-20 minutes. The idea behind LISSA is to improve users' communication skills by providing feedback on their non-verbal behavior at certain points in the course of the conversations. In this paper, we analyze the dialogues collected from the first session between LISSA and each of 8 participants. We examine the quality of the conversations by comparing the transcripts with those collected in a WOZ setting. LISSA's contributions to the conversations were judged by research assistants who rated the extent to which the contributions were “natural”, “on track”, “encouraging”, “understanding”, “relevant”, and “polite”. The results show that the automatic dialogue manager was able to handle conversation with the users smoothly and naturally.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>CCS CONCEPTS</title>
      <p>• Human-centered computing → Human computer
interaction (HCI); HCI design and evaluation methods; • Artificial
intelligence → Natural language processing.</p>
      <p>IUI Workshops’19, March 20, 2019, Los Angeles, USA
© Copyright 2019 for the individual papers by the papers’ authors. Copying permitted
for private and academic purposes. This volume is published and copyrighted by its
editors.</p>
    </sec>
    <sec id="sec-2">
      <title>INTRODUCTION</title>
      <p>
        The population of senior adults is growing, in part as a result of
advances in healthcare. According to United Nations studies on
world population, the number of people aged 60 years and over is
predicted to rise from 962 million in 2017 to 2.1 billion in 2050 [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
Moreover, large numbers of older adults live alone. According to
2017 Profile of Older Americans , in the US about 28% (13.8 million) of
noninstitutionalized older persons, and about half of women age 75
and over live alone [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Some studies show a significant correlation
between depression and loneliness among older people [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ]. In addition,
many have difficulty managing their personal and household
activities on their own and could benefit from technologies
that increase their autonomy, or that simply provide engaging
interactions in their spare time.
      </p>
      <p>
        However, learning to use digital technology can be difficult and
frustrating, especially for older people, and this negatively impacts
acceptance and use of such technology [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ]. But the ever-increasing
accuracy of both automatic speech recognition (ASR) systems and
text-to-speech (TTS) systems, along with richly engineered apps
such as Siri and Alexa, has boosted the popularity of conversational
interaction with computers, obviating the need for arcane expertise.
Realistic virtual avatars and social robots can make this interaction
even more natural and pleasant.
      </p>
      <p>Spoken language interaction can benefit older people in many
different ways, including the following:
• Dialogue systems can support people with their health care
needs. Such systems can collect and track health information
from older individuals and provide comments and advice to
improve user wellness. They can also remind people about
their medications and doctor appointments.
• Speech-based systems can help users feel less lonely by
providing casual chat as well as information on news, events,
activities, etc.
• They can also provide entertainment such as games, jokes,
or music for greater enjoyment of spare time.
• Dialogue systems can help older people improve some skills.</p>
      <p>
        For instance, cognitive games can help users maintain mental
acuity. They can also enable practice of communication skills,
as may be desired by seniors experiencing social isolation
after loss of connections to close family and friends [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ].
• In limited ways, conversational systems can lead
“reminiscence therapy" sessions aimed at reducing stress, depression
and boredom in people with dementia and memory disorders,
by helping to elicit their memories and achievements [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
      </p>
      <p>All of the above types of systems require a powerful dialogue
manager, allowing for smooth and natural conversations with users,
and taking account of the special needs and limitations of the target
user population.</p>
      <p>
        In this paper we describe how we adapted the dialogue
capabilities of a virtual agent, LISSA, for interaction with older adults
who may wish to improve their social communication skills. As
the centerpiece of the "Aging and Engaging Program" (AEP) [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ],
this version of LISSA holds a casual conversation with the user on
various topics likely to engage the user. At the same time LISSA
observes and processes the user’s nonverbal behaviors, including eye
contact, smiling, and head motion, as well as superficial speech cues
such as speaking volume, modulation, and an emotional valence
derived from the word stream.
      </p>
      <p>The conversation is in three segments, and at the two break
points the system offers some advice to users on the areas they may
need to work on and how to improve in those areas. In addition,
a summary of weak and strong aspects of the user’s behavior is
presented at the end of the conversation.</p>
      <p>The system was designed in close collaboration with an expert
advisory panel of professionals (at our affiliated medical research
center) working with geriatric patients. A single-session study was
conducted with 25 participants where the virtual agent’s
contributions to the dialogue were selected by a human wizard. The
nonverbal feedback showed an accuracy of 72% on average, in
relation to a human-provided corpus of annotations, treated as ground
truth. Also, a user survey showed that users found the program
useful and easy to use. As the next step, we deployed a fully
automatic system where users can interact with the avatar at home. We
planned for a 10-session intervention, where in each session the
participants have 10-20 minutes of interaction with the avatar. The
first and last sessions were held in a lab, where experts collected
information on users and rated their communication skills. The
remaining sessions were self-initiated by the users at home. We
ran the study with 9 participants interacting with the avatar and
10 participants in a control group.</p>
      <p>
        Our framework for multi-topic dialogue management was
introduced in [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]. To ensure that the conversations would be
appropriate and engaging, our geriatrician collaborators guided us
in the selection of topics and specific content contributed to the
dialogues by LISSA, based on their experiences in elderly therapy.
As an initial evaluation of LISSA’s conversational behavior we have
analyzed 8 in-lab conversations using the automated version of
LISSA, comparing these with 8 such conversations where LISSA
outputs were selected by a human wizard. In all cases the human
participants were first-time users (with no overlap between the
two groups). The transcripts were deidentified, randomized, and
distributed to 6 research assistants (RAs), with each transcript
being rated by at least 3 RAs. Ratings for each transcript were then
averaged across the RAs. The results showed that the interactions
with the automatic system earned high ratings, indeed on some
criteria slightly better than WOZ-based interactions.
      </p>
      <p>We have initiated further data analyses, and three aspects
relating to conversation quality are the following. First, we will be
looking at the level of verbosity for different users in different
sessions, to determine its dependence on the user’s personality and on
the topic under discussion. Second, we will measure self-disclosure
and study its correlation with user mood and personality. Third,
we will track users’ sentiment over the course of conversations to
determine its variability over time, and its dependence on dialogue
topics and user personality.</p>
      <p>The main contributions of the work reported here are
• demonstration of the flexibility of our approach to dialogue
design and management, as evidenced by rapid adaptation
and scaling up of previous versions of LISSA’s repertoire,
suitable for the Aging and Engaging domain; our approach uses
modifiable dialogue schemas, hierarchical pattern
transductions, and context-independent “gist clause” interpretations
of utterances;
• integration of an automated dialogue system for multi-topic
conversations into a virtual human (LISSA), designed for
conversation practice and capable of observing and providing
feedback to the user;
• an initial demonstration showing that the automated
dialogue manager functions as effectively for a sampling of users
as the prior wizard-guided version; this signifies an advance
of the state of the art in building conversational practice
systems for older users, with no constraints on users’ verbal
inputs (apart from the conversational context determined by
LISSA’s questions).</p>
      <p>The rest of this paper is organized as follows. We first comment
on extant work on spoken dialogue systems that are designed to
help older adults; then we introduce our virtual agent, LISSA, and
briefly explain the feedback system. We then discuss the dialogue
manager structure and the content design for multi-session
interactions with older users. In the next section, we evaluate LISSA’s
first-session interactions with users, referred to above. We then
mention ongoing and future work, and finally summarize our
results and state our conclusions.</p>
    </sec>
    <sec id="sec-3">
      <title>RELATED WORK</title>
      <p>As noted in the introduction, thanks in part to recent
improvements in ASR and TTS systems, the use of conversational systems
to help technically unskilled older adults has become more feasible.</p>
      <p>
        According to [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ], employing virtual agents as daily assistants for
older adults puts at least two questions in front of us: first, whether
potential users would accept these systems and second, how the
systems can interact with users robustly, while taking into account
the limitations and needs of this user population. [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] showed that
older people who are unfamiliar with technology prefer to interact
with assistive systems in natural language. The most important
features for effective interaction of virtual agents with a user are
discussed in [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ]. Key among them is ease of use, especially for
older individuals with reduced cognitive abilities. Virtual agents
also should have a likable appearance and persona. Moreover, the
system should be able to personalize its behavior according to the
specific needs and preferences of each user. Another important
yet challenging feature is the ability to recover from mistakes,
misunderstandings, and other kinds of errors, and subsequently
resume normal interaction.
      </p>
      <p>
        [
        <xref ref-type="bibr" rid="ref25">25</xref>
        ] found high acceptance of a companionable virtual agent –
albeit WOZ-controlled – among users, when they could talk about
a topic of interest to them. Some systems provide a degree of social
companionship with older users through inquiries about subjects of
interest and providing local information relevant to those interests.
For instance [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] tried single-session interactions where a robot
reads out newspapers according to users’ topics of interest and
asks them some personal questions about their past, attending to
the response by nodding and maintaining eye contact. The survey
results showed that the participants liked the interaction and were
open to further sessions with the system. [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] introduced “Ryan" as
a companion who interprets user’s emotions through verbal and
nonverbal cues. Six individuals who lived alone enjoyed interacting
with Ryan and they felt happy when the robot was keeping them
company. However, they did not find talking to the robot to be
like talking to a person. Another application is to gather health
information (such as blood pressure and exercise regimen) during
the interaction and provide health advice accordingly [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. Unlike
the work reported here, none of these projects makes an attempt
to engage users in topically broad, casual dialogue, to understand
users’ inputs in such dialogues, or to allow for conversational skills
practice, with feedback by the system.
      </p>
      <p>
        Virtual companions have also been proposed for palliative care
– helping people with terminal illnesses reduce stress by steering
them towards topics such as spirituality, or their life story. In an
exploratory study with older adults reported in [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ], 44 users
interacted with a virtual agent about death-related topics such as
last will, funeral preparation and spirituality. They used a tablet
displaying an avatar that used speech and also some nonverbal
signals such as posture and hand gestures; users selected their
input utterances from a multiple-choice menu. The study showed
that people were satisfied with their interaction, found the avatar
understanding and easy to interact with, and were prepared to
continue their conversation with the avatar. Again, however,
systems of this type so far do not actually attempt to extract meaning
from miscellaneous user inputs, apart from answers to inquiries
about specific items of information. Nor do they generally provide
feedback based on observing the user, or provide opportunities for
practicing conversational skills.
      </p>
      <p>
        Other systems have focused on assisting older adults with their
specific daily needs. For instance [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ] introduces a virtual
companion, “Mary", for older adults that assists them in organizing
their daily tasks by responding to their needs such as offering
reminders, guidance with household activities, and locating objects
using the 3D camera. A 12-week interaction between Mary and 20
older adults showed that the companion was accepted very well,
although occasionally verbal misunderstandings and errors caused
some user frustration. The authors note that high expectations by
users may have limited full satisfaction with the system. “Billie" [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]
is another example of a virtual home assistant that helps users
organize their daily schedule. Although the task scope was limited, and
the first study was not completely automatic, Billie was designed to
handle certain challenges often encountered with spoken language
systems, especially with older users, such as verbal
misunderstandings and topic shifts. The virtual agent also used gestures, facial
expressions and head movement for more natural speaking
behavior (for instance for emphasis, or to signal uncertainty). The study
results showed that users can effectively handle the interaction.
However, this system does not track the user’s non-verbal behavior,
and is not capable of casual conversation on various topics. Instead,
it focuses on the specific task of managing the user’s daily calendar.
      </p>
      <p>
        Another approach that has proved effective in ameliorating
loneliness and depression in older people is reminiscence therapy [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
A pilot study aimed at implementing a conversational agent that
collects and organizes memories and stories in an engaging manner
is reported in [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. The authors suggest that a successful
companionable agent needs to possess not only a model of conversation and
of reminiscence, but also generic knowledge about events, habits,
values, relationships, etc., to enable it to respond meaningfully to
reminiscences.
      </p>
      <p>
        Robotic pets are another interesting technology offered for
alleviating loneliness in older people; such pets react to user speech
and petting by producing sounds and eye and body movements.
Studies [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ] show that such interactions can improve users’
communication and interaction skills.
      </p>
      <p>
        In almost all applications of virtual agents, the system needs
to somehow motivate users to engage conversationally with it.
(Robotic pets are an exception.) An interesting study of people’s
willingness to disclose themselves in interacting with an open-domain
voice-based chatbot is presented in [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. The authors discovered
that self-disclosure by the chatbot would motivate self-disclosure
by users. Moreover, initial self-disclosure (or otherwise) by users
is apt to characterize their behavior in the rest of the
conversation. For instance, users who self-disclose initially tend to have
longer turns throughout the rest of the dialogue, and are more apt
to self-disclose in later turns. The authors were unable to confirm a
clear link between a tendency towards self-disclosure and positive
evaluation of the system. However, they found that people enjoy
the conversation more if the agent offers its own backstory.
      </p>
    </sec>
    <sec id="sec-4">
      <title>THE LISSA VIRTUAL AGENT</title>
      <p>The LISSA (Live Interactive Social Skills Assistance) virtual agent
is a human-like avatar (Figure 1) intended to become ubiquitously
available for helping people practice their social skills. LISSA can
engage users in conversation and provide both real-time and
post-session feedback.</p>
    </sec>
    <sec id="sec-5">
      <title>The system</title>
      <p>
        To help users improve their communication skills, LISSA provides
feedback on their nonverbal behavior, including eye contact,
smiling, speaking volume, as well as one verbal feature: the emotional
valence of the word content. In the original implementation of LISSA,
feedback was presented in real time through screen icons that turn
from green to red, indicating that the user should improve the
corresponding nonverbal behavior. Feedback was also presented at the
end of each interaction, via charts and figures providing
information on the user’s performance. However, as both real-time visual
feedback and charts could be cognitively demanding for older users,
the Aging and Engaging version of LISSA offers feedback through
speech and text during the conversation. Furthermore, at the end of
each conversation session, the current system generates a
speech- and text-based summary of the feedback provided during the
dialogue, mentioning users’ strong areas, weaknesses and suggestions
for improvement. More details on how feedback is generated and
presented can be found in [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
    </sec>
    <sec id="sec-6">
      <title>Previous LISSA Studies</title>
      <p>
        Two versions of LISSA were previously designed based on the
needs and limitations of the target users. LISSA showed potential
for impacting college students’ communication skills when it was
tested as a speed-dating partner in a WOZ setting [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], offering
feedback to the user. Subsequently, a fully automatic version was
implemented, designed to help high-functioning teens with autism
spectrum disorder (ASD) to practice their conversational skills.
Experiments were conducted with 8 participants, who were asked to
evaluate the system. The ASD teens not only handled the interaction
well, but also expressed interest in further sessions with LISSA
to improve their social skills [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. These results encouraged us
to improve the system and to prepare for additional experiments
with similar subjects. The quest for additional participants, and for
evaluation of conversational practice with LISSA as an effective
intervention for ASD teens, is still ongoing. At the same time the
preliminary successes with LISSA led to the idea of creating a new
version with a larger topical repertoire, to be tested in multiple
sessions, eventually at home rather than in the lab. The idea was
implemented as the “Aging and Engaging Program".
3.3
      </p>
    </sec>
    <sec id="sec-7">
      <title>Aging and Engaging: The WOZ Study</title>
      <p>Social communication deficits among older adults can cause
significant problems such as social isolation and depression. Technological
interventions are now seen as having potential even for older adults,
as computers, cellphones and tablets have become more accessible
and easier to use. Our most recent version of LISSA is designed to
interact with older users over several sessions, with the goal of
helping them improve their communication skills. The system offers low
technological complexity and the feedback is presented in a format
imposing minimum cognitive load. The interface was designed with
the assistance of experts working with geriatric patients, focusing
on 12 older adults.</p>
      <p>The program is designed so that each user becomes engaged
in casual talk with the system, where two or three times during
the conversation, the system suspends the conversation and
comments on the user’s weaknesses and strengths, and suggests ways
of improving on the weaker aspects. Also at the end of the
conversation, the system briefs the user on what they went through and
summarizes the previously offered advice.</p>
      <p>
        To evaluate the program’s potential effectiveness, we conducted
a one-shot study with 25 older adults (all more than 60 years old,
average age 67, 25% male), using a WOZ setting for handling the
conversation. The results showed that the participants’ speaking
times in response to questions, as well as the amount of positive
feedback, increased gradually in the course of conversation. At the
same time, participants found the feedback useful, easy to interpret,
and fairly accurate, and expressed their interest in using the system
at home. More details on the study can be found in [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
    </sec>
    <sec id="sec-8">
      <title>Multi-session Automatic Interaction with</title>
    </sec>
    <sec id="sec-9">
      <title>Older Adults</title>
      <p>Based on the outcome of the WOZ studies, we designed a
multisession study where participants can interact with the system in
their homes at their convenience. The purpose is to study the
acceptability of a longer-term interaction and its impact on users’
communication skills. The design enables each participant to
engage in ten conversation sessions with the avatar. The first and the
last sessions are held in the lab, where users fill out surveys and are
evaluated for their communication skills by experts. The rest of the
sessions are held in users’ homes where they need to have access
to a laptop or a personal computer. Each interaction with LISSA
consists of three subsessions where LISSA leads a natural casual
conversation with the user. Users receive brief feedback on their
behavior, during two breaks between subsessions. At the end, users
are provided with a behavioral summary, and some suggestions on
how they can improve their communication skills. In our studies
so far, nine participants used the avatar for conversation practice,
while ten participants were assigned to a control group.</p>
      <p>We collected the conversation transcripts from the participants
assigned to LISSA in order to assess the quality of the dialogues.
We discuss the evaluation process and results in later sections.</p>
      <p>In the next section, we provide some details on how the system
manages an engaging conversation with the user, along with an
evaluation of the quality of the conversations gathered. Figure 2
shows a portion of a conversation between a user and LISSA in the
first session of interaction.</p>
    </sec>
    <sec id="sec-10">
      <title>THE LISSA DIALOGUE MANAGER</title>
      <p>In order to handle a smooth natural conversation on everyday topics
with a user, the dialogue manager (DM) needs to follow a
coherent plan around a topic. It needs to extract essential information
even from relatively wordy inputs, and produce relevant comments
demonstrating its understanding of the user’s turns. The DM also
needs a way to respond to user questions, and to update its plan if
this is necessitated by a user input; for instance, if a planned query
to the user has already been answered by some part of a user’s
input, that query should be skipped. The LISSA DM is capable of
such plan edits, as well as expansion of steps into subplans. We
now explain the DM structure along with the content we provided
for the Aging and Engaging program.</p>
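      <p>The plan-edit behavior described above (skipping a planned query that an earlier user input has already answered) can be sketched as follows. This is an illustrative toy, not LISSA's actual implementation; the step representation, the <monospace>prune_plan</monospace> helper, and the topic-matching test are all invented for the example.</p>

```python
# Hypothetical sketch of one kind of plan edit: before asking a planned
# question, the DM checks whether earlier user input already covered its
# topic, and drops the step if so. Structure and names are illustrative.

def prune_plan(plan, answered_topics):
    """Drop planned queries whose topic the user has already addressed."""
    return [step for step in plan
            if not (step["act"] == "ask" and step["topic"] in answered_topics)]

plan = [
    {"act": "ask", "topic": "hometown"},
    {"act": "ask", "topic": "hobbies"},
    {"act": "self-disclose", "topic": "hobbies"},
]
```

      <p>For example, if the user has already said where they are from, <monospace>prune_plan(plan, {"hometown"})</monospace> would remove the first query and keep the remaining two steps.</p>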
    </sec>
    <sec id="sec-11">
      <title>The DM Structure</title>
      <p>
        Our approach to dialogue management is based on the hypothesis
that human cognition and behavior rely to a great extent on
dynamically modifiable schemas [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ] [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ] and on hierarchical pattern
recognition/transduction.
      </p>
      <p>Accordingly, the dialogue manager we have designed follows
a flexible, modifiable dialogue schema, specifying a sequence of
intended and expected interactions with the user, subject to change
as the interaction proceeds. The formal assertions in the body of
the schema express either actions intended by the agent, or inputs
expected from the user. These events are instantiated in the course
of the conversation. This is a simple matter for explicitly specified
utterances by the agent, but the expected inputs from the user are
usually specified very abstractly, and become particularized as a
result of input interpretation. Schemas can be extended to allow for
specification of participant types, preconditions, concurrent
conditions, partial action/event ordering, conditionals, and iteration.
These more general features are still in the early stages of
implementation, but so far have not been needed for the present
application.</p>
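      <p>A minimal sketch of such a schema, under our own illustrative encoding rather than LISSA's actual representation, is an ordered list of explicit agent utterances and abstractly specified expected user inputs, with the abstract steps instantiated from real input as the conversation proceeds:</p>

```python
# Illustrative dialogue schema: explicit LISSA utterances interleaved with
# abstract expectations about user input (the "?x ..." forms). The names,
# topics, and walk function are invented for this sketch.

SCHEMA = [
    ("lissa", "say", "Have you lived in Rochester for a long time?"),
    ("user", "reply", "?x tells how long they have lived here"),  # abstract
    ("lissa", "react", None),  # reaction derived from the user's gist clause
    ("lissa", "say", "What do you like most about the area?"),
    ("user", "reply", "?x describes what they like about the area"),
    ("lissa", "react", None),
]

def run(schema, get_user_input, interpret, react):
    """Walk the schema, particularizing abstract user steps from real input."""
    history = []
    for agent, act, content in schema:
        if agent == "lissa" and act == "say":
            history.append(("lissa", content))
        elif agent == "user":
            utterance = get_user_input()
            # Context for interpretation is LISSA's preceding question.
            gist = interpret(history[-1][1], utterance)
            history.append(("user", gist))
        elif agent == "lissa" and act == "react":
            history.append(("lissa", react(history[-1][1])))
    return history
```

      <p>The interpretation and reaction functions are deliberately left as parameters here; in LISSA they correspond to the pattern-transduction stages described in the following paragraphs.</p>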
      <p>Based on the main conversational schema, LISSA leads the main
flow of the conversation by asking the user various questions.
Following each user response, LISSA might show one of the following
behaviors:
• Making a relevant comment on the user’s input;
• Responding to a question, if the user asked one (typically, at
the end of the input);
• Instantiating a subdialogue, if the user switched to an
“off-track” topic (an unexpected question may have this effect as
well).</p>
      <p>A dialogue segment between LISSA and one user in the Aging
program can be seen in Figure 2.</p>
      <p>Throughout the conversation, the replies of the user are
transduced into simple, explicit, largely context-independent English
sentences. We call these gist-clauses, and they provide much of the
power of our approach. The DM interprets each user’s input in the
context of LISSA’s previous question, using this question to select
pattern transduction hierarchies relevant to interpreting the user’s
response. It then applies the rules in the selected hierarchies to
derive one or more gist-clauses from the user’s input. The
transduction trees are designed so that we extract as much information as
we can from a user input. The terminal nodes in the transduction
trees are templates that can output gist-clauses. As an example of a
gist clause, if LISSA asks "Have you seen the new Star Wars movie?"
and the user answers "Yes, I have.", the output of the choice tree for
interpreting this reply would be something like "I have seen the
new Star Wars movie."</p>
      <p>
        The system then applies hierarchical pattern transduction
methods to the gist-clauses derived from the user’s input to generate
a specific verbal reaction. In particular, each of the gist clauses is
matched against a set of relevant gist clause forms in a
transduction tree, and when a match is found, a corresponding reaction is
constructed as output. This reaction could refer to specific phrases
mentioned by the user or to topics abstracted from them, making
the reaction more meaningful. Figure 3 shows an overview of the
dialogue manager. The most important stratagem in our approach
is the use of questions asked by LISSA as context for transducing
a gist-clause interpretation of the user’s answer, and in turn, the
use of those gist-clause interpretations as context for transducing
a meaningful reaction by LISSA to the user’s answer (or multiple
reactions, for instance if the user concludes an answer with a
reciprocal question). The interesting point is that in gist-clause form,
question-answer-reaction triples are virtually independent of the
conversational context, and this leads to “portability" of the
infrastructure for many such triples from one kind of (casual) dialogue
to another. More details can be found in [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ].
      </p>
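      <p>The reaction stage admits an equally small sketch. Each gist clause is matched against gist-clause forms, and the matching template constructs the reaction; because both sides of every rule are explicit, context-independent sentences, such rules port easily between casual dialogues. The two rules below are invented examples, not the system’s actual trees.</p>

```python
import re

# Hypothetical gist-clause forms paired with reaction templates.
REACTION_TREE = [
    (r"I have seen (?P<np>.+)\.", "What did you think of {np}?"),
    (r"I have not seen (?P<np>.+)\.", "I hear {np} is worth seeing."),
]

def react(gist: str) -> str:
    """Construct a verbal reaction from a gist-clause interpretation."""
    for pattern, template in REACTION_TREE:
        m = re.fullmatch(pattern, gist)
        if m:
            return template.format(**m.groupdict())
    return "That's interesting."  # generic fallback reaction

print(react("I have seen the new Star Wars movie."))
```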
    </sec>
    <sec id="sec-12">
      <title>The Content</title>
      <p>To enable a smooth, meaningful conversation with the user, the
pattern transduction trees need to be designed carefully and equipped
with appropriate patterns at the nodes. For the Aging and
Engaging program we needed ten interactions between LISSA and each
user. Each interaction consists of three subsessions, and each
subsession contains 3-5 questions to the user, along with sporadic
self-disclosures by LISSA. In these disclosures LISSA presents
herself as a 65-year-old widow who moved to the city a few years ago
to live with her daughter, and relates relevant information or
experiences of hers. LISSA leads the conversation by opening topics,
stating some personal thoughts and encouraging user inputs by
asking questions. Upon receiving such inputs, LISSA makes relevant
comments and expresses her thoughts and emotions.</p>
      <p>LISSA’s contributions to the dialogues, including the choice of
the character and her background, were meticulously designed
by four trained research assistants (RAs), in extensive
consultation with gerontologists with expertise in interventions. Details
of her character, interests, beliefs and thoughts were designed and
inserted into the interaction along the way. The contents of the
DM’s transduction trees were designed based on the suggestions
provided by our expert collaborators, as well as on the experience
gathered from previous LISSA-user interactions in the WOZ study.</p>
      <p>
        Since we planned for ten interactions between LISSA and each
user, we collected a list of 30 topics that could be of interest to
our target group. The gerontological experts divided the topics
into three groups based on their emotional intensity or degree of
intimacy: easy, medium, and hard. All the topics were ones that
older people would encounter in their daily lives. While the “easy"
ones involve little personal disclosure or intimacy and are typical of
conversations they might have at a senior center with new people
or at a dining hall in a senior living community, the harder
conversations contain more emotionally evocative topics. Table 1 shows
a list of topics in each group. Research shows that the social bond
between users and a virtual agent can increase the effectiveness of
the virtual agent in different tasks such as tutoring [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] and health
coaching [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. According to [
        <xref ref-type="bibr" rid="ref14">14</xref>
], one way to increase the rapport
between the user and a virtual assistant over time is to encourage
self-disclosure by the user. We therefore designed the dialogues so as to
gradually increase the level of intimacy of the conversational topics.
We designed each conversation for smooth topic transitions from
easier topics in the first session to progressively more
challenging topics in the later sessions. Figure 4 shows how the emotional
intensity increases as we move through the study.
      </p>
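      <p>A minimal sketch of this schedule, assuming (hypothetically) an even split of ten topics per difficulty group; the placeholder names stand in for the study’s actual topics:</p>

```python
# Placeholder topic lists, ordered by the experts' intensity grouping.
easy = [f"easy-{i}" for i in range(10)]
medium = [f"medium-{i}" for i in range(10)]
hard = [f"hard-{i}" for i in range(10)]

# Ten sessions of three topics each, with emotional intensity
# rising from the first session to the last.
ordered = easy + medium + hard
sessions = [ordered[3 * k: 3 * k + 3] for k in range(10)]

print(sessions[0])  # earliest session draws only easy topics
print(sessions[9])  # final session draws the most demanding topics
```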
      <p>It is noteworthy that the design and implementation of the
infrastructure for the 30 topical dialogues represented an
order-of-magnitude scale-up from the previous dialogue designs for
speed-dating and for interaction with autistic teens. This testifies to the
effectiveness of our use of schemas, transduction hierarchies, and
interpretations in the form of gist clauses for rapid dialogue
implementation. The individual topics were mostly implemented by
three of the authors, including two undergraduates, and each typically
took about half a day or a day to complete.</p>
    </sec>
    <sec id="sec-13">
      <title>CONVERSATION EVALUATION</title>
      <p>To evaluate the efficacy of the DM system, we designed a procedure
to assess the quality of the conversations from a third-person point
of view. We compared conversation transcripts from the initial
WOZ-based study with transcripts from the fully automatic
dialogue system. To do this, 16 conversation transcripts were chosen,
each consisting of a single session with three subtopics. Eight transcripts
(the first eight that became available) were drawn from the WOZ study,
and the other eight from the automated sessions. Both sets
were taken from the first-day interactions between users and
LISSA. The topics covered in the two sets were closely matched,
so the comparison was fair. These transcripts
were de-identified, assigned random numerical labels, and cleaned
to follow a uniform format.</p>
      <p>The transcripts were then assigned randomly to 6 RAs who were
blind to the study condition, with each RA being responsible for
assessing 8 transcripts. Each RA was tasked to rate each assigned
transcript on six criteria related to the quality of the conversation,
which are shown in Table 2. Each criterion was independently rated
on a Likert scale from 1 (not at all) to 5 (completely). No other
guidance was provided to the RAs; rather they were directed to make
natural judgments for each criterion based on the transcript alone.
The rating results showed good internal consistency (Cronbach’s
alpha = 0.89). Ratings across RAs were then averaged for each
transcript to represent the consensus score for that transcript. The
average rating and one standard deviation for the WOZ study
compared to the automated LISSA system are shown for each criterion
in Figure 5.</p>
      <p>As can be seen in the evaluation results, the automated LISSA
system was able to achieve fairly high ratings for each of the
criteria. This suggests that the system was able to hold conversations
that were of good overall quality. Compared to the WOZ study,
the automated system was able to achieve slightly higher ratings,
although none of the gains were sufficiently large to be statistically
significant. Nonetheless, these results suggest that the automated
LISSA system is capable of producing conversations that are
approximately of the same quality as the human-operated (i.e., WOZ)
system.</p>
      <p>The largest difference between the WOZ ratings and the
automated LISSA ratings was in the “On Track" metric, where the
average LISSA rating was half a point higher than the WOZ average.
Again, this improvement is not statistically significant, given the
variance and the small number of samples, but it is consistent
with the overall design of the system. Since the DM was designed
to follow a well-established conversation plan, while also being
equipped with mechanisms to handle off-topic input and then
return to the main track, we expected the system to be particularly
effective at remaining on track compared to the human-operated
system.</p>
      <p>
        The good scores we achieved for the “Encouraging" and “Polite"
metrics strike us as significant. They suggest that the guidance
provided by psychiatric experts on the design of the topics,
questions, and comments was advantageous. Additionally, our strategy
of including self-disclosure and a backstory in the LISSA dialogues
was probably a factor in the successful performance of our system.
As mentioned in the Related Work section, self-disclosure by an
avatar tends to encourage self-disclosure by the users [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ], and
enjoyment is increased if the avatar is provided with a backstory.
      </p>
    </sec>
    <sec id="sec-14">
      <title>USABILITY OF THE SYSTEM</title>
      <p>The results discussed above indicate that LISSA’s dialogue system is
functioning well. Confirmation of LISSA’s potential for improving
users’ communication skills will require additional study, including:
transcription and analysis of the ASR files from the at-home
sessions; expert evaluation of users’ communication behaviors in the
initial and final lab interviews and surveys that were conducted;
and additional, longer-range studies with participants in the target
population. However, we have been able to obtain some insights
into people’s feelings about their interaction with LISSA after
completing all 10 sessions.</p>
      <p>The surveys administered at the end asked participants
numerous questions about their assessment of LISSA, including various
aspects of the nonverbal feedback they had received throughout
the sessions. A subset of questions of interest here concerned the
usability of the system. In particular, participants were asked to
rate the following statements:
• I thought the program was easy to use.
• I would imagine that most people would learn to use this
program very quickly.
• I felt very confident using the program.
• Overall, I would rate the user-friendliness of this product
as:...</p>
      <p>Responses were given on a five-point scale ranging from "strongly
disagree" (=1) to "strongly agree" (=5).</p>
      <p>The results from nine participants can be seen in Table 3.</p>
      <p>The results from nine participants showed an average score of
4.33 (sd = 0.67) for the first statement, meaning that they found
the system easy to use. The average score on the third statement
indicates that participants felt no anxiety in handling the interaction
at home on their own; and along similar lines, the “user-friendliness"
rating of 4.56 (sd = 0.50) suggests that users had no real difficulties
in dealing with the system.</p>
      <p>We also included some open-ended questions about users’
opinions of the system and their experience with it. One question
asked what they liked about the system. Some dialogue-related
responses were the following:
- “having someone to talk to, even though it was a computer"
- “starting out with a topic that fit in well with lifestyle that allowed
responses and conversation, and providing feedback about responses"
- “I liked her personality and her calmness"
- “questions were relatable and answerable"
- “conversational topics were fitting, though at times seemed more
elementary"
- “simple and straightforward"
- “interfacing with LISSA was like having someone in my home
visiting"
- “the topics seemed general enough and enjoyed talking about them;
helped with self-reflection and have deeper conversations rather than
just shallow, surface-level conversations"</p>
      <p>When asked what they didn’t like about the system, few users
registered any complaints about the dialogue per se. One participant
wanted longer conversations, commenting: “a bit more time [should
be] devoted to the conversations - ask 4-5 questions (rather than just
3) to help participants feel more comfortable with the conversation".
One commented that “having humans instead of computers may be
more motivating" – a salient reminder that computers remain tools,
not surrogate humans.
</p>
    </sec>
    <sec id="sec-15">
      <title>FUTURE WORK</title>
      <p>We plan to use the conversational data collected from multi-session
interactions between LISSA and older users to gain a deeper
understanding of certain key aspects of these interactions. These aspects
include the following:
• Verbosity: We plan to study the level of verbosity of
participants from session to session, and also the variability
of verbosity among participants. Questions of interest are
whether verbosity is dependent on the user’s personality,
the topic and questions asked, and other factors; and also
whether verbosity is correlated with a positive view of the
system.
• Self-disclosure: Self-disclosure is considered a sign of
conversational engagement and trust. We intend to use known
techniques for measuring self-disclosure in a dialogue to
assess whether the extent of self-disclosure is more dependent
on the user’s personality or the inputs from the avatar; and,
how self-disclosure might correlate with the user’s mood,
social skills and evaluation of the system.
• Sentiment: As this program targets older adults who are at
risk of isolation and depression, we want to know more about
users’ moods, based on the emotional valence of their inputs.
We plan to track sentiment over the course of successive
sessions, taking into account the context of the dialogue
(e.g., the topic under discussion). We should then be able to
detect any correlations between users’ moods and their
evaluation of the system at various points.</p>
    </sec>
    <sec id="sec-16">
      <title>CONCLUSION</title>
      <p>We introduced the design and implementation of a spoken dialogue
manager for handling multi-session interactions with older adults.
The dialogue manager leads engaging conversations with users on
various everyday and personal topics. Users receive feedback on
their nonverbal behaviors including eye contact, smiling, and head
motion, as well as emotional valence. The feedback is presented
at two topical transition points within a conversation as well as
at the end. The goal of the system is to help users improve their
communication skills.</p>
      <p>The DM was implemented based on a framework proposed in
previous work, employing flexible schemas for planned and
anticipated events, and hierarchical pattern transductions for deriving
“gist clause" interpretations and responses. We prepared the
dialogue manager for 10 interaction sessions, each covering 3 topics.
Our DM framework allowed quite rapid development of the 30
topics. The content was adapted to the needs and limitations of
older adults, based on collaboration with an expert advisory panel
of professionals at our affiliated medical research center. A broad
spectrum of topics was established, and the sessions were designed
to progress from emotionally undemanding ones to ones calling
for greater emotional disclosure.</p>
      <p>We ran a study including 8 participants interacting with the
virtual agent. To evaluate the quality of the conversations, we
compared the conversations of these participants in the initial in-lab
sessions with 8 such conversations where LISSA outputs were selected
by a human wizard. The transcripts were randomly assigned to 6
RAs, who rated the transcripts based on 6 features of high-quality
conversation. The results showed high ratings for the automatic
system, even slightly better than wizard-moderated interactions.
The overall usability evaluation of the system by users who
completed the series of sessions shows that users found the system easy
to use and rated it as definitely user-friendly. Further studies of
interaction quality and effects on users, based (among other data)
on ASR transcripts of the at-home sessions, are in progress.</p>
    </sec>
    <sec id="sec-17">
      <title>ACKNOWLEDGMENTS</title>
      <p>The work was supported by DARPA CwC subcontract W911NF-15-1-0542.
We would also like to thank Professor Paul Duberstein and
Shuwen Zhang for their helpful contributions.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <fpage>2017</fpage>
          .
          <article-title>Profile of Older Americans</article-title>
          . https://acl.gov/sites/default/files/Aging% 20and%
          <article-title>20Disability%20in%20America/2017OlderAmericansProfile</article-title>
          .pdf
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <fpage>2017</fpage>
          . World Population Prospects:
          <article-title>The 2017 Revision</article-title>
          . https://esa.un.org/unpd/ wpp/Publications/Files/WPP2017_KeyFindings.pdf
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>Hojjat</given-names>
            <surname>Abdollahi</surname>
          </string-name>
          , Ali Mollahosseini,
          <string-name>
            <surname>Josh T Lane</surname>
          </string-name>
          , and
          <string-name>
            <surname>Mohammad H Mahoor</surname>
          </string-name>
          .
          <year>2017</year>
          .
          <article-title>A pilot study on using an intelligent life-like robot as a companion for elderly individuals with dementia and depression</article-title>
          .
          <source>In Humanoid Robotics (Humanoids)</source>
          ,
          <source>2017 IEEE-RAS 17th International Conference on. IEEE</source>
          ,
          <fpage>541</fpage>
          -
          <lpage>546</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>Mohammad</given-names>
            <surname>Rafayet</surname>
          </string-name>
          <string-name>
            <given-names>Ali</given-names>
            , Dev Crasta, Li Jin, Agustin Baretto, Joshua Pachter,
            <surname>Ronald D Rogge</surname>
          </string-name>
          , and Mohammed Ehsan Hoque.
          <year>2015</year>
          .
          <article-title>LISSA - Live Interactive Social Skill Assistance</article-title>
          .
          <source>In Affective Computing and Intelligent Interaction (ACII)</source>
          ,
          <source>2015 International Conference on. IEEE</source>
          ,
          <fpage>173</fpage>
          -
          <lpage>179</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Mohammad</given-names>
            <surname>Rafayet</surname>
          </string-name>
          <string-name>
            <given-names>Ali</given-names>
            , Kimberly Van Orden,
            <surname>Kimberly Parkhurst</surname>
          </string-name>
          , Shuyang Liu,
          <string-name>
            <surname>Viet-Duy Nguyen</surname>
            , Paul Duberstein, and
            <given-names>M Ehsan</given-names>
          </string-name>
          <string-name>
            <surname>Hoque</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Aging and Engaging: A Social Conversational Skills Training Program for Older Adults</article-title>
          .
          <source>In 23rd International Conference on Intelligent User Interfaces. ACM</source>
          ,
          <volume>55</volume>
          -
          <fpage>66</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Patricia</surname>
            <given-names>A Arean</given-names>
          </string-name>
          , Michael G Perri, Arthur M Nezu, Rebecca L Schein,
          <string-name>
            <surname>Frima Christopher</surname>
          </string-name>
          , and
          <string-name>
            <surname>Thomas</surname>
            <given-names>X</given-names>
          </string-name>
          <string-name>
            <surname>Joseph</surname>
          </string-name>
          .
          <year>1993</year>
          .
          <article-title>Comparative effectiveness of social problem-solving therapy and reminiscence therapy as treatments for depression in older adults</article-title>
          .
          <source>Journal of consulting and clinical psychology 61</source>
          ,
          <issue>6</issue>
          (
          <year>1993</year>
          ),
          <fpage>1003</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>GUIDE</given-names>
            <surname>Consortium</surname>
          </string-name>
          .
          <year>2011</year>
          .
          <article-title>User interaction and application requirements - deliverable D2.1</article-title>
          . (
          <year>2011</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>Stefan</given-names>
            <surname>Kopp</surname>
          </string-name>
          , Mara Brandt, Hendrik Buschmeier, Katharina Cyra, Farina Freigang, Nicole Krämer, Franz Kummert, Christiane Opfermann, Karola Pitsch,
          <string-name>
            <given-names>Lars</given-names>
            <surname>Schillingmann</surname>
          </string-name>
          , et al.
          <year>2018</year>
          .
          <article-title>Conversational Assistants for Elderly Users-The Importance of Socially Cooperative Dialogue</article-title>
          .
          <source>In AAMAS Workshop on Intelligent Conversation Agents in Home and Geriatric Care Applications.</source>
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>Christine</given-names>
            <surname>Lisetti</surname>
          </string-name>
          , Reza Amini, Ugan Yasavur, and
          <string-name>
            <given-names>Naphtali</given-names>
            <surname>Rishe</surname>
          </string-name>
          .
          <year>2013</year>
          .
          <article-title>I can help you change! an empathic virtual agent delivers behavior change health interventions</article-title>
          .
          <source>ACM Transactions on Management Information Systems (TMIS) 4</source>
          ,
          <issue>4</issue>
          (
          <year>2013</year>
          ),
          <fpage>19</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>Michael</given-names>
            <surname>Madaio</surname>
          </string-name>
          , Kun Peng, Amy Ogan, and
          <string-name>
            <given-names>Justine</given-names>
            <surname>Cassell</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>A climate of support: a process-oriented analysis of the impact of rapport on peer tutoring</article-title>
          .
          <source>In Proceedings of the 12th International Conference of the Learning Sciences (ICLS).</source>
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Juliana</surname>
            <given-names>Miehle</given-names>
          </string-name>
          , Ilker Bagci, Wolfgang Minker, and
          <string-name>
            <given-names>Stefan</given-names>
            <surname>Ultes</surname>
          </string-name>
          .
          <year>2019</year>
          .
          <article-title>A Social Companion and Conversational Partner for the Elderly</article-title>
          .
          <source>In Advanced Social Interaction with Agents</source>
          . Springer,
          <fpage>103</fpage>
          -
          <lpage>109</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Svetlana</surname>
            <given-names>Nikitina</given-names>
          </string-name>
          , Sara Callaioli, and
          <string-name>
            <given-names>Marcos</given-names>
            <surname>Baez</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Smart conversational agents for reminiscence</article-title>
          . arXiv preprint arXiv:
          <year>1804</year>
          .
          <volume>06550</volume>
          (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Risako</surname>
            <given-names>Ono</given-names>
          </string-name>
          , Yuki Nishizeki, and Masahiro Araki. [n. d.].
          <article-title>Virtual Dialogue Agent for Supporting a Healthy Lifestyle of the Elderly</article-title>
          .
          <source>In IWSDS.</source>
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Florian</surname>
            <given-names>Pecune</given-names>
          </string-name>
          , Jingya Chen, Yoichi Matsuyama, and
          <string-name>
            <given-names>Justine</given-names>
            <surname>Cassell</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Field Trial Analysis of Socially Aware Robot Assistant</article-title>
          .
          <source>In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems</source>
          .
          <source>International Foundation for Autonomous Agents and Multiagent Systems</source>
          ,
          <volume>1241</volume>
          -
          <fpage>1249</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>Abhilasha</given-names>
            <surname>Ravichander</surname>
          </string-name>
          and Alan W Black.
          <year>2018</year>
          .
          <article-title>An Empirical Study of Self-Disclosure in Spoken Dialogue Systems</article-title>
          .
          <source>In Proceedings of the 19th Annual SIGdial Meeting on Discourse and Dialogue</source>
          .
          <volume>253</volume>
          -
          <fpage>263</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>Seyedeh</given-names>
            <surname>Zahra</surname>
          </string-name>
          <string-name>
            <given-names>Razavi</given-names>
            , Mohammad Rafayet Ali,
            <surname>Tristram H Smith</surname>
          </string-name>
          ,
          <article-title>Lenhart K Schubert,</article-title>
          and Mohammed Ehsan Hoque.
          <year>2016</year>
          .
          <article-title>The LISSA Virtual Human and ASD Teens: An Overview of Initial Experiments</article-title>
          .
          <source>In International Conference on Intelligent Virtual Agents</source>
          . Springer,
          <fpage>460</fpage>
          -
          <lpage>463</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>Seyedeh</given-names>
            <surname>Zahra</surname>
          </string-name>
          <string-name>
            <surname>Razavi</surname>
          </string-name>
          , Lenhart K Schubert, Mohammad Rafayet Ali, and Mohammed Ehsan Hoque.
          <year>2017</year>
          .
          <article-title>Managing Casual Spoken Dialogue Using Flexible Schemas, Pattern Transduction Trees, and Gist Clauses</article-title>
          (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <surname>Roger</surname>
            <given-names>C</given-names>
          </string-name>
          <string-name>
            <surname>Schank and Robert P Abelson</surname>
          </string-name>
          .
          <year>2013</year>
          .
          <article-title>Scripts, plans, goals, and understanding: An inquiry into human knowledge structures</article-title>
          . Psychology Press.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <surname>Nava</surname>
            <given-names>A</given-names>
          </string-name>
          <string-name>
            <surname>Shaked</surname>
          </string-name>
          .
          <year>2017</year>
          .
          <article-title>Avatars and virtual agents-relationship interfaces for the elderly</article-title>
          .
          <source>Healthcare technology letters 4</source>
          ,
          <issue>3</issue>
          (
          <year>2017</year>
          ),
          <fpage>83</fpage>
          -
          <lpage>87</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>Archana</given-names>
            <surname>Singh</surname>
          </string-name>
          and
          <string-name>
            <given-names>Nishi</given-names>
            <surname>Misra</surname>
          </string-name>
          .
          <year>2009</year>
          .
          <article-title>Loneliness, depression and sociability in old age</article-title>
          .
          <source>Industrial psychiatry journal 18</source>
          ,
          <issue>1</issue>
          (
          <year>2009</year>
          ),
          <fpage>51</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <surname>Huei-Chuan</surname>
            <given-names>Sung</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shu-Min</surname>
            <given-names>Chang</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mau-Yu Chin</surname>
          </string-name>
          , and
          <string-name>
            <surname>Wen-Li Lee</surname>
          </string-name>
          .
          <year>2015</year>
          .
          <article-title>Robot-assisted therapy for improving social interactions and activity participation among institutionalized older adults: A pilot study</article-title>
          .
          <source>Asia-Pacific Psychiatry 7</source>
          ,
          <issue>1</issue>
          (
          <year>2015</year>
          ),
          <fpage>1</fpage>
          -
          <lpage>6</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <surname>Christiana</surname>
            <given-names>Tsiourti</given-names>
          </string-name>
          , Maher Ben Moussa, João Quintas, Ben Loke, Inge Jochem, Joana Albuquerque Lopes, and
          <string-name>
            <given-names>Dimitri</given-names>
            <surname>Konstantas</surname>
          </string-name>
          .
          <year>2016</year>
          .
          <article-title>A virtual assistive companion for older adults: design implications for a real-world application</article-title>
          .
          <source>In Proceedings of SAI Intelligent Systems Conference</source>
          . Springer,
          <fpage>1014</fpage>
          -
          <lpage>1033</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>Dina</given-names>
            <surname>Utami</surname>
          </string-name>
          , Timothy Bickmore, Asimina Nikolopoulou, and Michael Paasche-Orlow
          .
          <year>2017</year>
          .
          <article-title>Talk about death: End of life planning with a virtual agent</article-title>
          .
          <source>In International Conference on Intelligent Virtual Agents</source>
          . Springer,
          <fpage>441</fpage>
          -
          <lpage>450</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>Teun A.</given-names>
            <surname>Van Dijk</surname>
          </string-name>
          and
          <string-name>
            <given-names>Walter</given-names>
            <surname>Kintsch</surname>
          </string-name>
          .
          <year>1983</year>
          .
          <article-title>Strategies of Discourse Comprehension</article-title>
          . Academic Press, New York.
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>Laura</given-names>
            <surname>Pfeifer Vardoulakis</surname>
          </string-name>
          , Lazlo Ring, Barbara Barry, Candace L Sidner, and
          <string-name>
            <given-names>Timothy</given-names>
            <surname>Bickmore</surname>
          </string-name>
          .
          <year>2012</year>
          .
          <article-title>Designing relational agents as long term social companions for older adults</article-title>
          .
          <source>In International Conference on Intelligent Virtual Agents</source>
          . Springer,
          <fpage>289</fpage>
          -
          <lpage>302</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>Ramin</given-names>
            <surname>Yaghoubzadeh</surname>
          </string-name>
          , Marcel Kramer, Karola Pitsch, and
          <string-name>
            <given-names>Stefan</given-names>
            <surname>Kopp</surname>
          </string-name>
          .
          <year>2013</year>
          .
          <article-title>Virtual agents as daily assistants for elderly or cognitively impaired people</article-title>
          .
          <source>In International Workshop on Intelligent Virtual Agents</source>
          . Springer,
          <fpage>79</fpage>
          -
          <lpage>91</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>