<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
<journal-title>Workshops, Los Angeles, USA, March</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Building Real-World Chatbot Interviewers: Lessons from a Wizard-of-Oz Field Study</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Michelle X. Zhou</string-name>
<email>mzhou@juji-inc.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Carolyn Wang</string-name>
          <email>carolynw@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Gloria Mark</string-name>
          <email>gmark@uci.edu</email>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Huahai Yang</string-name>
<email>hyang@juji-inc.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Kevin Xu</string-name>
          <email>xukevinwork@gmail.com</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
<institution>Columbia University</institution>
          ,
          <addr-line>New York, NY</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
<institution>Juji, Inc., San Jose, CA</institution>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
<institution>University of Pennsylvania</institution>
          ,
          <addr-line>Philadelphia, PA</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>University of California</institution>
          ,
<addr-line>Irvine, CA</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2019</year>
      </pub-date>
      <volume>20</volume>
      <issue>2019</issue>
      <abstract>
<p>We present a Wizard-of-Oz field study in which a human-assisted chatbot interviewed 53 actual job applicants, each in a 30-minute, text-based conversation. A detailed analysis of the chat transcripts and user feedback revealed users' likes and dislikes of the chatbot, as well as the patterns of their interaction with the chatbot. Our findings yield a set of practical design suggestions for building effective, real-world chatbot interviewers that appear intelligent even with limited NLP or conversational capabilities.</p>
      </abstract>
      <kwd-group>
<kwd>Chatbot</kwd>
        <kwd>AI Interviewer</kwd>
        <kwd>Personality Inference</kwd>
<kwd>Wizard-of-Oz study</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>-</title>
      <p>CCS CONCEPTS
• Computing Methodologies → Intelligent Agents •
Human-centered computing → Interactive systems and tools</p>
      <p>
        INTRODUCTION
From recruitment to user research, interviewing is a key
technique used to collect information from a target
audience. Human-driven interviewing however cannot scale to
large numbers and introduces potential biases [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. To
address the challenges, researchers have built intelligent
agents as interviewers (e.g., [
        <xref ref-type="bibr" rid="ref12 ref14 ref17">4, 13, 15, 18</xref>
        ]). Despite the
obvious benefits, building such an agent for real-world use
is nontrivial because it must cover varied interview
questions and handle diverse user responses [4].
      </p>
      <p>Since natural language processing (NLP) techniques are far
from perfect and it is unclear what users’ expectations and
behavior would be in real-world interview situations, we
built a Wizard-of-Oz (WoZ) interviewing system to better
understand the associated user and technical requirements.</p>
      <p>Copyright © 2019 for the individual papers by the papers' authors. Copying permitted for private and academic purposes. This volume is published and copyrighted by its editors.</p>
      <p>
        This system lets a human operator (wizard) drive a
text-based chat with a user (Figure 1). Inspired by the work in
[
        <xref ref-type="bibr" rid="ref12">13</xref>
        ], we also wanted to observe how the use of a
personality engine might affect user interactions with a
human-assisted chatbot interviewer. We thus incorporated a
personality engine into the WoZ system to automatically infer
a user’s Big 5 personality traits based on the user’s text
given during the interview. We deployed the system to
interview 53 real job applicants who applied to an internship
program. We analyzed about 1600 minutes of interview
transcripts to answer two sets of questions:
• Q1 What were users’ impressions of a chatbot interviewer?
§ What did users like about the chatbot?
§ What did users dislike about the chatbot?
• Q2 How did users interact with a chatbot interviewer?
§ What were the characteristics of user interactions?
§ Which interactions could be supported in practice?
The first set of questions is to understand users’ perceptions
of a human-assisted chatbot interviewer, especially what
they like and dislike. The second set of questions is to
uncover a set of practical features that make a chatbot
intelligent even with limited NLP or conversation capabilities.
As we discuss shortly, the answers to the above questions
reveal new insights into user interactions with a chatbot
interviewer in a real-world application. Moreover, the
answers help formulate a set of practical design suggestions
for building effective, real-world chatbot interviewers.
</p>
      <p>[Figure 1: excerpt of a chat with the chatbot Agatha.
Agatha: “I took the liberty to analyze your personality from the text in your social media account. See it on your right.”
User: “Hm, I think there’s a bug”
Agatha: “What is a bug?”
User: “It says data source twitter, and there are no words showing up.”]</p>
      <p>
        RELATED WORK
Our work is inspired by recent efforts on using virtual
interviewers to aid in information collection. For example, Li
et al. show that actual job candidates are willing to confide
in and listen to chatbot interviewers [
        <xref ref-type="bibr" rid="ref12">13</xref>
        ]. Several studies
also indicate how an embodied virtual agent elicits more
sensitive information from users than a human interviewer
can (e.g., [
        <xref ref-type="bibr" rid="ref14 ref19">15, 20</xref>
        ]). While prior work demonstrates the
benefits of a virtual interviewer, little is known about the
requirements for creating a chatbot interviewer for effective
real-world use. For example, what NLP skills should a chatbot
possess to support practical interview situations? How
would a chatbot's ability to understand a user’s personality
affect their interview experience? The purpose of this study is
to find answers to these questions.
      </p>
      <p>WIZARD-OF-OZ (WOZ) INTERVIEWING SYSTEM
Our WOZ system is a web-based system that offers a
text-based chat interface with which a user can interact. A
human operator (wizard) uses the interface to drive an
interview. The wizard can ask questions and compose responses,
including calling system functions (e.g., calling a function
to analyze a user’s personality and display the results).
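As a minimal illustration of how such a wizard console could work (the command and function names below are our assumptions, not the actual system), a free-text reply can be sent verbatim while a recognized command triggers a system function:

```python
# Hypothetical sketch of the wizard console: free-text replies plus a
# small set of callable system functions (names are illustrative).
def analyze_personality(user_id):
    """Stand-in for the system function that analyzes and displays
    a user's personality results."""
    return "[displaying personality chart for user {}]".format(user_id)

COMMANDS = {"/personality": analyze_personality}

def wizard_send(message, user_id):
    """Dispatch a wizard message: run it as a system function if it is
    a registered command, otherwise send it verbatim as chat text."""
    if message in COMMANDS:
        return COMMANDS[message](user_id)
    return message
```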
</p>
      <p>FIELD DEPLOYMENT
We deployed the WOZ system to aid a software startup in
hiring three summer interns from over 650 applicants. First,
60 candidates were selected based on their stated technical
interests and experience relevant to the positions. Each
candidate was invited to participate in a 60-minute interview: a
30-minute chatbot (wizard-of-oz) interview followed by a
30-minute phone interview. The same person served as the
wizard and the phone interviewer.</p>
      <p>The wizard used an interview agenda with a set of questions
to guide each interview. At the beginning of each interview,
the candidates were informed that: (1) they would be
interviewed by a chatbot, (2) they would be asked about their
interview experience, and (3) the chatbot would analyze
their conversation and infer their personality.</p>
      <p>
        To start an interview, a candidate logged into the WOZ
system with his/her Facebook or Twitter account. The
interview included four parts. First, the chatbot named Agatha
asked the candidate to make a self-introduction. Agatha
then displayed the system-inferred Big 5 personality traits
based on the user’s opt-in Facebook posts (up to 200) or
tweets (up to 3,200) collected during login. The personality
inference engine used in the study is similar to the one described
in [
        <xref ref-type="bibr" rid="ref12">13</xref>
        ]. The candidate was asked to evaluate and comment
on his/her inferred traits. The second part of the interview
included a set of circumstantial questions where a candidate
was asked to provide their assessment of a situation and
propose solutions. For example, one question was about
handling software defects before a deadline. The third part
of the interview included a set of casual inquiries about the
candidate (e.g., “If you had a super power, what would it
be?”). The last part solicited the candidate’s impression of
the chatbot (“What’s your impression of me”) and input for
future improvements (“What should I improve on”).
      </p>
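The engine of [13] is not described in detail here; as a toy illustration of lexicon-based trait inference from text (the word lists and scoring are entirely our made-up assumptions), one could score each trait by how often its indicator words appear across a user's posts:

```python
# Toy sketch of Big 5 inference from text. The lexicons below are
# invented for illustration and are NOT the engine used in the study.
LEXICON = {
    "openness": {"imagine", "creative", "curious", "art"},
    "extraversion": {"party", "friends", "talk", "fun"},
}

def infer_traits(posts):
    """Score each trait by the fraction of posts that contain at least
    one of that trait's indicator words."""
    scores = {trait: 0 for trait in LEXICON}
    for post in posts:
        words = set(post.lower().split())
        for trait, lexicon in LEXICON.items():
            if words & lexicon:
                scores[trait] += 1
    n = max(len(posts), 1)  # avoid dividing by zero on empty input
    return {trait: count / n for trait, count in scores.items()}
```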
      <p>During each interview, the wizard intentionally did not
interpret long or complex user input and kept her response
simple without getting into a deep conversation on any
topic. She did so for two reasons. One was to test users’
impressions of a capable but realistic chatbot, since even the
most advanced chatbot is unlikely to understand or respond
to every user input. The other was that the wizard could not
afford detailed responses due to time constraints, as it takes
time for a person to digest a complex input, compose a
thoughtful response, and type it into the system.</p>
      <p>The whole process lasted about two and half weeks, during
which 53 candidates completed their interviews. All were
university students with 20 (37.7%) females and 33 (62.3%)
males. On average, each user answered about 18 interview
questions and input about 500 words.</p>
      <p>RESULTS ANALYSIS AND FINDINGS
To answer the two sets of questions (Q1 and Q2), three
coders read and analyzed each interview transcript using an
open-coding approach. Below we report the results.
Q1: User Impressions of the Chatbot
At the end of each interview, a candidate was asked about
his/her impression of the chatbot interviewer. The top-10
keywords used to describe the chatbot were: understand,
responsive, natural, interesting, human-like, friendly,
fluency, believable, fun, and cool. While unaware of the wizard’s
presence, most users (92%) described the chatbot as
interesting and intelligent, almost like a real person. To extract a
list of user likes and dislikes, each coder went through all
user comments independently and then worked together to
merge their lists. Three categories emerged from the
coding, shown in Table 1.</p>
      <p>User Likes
In general, the users liked the chatbot’s language
capabilities and thought it asked questions naturally and responded
to them well. Here is what a user said to the chatbot:
“Your responses sounded very natural and real, it could
have almost been a live human.”
Regarding the chatbot’s conversation capability, the users
felt that the chatbot was attentive and engaging. They
especially appreciated that the chatbot made an effort to learn
during the conversation. For example, when a user asked
the chatbot to tell a joke, the chatbot responded “what is
joke”. Upon receiving the user’s answer, the chatbot
thanked the user. These simple exchanges made the users
perceive the chatbot as honest, engaging, and willing to
learn. For example, one user told the chatbot:
“I think you're doing a pretty good job so far! Sometimes
you don't understand my questions, but you are still
learning and I can tell you're making an effort to learn
more by asking me questions”.</p>
      <p>Likewise, another user expressed:
“You ask more questions than you give answers,
indicating that you are focused on me and wish to maintain the
conversational flow”.</p>
      <p>The interviewees were also impressed by the chatbot’s
ability to analyze their personality during the interview. For
example, one user told the chatbot:
“It's the first time I've ever done something like this… I'd
say your analyses were generally accurate”.</p>
      <p>Overall, users’ positive perceptions were encouraging
especially as the wizard intentionally did not interpret many
user inputs and kept the chat simple and shallow due to
time constraints as well as to set realistic expectations. This
suggests that practical solutions could satisfy users with
even limited NLP.</p>
      <p>User Dislikes
The users pointed out several aspects of the chatbot that they
did not like and that needed improvement.</p>
      <p>Concerning language, 66% of users mentioned that the
chatbot could be improved to carry out a deeper, more
interesting conversation. For example, one user said:
“You need to have more knowledge, … your responses
will become interesting, not just some simple answers”.
For instance, the wizard simply chose not to answer a user
question like “What do I need to know about myself?”
On the conversation capabilities, the main complaint was
about the conversation timing—untimely interruptions
during an interview. After a user texted the first response to a
question, the wizard often continued with the next question
without waiting for more user input, mainly due to the time
constraint. In some cases, the users might still be typing or
wanting to give more input, and felt that their thoughts or
responses were cut off prematurely. In practice, the human
wizard found it difficult to determine the response timing,
especially since she had little knowledge of a user’s habits
(e.g., fast or slow to respond).</p>
      <p>Several complained that the chatbot was unable to
“remember” and learn from their exchanges (“I already told you
that I like basketball…”).
Concerning the chatbot’s personality, some users felt that
the chatbot was unlike a real person because it was too
emotional:
“You are a bit too emotional when you respond. People
don't really use the punctuations that you use.”
However, others felt that the chatbot lacked personality or
strong opinions:
“I see the lack of personality within your sentence
structure or word choices.”
“you need to have more strong, personal opinions… and
the ability to keep the conversation going… maybe gives
me more opinionated feedback”
Since all the users interacted with the same wizard, we
suspected that such impressions might be affected by the users’
own personality. However, we did not have sufficient data
to validate this hypothesis.</p>
      <p>Q2: User Interactions with the Chatbot
Extracting user likes and dislikes helped answer the first set
of questions on users’ impressions of a human-assisted
chatbot. To support the user likes and avoid the dislikes, we
must answer the second set of questions to discover what it
takes to build an effective chatbot interviewer.</p>
      <p>
        As shown in Table 1, 92% of users thought the chatbot was
capable of understanding them, yet 66% of them hoped that
such a capability would be further improved. Existing work
shows that how an interviewer responds to interviewees
during an interview largely affects one’s interviewing
experience [
        <xref ref-type="bibr" rid="ref13">5, 14</xref>
        ]. We thus analyzed each WoZ interview
transcript to identify how the wizard responded to users’
questions/requests during an interview, which would help
explain the users’ impressions and expectations.
      </p>
      <p>Each coder first extracted all user questions/requests from
the transcripts independently and then merged their lists.
They identified a total of 328 user requests and classified
them into six categories (Table 2). The wizard responded to
200 such requests (response rate 61%) during the
interviews. The top three user questions/requests asked about
the chatbot’s personality analysis (32.6%) and the chatbot
itself (27.7%), and requested conversation continuation
(20%). The chatbot responded to these three types of
questions 66.4%, 75.8%, and 56.7% of the time, respectively. In
most of these cases, the wizard used canned responses (e.g.,
answering about the chatbot’s personality analysis). To
avoid an unbounded conversation, the wizard answered few
general user requests (“Tell me a joke”).</p>
      <p>As indicated by their questions, the users showed great
interest in their personality analysis results regardless of the
accuracy of the results. In fact, the system did not perform
analysis for nearly half of the users (26) because of a glitch
in getting their social media data. Although the system
displayed random results and indicated zero words analyzed,
all but one user failed to notice the glitch, and they inquired
and argued about the results. For those who obtained an
actual result, 67% thought their result was accurate.</p>
      <p>Moreover, we observed interaction reciprocity—a user
asked a chatbot the same questions that s/he was asked
during an interview. For example, the users were asked
“what’s your super power?” Quite a few users asked the
chatbot the same question when they were invited to ask
one. Users asked only a small number of random general
questions (43 out of 328).</p>
      <p>DESIGN SUGGESTIONS AND DISCUSSION
Although our WoZ study has its limitations (e.g., only 53
users in one specific use case), our findings described above
offer two valuable insights. First, our analysis shows that
the wizard, who used limited NLP and did not respond to
every user input, still impressed the users as very
human-like. This suggests that a chatbot interviewer could be
built with limited NLP. However, it should focus on responding
to frequently asked user questions/requests related to an
interview. Such responses, even if canned or simple,
would still make users feel that the chatbot understands and
responds to them well. Second, the types of user
questions/requests in an interview can be anticipated (see
below), which in turn helps prepare a chatbot interviewer to
handle user input robustly with even limited NLP.
Below we outline a concrete set of design suggestions for
building effective chatbot interviewers. These suggestions
aim at enabling a set of practical features that make a
chatbot interviewer appear intelligent with limited NLP or
conversation capabilities.</p>
      <p>
        Active Listening Skills
Effective human interviewers actively listen to their
interviewees to better engage with them [
        <xref ref-type="bibr" rid="ref13 ref21">22, 14</xref>
        ]. One way to
build an effective chatbot interviewer is to empower it with
active listening skills, such as repeating a user’s input to
make the user feel s/he is heard [
        <xref ref-type="bibr" rid="ref13 ref21">3, 14, 22</xref>
        ]. As one user
suggested, the chatbot should just “repeat the last three
words they say”. To do so, a chatbot can incorporate a user’s
expressions in its response. For example, if a user
mentioned “I love to cook”, the chatbot could ask: “I know you
like to cook. Why do you enjoy it?” Although this feature
will require a chatbot to parse a user’s input, it does not
require perfect NLP; even a partial understanding of a user’s
input (e.g., parts of speech) will go a long way.
      </p>
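As a minimal sketch of this echoing behavior (the pattern list below is our illustrative assumption and stands in for real part-of-speech parsing), a chatbot could lift a phrase from the user's input and fold it into a follow-up question:

```python
import re

# Illustrative active-listening sketch: a few first-person patterns
# stand in for genuine POS parsing of user input.
PATTERNS = [
    (re.compile(r"\bI (?:love|like|enjoy) (?:to )?(\w+)", re.I),
     "I know you like to {}. Why do you enjoy it?"),
    (re.compile(r"\bI (?:am|was) (\w+)", re.I),
     "You mentioned being {}. Tell me more?"),
]

def active_listen(user_input):
    """Return an echo-style follow-up, or None when nothing matches."""
    for pattern, template in PATTERNS:
        match = pattern.search(user_input)
        if match:
            return template.format(match.group(1))
    return None
```

With this sketch, the paper's example input "I love to cook" yields a follow-up that repeats the user's own word, making the user feel heard.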
      <p>Being Honest and Humble by Asking Questions
Users liked the attentive and honest behavior of the
human-assisted chatbot (Table 1). The wizard posed questions to
avoid getting into a deep conversation. Such behavior can
be robustly supported when a chatbot encounters unknown
words or expressions in user input. For example, in the
WOZ study, a user asked the chatbot “Do you know
idioms?” The chatbot asked “What is it idiom?” Not only will
such a question make a user feel engaged, but it will also
help the chatbot “learn” new concepts. In the above
example, the unknown words (“idiom”) and the associated user
explanation can be recorded and later used by the chatbot to
answer similar user questions in the future.</p>
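The "ask, learn, and reuse" behavior described above could be sketched as follows (the class and method names are our assumptions, not from the deployed system):

```python
# Minimal sketch of a chatbot that asks about unknown words and stores
# the user's explanation for reuse in later conversations.
class LearningChatbot:
    def __init__(self):
        self.learned = {}  # unknown word -> user-provided explanation

    def respond_to_unknown(self, word):
        """Ask about an unknown word, or reuse a recorded explanation."""
        if word in self.learned:
            return "I learned that {} means: {}".format(word, self.learned[word])
        return "What is {}?".format(word)

    def record_explanation(self, word, explanation):
        """Store the user's answer so it can be reused later."""
        self.learned[word] = explanation
        return "Thanks, I just learned something new!"
```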
      <p>Anticipating User Questions/Requests
Instead of providing general NLP capabilities, we can build
targeted NLP capabilities by anticipating user interactions
during an interview. Table 2 shows that 81% of user
questions/requests fell into three categories, for which
corresponding answers could be prepared in advance. For
example, we can anticipate users’ asking about the interview
context, such as the chatbot’s origin and capability. We can
also anticipate user questions based on interaction
reciprocity and prepare a chatbot with answers to all its
interview questions (e.g., “what is your super power”).
While it might not be feasible to anticipate all user behavior
or to make a chatbot interviewer understand and respond to
every user input, our findings suggest that user interactions
are not random during an interview and many of them can
be anticipated and handled effectively.</p>
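The anticipation strategy above can be sketched as a simple category router (the keyword lists and canned answers below are illustrative assumptions, not taken from the study), with an honest clarifying question as the fallback for unanticipated input:

```python
# Sketch: route anticipated user questions to prepared, canned answers.
CANNED = {
    "personality_analysis": "My analysis is based on your word use; it may not be perfect.",
    "about_the_chatbot": "I'm Agatha, a chatbot interviewer built to chat with candidates.",
    "reciprocity": "My super power is chatting with many people at once.",
}
KEYWORDS = {
    "personality_analysis": ("personality", "traits", "analysis"),
    "about_the_chatbot": ("who are you", "are you a robot", "your name"),
    "reciprocity": ("super power",),
}

def answer(question):
    """Return a canned answer for an anticipated category, or fall back
    to an honest clarifying question."""
    q = question.lower()
    for category, words in KEYWORDS.items():
        if any(w in q for w in words):
            return CANNED[category]
    return "Could you tell me more about what you mean?"
```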
      <p>Pacing a Conversation Intelligently
Learning from our WOZ study, we suggest that a chatbot
use three sources of information to pace a conversation.
One source is to detect a user’s keystroke activities. If a
user is still typing, the chatbot could then wait until the
typing is done. Another source is to model a user’s pace, such
as tracking her average response time, and then use the
information to pace a conversation with this user. The third
source is to detect the completeness of the content in the
current response. If the current response is fairly complete,
the chatbot can then move on without waiting for additional
input from a user. However, judging response completeness
may not be easy as it may be question-dependent.
Alternatively, the system could assess the informativeness of a
response based on information entropy [7].</p>
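The three pacing signals above could be combined as follows (a sketch: the word-level entropy proxy follows the informativeness idea of [7], while the 3.0-bit threshold and function names are our made-up assumptions):

```python
import math
from collections import Counter

def informativeness(response):
    """Shannon entropy (bits) of the response's word distribution,
    a rough proxy for how much content the response carries."""
    words = response.lower().split()
    if not words:
        return 0.0
    counts = Counter(words)
    n = len(words)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def should_move_on(user_typing, seconds_idle, avg_gap, response):
    """Decide whether to ask the next question now."""
    if user_typing:
        return False                  # signal 1: keystroke activity
    if seconds_idle < avg_gap:
        return False                  # signal 2: the user's usual pace
    return informativeness(response) > 3.0  # signal 3: completeness
```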
      <p>
        Personalizing an Interview
Effective human interviewers personalize a conversation to
better engage with their interviewees [
        <xref ref-type="bibr" rid="ref18">5, 19</xref>
        ]. Our study also
showed that users were interested in the system’s analysis of
their personality regardless of its accuracy. One way to build
an effective chatbot interviewer is to power it with a
personality inference engine like the one used in our WOZ
study. The analysis result could help a chatbot personalize a
conversation and encourage a user to open up. For example,
a chatbot can analyze a user’s opt-in social media content at
the start of an interview and use the inferred personality to
pose tailored questions. If the chatbot infers that a
user is highly creative, it could ask “It seems you are
very creative, what’s the most creative thing you have
done?” Such a personalized conversation makes users stay
engaged and motivates them to cooperate [
        <xref ref-type="bibr" rid="ref16">12, 17</xref>
        ].
Moreover, such information could be used to adapt the chatbot’s
personality to a user’s [
        <xref ref-type="bibr" rid="ref16">17</xref>
        ] or fit for an interview task [
        <xref ref-type="bibr" rid="ref12">13</xref>
        ].
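Trait-tailored questioning could be sketched as follows (the trait names, the 0.7 threshold, and the question wording are illustrative assumptions, not parameters from our system):

```python
# Hypothetical sketch: pick a tailored opening question from inferred
# Big 5 trait scores in [0, 1].
TAILORED_QUESTIONS = {
    "openness": "It seems you are very creative. What's the most creative thing you have done?",
    "extraversion": "You seem outgoing. What do you enjoy most about working with others?",
}

def tailored_question(traits):
    """Ask about the user's strongest inferred trait when it stands out,
    otherwise fall back to a generic opener."""
    trait, score = max(traits.items(), key=lambda kv: kv[1])
    if score >= 0.7 and trait in TAILORED_QUESTIONS:
        return TAILORED_QUESTIONS[trait]
    return "Tell me a bit more about yourself?"
```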
</p>
      <p>CONCLUSIONS AND FUTURE WORK
We are building fully automated chatbot interviewers that
can support diverse real-world interview situations. To
develop an effective chatbot, we conducted a WOZ field study
where a human-assisted chatbot interviewed 53 actual job
applicants. Our findings revealed what the users liked or
disliked about the chatbot, along with a set of user
interaction patterns that coincide with those opinions. Based on the
findings, we formulated a set of practical design
suggestions for building effective, real-world chatbot interviewers
with even limited NLP capabilities. Based on these design
suggestions, we are building chatbot interviewers that can
function in varied interview contexts, such as job interviews
and customer interviews.
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <string-name>
            <surname>Adali</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Golbeck</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <article-title>Predicting personality with social behavior</article-title>
          .
          <source>ASONAM'</source>
          <year>2012</year>
          ,
          <fpage>302</fpage>
          -
          <lpage>309</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          <string-name>
            <surname>Bickmore</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gruber</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Picard</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          <article-title>Establishing the computer-patient working alliance in automated health behavior change interventions</article-title>
          .
          <source>Patient Education and Counseling</source>
          ,
          <year>2005</year>
          ,
          <volume>59</volume>
          (
          <issue>1</issue>
          ):
          <fpage>21</fpage>
          -
          <lpage>30</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <string-name>
            <surname>Decker</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <year>1989</year>
          . How to communicate effectively, Page, London, UK.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          <string-name>
            <surname>DeVault</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Artstein</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          <string-name>
            <surname>Benn</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dey</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          <string-name>
            <surname>Fast</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gainer</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Georgila</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          <string-name>
            <surname>Gratch</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hartholt</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lhommet</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lucas</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Marsella</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Morbini</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nazarian</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Scherer</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stratou</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Suri</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Traum</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wood</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Xu</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rizzo</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , and Morency,
          <string-name>
            <surname>LP.</surname>
          </string-name>
          <article-title>SimSensei kiosk: a virtual human interviewer for healthcare decision support</article-title>
          ,
          <source>Proc. AAMAS</source>
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          <string-name>
            <surname>DiCicco-Bloom</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Crabtree</surname>
            ,
            <given-names>B. F.</given-names>
          </string-name>
          (
          <year>2006</year>
          ).
          <article-title>The qualitative research interview</article-title>
          .
          <source>Medical education</source>
          ,
          <volume>40</volume>
          (
          <issue>4</issue>
          ),
          <fpage>314</fpage>
          -
          <lpage>321</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          <string-name>
            <surname>Digman</surname>
            ,
            <given-names>J. M.</given-names>
          </string-name>
          <article-title>Personality structure: Emergence of the five-factor model</article-title>
          .
          <source>Annual Review of Psychology</source>
          <year>1990</year>
          ,
          <volume>41</volume>
          (
          <issue>1</issue>
          ):
          <fpage>417</fpage>
          -
          <lpage>440</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          <string-name>
            <surname>Ebrahimi</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Maasoumi</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Soofi</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          <article-title>Measuring Informativeness of Data by Entropy and Variance</article-title>
          .
          <source>Advances in Economics, Income Distribution and Scientific Methodology</source>
          ,
          <year>1999</year>
          ,
          <fpage>61</fpage>
          -
          <lpage>77</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          <string-name>
            <given-names>S.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Meng</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          (
          <year>2012</year>
          ).
          <article-title>Testing a new procedure for reducing faking on personality tests within selection contexts</article-title>
          .
          <source>Journal of Applied Psychology</source>
          ,
          <volume>97</volume>
          ,
          <fpage>866</fpage>
          -
          <lpage>880</lpage>
          . doi: 10.1037/a0026655.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          <string-name>
            <surname>Grieve</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Watkinson</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <article-title>The Psychological Benefits of Being Authentic on Facebook</article-title>
          .
          <source>Cyberpsychology, Behavior, and Social Networking</source>
          ,
          <year>2016</year>
          ,
          <volume>19</volume>
          (
          <issue>7</issue>
          ):
          <fpage>420</fpage>
          -
          <lpage>425</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          <string-name>
            <surname>Gou</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhou</surname>
            ,
            <given-names>M.X.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Yang</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          <article-title>KnowMe and ShareMe: Understanding automatically discovered personality traits from social media and user sharing preferences</article-title>
          .
          <source>CHI '14</source>
          ,
          <year>2014</year>
          ,
          <fpage>955</fpage>
          -
          <lpage>964</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          <string-name>
            <surname>Jackle</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lynn</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sinibaldi</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Tipping</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <article-title>The effect of interviewer experience, attitudes, personality, and skills on respondent cooperation with face-to-face surveys</article-title>
          .
          <source>Survey Research Methods</source>
          ,
          <year>2013</year>
          ,
          <volume>7</volume>
          (
          <issue>1</issue>
          ):
          <fpage>1</fpage>
          -
          <lpage>12</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11a">
        <mixed-citation>
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Forlizzi</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kiesler</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rybski</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Antanitis</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Savetsila</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <article-title>Personalization in HRI: a longitudinal field experiment</article-title>
          ,
          <source>Proc. ACM/IEEE International Conference on Human-Robot Interaction</source>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhou</surname>
            ,
            <given-names>M.X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yang</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Mark</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          <article-title>Confiding in and listening to virtual agents: The effect of personality</article-title>
          .
          <source>Proc. ACM IUI</source>
          <year>2017</year>
          ,
          <fpage>275</fpage>
          -
          <lpage>286</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          <string-name>
            <surname>Louw</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Todd</surname>
            ,
            <given-names>R. W.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Jimakorn</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          (
          <year>2011</year>
          ).
          <article-title>Active listening in qualitative research interviews</article-title>
          .
          <source>Proceedings of the International Conference: Doing Research in Applied Linguistics</source>
          ,
          <fpage>71</fpage>
          -
          <lpage>82</lpage>
          . Retrieved from http://arts.kmutt.ac.th/dral/.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          <string-name>
            <surname>Lucas</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gratch</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>King</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Morency</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          <article-title>It's only a computer: virtual humans increase willingness to disclose</article-title>
          .
          <source>Computers in Human Behavior</source>
          ,
          <year>2014</year>
          , vol.
          <volume>37</volume>
          :
          <fpage>94</fpage>
          -
          <lpage>100</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          <string-name>
            <surname>McCrae</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Costa</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          (
          <year>1999</year>
          )
          <article-title>The five factor theory of personality</article-title>
          . In
          <source>Handbook of Personality: Theory and Research</source>
          ,
          <string-name>
            <given-names>L.A.</given-names>
            <surname>Pervin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.P.</given-names>
            <surname>John</surname>
          </string-name>
          , NY: Guilford,
          <fpage>139</fpage>
          -
          <lpage>153</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          <string-name>
            <surname>Nass</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Steuer</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Tauber</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          <article-title>Computers are social actors</article-title>
          .
          <source>Proceedings of the SIGCHI Conference on Human Factors in Computing Systems</source>
          , pp.
          <fpage>72</fpage>
          -
          <lpage>78</lpage>
          . ACM,
          <year>1994</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          <string-name>
            <surname>Nunamaker</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          , et al.
          <article-title>Embodied conversational agent-based kiosk for automated interviewing</article-title>
          .
          <source>Journal of Management Information Systems</source>
          ,
          <volume>28</volume>
          (
          <issue>1</issue>
          ) (
          <year>2011</year>
          ):
          <fpage>17</fpage>
          -
          <lpage>48</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          <string-name>
            <surname>Okun</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          <source>Effective Helping: Interviewing and Counseling Techniques</source>
          , 7th Edition, Cengage Learning,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          <string-name>
            <surname>Pickard</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Roster</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          <article-title>Revealing sensitive information in personal interviews: Is self-disclosure easier with humans and avatars and under what conditions?</article-title>
          <source>Computers in Human Behavior</source>
          ,
          <year>2016</year>
          , vol.
          <volume>65</volume>
          :
          <fpage>23</fpage>
          -
          <lpage>30</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          <string-name>
            <surname>Turk</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <article-title>Multimodal interaction: A review</article-title>
          .
          <source>Pattern Recognition Letters</source>
          <volume>36</volume>
          :
          <fpage>189</fpage>
          -
          <lpage>195</lpage>
          (
          <year>2014</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          <string-name>
            <surname>Weger</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          , Jr.,
          <string-name>
            <surname>Castle</surname>
            ,
            <given-names>G. R.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Emmett</surname>
            ,
            <given-names>M. C.</given-names>
          </string-name>
          (
          <year>2010</year>
          ).
          <article-title>Active listening in peer interviews: The influence of message paraphrasing on perceptions of listening skill</article-title>
          .
          <source>International Journal of Listening</source>
          ,
          <volume>24</volume>
          ,
          <fpage>34</fpage>
          -
          <lpage>49</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>