<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Joint Proceedings of the ACM IUI 2021 Workshops, College Station, USA</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Creative PenPal: A Virtual Embodied Conversational AI Agent to Improve User Engagement and Collaborative Experience in Human-AI Co-Creative Design Ideation</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Jeba Rezwana</string-name>
          <email>jrezwana@uncc.edu</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mary Lou Maher</string-name>
          <email>m.maher@uncc.edu</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Nicholas Davis</string-name>
          <email>ndavis64@uncc.edu</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="editor">
          <string-name>Human AI Co-creation, User Engagement, Virtual Embodied AI, Conversational AI</string-name>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Joint Proceedings of the ACM IUI 2021 Workshops, College Station</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of North Carolina at Charlotte</institution>
          ,
          <addr-line>9201 University City Blvd, Charlotte, NC 28223</addr-line>
          ,
          <country country="US">US</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>to the AI, give feedback to the AI, etc. For example</institution>
          ,
          <addr-line>Im-</addr-line>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2021</year>
      </pub-date>
      <abstract>
        <p>In recent years, researchers have designed many promising co-creative systems with powerful AI, yet some fail to engage users due to the unimpressive quality of the collaboration and interaction. Most existing co-creative systems use an instructing interaction, in which users communicate with the AI only by issuing instructions. In this paper, we demonstrate the prototype of a co-creative system for design ideation, Creative PenPal, which utilizes an interaction model that combines human-AI conversing interaction through text with a virtual embodiment of the AI character. We hypothesize that this interaction model will improve user engagement, user perception of the AI, and the collaborative experience. We describe a study design to investigate the impact of this interaction model on user engagement and the overall collaborative experience. By the time of the workshop, we will have the data and insights from the study.</p>
      </abstract>
      <kwd-group>
        <kwd>Human-AI Co-creation</kwd>
        <kwd>User Engagement</kwd>
        <kwd>Virtual Embodied AI</kwd>
        <kwd>Conversational AI</kwd>
        <kwd>Design Ideation</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <sec id="sec-1-1">
        <title>AI agents are becoming a part of our everyday life, thanks</title>
        <p>
          to artificial intelligence technologies. Human-AI co
creative tasks as partners [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]. Rather than being perceived as
a support tool, AI agents in co-creative systems should be
regarded as a co-equal partner. This field has the
potential to transform how people perceive and interact with
AI. A study showed that AI ability alone does not ensure
a positive collaborative experience of users with the AI
[
          <xref ref-type="bibr" rid="ref2">2</xref>
          ]. In recent years, researchers have designed many
co-creative systems with powerful AI ability, yet
sometimes users fail to maintain their interest and engagement
while collaborating with the AI due to the quality of the
collaboration and interaction. The literature asserts that
user engagement is associated with the way users interact
with a system [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ]. Interaction design is often an untended
mental property of co-creative systems. Bown asserted
that the success of a creative system’s collaborative role
should be further investigated through interaction design
as interaction plays a key role in the creative process of
co-creative systems [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ]. Therefore, as a young field, there
are potential areas of interaction design to be explored
for designing efective co-creative systems that engage
users and provide a better collaborative experience.
USA
the mechanics of co-creation [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ]. The literature about
human-AI co-creation says that embodied
communication improves coordination between the human and the
topic in the co-creativity literature despite being a funda- or embodied communication for user to AI direct
comAI [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]. Additionally, literature asserts that a commu- buttons or using function keys, etc. In contrast, the
connication channel for conversation between co-creators versing interaction type is where users have a dialogue
other than communicating through the shared creative with a system. Users can speak via an interface or type in
product improves user engagement in a human creative questions or answers to which the system replies via text
collaboration [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]. These literatures led us to investi- or speech output [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ]. Conversational agents have
trangate the impact of embodied communication from the AI sitioned into multiple industries with increased ability
and a conversation between the human and AI on user for user engagement in intelligent conversation.
engagement and collaborative experience in human-AI The literature asserts that embodied communication
co-creativity. Our research questions emerged from the aids synchronization and coordination in improvisational
issue that most existing co-creative systems use instruct- human-computer co-creativity [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]. Being able to
coning interaction type, which uses one-way communica- verse with each other shows an increased engagement
tion, human to AI. For this work, we will investigate level in a human creative collaboration [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]. A user’s
the impact of conversing interaction and AI embodiment confidence in an AI agent’s ability to perform tasks is
on user engagement, user perception about the AI and improved when imbuing the agent with embodiment and
the overall collaborative experience. The two research social behaviors compared to the agent solely depending
questions we have are- on conversation [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ]. Bente et al. reported that
embodied telepresent communication improved both social
• How does AI embodiment and conversing inter- presence and interpersonal trust in remote collaboration
action influence user engagement? settings with a high level of nonverbal activity [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ].
• How does AI embodiment and conversing interac- User engagement with virtual embodied conversational
tion influence user perception about the AI agent agents can be measured via user self-reports; by
monas the collaborative partner and the overall col- itoring the user’s responses, tracking the user’s body
laborative experience? postures, head movements and facial expressions
during the interaction, or by manually logging behavioral
        </p>
      <p>
        To investigate the research questions, we have developed a prototype of a co-creative system named Creative PenPal, in which the user and the AI collaborate on a design ideation task. Users can generate ideas for designing a particular object by sketching on a canvas, and the AI also contributes to the design ideation by showing different inspirational sketches. Creative PenPal utilizes a conversing interaction for the communication between the human and the AI, together with a virtual embodied character for the AI agent. We describe the design of our study in this paper. By the time of the workshop, we will have the data and insights from the study.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <sec id="sec-2-1">
        <title>Louie et al. identified that AI ability alone does not ensure</title>
        <p>
          a positive collaborative experience of users with the AI
[
          <xref ref-type="bibr" rid="ref2">2</xref>
          ]. Bown asserted that the success of a creative system’s
collaborative role should be further investigated in terms
of interaction design as interaction plays a key role in
the creative process of co-creative systems [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ]. Later
YeeKing and d’Inverno argued for a stronger focus on the
user experience, suggesting a need for further integration
of interaction design practice into human-AI co-creativity
research [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ].
        </p>
        <p>
          Interaction types are ways a user interacts with a
product or application [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ]. Instructing interaction is where
users issue instructions to a system. This can be done
in many ways, including typing in commands, selecting
options from menus or on a multitouch screen, pressing
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>Creative PenPal is an interactive prototype, created with</title>
        <p>Javascript, which has all the interaction components
except the back-end AI model. We have selected a collection
of sketches as the database for creating a seamless
experience that mimics an actual implementation of the AI
model. Sketch generation is automated: the system selects sketches from the collection. We have two
versions of the Creative PenPal prototype to investigate
and compare the user engagement and collaborative
experience between the two versions. The original version
uses a conversing interaction and a virtual embodied AI
(see Figure 1). The virtual embodied AI character, a
pencil, is shown in section A of Figure 1. We will refer to
the AI character as PenPal in the rest of the paper.
Section B is where the conversation happens between the
PenPal and the user via text and buttons. We can see
the design task displayed in section C. Both the user and
the AI collaborate in a design ideation task where both
collaborators generate ideas for the design of an object as
sketches. Users will design the specified object in the task
by sketching on the canvas shown in section F. Users can
undo a stroke using the "Undo Previous Sketch" button
and start the design ideation over by using the "Clear the
canvas" button. When users hit the "Inspire me" button
shown in section B, the virtual AI character will show
an inspirational sketch of a conceptually similar object,
an object that has a similar working mechanism or usage
to the design task object, on its canvas shown in
section G. Previous work on co-creative design ideation [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ]
showed that users were more inspired by conceptually
similar objects than by visually similar objects, which share
structural similarity with the design task object. Users can
also ask for visually similar objects or sketches of the
design task object to get inspiration by saying they didn’t
like the conceptually similar object (described in the next
section). Section E shows the name of the object located
in the PenPal-generated sketch. The other version uses
an instructing interaction where users can instruct the
AI using buttons, without AI embodiment (Figure 2). We
will use these two versions to compare the impact
of two different interaction designs on user engagement
and collaborative experience with an AI.
        </p>
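        <p>
          As a concrete illustration of the automated sketch selection described above, the following minimal JavaScript sketch shows one plausible way to pick the next inspirational sketch from the pre-selected collection; the data layout, similarity tags, and function names are our own assumptions for illustration, not the actual prototype code.
        </p>
        <preformat>
// Hypothetical selection of PenPal's next inspirational sketch from the
// pre-selected collection, tagged by similarity to the design task object.
const sketchCollection = [
  { name: "wheelbarrow", similarity: "conceptual", image: "wheelbarrow.png" },
  { name: "stroller", similarity: "visual", image: "stroller.png" },
  { name: "shopping cart", similarity: "task", image: "cart.png" },
  // ...more pre-selected sketches
];

// Track what has already been shown so PenPal does not repeat itself.
const shown = new Set();

// Default to conceptually similar objects, which previous work [18] found
// more inspiring; fall back to any unshown sketch if the pool is empty.
function nextInspiration(similarity = "conceptual") {
  const pool = sketchCollection
    .filter((s) => s.similarity === similarity)
    .filter((s) => !shown.has(s.name));
  const pick = pool.length
    ? pool[Math.floor(Math.random() * pool.length)]
    : sketchCollection.find((s) => !shown.has(s.name));
  if (pick) shown.add(pick.name);
  return pick; // undefined once the collection is exhausted
}
        </preformat>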
    </sec>
    <sec id="sec-3">
      <title>4. Interaction Model</title>
      <sec id="sec-3-1">
        <title>For the interaction model, we choose a conversing in</title>
        <p>teraction. The conversation with the virtual embodied
AI is simple so that the user will be able to go deeper
into the ideation process without any interruption in the
design flow. The embodied virtual agent will show some
afective characteristics, for example, when the user likes
its contribution, it will be seen as happy and when the
user does not like the contribution from the AI, it will
be sad. The conversation is divided into five diferent
situational phases demonstrated in Figure 2. Each phase
includes the embodied state of the AI and conversational
interaction between the user and the PenPal. The text
without a comment bubble represents the embodied state
of the AI in Figure 2. The texts with comment bubbles
represent dialogues of the user and the AI, and the icon
indicates which dialogue belongs to whom. Diferent
responses from the user initiate another phase, which is
shown using arrows in Figure 2. If the user can respond
with diferent options, “/” sign is used in the Figure.</p>
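      <p>
        To make the phase structure concrete, the following minimal JavaScript sketch models the five situational phases as a simple state machine. The phase names, embodied-state labels, and transition table follow the descriptions in the subsections below, but the identifiers themselves are our own assumptions, not the actual prototype code.
      </p>
      <preformat>
// Hypothetical state machine for the five conversational phases.
// Each phase pairs PenPal's embodied state with its dialogue and maps
// user responses (button presses) to the next phase.
const phases = {
  introduction: {
    embodiment: "neutral",
    say: "Hi! I am your Creative PenPal. Do you want me to inspire you?",
    on: { "Inspire me": "generatingSketch" },
  },
  generatingSketch: {
    embodiment: "sketching",
    say: "Did you like the sketch?",
    on: { "Yes": "userLiked", "No": "userDisliked" },
  },
  userLiked: {
    embodiment: "happy",
    say: "I am glad that you liked the sketch!",
    on: { "Inspire me": "generatingSketch", "Finish ideation": "finished" },
  },
  userDisliked: {
    embodiment: "sad",
    say: "Sorry that I could not inspire you! Let's try to be more specific.",
    on: {
      "Shopping Carts": "generatingSketch",
      "Visually Similar Objects": "generatingSketch",
      "Conceptually Similar Objects": "generatingSketch",
    },
  },
  finished: {
    embodiment: "happy",
    say: "Well done! You did a great job!",
    on: {},
  },
};

// Advance the conversation when the user presses a button; unknown
// buttons leave the current phase unchanged.
function respond(current, button) {
  return phases[current].on[button] ?? current;
}
      </preformat>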
        <sec id="sec-3-1-1">
          <title>4.1. PenPal Introduction</title>
          <p>This phase will start when the user starts the design
task. PenPal will introduce itself and ask the users if
they want to see an inspirational sketch from the AI by
saying "Hi! I am your Creative PenPal. Do you want
me to inspire you?". Users can respond immediately by
pressing the "Inspire me" button or they can keep ideating
by sketching and respond later.</p>
        </sec>
        <sec id="sec-3-1-2">
          <title>4.2. PenPal Generating Sketch and</title>
        </sec>
        <sec id="sec-3-1-3">
          <title>Collecting User Preferences</title>
        </sec>
      </sec>
      <sec id="sec-3-2">
        <title>When the user hits the button ”Inspire me” indicating the desire to see an inspirational sketch, the PenPal will move to the canvas and generate a sketch. The PenPal will ask the user whether they liked the sketch or not. The user</title>
        <p>can reply with the ”Yes” button or the ”No” button. This
phase is for collecting user preferences.</p>
        <sec id="sec-3-2-1">
          <title>4.3. User Liked PenPals Sketch</title>
        </sec>
      </sec>
      <sec id="sec-3-3">
        <title>When users select the ”Yes” button in response to PenPal’s question to determine whether the user liked the sketch or not, it means the sketch inspired the user in</title>
        <p>
          their design ideation. The PenPal will arrive with a happy including 25 males and 25 females as participants. This
face and say, ”I am glad that you liked the sketch! Let me study will use a between-subject study where one group
know if you want to see another inspirational sketch as of participants will test the version with instructing
inan idea”. If The user wants to see an inspiration again, teraction and without any embodied AI character. The
they will select the ”Inspire me” button. other group will test the version with the conversing
interaction and a virtual embodied AI agent. The study
4.4. User Did not Like PenPals Sketch will start with a short pre-study survey to collect some
When users click the ”No” button, indicating that Pen- demographic information about the participants, for
exPal’s generated sketch did not inspire them, PenPal ar- ample, gender, age-range, drawing/ sketching skills, etc.
rives with a sad face and says, ”Sorry that I could not Then, the participant will carry out the design task using
inspire you!” (left side of Figure 4, the gree arrow indi- either one version of the Creative PenPal. The task for
cates transition). Then it suggests the user ask for specific this study is- “Ideate the design of a shopping cart for
types of objects as inspiration by saying, ”Let’s try to be the elderly within 20 minutes. You must include three
more specific about what you want me to inspire with” design inspirations from the AI in the design”. The whole
(Right side of Figure 4). The user can respond witree task will be screen recorded. After the task, the
particioptions, ”Design Task Objects” (as our design task object pants will fill out Creativity Support Index (CSI), which
is shopping cart, the button says ”Shopping Carts”), ”vi- is a well known psycho metric survey, for measuring six
sually similar objects”, or ”conceptually similar objects”. dimensions of creativity: Exploration, Expressiveness,
Visually similar objects have visual structural similarity Immersion, Enjoyment, Results Worth Efort and
Collabas the design task object and conceptually similar objects oration [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ] to evaluate user engagement, collaboration
have similar working mechanism or usage as the design and immersion. After that, a retrospective think-aloud
task object. When the user clicks any of these three will be conducted as the participants watch the
screenbuttons, the PenPal will generate a sketch accordingly. recording video of the task to understand the rationale
behind the user interaction process and user experience.
4.5. User Finished Sketching The study will end with a follow-up semi-structured
interview to determine in depth qualitative data about the
user engagement and overall experience with the AI.
        </p>
      </sec>
      <sec id="sec-3-4">
        <title>When the user finishes the design ideation sketching,</title>
        <p>they let the virtual agent know by clicking the ”Finish
ideation” button. The virtual agent arrives and greets the
user for completing the design ideation task by saying,
”Well done! You did a great job! ”.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>5. Study Protocol</title>
      <sec id="sec-4-1">
        <title>The user experiment will take place virtually. We will use Google Meet to connect with the study participants. The target sample size for the study is 50 participants</title>
      </sec>
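      <p>
        Because the CSI [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] is central to the planned evaluation, the following minimal JavaScript sketch illustrates its standard scoring scheme as we understand it from Cherry and Latulipe: each of the six factors receives two agreement ratings (assumed here to be on a 0-10 scale) plus a weight equal to how often it is chosen in the 15 paired factor comparisons, and the weighted sum is divided by 3 to yield a 0-100 score. All response values below are illustrative placeholders, not study data.
      </p>
      <preformat>
// Illustrative CSI scoring (after Cherry and Latulipe [17]).
const factors = [
  "Exploration", "Expressiveness", "Immersion",
  "Enjoyment", "Results Worth Effort", "Collaboration",
];

// Placeholder responses from one hypothetical participant.
const agreement = {           // two agreement ratings per factor, 0-10 each
  "Exploration": [9, 8], "Expressiveness": [7, 8], "Immersion": [6, 7],
  "Enjoyment": [9, 9], "Results Worth Effort": [8, 7], "Collaboration": [8, 9],
};
const pairWins = {            // wins in the 15 paired comparisons, sum = 15
  "Exploration": 4, "Expressiveness": 2, "Immersion": 1,
  "Enjoyment": 3, "Results Worth Effort": 2, "Collaboration": 3,
};

// CSI = sum(factor agreement sum x paired-comparison count) / 3, range 0-100.
function csiScore() {
  const weighted = factors.reduce(
    (total, f) => total + (agreement[f][0] + agreement[f][1]) * pairWins[f],
    0
  );
  return weighted / 3;
}

console.log(csiScore().toFixed(1)); // "82.0" for the placeholder values
      </preformat>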
    </sec>
    <sec id="sec-5">
      <title>6. Discussion</title>
      <p>
        In the young yet fast-growing field of human-AI co-creativity, attention is needed to design human-centered co-creative systems in which users are engaged in a successful collaborative experience. Interaction design that lets users communicate their preferences to the AI improves the collaborative experience and user attitude towards the AI [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Conversing virtual agents have transitioned into services such as e-commerce, leading to an increased ability for user engagement. In a conversing interaction, users can provide feedback on the AI's contribution, which gives the AI more information about user preferences. Conversing interaction also helps users perceive the AI as a partner rather than a tool. A user's confidence in an AI agent's ability to perform tasks improves when the agent is imbued with embodiment and social behaviors, compared to an agent that depends solely on conversation [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. Embodiment improves the user's perception of an AI agent as a collaborative partner, an entity in its own right. Users also tend to trust an AI's ability more when they can see its presence. Embodiment also helps in designing affective AI, whose feelings are visible in its expressions and gestures. As a young field and new research area, interaction design is rarely discussed in the existing literature despite being a fundamental property of an adequate co-creative system. An adequate interaction model dramatically improves the quality of the collaboration and engages users. Investigating the impact of conversing interaction and AI embodiment for designing effective co-creative systems that engage users is therefore essential.
      </p>
      <p>We developed the prototype of Creative PenPal as an effort to explore the impact of a conversing, embodied co-creative AI agent on user engagement, user perception, and the overall collaborative experience. We describe a study design that will provide insights for designing effective co-creative systems that engage users and improve their collaborative experience with the AI agent. With the insights and results from the study, we will improve the interaction design of Creative PenPal and implement the AI model in the improved prototype. At the time of the workshop, we will have the data and insights from the study, revealing the impact of the interaction model we used, and we will be able to demonstrate the results and insights during the workshop.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>N.</given-names>
            <surname>Davis</surname>
          </string-name>
          ,
          <article-title>Human-computer co-creativity: Blending human and computational creativity</article-title>
          ,
          <source>in: Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment</source>
          , volume
          <volume>9</volume>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>R.</given-names>
            <surname>Louie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Coenen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. Z.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Terry</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. J.</given-names>
            <surname>Cai</surname>
          </string-name>
          ,
          <article-title>Novice-ai music co-creation via ai-steering tools for deep generative models</article-title>
          ,
          <source>in: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>13</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>A.</given-names>
            <surname>Sutcliffe</surname>
          </string-name>
          ,
          <article-title>Designing for user engagement: Aesthetic and attractive user interfaces</article-title>
          ,
          <source>Synthesis lectures on human-centered informatics 2</source>
          (
          <year>2009</year>
          )
          <fpage>1</fpage>
          -
          <lpage>55</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>O.</given-names>
            <surname>Bown</surname>
          </string-name>
          ,
          <article-title>Player responses to a live algorithm: Conceptualising computational creativity without recourse to human comparisons?</article-title>
          , in: ICCC,
          <year>2015</year>
          , pp.
          <fpage>126</fpage>
          -
          <lpage>133</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <article-title>Library of mixed-initiative creative interfaces</article-title>
          , http://mici.codingconduct.cc/ (Accessed on 05/31/
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>S.</given-names>
            <surname>Deterding</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Hook</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Fiebrink</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Gillies</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Gow</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Akten</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Smith</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Liapis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Compton</surname>
          </string-name>
          ,
          <article-title>Mixedinitiative creative interfaces</article-title>
          ,
          <source>in: Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems</source>
          ,
          <year>2017</year>
          , pp.
          <fpage>628</fpage>
          -
          <lpage>635</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Rogers</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Sharp</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Preece</surname>
          </string-name>
          ,
          <article-title>Interaction design: beyond human-computer interaction</article-title>
          , John Wiley &amp; Sons,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>P.</given-names>
            <surname>Isola</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.-Y.</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. A.</given-names>
            <surname>Efros</surname>
          </string-name>
          ,
          <article-title>Image-to-image translation with conditional adversarial networks</article-title>
          ,
          <source>in: Proceedings of the IEEE conference on computer vision and pattern recognition</source>
          ,
          <year>2017</year>
          , pp.
          <fpage>1125</fpage>
          -
          <lpage>1134</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>C.</given-names>
            <surname>Gutwin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Greenberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Roseman</surname>
          </string-name>
          ,
          <article-title>Workspace awareness in real-time distributed groupware: Framework, widgets, and evaluation</article-title>
          ,
          <source>in: People and Computers XI</source>
          , Springer,
          <year>1996</year>
          , pp.
          <fpage>281</fpage>
          -
          <lpage>298</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>G.</given-names>
            <surname>Hoffman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Weinberg</surname>
          </string-name>
          ,
          <article-title>Interactive improvisation with a robotic marimba player</article-title>
          ,
          <source>Autonomous Robots</source>
          <volume>31</volume>
          (
          <year>2011</year>
          )
          <fpage>133</fpage>
          -
          <lpage>153</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>N.</given-names>
            <surname>Bryan-Kinns</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Hamilton</surname>
          </string-name>
          , Identifying mutual engagement,
          <source>Behaviour &amp; Information Technology</source>
          <volume>31</volume>
          (
          <year>2012</year>
          )
          <fpage>101</fpage>
          -
          <lpage>125</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>M.</given-names>
            <surname>Yee-King</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>d'Inverno</surname>
          </string-name>
          ,
          <article-title>Experience driven design of creative systems</article-title>
          (
          <year>2016</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>J.</given-names>
            <surname>Preece</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Sharp</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Rogers</surname>
          </string-name>
          ,
          <article-title>Interaction design: beyond human-computer interaction</article-title>
          , John Wiley &amp; Sons,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>K.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Boelling</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Haesler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Bailenson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Bruder</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. F.</given-names>
            <surname>Welch</surname>
          </string-name>
          ,
          <article-title>Does a digital assistant need a body? the influence of visual embodiment and social behavior on the perception of intelligent virtual agents in ar</article-title>
          ,
          <source>in: 2018 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)</source>
          , IEEE,
          <year>2018</year>
          , pp.
          <fpage>105</fpage>
          -
          <lpage>114</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>G.</given-names>
            <surname>Bente</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Rüggenberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. C.</given-names>
            <surname>Krämer</surname>
          </string-name>
          ,
          <article-title>Social presence and interpersonal trust in avatar-based, collaborative net-communications</article-title>
          ,
          <source>in: Proceedings of the Seventh Annual International Workshop on Presence</source>
          ,
          <year>2004</year>
          , pp.
          <fpage>54</fpage>
          -
          <lpage>61</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>C.</given-names>
            <surname>Clavel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Cafaro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Campano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Pelachaud</surname>
          </string-name>
          ,
          <article-title>Fostering user engagement in face-to-face humanagent interactions: a survey</article-title>
          ,
          <source>in: Toward Robotic Socially Believable Behaving Systems - Volume II</source>
          , Springer,
          <year>2016</year>
          , pp.
          <fpage>93</fpage>
          -
          <lpage>120</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>E.</given-names>
            <surname>Cherry</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Latulipe</surname>
          </string-name>
          ,
          <article-title>Quantifying the creativity support of digital tools through the creativity support index</article-title>
          ,
          <source>ACM Transactions on Computer-Human Interaction (TOCHI) 21</source>
          (
          <year>2014</year>
          )
          <fpage>1</fpage>
          -
          <lpage>25</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>P.</given-names>
            <surname>Karimi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Rezwana</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Siddiqui</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. L.</given-names>
            <surname>Maher</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Dehbozorgi</surname>
          </string-name>
          ,
          <article-title>Creative sketching partner: an analysis of human-ai co-creativity</article-title>
          ,
          <source>in: Proceedings of the 25th International Conference on Intelligent User Interfaces</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>221</fpage>
          -
          <lpage>230</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>