<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Creative PenPal: A Virtual Embodied Conversational AI Agent to Improve User Engagement and Collaborative Experience in Human-AI Co-Creative Design Ideation</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Jeba</forename><surname>Rezwana</surname></persName>
							<email>jrezwana@uncc.edu</email>
							<affiliation key="aff0">
								<orgName type="institution">University of North Carolina at Charlotte</orgName>
								<address>
									<addrLine>9201 University City Blvd</addrLine>
									<postCode>28223</postCode>
									<settlement>Charlotte</settlement>
									<region>NC</region>
									<country>US</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Mary</forename><forename type="middle">Lou</forename><surname>Maher</surname></persName>
							<email>m.maher@uncc.edu</email>
							<affiliation key="aff0">
								<orgName type="institution">University of North Carolina at Charlotte</orgName>
								<address>
									<addrLine>9201 University City Blvd</addrLine>
									<postCode>28223</postCode>
									<settlement>Charlotte</settlement>
									<region>NC</region>
									<country>US</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Nicholas</forename><surname>Davis</surname></persName>
							<email>ndavis64@uncc.edu</email>
							<affiliation key="aff0">
								<orgName type="institution">University of North Carolina at Charlotte</orgName>
								<address>
									<addrLine>9201 University City Blvd</addrLine>
									<postCode>28223</postCode>
									<settlement>Charlotte</settlement>
									<region>NC</region>
									<country>US</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Creative PenPal: A Virtual Embodied Conversational AI Agent to Improve User Engagement and Collaborative Experience in Human-AI Co-Creative Design Ideation</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">38500EF574011F97EDD8047B3D31EB91</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T08:14+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Human AI Co-creation</term>
					<term>User Engagement</term>
					<term>Virtual Embodied AI</term>
					<term>Conversational AI</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>In recent years, researchers have designed many co-creative systems with powerful AI, yet some fail to engage users because of the poor quality of the collaboration and interaction. Most existing co-creative systems use an instructing interaction, in which users communicate with the AI only by issuing instructions. In this paper, we present a prototype of a co-creative system for design ideation, Creative PenPal, which utilizes an interaction model combining human-AI conversing interaction via text with a virtual embodiment of the AI character. We hypothesize that this interaction model will improve user engagement, user perception of the AI, and the collaborative experience. We describe a study design to investigate the impact of this interaction model on user engagement and the overall collaborative experience. By the time of the workshop, we will have the data and insights from the study.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>AI agents are becoming part of our everyday lives thanks to advances in artificial intelligence technologies. Human-AI co-creativity involves a human and an AI collaborating on creative tasks as partners <ref type="bibr" target="#b0">[1]</ref>. Rather than being perceived as a support tool, the AI agent in a co-creative system should be regarded as a co-equal partner. This field has the potential to transform how people perceive and interact with AI. A study showed that AI ability alone does not ensure a positive collaborative experience with the AI <ref type="bibr" target="#b1">[2]</ref>. In recent years, researchers have designed many co-creative systems with powerful AI ability, yet users sometimes fail to maintain their interest and engagement while collaborating with the AI because of the quality of the collaboration and interaction. The literature asserts that user engagement is associated with the way users interact with a system <ref type="bibr" target="#b2">[3]</ref>. Interaction design is often a neglected topic in the co-creativity literature despite being a fundamental property of co-creative systems. Bown asserted that the success of a creative system's collaborative role should be further investigated through interaction design, as interaction plays a key role in the creative process of co-creative systems <ref type="bibr" target="#b3">[4]</ref>. Therefore, in this young field, many areas of interaction design remain to be explored for designing effective co-creative systems that engage users and provide a better collaborative experience.</p><p>To investigate trends in the interaction design of existing co-creative systems, we utilized the archival website "Library of Mixed-Initiative Creative Interfaces" (LMICI), which archives 74 existing co-creative systems <ref type="bibr" target="#b4">[5]</ref>. 
Angie Spoto and Natalia Oleynik created this archive after a workshop on mixed-initiative interfaces led by Deterding et al. in 2017 <ref type="bibr" target="#b5">[6,</ref><ref type="bibr" target="#b4">5]</ref>. The archive provides the corresponding literature and other relevant information for each system. We analyzed the interaction designs of the co-creative systems in the archive. Most of the co-creative systems use an instructing interaction <ref type="bibr" target="#b6">[7]</ref>, where users provide instructions to the system using buttons, sliders, or text to communicate directly with the AI (beyond communicating through the creative product). However, with buttons and sliders, users can communicate with the AI in only a very constrained and minimal way in most systems. Very few systems use text, voice, or embodied communication for direct user-to-AI communication during co-creation, such as providing information or giving feedback to the AI. For example, Image to Image <ref type="bibr" target="#b7">[8]</ref> is a co-creative system that converts a line drawing of a particular object from the user into a photo-realistic image. The user interface has only one button, which users click to tell the AI to convert the drawing. Other than the button, there is no way of communicating with the AI to provide information, suggestions, or feedback. In human collaboration, collaborators communicate to provide feedback and convey important information to each other, and this communication is a major component of the mechanics of co-creation <ref type="bibr" target="#b8">[9]</ref>. The literature on human-AI co-creation indicates that embodied communication improves coordination between the human and the AI <ref type="bibr" target="#b9">[10]</ref>. 
Additionally, the literature asserts that a communication channel for conversation between co-creators, beyond communicating through the shared creative product, improves user engagement in human creative collaboration <ref type="bibr" target="#b10">[11]</ref>. These findings led us to investigate the impact of embodied communication from the AI, and of conversation between the human and the AI, on user engagement and collaborative experience in human-AI co-creativity. Our research questions emerged from the observation that most existing co-creative systems use an instructing interaction type, which involves one-way communication from the human to the AI. In this work, we investigate the impact of conversing interaction and AI embodiment on user engagement, user perception of the AI, and the overall collaborative experience. Our two research questions are: • How do AI embodiment and conversing interaction influence user engagement? • How do AI embodiment and conversing interaction influence user perception of the AI agent as a collaborative partner and the overall collaborative experience?</p><p>To investigate these research questions, we developed a prototype of a co-creative system named Creative PenPal in which the user and the AI collaborate on a design ideation task. Users generate ideas for the design of a particular object by sketching on a canvas, and the AI contributes to the ideation by showing different inspirational sketches. Creative PenPal utilizes a conversing interaction for communication between the human and the AI, along with a virtual embodied character for the AI agent. We describe the study design in this paper. By the time of the workshop, we will have the data and insights from the study.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Related Work</head><p>Louie et al. identified that AI ability alone does not ensure a positive collaborative experience with the AI <ref type="bibr" target="#b1">[2]</ref>. Bown asserted that the success of a creative system's collaborative role should be further investigated in terms of interaction design, as interaction plays a key role in the creative process of co-creative systems <ref type="bibr" target="#b3">[4]</ref>. Later, Yee-King and d'Inverno argued for a stronger focus on the user experience, suggesting a need for further integration of interaction design practice into human-AI co-creativity research <ref type="bibr" target="#b11">[12]</ref>.</p><p>Interaction types are the ways a user interacts with a product or application <ref type="bibr" target="#b12">[13]</ref>. In an instructing interaction, users issue instructions to a system. This can be done in many ways, including typing commands, selecting options from menus or a multitouch screen, pressing buttons, or using function keys. In contrast, in a conversing interaction, users have a dialogue with a system: they can speak via an interface or type questions or answers, to which the system replies via text or speech output <ref type="bibr" target="#b12">[13]</ref>. Conversational agents have spread into multiple industries, with an increasing ability to engage users in intelligent conversation.</p><p>The literature asserts that embodied communication aids synchronization and coordination in improvisational human-computer co-creativity <ref type="bibr" target="#b9">[10]</ref>. Being able to converse with each other increases the engagement level in human creative collaboration <ref type="bibr" target="#b10">[11]</ref>. 
A user's confidence in an AI agent's ability to perform tasks improves when the agent is imbued with embodiment and social behaviors, compared to an agent relying on conversation alone <ref type="bibr" target="#b13">[14]</ref>. Bente et al. reported that embodied telepresent communication improved both social presence and interpersonal trust in remote collaboration settings with a high level of nonverbal activity <ref type="bibr" target="#b14">[15]</ref>.</p><p>User engagement with virtual embodied conversational agents can be measured via user self-reports; by monitoring the user's responses and tracking the user's body postures, head movements, and facial expressions during the interaction; or by manually logging behavioral responses of user experience <ref type="bibr" target="#b15">[16]</ref>. Carroll and Latulipe proposed a quantitative psychometric survey, the Creativity Support Index (CSI), to assess a tool's creativity support by measuring six dimensions of creativity via self-reports: Exploration, Expressiveness, Immersion, Enjoyment, Results Worth Effort, and Collaboration <ref type="bibr" target="#b16">[17]</ref>.</p></div>
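As an illustration of how the CSI described above yields a single score, the sketch below follows the published formulation: each factor's score is the sum of its two agreement items, weighted by how often the factor was chosen among the 15 paired comparisons, with the weighted sum divided by 3 to give a 0-100 scale. The field names and the assumed 0-20 range per factor are our own assumptions, not part of the survey instrument itself.

```javascript
// Hypothetical CSI scoring sketch (names and item scale are assumptions).
const FACTORS = ["Exploration", "Expressiveness", "Immersion",
                 "Enjoyment", "ResultsWorthEffort", "Collaboration"];

function csiScore(itemSums, pairCounts) {
  // itemSums: factor -> sum of its two agreement items (0..20 assumed)
  // pairCounts: factor -> times chosen among the 15 paired comparisons (0..5)
  let weighted = 0;
  for (const f of FACTORS) {
    weighted += (itemSums[f] || 0) * (pairCounts[f] || 0);
  }
  // Counts sum to 15, so the maximum is 20 * 15 / 3 = 100.
  return weighted / 3;
}
```

With all items at their maximum and any valid distribution of paired-comparison counts, the score reaches 100, which is why the divisor of 3 produces a convenient percentage-like scale.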
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Interface</head><p>Creative PenPal is an interactive prototype, created with JavaScript, that has all the interaction components except the back-end AI model. We selected a collection of sketches as the database to create a seamless experience that mimics an actual implementation of the AI model. The sketch generation is automated: the system selects sketches from the collection. We have two versions of the Creative PenPal prototype so we can compare user engagement and collaborative experience between them. The original version uses a conversing interaction and a virtual embodied AI (see Figure <ref type="figure" target="#fig_0">1</ref>). The virtual embodied AI character, a pencil, is shown in section A of Figure <ref type="figure" target="#fig_0">1</ref>. We refer to the AI character as PenPal in the rest of the paper. Section B is where the conversation between PenPal and the user takes place via text and buttons. The design task is displayed in section C. The user and the AI collaborate on a design ideation task in which both collaborators generate ideas for the design of an object as sketches. Users design the specified object by sketching on the canvas shown in section F. Users can undo a stroke using the "Undo Previous Sketch" button and start the design ideation over using the "Clear the canvas" button. When users hit the "Inspire me" button shown in section B, the virtual AI character shows an inspirational sketch of a conceptually similar object, one that has a similar working mechanism or usage to the design task object, on its canvas shown in section G. Previous work on co-creative design ideation <ref type="bibr" target="#b17">[18]</ref> showed that users were more inspired by conceptually similar objects than by visually similar objects, which share structural similarity with the design task object. 
Users can also ask for visually similar objects or sketches of the design task object itself to get inspiration by saying they did not like the conceptually similar object (described in the next section). Section E shows the name of the object depicted in the PenPal-generated sketch. The other version uses an instructing interaction, where users instruct the AI using buttons, without AI embodiment (Figure <ref type="figure" target="#fig_1">2</ref>). We will use these two versions to compare the impact of the two interaction designs on user engagement and collaborative experience with an AI.</p></div>
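Since the prototype has no back-end AI model, the automated sketch generation described above amounts to retrieving pre-collected sketches by inspiration category. The following is a minimal sketch of that logic; the database contents, category names, and function names are hypothetical, not taken from the Creative PenPal source.

```javascript
// Hypothetical sketch database keyed by inspiration category.
// "task-object" = the design task object itself; "conceptual" = similar
// working mechanism or usage; "visual" = similar structure.
const sketchDatabase = [
  { name: "shopping cart", category: "task-object", file: "cart1.png" },
  { name: "wheelbarrow", category: "conceptual", file: "wheelbarrow.png" },
  { name: "stroller", category: "conceptual", file: "stroller.png" },
  { name: "wire basket", category: "visual", file: "basket.png" },
];

const shown = new Set(); // avoid repeating a sketch within one session

function selectSketch(category = "conceptual") {
  const candidates = sketchDatabase.filter(
    (s) => s.category === category && !shown.has(s.file)
  );
  if (candidates.length === 0) return null; // collection exhausted
  const pick = candidates[Math.floor(Math.random() * candidates.length)];
  shown.add(pick.file);
  return pick;
}
```

Defaulting to conceptually similar objects mirrors the finding cited above that they inspired users more than visually similar ones; the other categories are reached only when the user asks for them.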
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Interaction Model</head><p>For the interaction model, we chose a conversing interaction. The conversation with the virtual embodied AI is kept simple so that the user can go deeper into the ideation process without any interruption to the design flow. The embodied virtual agent shows some affective characteristics: for example, it appears happy when the user likes its contribution and sad when the user does not. The conversation is divided into five situational phases, demonstrated in Figure <ref type="figure" target="#fig_2">3</ref>. Each phase includes the embodied state of the AI and the conversational interaction between the user and PenPal. In Figure <ref type="figure" target="#fig_2">3</ref>, text without a comment bubble represents the embodied state of the AI. Text with a comment bubble represents a dialogue of the user or the AI, and an icon indicates which dialogue belongs to whom. Different responses from the user initiate another phase, shown using arrows in Figure <ref type="figure" target="#fig_2">3</ref>. Where the user can respond with different options, a "/" sign is used in the figure.</p></div>
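The five situational phases and their user-triggered transitions form a small finite-state machine. The sketch below is our own illustrative encoding of the phases described in the following subsections, not code from the prototype; phase names, button labels, and the `transition` helper are assumptions.

```javascript
// Illustrative state machine for the five conversation phases.
const phases = {
  INTRO: {
    say: "Hi! I am your Creative PenPal. Do you want me to inspire you?",
    on: { "Inspire me": "GENERATING" },
  },
  GENERATING: {
    say: "Did my sketch inspire you?", // PenPal collects user preference
    on: { "Yes": "LIKED", "No": "NOT_LIKED" },
  },
  LIKED: {
    mood: "happy",
    on: { "Inspire me": "GENERATING", "Finish ideation": "FINISHED" },
  },
  NOT_LIKED: {
    mood: "sad",
    say: "Sorry that I could not inspire you!",
    // The user narrows the inspiration category; all three options
    // lead back to sketch generation.
    on: {
      "Shopping Carts": "GENERATING",
      "Visually similar objects": "GENERATING",
      "Conceptually similar objects": "GENERATING",
    },
  },
  FINISHED: { say: "Well done! You did a great job!", on: {} },
};

function transition(current, userAction) {
  const next = phases[current].on[userAction];
  return next || current; // ignore actions not valid in this phase
}
```

Encoding the model this way makes each arrow in the conversation figure a single entry in a phase's `on` table, so adding a new user response is a one-line change.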
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">PenPal Introduction</head><p>This phase starts when the user begins the design task. PenPal introduces itself and asks the user whether they want to see an inspirational sketch from the AI by saying, "Hi! I am your Creative PenPal. Do you want me to inspire you?". Users can respond immediately by pressing the "Inspire me" button, or they can keep ideating by sketching and respond later.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">PenPal Generating Sketch and Collecting User Preferences</head><p>When the user hits the "Inspire me" button, indicating the desire to see an inspirational sketch, PenPal moves to the canvas and generates a sketch. PenPal then asks the user whether they liked the sketch. The user can reply with the "Yes" button or the "No" button. This phase is for collecting user preferences.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3.">User Liked PenPal's Sketch</head><p>When users select the "Yes" button in response to PenPal's question about whether they liked the sketch, it means the sketch inspired the user in </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.4.">User Did not Like PenPal's Sketch</head><p>When users click the "No" button, indicating that PenPal's generated sketch did not inspire them, PenPal arrives with a sad face and says, "Sorry that I could not inspire you!" (left side of Figure <ref type="figure" target="#fig_4">5</ref>; the green arrow indicates the transition). Then it suggests the user ask for specific types of objects as inspiration by saying, "Let's try to be more specific about what you want me to inspire with" (right side of Figure <ref type="figure" target="#fig_4">5</ref>). The user can respond with three options: "Design Task Objects" (as our design task object is a shopping cart, the button says "Shopping Carts"), "Visually similar objects", or "Conceptually similar objects". Visually similar objects share visual structural similarity with the design task object, and conceptually similar objects have a similar working mechanism or usage to the design task object. When the user clicks any of these three buttons, PenPal generates a sketch accordingly.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.5.">User Finished Sketching</head><p>When the user finishes the design ideation sketching, they let the virtual agent know by clicking the "Finish ideation" button. The virtual agent arrives and congratulates the user on completing the design ideation task by saying, "Well done! You did a great job!".</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Study Protocol</head><p>The user experiment will take place virtually. We will use Google Meet to connect with the study participants.</p><p>The target sample size for the study is 50 participants, including 25 males and 25 females. The study will use a between-subjects design in which one group of participants tests the version with the instructing interaction and without an embodied AI character, and the other group tests the version with the conversing interaction and a virtual embodied AI agent. The study will start with a short pre-study survey to collect demographic information about the participants, for example, gender, age range, and drawing/sketching skills. Then, the participant will carry out the design task using one of the two versions of Creative PenPal. The task for this study is: "Ideate the design of a shopping cart for the elderly within 20 minutes. You must include three design inspirations from the AI in the design." The whole task will be screen recorded. After the task, the participants will fill out the Creativity Support Index (CSI), a well-known psychometric survey measuring six dimensions of creativity: Exploration, Expressiveness, Immersion, Enjoyment, Results Worth Effort, and Collaboration <ref type="bibr" target="#b16">[17]</ref>, to evaluate user engagement, collaboration, and immersion. After that, a retrospective think-aloud will be conducted as the participants watch the screen-recording video of the task, to understand the rationale behind the user interaction process and user experience. The study will end with a follow-up semi-structured interview to gather in-depth qualitative data about user engagement and the overall experience with the AI.</p></div>
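One way to keep the between-subjects design balanced across the two gender groups described above is a per-group round-robin assignment. This is an illustrative sketch only; the paper does not specify its assignment procedure, and the function and condition names here are our assumptions.

```javascript
// Hypothetical balanced assignment of participants to the two conditions,
// round-robin within each gender group so each group splits evenly.
function assignConditions(participants) {
  // participants: [{ id, gender }]
  const conditions = ["instructing", "conversing-embodied"];
  const counters = {}; // per-gender counter driving the round-robin
  return participants.map((p) => {
    counters[p.gender] = (counters[p.gender] || 0) + 1;
    return { ...p, condition: conditions[(counters[p.gender] - 1) % 2] };
  });
}
```

In practice, participants would arrive in an arbitrary order, so a real study might instead pre-shuffle condition labels within each group; the round-robin above simply guarantees the 25/25 gender quota splits evenly across conditions.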
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Discussion</head><p>In the young yet fast-growing field of human-AI co-creativity, attention is needed to design human-centered co-creative systems in which users are engaged in a successful collaborative experience. An interaction design in which users can communicate their preferences to the AI improves the collaborative experience and user attitudes towards the AI <ref type="bibr" target="#b1">[2]</ref>. Conversing virtual agents have moved into services such as e-commerce, with an increasing ability to engage users. In a conversing interaction, users can provide feedback on the AI's contribution, which gives the AI more information about user preferences. Conversing interaction also helps users perceive the AI as a partner rather than a tool. A user's confidence in an AI agent's ability to perform tasks improves when the agent is imbued with embodiment and social behaviors, compared to an agent relying on conversation alone <ref type="bibr" target="#b13">[14]</ref>. Embodiment improves user perception of the AI agent as a collaborative partner, a distinct entity. Users also tend to trust an AI's ability more when they can see its presence. Embodiment also enables the design of affective AI, where the AI's feelings are visible in its expressions or gestures. Although interaction design is a fundamental property of an effective co-creative system, it is rarely discussed in the existing literature of this young field. An adequate interaction model dramatically improves the quality of the collaboration and engages users.</p><p>Investigating the impact of conversing interaction and AI embodiment for designing effective co-creative systems that engage users is essential. 
We developed the prototype of Creative PenPal to explore the impact of a conversing, embodied co-creative AI agent on user engagement, user perception, and the overall collaborative experience. We describe a study design that will provide insights for designing effective co-creative systems that engage users and improve their collaborative experience with the AI agent. With the insights and results from the study, we will improve the interaction design of Creative PenPal and implement the AI model in the improved prototype. By the time of the workshop, we will have the data and insights from the study, revealing the impact of the interaction model we used, and we will be able to present the results and insights during the workshop.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Original Creative PenPal Interface with AI Embodiment and Conversing Interaction</figDesc><graphic coords="3,99.71,84.19,395.86,222.67" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Creative PenPal Interface with Instructing Interaction and without AI Embodiment</figDesc><graphic coords="4,99.71,84.19,395.86,189.63" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Conversation Model between PenPal and the User</figDesc><graphic coords="4,99.21,303.12,396.85,283.46" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: PenPal Collecting User Preference</figDesc><graphic coords="5,172.63,84.19,250.01,150.29" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 5 :</head><label>5</label><figDesc>Figure 5: User Did not Like PenPal's Sketch</figDesc><graphic coords="6,89.29,84.19,416.70,154.68" type="bitmap" /></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Human-computer co-creativity: Blending human and computational creativity</title>
		<author>
			<persName><forename type="first">N</forename><surname>Davis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment</title>
				<meeting>the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment</meeting>
		<imprint>
			<date type="published" when="2013">2013</date>
			<biblScope unit="volume">9</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Novice-ai music co-creation via ai-steering tools for deep generative models</title>
		<author>
			<persName><forename type="first">R</forename><surname>Louie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Coenen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">Z</forename><surname>Huang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Terry</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">J</forename><surname>Cai</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems</title>
				<meeting>the 2020 CHI Conference on Human Factors in Computing Systems</meeting>
		<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="1" to="13" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Designing for user engagement: Aesthetic and attractive user interfaces</title>
		<author>
			<persName><forename type="first">A</forename><surname>Sutcliffe</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Synthesis lectures on human-centered informatics</title>
		<imprint>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="page" from="1" to="55" />
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Player responses to a live algorithm: Conceptualising computational creativity without recourse to human comparisons?</title>
		<author>
			<persName><forename type="first">O</forename><surname>Bown</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ICCC</title>
		<imprint>
			<biblScope unit="page" from="126" to="133" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
	<ptr target="http://mici.codingconduct.cc/" />
		<title level="m">Library of mixed-initiative creative interfaces</title>
				<imprint>
			<date type="published" when="2020">Accessed on 05/31/2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Mixedinitiative creative interfaces</title>
		<author>
			<persName><forename type="first">S</forename><surname>Deterding</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Hook</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Fiebrink</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Gillies</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Gow</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Akten</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Smith</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Liapis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Compton</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems</title>
				<meeting>the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems</meeting>
		<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="628" to="635" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<title level="m" type="main">Interaction design: beyond human-computer interaction</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Rogers</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Sharp</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Preece</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2011">2011</date>
			<publisher>John Wiley &amp; Sons</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Imageto-image translation with conditional adversarial networks</title>
		<author>
			<persName><forename type="first">P</forename><surname>Isola</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J.-Y</forename><surname>Zhu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">A</forename><surname>Efros</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the IEEE conference on computer vision and pattern recognition</title>
				<meeting>the IEEE conference on computer vision and pattern recognition</meeting>
		<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="1125" to="1134" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Workspace awareness in real-time distributed groupware: Framework, widgets, and evaluation</title>
		<author>
			<persName><forename type="first">C</forename><surname>Gutwin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Greenberg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Roseman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">People and Computers XI</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="1996">1996</date>
			<biblScope unit="page" from="281" to="298" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Interactive improvisation with a robotic marimba player</title>
		<author>
			<persName><forename type="first">G</forename><surname>Hoffman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Weinberg</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Autonomous Robots</title>
		<imprint>
			<biblScope unit="volume">31</biblScope>
			<biblScope unit="page" from="133" to="153" />
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Identifying mutual engagement</title>
		<author>
			<persName><forename type="first">N</forename><surname>Bryan-Kinns</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Hamilton</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Behaviour &amp; Information Technology</title>
		<imprint>
			<biblScope unit="volume">31</biblScope>
			<biblScope unit="page" from="101" to="125" />
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<author>
			<persName><forename type="first">M</forename><surname>Yee-King</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>D'Inverno</surname></persName>
		</author>
		<title level="m">Experience driven design of creative systems</title>
				<imprint>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<title level="m" type="main">Interaction design: beyond human-computer interaction</title>
		<author>
			<persName><forename type="first">J</forename><surname>Preece</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Sharp</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Rogers</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2015">2015</date>
			<publisher>John Wiley &amp; Sons</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Does a digital assistant need a body? the influence of visual embodiment and social behavior on the perception of intelligent virtual agents in ar</title>
		<author>
			<persName><forename type="first">K</forename><surname>Kim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Boelling</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Haesler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Bailenson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Bruder</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">F</forename><surname>Welch</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE International Symposium on Mixed and Augmented Reality (ISMAR)</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="105" to="114" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Social presence and interpersonal trust in avatar-based, collaborative net-communications</title>
		<author>
			<persName><forename type="first">G</forename><surname>Bente</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Rüggenberg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">C</forename><surname>Krämer</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Seventh Annual International Workshop on Presence</title>
				<meeting>the Seventh Annual International Workshop on Presence</meeting>
		<imprint>
			<date type="published" when="2004">2004</date>
			<biblScope unit="page" from="54" to="61" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Fostering user engagement in face-to-face human-agent interactions: a survey</title>
		<author>
			<persName><forename type="first">C</forename><surname>Clavel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Cafaro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Campano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Pelachaud</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Toward Robotic Socially Believable Behaving Systems-Volume II</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="93" to="120" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Quantifying the creativity support of digital tools through the creativity support index</title>
		<author>
			<persName><forename type="first">E</forename><surname>Cherry</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Latulipe</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM Transactions on Computer-Human Interaction (TOCHI)</title>
		<imprint>
			<biblScope unit="volume">21</biblScope>
			<biblScope unit="page" from="1" to="25" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Creative sketching partner: an analysis of human-ai co-creativity</title>
		<author>
			<persName><forename type="first">P</forename><surname>Karimi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Rezwana</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Siddiqui</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">L</forename><surname>Maher</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Dehbozorgi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 25th International Conference on Intelligent User Interfaces</title>
				<meeting>the 25th International Conference on Intelligent User Interfaces</meeting>
		<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="221" to="230" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
