<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Addressing Present Bias in Movie Recommender Systems and Beyond</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Kai</forename><surname>Lukoff</surname></persName>
							<email>kai1@uw.edu</email>
							<affiliation key="aff0">
								<orgName type="institution">University of Washington</orgName>
								<address>
									<settlement>Seattle</settlement>
									<region>WA</region>
									<country key="US">USA</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Addressing Present Bias in Movie Recommender Systems and Beyond</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">09C5E19CB3B5AA54715A037C9A2B0A35</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T08:13+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>present bias</term>
					<term>cognitive bias</term>
					<term>algorithmic bias</term>
					<term>recommender systems</term>
					<term>digital wellbeing</term>
					<term>movies</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Present bias leads people to choose smaller immediate rewards over larger rewards in the future. Recommender systems often reinforce present bias because they rely predominantly upon what people have done in the past to recommend what they should do in the future. How can recommender systems overcome this present bias to recommend items in ways that match users' aspirations? Our workshop position paper presents the motivation and design for a user study to address this question in the domain of movies. We plan to ask Netflix users to rate movies that they have watched in the past for the long-term rewards that these movies provided (e.g., memorable or meaningful experiences). We will then evaluate how well long-term rewards can be predicted using existing data (e.g., movie critic ratings). We hope to receive feedback on this study design from other participants at the HUMANIZE workshop and spark conversations about ways to address present bias in recommender systems.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>People often select smaller immediate rewards over larger rewards in the future, a phenomenon that is known as present bias or time discounting. This applies to decisions such as what snack to eat <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b1">2]</ref>, how much to save for retirement <ref type="bibr" target="#b2">[3]</ref>, or which movies to watch <ref type="bibr" target="#b1">[2]</ref>. For example, when people choose a movie to watch this evening, they often choose guilty pleasures like The Fast and The Furious, which are enjoyable in the moment but quickly forgotten. By contrast, when they choose a movie to watch next week, they are more likely to choose films that are challenging but meaningful, such as Schindler's List <ref type="bibr" target="#b1">[2]</ref>.</p><p>Recommender systems (RS), algorithmic systems that predict the preference a user would give to an item, often reinforce present bias. Today, the dominant paradigm of recommender systems is behaviorism: recommendations are selected based on behavioral traces ("what users do") and largely neglect explicit preferences ("what users say") <ref type="bibr" target="#b3">[4]</ref>. Since "what users do" reflects a present bias, RS that rely upon such actions to train their recommendations will prioritize items that offer high short-term rewards but low long-term rewards. In this way, recommender systems may reinforce what the current self wants rather than helping people reach their ideal self <ref type="bibr" target="#b4">[5]</ref>.</p><p>This position paper for the HUMANIZE workshop proposes a study design to address these topics in the domain of movies. In Study 1, a survey of Netflix users, we investigate: How can a RS elicit ratings of the rather academic concept of "long-term rewards" from ordinary users? 
And can long-term rewards be predicted based on existing data (e.g., movie critic ratings)? In Study 2, a participatory design exercise with a movie RS, we ask: How do users want a RS to balance short-term and long-term rewards? And what controls would users like to have over such a RS?</p><p>We expect that our eventual findings will also inform the design of recommender systems that address present bias in other domains such as news, food, and fitness.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Related Work</head><p>The social psychologist Daniel Kahneman describes people as having both an experiencing self, who prefers short-term rewards like pleasure, and a remembering self, who prefers long-term rewards like meaningful experiences <ref type="bibr" target="#b5">[6]</ref>. Lyngs et al. describe three different approaches to the thorny question of how to measure a user's "true preferences" <ref type="bibr" target="#b4">[5]</ref>. The first approach aligns with the experiencing self, the second with the remembering self, and the third with the wisdom of the crowd.</p><p>The first approach follows the experiencing self and asserts that what users do is what they really want, which many in Silicon Valley push one step further to what we can get users to do is what they really want <ref type="bibr" target="#b6">[7]</ref>. Social media that are financed by advertising are "compelled to find ways to keep users engaged for as long as possible" <ref type="bibr" target="#b7">[8]</ref>. To achieve this, social media services often give the experiencing self exactly what it wants, knowing that it will override the preferences of the remembering self and lead the user to stay engaged for longer than they had intended.</p><p>The second approach prioritizes the remembering self, calling for systems to prompt the user to reflect on their ideal self. In this vein, Slovak et al. propose designing to help users reflect upon how they wish to transform their behavior <ref type="bibr" target="#b8">[9]</ref>. Lukoff et al. previously explored how experience sampling can be used to measure how meaningful people find their interactions with smartphone apps immediately after use <ref type="bibr" target="#b9">[10]</ref>. However, building such reflection into RSs remains a major challenge because it is unclear how and when a system should ask a user about the "long-term rewards" of an experience. 
It may be that the common approach of asking users to rate items on a "5-star" scale reflects a combination of short-term and long-term rewards, and that a different prompt is required to capture evaluations of long-term rewards more specifically. It is also an open question how well such long-term rewards can be inferred from existing data.</p><p>The third perspective leverages the wisdom of the crowd by using the collective elicited preferences of similar users with more experience to make recommendations. Recommender systems today tend to use the "behavior of the crowd" as input into their models, in the form of behavioral data of similar users, but largely neglect elicited preferences <ref type="bibr" target="#b3">[4]</ref>.</p><p>Finally, Ekstrand and Willemsen propose participatory design as a general corrective to the behaviorist bias of recommender systems <ref type="bibr" target="#b3">[4]</ref>. Harambam et al. explored using participatory methods to evaluate a recommender system for news, suggesting that giving users control might mitigate filter bubbles in news consumption <ref type="bibr" target="#b10">[11]</ref>. Participatory design is a promising way to investigate how users want a RS to balance short-term and long-term rewards and the controls they would like to have.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Proposed Study Design</head><p>In what follows, we propose a study design to better understand how to measure the long-term rewards of items in the context of movie recommendations. We hope to receive feedback on this study design from other participants at the HUMANIZE workshop and prompt conversations about ways to address present bias in recommender systems more broadly.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Study 1: Eliciting user preferences for long-term rewards</head><p>Our first study is a survey of Netflix users that we are currently piloting, which addresses two research questions:</p><p>• RQ 1a: What wording should be used to ask users to rate the "short-term rewards" and "long-term rewards" of a movie? In other words, what wording captures the right construct and makes sense to users?</p><p>• RQ 1b: How well can a recommender system predict the long-term rewards of a movie for an individual using data other than explicit user ratings?</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.1.">Study 1 Methods</head><p>We will ask Netflix users who watch at least one movie per month to download their past viewing history and share it with us. We will ask them to rate 30 movies: 10 watched in the past year, 10 watched 1-2 years ago, and 10 watched 2-3 years ago. Participants will rate each movie for short-term reward, long-term reward, and other constructs that might be correlated with these rewards (e.g., meaningfulness, memorability). The current wording of our questions is:</p><p>• For short-term rewards: How rewarding was this movie while you were watching it?</p><p>• For long-term rewards: How rewarding was this movie after you watched it?</p><p>Participants will rate all questions on a 1-5 scale, from "Not at all" to "Very."</p><p>We are also interested in understanding what other constructs are correlated with both short-term and long-term rewards. To this end, we are asking about related constructs, such as:</p><p>• How enjoyable was this movie while you were watching it?</p><p>• How interesting was this movie while you were watching it?</p><p>• How meaningful was this movie after you watched it?</p><p>• How memorable was this movie after you watched it?</p><p>We are currently piloting all questions using a talk-aloud protocol in which participants explain their thinking to us as they complete the survey. We are checking to make sure that the wording makes sense to participants and to identify constructs that are related to short-term and long-term rewards. The constructs that are most closely related to these rewards in the piloting will be included in the final survey, so that each movie will be rated for a cluster of constructs all related to short-term and long-term rewards. For the final survey, we plan to recruit about 50 Netflix users to generate a total of 1,500 movie ratings.</p></div>
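The stratified sampling procedure above can be sketched in a few lines. This is a minimal illustration, not the study's actual instrument: the shape of the viewing-history records, the bucket labels, and the 365-day year are our assumptions.

```python
import random
from datetime import date, timedelta

def sample_movies(history, today, per_bucket=10, seed=0):
    """Stratify a viewing history into three recency buckets
    (<1 year, 1-2 years, 2-3 years ago) and sample movies from each.
    `history` is a list of (title, date_watched) pairs."""
    buckets = {"0-1y": [], "1-2y": [], "2-3y": []}
    for title, watched_on in history:
        age_days = (today - watched_on).days
        if age_days < 365:
            buckets["0-1y"].append(title)
        elif age_days < 2 * 365:
            buckets["1-2y"].append(title)
        elif age_days < 3 * 365:
            buckets["2-3y"].append(title)  # older watches are excluded
    rng = random.Random(seed)  # fixed seed for a reproducible survey draw
    return {bucket: rng.sample(titles, min(per_bucket, len(titles)))
            for bucket, titles in buckets.items()}

# Hypothetical history: one watch per month for three years.
history = [(f"movie{i}", date(2023, 1, 1) - timedelta(days=30 * i))
           for i in range(36)]
sample = sample_movies(history, today=date(2023, 1, 1), per_bucket=10)
```

Sampling without replacement within each recency bucket keeps the three strata balanced; a participant whose history contains fewer than 10 movies in some bucket would simply rate all of them.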
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.2.">Study 1 Planned Analysis</head><p>To address RQ 1a, we will report the qualitative results of our talk-aloud piloting of the survey wording. We will also report the correlation between how participants rated our measures of short-term and long-term rewards and related constructs (e.g., meaningfulness, memorability).</p><p>To address RQ 1b, first we will test how well existing data correlates with long-term rewards. The existing data we plan to test includes: user ratings (from others), critic ratings, box office earnings, genre, and the day of the week the movie was watched. One notable limitation of our study design here is that we will not have access to behavioral data about movies, such as clicks, views, or time spent.</p><p>Second, we will create machine learning models to test how well we can predict the long-term rewards a movie might provide for an individual, as assessed by metrics like precision and recall. Specifically, we will create:</p><p>• A generalized model that makes the same predictions for each movie for all participants;</p><p>• A personalized model that makes individualized predictions for each movie for each participant.</p><p>Finally, we will also create generalized and personalized models that predict short-term rewards, and compare their performance against the models that predict long-term rewards. Our suspicion is that existing data may be more predictive of short-term rewards than long-term rewards, because long-term rewards may require a form of reflection that most existing data do not capture.</p></div>
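The distinction between the two model types can be illustrated with two simple rating-prediction baselines. This is a sketch under our own assumptions, not the models we will ultimately train: the generalized baseline predicts each movie's mean long-term-reward rating across all raters, while the personalized baseline adds a per-participant offset learned from that participant's own ratings.

```python
from collections import defaultdict

def fit_models(ratings):
    """ratings: list of (user, movie, score) long-term-reward ratings (1-5).
    Returns (generalized, personalized) prediction functions."""
    by_movie = defaultdict(list)
    for user, movie, score in ratings:
        by_movie[movie].append(score)
    movie_mean = {m: sum(s) / len(s) for m, s in by_movie.items()}

    # A user's offset is the mean residual of their ratings vs. movie means.
    by_user_resid = defaultdict(list)
    for user, movie, score in ratings:
        by_user_resid[user].append(score - movie_mean[movie])
    user_offset = {u: sum(r) / len(r) for u, r in by_user_resid.items()}

    def generalized(user, movie):
        return movie_mean.get(movie, 3.0)  # fall back to the scale midpoint

    def personalized(user, movie):
        return generalized(user, movie) + user_offset.get(user, 0.0)

    return generalized, personalized

# Toy data: u2 consistently rates one point above u1.
ratings = [("u1", "m1", 2), ("u2", "m1", 3),
           ("u1", "m2", 4), ("u2", "m2", 5)]
gen, per = fit_models(ratings)
```

To compute precision and recall, the numeric predictions could then be binarized, e.g., by treating a predicted rating of 4 or higher as "high long-term reward" and comparing against held-out ratings.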
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Study 2: Addressing present bias via user control mechanisms</head><p>Study 2 is currently planned as a participatory design exercise with a movie recommender system, in which we ask:</p><p>• RQ 2a: What preferences do users have for how a movie RS should weight the short-term versus long-term rewards of the movies it recommends? In what contexts would users prefer what weights?</p><p>• RQ 2b: How would users like to control how a movie RS weighs the short-term versus long-term rewards of the movies it recommends?</p><p>At the workshop, we hope to elicit feedback on how Study 2 might be revised to best answer our research questions.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.1.">Study 2 Methods</head><p>Our exercise will begin by showing participants a set of recommended movies that is heavily weighted towards short-term rewards, based on the ratings that we obtained in Study 1. Then we will show them a set of recommendations that is heavily weighted towards long-term rewards. We will ask participants to describe which recommendations they would prefer and why. We will also ask about which contexts (e.g., mood, day of the week) affect which types of rewards they would prefer.</p><p>Next, we will solicit feedback from users on a paper prototype of a RS that offers users control over recommendations at the input, process, and output stages. For instance, at the input stage, users could indicate their general preferences for short-term versus long-term rewards. At the process stage, users could choose from different "algorithmic personas" to filter their recommendations, e.g., "the guilty pleasure watcher" or "the classic movie snob." At the output stage, users might control the order in which recommendations are sorted. This exercise draws from the study design of Harambam et al., in which participants evaluated a prototype of a news recommender system and described the control mechanisms they would like to have, with a focus on addressing the bias of filter bubbles <ref type="bibr" target="#b10">[11]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Workshop relevance</head><p>Today's recommender systems often prioritize "what users do" and neglect "what users say." As a result, they tend to reinforce the current self rather than foster the ideal self. Our study design examines this problem in the domain of movies. But the same problem also applies to other domains such as online groceries, where the current self might want cookies while the ideal self wants blueberries, or digital news, where the current self might want to read stories that agree with their worldview while the ideal self wants to be challenged by different perspectives <ref type="bibr" target="#b12">[12]</ref>. The methods we propose in this study design are relevant beyond just movies.</p><p>We expect that all workshop participants will benefit from a lively discussion of how to conceptualize and measure user preferences in ways that go beyond the current behaviorist paradigm of prioritizing what users do over explicit preferences. Our proposal to ask users for their explicit ratings and correlate these with other data is just one possible approach, and we would like to discuss what other methods workshop participants would suggest and how well these apply to other domains such as groceries and news. Addressing present bias also raises philosophical issues: Is it always irrational to pursue short-term rewards over long-term rewards? Are users in a position to judge their own long-term rewards? How far should computing systems go in nudging or shoving users towards long-term rewards?</p><p>Finally, present bias is just one of many cognitive biases. We hope that our submission will also contribute to the growing conversation on how to use psychological theory to address cognitive biases in intelligent user interfaces <ref type="bibr" target="#b13">[13]</ref>.</p></div>		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>Thank you to Minkyong Kim, Ulrik Lyngs, David McDonald, Sean Munson, and Alexis Hiniker for feedback on this research agenda and drafts of the position paper.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Mining behavioral economics to design persuasive technology for healthy choices</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">K</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Kiesler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Forlizzi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI &apos;11</title>
				<meeting>the SIGCHI Conference on Human Factors in Computing Systems, CHI &apos;11<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2011">2011</date>
			<biblScope unit="page" from="325" to="334" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Predicting hunger: The effects of appetite and delay on choice</title>
		<author>
			<persName><forename type="first">D</forename><surname>Read</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Van Leeuwen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Organ. Behav. Hum. Decis. Process</title>
		<imprint>
			<biblScope unit="volume">76</biblScope>
			<biblScope unit="page" from="189" to="205" />
			<date type="published" when="1998">1998</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Increasing saving behavior through ageprogressed renderings of the future self</title>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">E</forename><surname>Hershfield</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">G</forename><surname>Goldstein</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">F</forename><surname>Sharpe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Fox</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Yeykelis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">L</forename><surname>Carstensen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">N</forename><surname>Bailenson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">J. Mark. Res</title>
		<imprint>
			<biblScope unit="volume">48</biblScope>
			<biblScope unit="page" from="S23" to="S37" />
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Behaviorism is not enough: Better recommendations through listening to users</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">D</forename><surname>Ekstrand</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">C</forename><surname>Willemsen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 10th ACM Conference on Recommender Systems, Rec-Sys &apos;16</title>
				<meeting>the 10th ACM Conference on Recommender Systems, Rec-Sys &apos;16<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="221" to="224" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<author>
			<persName><forename type="first">U</forename><surname>Lyngs</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Binns</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Van Kleek</surname></persName>
		</author>
		<title level="m">So, tell me what users want, what they really, really want!</title>
				<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
	<note>Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems</note>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<author>
			<persName><forename type="first">D</forename><surname>Kahneman</surname></persName>
		</author>
		<title level="m">Thinking, Fast and Slow</title>
				<imprint>
			<publisher>Farrar, Straus and Giroux</publisher>
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Captivating algorithms: Recommender systems as traps</title>
		<author>
			<persName><forename type="first">N</forename><surname>Seaver</surname></persName>
		</author>
		<idno type="DOI">10.1177/1359183518820366</idno>
		<ptr target="https://doi.org/10.1177/1359183518820366" />
	</analytic>
	<monogr>
		<title level="j">Journal of Material Culture</title>
		<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<title level="m" type="main">Detection and design for cognitive biases in people and computing systems</title>
		<author>
			<persName><forename type="first">T</forename><surname>Dingler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Tag</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Karapanos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Kise</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Dengel</surname></persName>
		</author>
		<ptr target="http://critical-media.org/cobi/background.html" />
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Reflective practicum: A framework of sensitising concepts to design for transformative reflection</title>
		<author>
			<persName><forename type="first">P</forename><surname>Slovák</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Frauenberger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Fitzpatrick</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems</title>
				<meeting>the 2017 CHI Conference on Human Factors in Computing Systems</meeting>
		<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="2696" to="2707" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">What makes smartphone use meaningful or meaningless?</title>
		<author>
			<persName><forename type="first">K</forename><surname>Lukoff</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Yu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Kientz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Hiniker</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. ACM Interact. Mob. Wearable Ubiquitous Technol</title>
				<meeting>ACM Interact. Mob. Wearable Ubiquitous Technol</meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="page">26</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Designing for the better by taking users into account: a qualitative evaluation of user control mechanisms in (news) recommender systems</title>
		<author>
			<persName><forename type="first">J</forename><surname>Harambam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Bountouridis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Makhortykh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Van Hoboken</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 13th ACM Conference on Recommender Systems</title>
				<meeting>the 13th ACM Conference on Recommender Systems</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="69" to="77" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Presenting diverse political opinions: how and how much</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">A</forename><surname>Munson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Resnick</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the SIGCHI conference on human factors in computing systems</title>
				<meeting>the SIGCHI conference on human factors in computing systems</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2010">2010</date>
			<biblScope unit="page" from="1457" to="1466" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Designing theory-driven user-centric explainable AI</title>
		<author>
			<persName><forename type="first">D</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Abdul</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">Y</forename><surname>Lim</surname></persName>
		</author>
		<idno type="DOI">10.1145/3290605.3300831</idno>
		<ptr target="https://dl.acm.org/doi/abs/10.1145/3290605.3300831" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems</title>
				<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
