<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Tinkering: A Way Towards Designing Transparent Algorithmic User Interfaces</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Dilruba</forename><surname>Showkat</surname></persName>
							<email>dilrubashowkat@gmail.com</email>
							<affiliation key="aff0">
								<orgName type="institution">Lehigh University</orgName>
								<address>
									<settlement>Bethlehem</settlement>
									<region>PA</region>
									<country key="US">USA</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Tinkering: A Way Towards Designing Transparent Algorithmic User Interfaces</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">13DAC776DE12DAC4ECBA9C5C59167D20</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T08:13+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>tinkering</term>
					<term>exploration</term>
					<term>transparent algorithmic user interface</term>
					<term>inclusive design</term>
					<term>algorithmic transparency</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>With the widespread use of algorithms in interactive systems, it becomes essential for users to apply these algorithms with caution. Algorithms are used to make decisions in healthcare, hiring, the criminal justice system, and social media news feeds, among others. Thus, algorithmic systems impact human lives and society in significant ways. As a consequence, the focus has recently shifted toward designing transparent algorithmic user interfaces (UIs), to make the algorithmic aspects more explicit. Designing transparent algorithmic user interfaces requires the designer to bring algorithmic control to the UI level without causing information overload. This research investigates this gap by proposing tinkering, or playful experimentation, as a means of designing transparent algorithmic UIs. Tinkering is a cognitive style related to problem-solving and decision making that enables exploration of interactive systems. The proposed approach of combining tinkering with transparent UIs serves two potential purposes: first, the exploratory nature of tinkering can make the algorithmic aspects transparent without hurting user experience (UX), while providing flexibility and sufficient control in the personalized interactive experience; second, it enables the designer to detect software inclusiveness issues before they become part of the final software, by allowing us to measure how much algorithmic transparency is desired across different user groups.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>els and algorithms; at worst, they might lead users to make a wrong decision. Users' trust is violated when these algorithmic systems produce outcomes that are harmful, biased, or unethical. As a consequence, users sometimes stop using such products or services <ref type="bibr" target="#b5">[6,</ref><ref type="bibr" target="#b6">7]</ref>. Thus, designing transparent algorithmic user interfaces is getting more and more attention in the research community <ref type="bibr" target="#b7">[8]</ref> as a way to make the algorithmic aspects explicit and more transparent.</p><p>Previous research has advocated for transparent recommendation systems in various domains <ref type="bibr" target="#b8">[9,</ref><ref type="bibr" target="#b9">10]</ref>, transparent statistical research practices <ref type="bibr" target="#b10">[11]</ref>, transparent debugging <ref type="bibr" target="#b11">[12,</ref><ref type="bibr" target="#b12">13]</ref>, and transparent journalism <ref type="bibr" target="#b13">[14,</ref><ref type="bibr" target="#b14">15]</ref>. Others have examined and emphasized the importance of transparent data collection processes <ref type="bibr" target="#b15">[16,</ref><ref type="bibr" target="#b16">17]</ref>; Microsoft's datasheets for datasets present one example <ref type="bibr" target="#b17">[18,</ref><ref type="bibr" target="#b16">17]</ref> of achieving transparency and accountability during the Machine Learning (ML) lifecycle, for both dataset creators and dataset consumers.</p><p>Similarly, various techniques are available to make the underlying algorithmic assumptions more open, interpretable, and easy to understand; explanation is one of them <ref type="bibr" target="#b1">[2,</ref><ref type="bibr" target="#b18">19]</ref>. 
Recently, researchers have also explored the potential of socio-technically inspired perspectives such as Social Transparency (ST) <ref type="bibr" target="#b19">[20]</ref>, perhaps due to the social nature of interpretability <ref type="bibr" target="#b20">[21]</ref>. Explanation tools, or explainers (also known as interpretability tools), are available as open-source Python packages to describe both white-box and black-box models <ref type="bibr" target="#b17">[18,</ref><ref type="bibr" target="#b21">22,</ref><ref type="bibr" target="#b22">23,</ref><ref type="bibr" target="#b23">24]</ref>. They provide an easy interpretation of a model's mechanism and outcome in a trustworthy, transparent, and safe manner <ref type="bibr" target="#b24">[25]</ref>. Applying these explainers requires the user to call pre-defined functions and integrate them with complex workflows <ref type="bibr" target="#b20">[21]</ref>, and the approach is often "critiqued for its techno-centric view" <ref type="bibr" target="#b19">[20]</ref>; moreover, applying them requires programming. Furthermore, as these tools are publicly available and free to use, research shows that even expert data scientists overuse explainers' (InterpretML <ref type="bibr" target="#b21">[22]</ref>) predictions by over-trusting them, and sometimes use them without proper understanding <ref type="bibr" target="#b25">[26]</ref>.</p><p>Even though various design guidelines <ref type="bibr" target="#b7">[8,</ref><ref type="bibr" target="#b26">27,</ref><ref type="bibr" target="#b6">7]</ref> and principles <ref type="bibr" target="#b12">[13]</ref> exist for designing transparent algorithmic UIs, including explanatory prototypes <ref type="bibr" target="#b12">[13,</ref><ref type="bibr" target="#b11">12]</ref>, design approaches that consider users' personality, cognitive style, and problem-solving strategies remain unexplored. 
Research in education, psychology, marketing, and other domains indicates that there are significant differences in the way different users use and process information <ref type="bibr" target="#b27">[28,</ref><ref type="bibr" target="#b28">29,</ref><ref type="bibr" target="#b29">30]</ref>. We do not know how these different cognitive styles or mental processes will play out in the design of a transparent algorithmic system. Likewise, how much transparency is enough, or desired, across different critical audiences <ref type="bibr" target="#b30">[31]</ref> is also unknown.</p><p>We also do not know how to measure a particular user group's transparency needs. To bridge these gaps, we propose a playful exploration approach called "tinkering" <ref type="bibr" target="#b31">[32,</ref><ref type="bibr" target="#b29">30]</ref> as a way of designing transparent algorithmic UIs, by examining the Facebook News Feed. Our proposed approach has two benefits: first, it is possible that the exploratory nature of the algorithm features (metrics) will not overwhelm the user, while providing sufficient algorithmic control in the personalized interactive news feed experience; second, by enabling measurement at the interface level of how much transparency is desired for each group, we open the possibility of transparent interface designs that are inclusive (e.g., with respect to gender <ref type="bibr" target="#b32">[33,</ref><ref type="bibr" target="#b33">34]</ref>). We do not intend to modify or suggest a new Facebook ranking algorithm; instead, our main objective is to encourage a different perspective in the design of transparent algorithmic user interfaces. We end our discussion by suggesting potential future research directions.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Related Work</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Domain Applications and Algorithmic Transparency</head><p>Algorithmic systems are everywhere; they range from search engines <ref type="bibr" target="#b34">[35]</ref> and social media news feeds to video/music and product recommendation systems. These systems have the ability to impact and influence the way we perceive, interact with, and experience the world around us. The blessings associated with these systems are not free from perils <ref type="bibr" target="#b35">[36]</ref>; in many cases, the algorithms are not fair <ref type="bibr" target="#b5">[6]</ref>.</p><p>For example, research showed that Google's search algorithm displayed biased and racist content when queried for certain keywords such as "black girls" <ref type="bibr" target="#b34">[35]</ref>. The absence of context associated with search results makes algorithmic interpretation even more difficult <ref type="bibr" target="#b15">[16,</ref><ref type="bibr" target="#b34">35]</ref>. Researchers also discovered biases in image annotation <ref type="bibr" target="#b3">[4]</ref> in computer vision across facets such as race, gender, and weight <ref type="bibr" target="#b2">[3]</ref>. As a consequence, researchers have advocated for making data's economic value transparent <ref type="bibr" target="#b18">[19]</ref>, because users will likely stop using a technology when it is opaque about how the data they generate is actually used. Algorithmic transparency has also received a considerable amount of attention in data science work practices <ref type="bibr" target="#b36">[37]</ref> and medical AI applications <ref type="bibr" target="#b37">[38]</ref>, among many others. The lack of transparency causes mistrust <ref type="bibr" target="#b5">[6]</ref> and dissatisfaction in these systems. 
There is an increasing opportunity to establish trust through transparency with the advancement of digital media and computer technology <ref type="bibr" target="#b38">[39]</ref>. For simplicity, in this paper we examined algorithmic transparency in the case of the Facebook news feed, because it is a well-studied sociotechnical interactive system <ref type="bibr" target="#b1">[2,</ref><ref type="bibr" target="#b39">40,</ref><ref type="bibr" target="#b40">41]</ref>. Also, very little is known about how news feed curation works <ref type="bibr" target="#b40">[41]</ref>; thus, we wanted to propose an early-stage transparent algorithmic news feed prototype, to imagine what a transparent news feed might look like.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">Explanation, Interpretability, and Algorithmic Transparency</head><p>Previous research has shown the significance of explanation (e.g., how, what, why) in achieving transparency in algorithmic systems such as Facebook news feed curation <ref type="bibr" target="#b18">[19,</ref><ref type="bibr" target="#b1">2]</ref>. Explanation enables the user to become more aware and to make judgments about the correctness of the output and mechanism; it also supports the interpretability and accountability of algorithmic decision making <ref type="bibr" target="#b1">[2]</ref>. Transparency and interpretability are related through explanation; these relationships are shown in Figure <ref type="figure" target="#fig_1">2</ref>. Existing interpretability tools also work through explanation <ref type="bibr" target="#b17">[18]</ref>. Explanation tools, or explainers, such as white-box and black-box APIs/packages <ref type="bibr" target="#b17">[18]</ref>, are available to describe a wide range of ML models. White-box explainers (glass-box or generalized additive models, GAMs <ref type="bibr" target="#b25">[26]</ref>) work directly on the data to explain models that are easy to understand, such as linear regression; black-box explainers (post-hoc explanations) require the input and the ML model's output to explain models that are harder to interpret, such as neural networks <ref type="bibr" target="#b24">[25]</ref>. The explainers are open source, available in Microsoft's Azure ML packages (e.g., SHAP <ref type="bibr" target="#b41">[42]</ref>, LIME <ref type="bibr" target="#b31">[32]</ref>, eli5 <ref type="bibr" target="#b21">[22]</ref>), the Google Cloud API (e.g., the What-If Tool <ref type="bibr" target="#b22">[23]</ref>), and also in Python's scikit-learn libraries. Figure <ref type="figure" target="#fig_0">1</ref> shows the visualization output of calling the SHAP summary plot function. 
These explainers operate on tabular, text, and image data <ref type="bibr" target="#b17">[18,</ref><ref type="bibr" target="#b24">25]</ref>. While these tools and visualizations have helped data scientists understand a model's output in some cases, their usefulness also depends on the explainer used. Research showed that, due to the free availability of these tools, data scientists misuse them by over-trusting them <ref type="bibr" target="#b25">[26,</ref><ref type="bibr" target="#b4">5]</ref>. These tools are mainly used by data scientists and ML practitioners. Even so, practitioners face numerous challenges, such as model instability (e.g., LIME, SHAP), tools not scaling to large datasets, and difficulty integrating the tools into their workflows <ref type="bibr" target="#b20">[21]</ref>; thus, these tools may not be accessible to technical non-experts (e.g., legal professionals) <ref type="bibr" target="#b25">[26,</ref><ref type="bibr" target="#b20">21]</ref>, because applying them requires more than basic programming skill and experience (e.g., knowledge of ML models and built-in methods; see <ref type="bibr">Fig 1)</ref>. The right image of Figure 1 shows SHAP interaction values, obtained by calling the function shap.TreeExplainer(model).shap_interaction_values() on tree models, to display the interactions among various demographic variables. Moreover, while we might be able to build transparent data science tools using these explainers, we cannot apply them to the design of a transparent news feed or other sociotechnical systems (e.g., a transparent Twitter), because most of these news feeds use proprietary algorithms <ref type="bibr" target="#b39">[40]</ref>. Research also showed that explanation may enhance a user's positive attitude toward a system, but not necessarily trust <ref type="bibr" target="#b6">[7]</ref>.</p><p>These limitations encouraged us to seek a distinct way of designing algorithmic transparency at the interface level. 
Thus, in this study, we propose a cognitive-style-based approach that incorporates tinkering ability into the design.</p></div>
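The post-hoc, black-box style of explanation described above treats the model as an opaque function and probes it with perturbed inputs. As a purely illustrative sketch (not the actual SHAP or LIME implementation, and with a toy model and function names of our own invention), a minimal permutation-importance explainer in plain Python shows the idea: shuffle one feature at a time and measure how much the model's output changes.

```python
import random

def toy_black_box(features):
    """An opaque scoring model: 3*x0 + 0*x1 (x1 is irrelevant)."""
    return 3.0 * features[0] + 0.0 * features[1]

def permutation_importance(model, rows, n_repeats=10, seed=0):
    """Post-hoc explanation: shuffle one feature column at a time and
    measure the mean absolute change in the model's output."""
    rng = random.Random(seed)
    baseline = [model(r) for r in rows]
    importance = []
    for j in range(len(rows[0])):
        total = 0.0
        for _ in range(n_repeats):
            column = [r[j] for r in rows]
            rng.shuffle(column)
            for r, b, v in zip(rows, baseline, column):
                permuted = list(r)
                permuted[j] = v
                total += abs(model(permuted) - b)
        importance.append(total / (n_repeats * len(rows)))
    return importance

data = [[1.0, 5.0], [2.0, 1.0], [3.0, 9.0], [4.0, 2.0]]
scores = permutation_importance(toy_black_box, data)
# The relevant feature (x0) receives a larger score than the ignored one (x1).
```

Note how the explainer never inspects the model's internals, which is exactly why such post-hoc tools can describe neural networks, and also why applying them requires programming knowledge of the model's inputs and outputs.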
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3.">Tinkering</head><p>Tinkering is a cognitive style, or a "mindset", that approaches problem-solving through "experimentation and discovery" <ref type="bibr" target="#b42">[43]</ref>; it is associated with exploratory behavior, trial-and-error methods, and deviation from instructions when learning <ref type="bibr" target="#b43">[44]</ref>. Tinkering is an act of playful experimentation that enhances motivation, influences learning and innovation <ref type="bibr" target="#b44">[45,</ref><ref type="bibr" target="#b43">44]</ref>, and impacts task completion and performance <ref type="bibr" target="#b45">[46,</ref><ref type="bibr" target="#b46">47,</ref><ref type="bibr" target="#b29">30]</ref>. Even though tinkering is often associated with making activities under playful conditions <ref type="bibr" target="#b47">[48]</ref>, tinkering behavior has been shown to improve learning and yield educational benefits in domains such as engineering, robotics, and programming (e.g., debugging, block-based) <ref type="bibr" target="#b47">[48,</ref><ref type="bibr" target="#b29">30,</ref><ref type="bibr" target="#b45">46]</ref>. However, tinkering on its own may <ref type="bibr" target="#b43">[44,</ref><ref type="bibr" target="#b47">48,</ref><ref type="bibr" target="#b44">45]</ref> or may not be a beneficial strategy for problem-solving <ref type="bibr" target="#b29">[30]</ref>; for example, Beckwith et al. <ref type="bibr" target="#b29">[30]</ref> proposed that effective tinkering happens when it is associated with pause and reflection about software features. We applied tinkering to the design of a transparent algorithmic system in the hope that its exploratory nature will make the overall algorithmic transparency experience less overwhelming. 
Informed by existing research <ref type="bibr" target="#b27">[28,</ref><ref type="bibr" target="#b29">30,</ref><ref type="bibr" target="#b48">49,</ref><ref type="bibr" target="#b49">50]</ref>, the association between gender and tinkering cannot be overlooked; we discuss it below.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3.1.">Tinkering and Gender</head><p>Previous research has identified gender differences in tinkering attitudes <ref type="bibr" target="#b29">[30,</ref><ref type="bibr" target="#b43">44,</ref><ref type="bibr" target="#b27">28]</ref>, confirming that females tend to tinker with, or explore, new software features (e.g., spreadsheets) less than males in problem-solving software <ref type="bibr" target="#b29">[30]</ref> and in Computer Science education (e.g., programming assignments) <ref type="bibr" target="#b43">[44]</ref>. Numerous studies have shown that tinkering is a mental or psychological trait that distinguishes how different genders (males and females) approach a given task (e.g., making an Arduino project) <ref type="bibr" target="#b43">[44,</ref><ref type="bibr" target="#b46">47,</ref><ref type="bibr" target="#b27">28,</ref><ref type="bibr" target="#b29">30,</ref><ref type="bibr" target="#b47">48]</ref>; tinkering is also one of the facets of gender-inclusive design <ref type="bibr" target="#b48">[49]</ref>. Gender-inclusive design <ref type="bibr" target="#b48">[49]</ref> does not suggest building a different version of the same software for different groups of users <ref type="bibr" target="#b27">[28]</ref>; rather, it advocates for designs that support different gender groups equally <ref type="bibr" target="#b50">[51]</ref>. Gender inclusivity relies upon five facets of gender differences: motivations, computer self-efficacy, tinkering, information processing style, and risk aversion; these facets can impact the use of problem-solving software. While "Inclusive Design considers the full range of human diversity with respect to ability, language, culture, gender, age, and other forms of human difference. " <ref type="bibr" target="#b51">[52]</ref>, gender is one aspect of inclusive design. 
Informed and inspired by previous research <ref type="bibr" target="#b50">[51,</ref><ref type="bibr" target="#b33">34,</ref><ref type="bibr" target="#b29">30,</ref><ref type="bibr" target="#b27">28]</ref>, in this work we focus only on gender inclusiveness.</p><p>In this study, we discuss tinkering as a means of designing transparent algorithmic UIs because its inherent exploratory nature might be less overwhelming <ref type="bibr" target="#b12">[13]</ref> to the user while providing a personalized user experience; tinkering also adds the ability to detect gender differences in the design. Detecting gender issues early in the design process improves the usability of the software for everyone, including marginalized users <ref type="bibr" target="#b27">[28,</ref><ref type="bibr" target="#b29">30]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Designing Transparent Algorithmic User Interface (UI) Through Tinkering</head><p>Transparent algorithmic systems for different interactive domain applications will work differently, not only at the user interface level but also at the algorithmic level. For simplicity of design and illustration, we focused on a single interactive domain, the Facebook news feed. We first provide the rationale behind a tinkering approach to transparent UI design, followed by a feature description and a brief discussion of the complete transparent news feed prototype, Glass News Feed. Finally, we show how a tinkering-based approach can be applied to determine how much algorithmic transparency is desired across different user groups, which in turn helps in detecting gender differences in the interface design.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Tinkering and Transparent Algorithmic User Interfaces</head><p>A transparent algorithmic UI's primary objective is to reveal how the system works by explaining the mechanism it uses to produce an outcome <ref type="bibr" target="#b1">[2]</ref>. Even though previous research suggests design guidelines for such UIs, it did not address cognitive aspects as a design element in UI design <ref type="bibr" target="#b7">[8,</ref><ref type="bibr" target="#b26">27,</ref><ref type="bibr" target="#b12">13]</ref>. We decided to investigate this gap by proposing a tinkering-based transparent algorithmic UI (see Figure <ref type="figure" target="#fig_1">2</ref>). Tinkering is not only related to problem-solving but is also associated with decision making <ref type="bibr" target="#b27">[28]</ref> with regard to exploring software features (new or existing); thus, it gives us the ability to measure algorithmic transparency needs (how much) across a diverse population at the UI level. Following Beckwith et al. <ref type="bibr" target="#b29">[30]</ref>, we use the term tinkering for users' exploratory behavior and practice with software features, here news feed features. Allowing the user to "playfully experiment" with the transparent algorithmic features serves two potential purposes: first, the design provides algorithmic information in a manner that is not overwhelming to the user while providing a personalized interactive experience; second, a tinkering-based design can be tested for gender inclusiveness issues. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Case Study: Facebook Glass News Feed</head><p>Facebook is one of the most widely used sociotechnical systems. Undoubtedly, Facebook has opened numerous opportunities for work, business, collaboration, and communication by connecting people worldwide; nonetheless, it has also caused various problems, ranging from privacy threats, mental illness, and addiction to violations of users' trust. The Facebook news feed has been well studied in the literature with respect to users' perception and understanding of news feed transparency <ref type="bibr" target="#b1">[2,</ref><ref type="bibr" target="#b39">40,</ref><ref type="bibr" target="#b40">41]</ref>. The Facebook news feed works by allowing users to share content and consume content through automated selection and ranking algorithms. The news feed provides content that is relevant, interesting, informative, and of high quality <ref type="bibr" target="#b40">[41]</ref>. Users are usually unaware of how the underlying algorithmic curation works <ref type="bibr" target="#b1">[2,</ref><ref type="bibr" target="#b52">53,</ref><ref type="bibr" target="#b40">41]</ref>. Therefore, as a case study, we turned our attention to designing a transparent algorithmic news feed using a tinkering-based approach.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.1.">Glass News Feed Features</head><p>Glass News Feed Algorithm Features: The Facebook news feed algorithm uses users' past actions and behavior data to provide content. Even though the existing Facebook news feed provides a certain amount of very high-level control over what the user sees and why (e.g., sort, hide, block, follow/unfollow, limited profile) <ref type="bibr" target="#b39">[40]</ref>, a transparent news feed requires other non-trivial controls, which revolve around answering the "how" question in addition to "what" and "why"; the Facebook news feed blog does not explain this very clearly <ref type="bibr" target="#b40">[41]</ref>. For simplicity, we turned users' actions in the news feed into transparent algorithmic interface features (see Figure <ref type="figure" target="#fig_2">3</ref>); for example, i) counts, such as like and reaction counts (e.g., happy, love) on photos, statuses, and videos; ii) lists, such as friends, family, and acquaintances; and iii) other features, such as liked pages (e.g., products, businesses) and public groups the user follows <ref type="bibr" target="#b40">[41]</ref>; descriptive features such as notes and tags can also be used as features. A real future transparent application might apply a different set of features in various categories. We also enabled the user to create their own feature set and explore the news feed outcome. We showed only some of these features in the proposed prototype.</p><p>Figure 3: For simplicity, a feature can be in any of the following states: selected (✔) indicating on <ref type="bibr" target="#b29">[30]</ref>, unselected (empty) indicating off, or the feature explanation How (?) option <ref type="bibr" target="#b39">[40,</ref><ref type="bibr" target="#b29">30]</ref>; a click on the drop-down menu activates these options for selection.</p><p>Figure <ref type="figure">4</ref>: Transparent Glass News Feed prototype applying Facebook user activities, such as like counts, groups, and emoji counts, as features to provide a personalized interactive experience (left). When the "Refresh" button is clicked, the personalized news feed is displayed (right).</p><p>Tinkering Capability: We enabled tinkering capability in the design by incorporating Facebook data as interface features (see Figure <ref type="figure" target="#fig_2">3</ref>); these features can be frequently turned on and off by the user for exploration, as described in <ref type="bibr" target="#b29">[30,</ref><ref type="bibr" target="#b27">28]</ref>. Each feature can be in one of the following states: i) checked, marked by ✔, meaning the feature is selected for the current exploration; ii) unchecked, indicated by an empty box, meaning it is not currently under exploration; and iii) a question mark (?) that provides feature-related explanation <ref type="bibr" target="#b39">[40]</ref>. These capabilities are hidden under a drop-down menu, which is activated when clicked and otherwise remains inactive, to make sure these extra abilities do not overwhelm the user. The tinkering count for any particular user can be measured by simply counting the number of features turned on and off during a session.</p><p>Subtle Explanation: Transparency cannot be implemented without providing some kind of explanation. Thus, inspired by Rader et al. <ref type="bibr" target="#b1">[2]</ref>, we subtly added a "How" explanation, which "informs participants that the ranking algorithm uses data collected about users and their behaviors to calculate a score for each story". The "How" explanation is indicated by a question mark (?) and is activated when clicked to reveal more information (see <ref type="bibr">Fig 3)</ref>, showing other metadata about the queried feature. This explanation feature becomes essential when users create their own "user-defined" feature set for experimentation with the news feed. The ability to define a user-defined feature set ensures enough flexibility for exploration without overwhelming the user with all possible tinkering options.</p></div>
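The tinkering count described above is simply the number of feature on/off toggles in a session. A minimal sketch of how an interface might log and count these toggle events follows; the class and method names are our own illustrative assumptions, not part of any Facebook API or of the prototype's actual code.

```python
class TinkeringSession:
    """Tracks on/off toggles of transparent interface features
    (e.g., like counts, friends list) during one session."""

    def __init__(self, features):
        # every feature starts unselected (off)
        self.state = {f: False for f in features}
        self.toggle_count = 0

    def toggle(self, feature):
        """Flip a feature on/off and record the tinkering event."""
        self.state[feature] = not self.state[feature]
        self.toggle_count += 1

session = TinkeringSession(["like_count", "friends_list", "liked_pages"])
session.toggle("like_count")    # on
session.toggle("friends_list")  # on
session.toggle("like_count")    # off again -- still counts as tinkering
# session.toggle_count is now 3
```

Counting every flip, rather than only the features left on, captures the exploratory back-and-forth that characterizes tinkering.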
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.2.">Transparent Glass News Feed</head><p>The very first complete prototype of the Glass news feed is presented in Figure <ref type="figure">4</ref>. The tinkering, or feature set exploration, window is depicted on the left, and the corresponding outcome is shown on the right. For simplicity, we assumed that the features will appear on the news feed itself, though it is possible to design the exploration window in many different ways for different applications.</p><p>The design of the tinkering-enabled transparent algorithmic UI was inspired by design techniques suggested in the problem-solving domain <ref type="bibr" target="#b29">[30]</ref>. We added tinkering capabilities to the Glass news feed design for feature set exploration and experimentation with the corresponding news feed output. The Glass news feed feature sets were derived from relevant research <ref type="bibr" target="#b40">[41]</ref> and were kept to a minimum number to avoid causing information overload. We incorporated the ability to add a "user-defined" feature set to provide some flexibility. The entire interactive experience is built on the concept of "playful experimentation", giving users enough control without hurting their interface experience <ref type="bibr" target="#b12">[13]</ref>.</p><p>The resulting news feed is displayed with confidence or accuracy information (top right corner in Figure <ref type="figure">4</ref>). The algorithmic outcome (the news feed after a refresh) intentionally provides minimal information, such as a confidence/accuracy value, because we do not know what specific selection or ranking algorithm Facebook actually uses for news feed curation <ref type="bibr" target="#b40">[41]</ref>. For a similar reason, we did not apply visualization, although it is a possibility <ref type="bibr" target="#b17">[18,</ref><ref type="bibr" target="#b4">5,</ref><ref type="bibr" target="#b12">13]</ref>. 
This is, again, our very first attempt at a tinkering approach to achieving algorithmic transparency in interactive systems.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.3.">Measuring Gender Differences in Glass News Feed</head><p>Though our main motivation for applying a tinkering approach to the design of transparent algorithmic systems was to let the exploratory nature of tinkering unfold in the interface design, we suspect that this playful cognitive style might also reduce cognitive load in transparent systems <ref type="bibr" target="#b19">[20,</ref><ref type="bibr" target="#b14">15]</ref>. Another direct outcome of applying the tinkering approach is that it allows the designer to check for gender issues in the design. Our proposal detects and measures gender differences by measuring (counting) "how much" tinkering (on/off) a user engaged in during an episode, consistent with prior study of tinkering in the problem-solving <ref type="bibr" target="#b29">[30]</ref> domain. Similarly, whether our design suffers from gender issues can be measured by collecting users' tinkering frequency, tinkering episodes, and tinkering rate.</p><p>For any particular task (in a user study): i) tinkering frequency is the number of features a user has turned on and off; ii) a tinkering episode can be defined as a fixed amount of time for task completion; iii) tinkering rate is the ratio of the previous two measures (i and ii). Depending on the number of user groups taking part in the study, the tinkering measures for each user group can be passed to statistical or ML models for quantitative analysis. We did not compute these measures in this study; rather, they are potential areas for future exploration. Transparency is critical when designing interactive social media news feeds for trust building and system acceptance. However, too much openness may make the system vulnerable to various kinds of exploitation, harm, and misuse. Thus, how to balance these competing yet necessary aspects of a transparent news feed requires further inquiry.</p></div>
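The three measures defined above (frequency, episode, rate) can be computed directly from logged toggle events. A hedged sketch follows; the function names, sample logs, and episode length are hypothetical illustrations, not data from an actual study, and per-group means would in practice feed a statistical test rather than a simple comparison.

```python
def tinkering_rate(toggle_events, episode_seconds):
    """Tinkering frequency = number of on/off toggles during the task;
    tinkering episode = fixed task time; rate = frequency / episode."""
    frequency = len(toggle_events)
    return frequency / episode_seconds

# hypothetical per-participant toggle logs from a user study, one list per participant
group_a = [["like", "groups", "like"], ["emoji"]]
group_b = [["like"], ["groups", "emoji", "like", "tags"]]

episode = 120.0  # seconds allotted for the task (the tinkering episode)

def mean_rate(group):
    """Average tinkering rate across a user group's participants."""
    rates = [tinkering_rate(events, episode) for events in group]
    return sum(rates) / len(rates)

# per-group mean rates can then be passed to statistical or ML models
rate_a = mean_rate(group_a)  # mean of 3/120 and 1/120 toggles per second
rate_b = mean_rate(group_b)  # mean of 1/120 and 4/120 toggles per second
```

Comparing `rate_a` and `rate_b` across, say, gender groups is the kind of quantitative check on inclusiveness that the text proposes.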
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Limitations and Future Work</head><p>In this study, we proposed a tinkering-based approach to designing transparent algorithmic user interfaces. Several limitations of this study are worth mentioning. First, the Glass news feed design was inspired by relevant research on the Facebook news feed <ref type="bibr" target="#b40">[41]</ref> and on tinkering <ref type="bibr" target="#b29">[30,</ref><ref type="bibr" target="#b27">28]</ref>. While the background research on tinkering was broad and detailed, research on the Facebook news feed was limited, because the Facebook news feed uses a proprietary algorithm <ref type="bibr" target="#b39">[40]</ref>. A useful workaround suggested in <ref type="bibr" target="#b1">[2]</ref>, content analysis of blog posts and related sources, can benefit the design of other transparent socio-technical systems. Second, we proposed a transparent algorithmic prototype for a social media news feed only; other algorithmic domains, such as recommender systems, data science tools, and data journalism tools, could be designed and tested using the strategy suggested in this study. Our design was also very limited in features and capabilities. Future work might take our design concept, expand it (features/metrics), and test it with various users to see how tinkering plays out in achieving transparency. Third, we applied a tinkering approach to the design of a transparent algorithmic system; however, there are other facets of cognitive style, such as risk aversion and information processing style (e.g., <ref type="bibr" target="#b48">[49,</ref><ref type="bibr" target="#b33">34]</ref>), which might influence the use of transparent systems (especially for women), and we did not address these complex relationships while designing our proposal. Thus, future work should examine other cognitive styles of problem solving and their influence on tinkering when designing transparent algorithmic systems. Additionally, most previous studies investigated the influence of gender (males and females) in design research; thus, we need to expand our understanding by including marginalized LGBTQ+ communities in our design process. Finally, gender is one dimension in the broad spectrum of inclusive design <ref type="bibr" target="#b51">[52]</ref>; thus, future studies should investigate other diversity dimensions (e.g., race, class, language) while designing transparent systems.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusion</head><p>The demand for transparent algorithmic user interfaces is on the rise. Previous research applied explanations paired with text and visualization techniques to improve the interpretability of ML models. These specialized tools are mainly used by technical experts such as data scientists and cannot easily be adapted for developing other transparent domain applications such as socio-technical systems. Furthermore, although sample transparent UI prototypes exist in diverse domains, we do not know how to design a transparent interactive Facebook news feed that does not hurt the UX. Also, how much transparency is desired across diverse populations, and how to measure it, remains unknown. Thus, in this study, we proposed a first tinkering-based transparent algorithmic Glass News Feed UI prototype with the potential to navigate these multiple scenarios. This proposal can easily be expanded and adapted to design transparent algorithmic systems in various domain applications (e.g., transparent algorithmic tools for journalists <ref type="bibr" target="#b53">[54]</ref>), which requires further examination with various groups of users to understand its technical feasibility and its ethical and societal implications (e.g., benefits, harms).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Acknowledgments</head><p>I would like to thank the anonymous reviewers for their valuable comments and feedback.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: The left image shows the SHAP explainer <ref type="bibr" target="#b17">[18]</ref> depicting a feature importance plot, produced when the shap.summary_plot() function is called on the model outcome. The right image shows the output of SHAP interaction values, which calls shap.TreeExplainer(model).shap_interaction_values() on tree models to display the interactions among various demographic variables.</figDesc><graphic coords="4,117.64,105.24,360.01,124.82" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: The upper portion of the figure illustrates the relationship of Explanation with Transparency and Interpretability. The association of the tinkering-based approach with algorithmic transparency is depicted below.</figDesc><graphic coords="6,164.44,105.24,266.41,143.71" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Transparent Glass News Feed feature information represented to support user exploration. For simplicity, a feature can be in any of the following states: selected (✔), indicating on <ref type="bibr" target="#b29">[30]</ref>; unselected (empty), indicating off; or the feature explanation How (?) option <ref type="bibr" target="#b39">[40,</ref><ref type="bibr" target="#b29">30]</ref>. A click on the drop-down menu activates these options for selection.</figDesc><graphic coords="7,178.84,105.24,237.61,110.51" type="bitmap" /></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<title level="m" type="main">Machine bias</title>
		<author>
			<persName><forename type="first">Julia</forename><surname>Angwin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jeff</forename><surname>Larson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Kirchner</surname></persName>
		</author>
		<ptr target="https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing" />
		<imprint>
			<date type="published" when="2016">2016. 2020-07-08</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Explanations as mechanisms for supporting algorithmic transparency</title>
		<author>
			<persName><forename type="first">E</forename><surname>Rader</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Cotter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Cho</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2018 CHI conference on human factors in computing systems</title>
				<meeting>the 2018 CHI conference on human factors in computing systems</meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="1" to="13" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<title level="m" type="main">Researchers show that computer vision algorithms pretrained on imagenet exhibit multiple, distressing biases</title>
		<author>
			<persName><forename type="first">K</forename><surname>Wiggers</surname></persName>
		</author>
		<ptr target="https://venturebeat.com/2020/11/03/researchers-show-that-computer-vision-algorithms-pretrained" />
		<imprint>
			<date type="published" when="2020-04-11">2020. 2020-04-11</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Between subjectivity and imposition: Power dynamics in data annotation for computer vision</title>
		<author>
			<persName><forename type="first">M</forename><surname>Miceli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Schuessler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Yang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the ACM on Human-Computer Interaction</title>
				<meeting>the ACM on Human-Computer Interaction</meeting>
		<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="volume">4</biblScope>
			<biblScope unit="page" from="1" to="25" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Designing alternative representations of confusion matrices to support non-expert public understanding of algorithm performance</title>
		<author>
			<persName><forename type="first">H</forename><surname>Shen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Jin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Á</forename><forename type="middle">A</forename><surname>Cabrera</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Perer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Zhu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">I</forename><surname>Hong</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the ACM on Human-Computer Interaction</title>
				<meeting>the ACM on Human-Computer Interaction</meeting>
		<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="volume">4</biblScope>
			<biblScope unit="page" from="1" to="22" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">A qualitative exploration of perceptions of algorithmic fairness</title>
		<author>
			<persName><forename type="first">A</forename><surname>Woodruff</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">E</forename><surname>Fox</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Rousso-Schindler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Warshaw</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2018 chi conference on human factors in computing systems</title>
				<meeting>the 2018 chi conference on human factors in computing systems</meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="1" to="14" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">How much information? effects of transparency on trust in an algorithmic interface</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">F</forename><surname>Kizilcec</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems</title>
				<meeting>the 2016 CHI Conference on Human Factors in Computing Systems</meeting>
		<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="2390" to="2395" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<author>
			<persName><forename type="first">M</forename><surname>Eiband</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Schneider</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Bilandzic</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Fazekas-Con</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Haug</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Hussmann</surname></persName>
		</author>
		<title level="m">Bringing transparency design into practice, in: 23rd international conference on intelligent user interfaces</title>
				<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="211" to="223" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">The role of transparency in recommender systems</title>
		<author>
			<persName><forename type="first">R</forename><surname>Sinha</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Swearingen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">CHI&apos;02 extended abstracts on Human factors in computing systems</title>
				<imprint>
			<date type="published" when="2002">2002</date>
			<biblScope unit="page" from="830" to="831" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Transparent, scrutable and explainable user models for personalized recommendation</title>
		<author>
			<persName><forename type="first">K</forename><surname>Balog</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Radlinski</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Arakelyan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval</title>
				<meeting>the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval</meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="265" to="274" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Moving transparent statistics forward at chi</title>
		<author>
			<persName><forename type="first">M</forename><surname>Kay</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Haroz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Guha</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Dragicevic</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Wacharamanotham</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems</title>
				<meeting>the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems</meeting>
		<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="534" to="541" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Explanatory debugging: Supporting end-user debugging of machine-learned programs</title>
		<author>
			<persName><forename type="first">T</forename><surname>Kulesza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Stumpf</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Burnett</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W.-K</forename><surname>Wong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Riche</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Moore</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Oberst</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Shinsel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Mcintosh</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE Symposium on Visual Languages and Human-Centric Computing</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2010">2010</date>
			<biblScope unit="page" from="41" to="48" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Principles of explanatory debugging to personalize interactive machine learning</title>
		<author>
			<persName><forename type="first">T</forename><surname>Kulesza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Burnett</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W.-K</forename><surname>Wong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Stumpf</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 20th international conference on intelligent user interfaces</title>
				<meeting>the 20th international conference on intelligent user interfaces</meeting>
		<imprint>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="126" to="137" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<monogr>
		<author>
			<persName><forename type="first">B</forename><surname>Kovach</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Rosenstiel</surname></persName>
		</author>
		<title level="m">The elements of journalism: What newspeople should know and the public should expect</title>
				<meeting><address><addrLine>CA</addrLine></address></meeting>
		<imprint>
			<publisher>Three Rivers Press</publisher>
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Algorithmic transparency in the news media</title>
		<author>
			<persName><forename type="first">N</forename><surname>Diakopoulos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Koliska</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Digital Journalism</title>
		<imprint>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="page" from="809" to="828" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<monogr>
		<title level="m" type="main">Data feminism</title>
		<author>
			<persName><forename type="first">C</forename><surname>D'ignazio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">F</forename><surname>Klein</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2020">2020</date>
			<publisher>MIT Press</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<monogr>
		<title level="m" type="main">Datasheets for datasets</title>
		<author>
			<persName><forename type="first">T</forename><surname>Gebru</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Morgenstern</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Vecchione</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">Wortman</forename><surname>Vaughan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Wallach</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Daumé</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Crawford</surname></persName>
		</author>
		<ptr target="https://www.microsoft.com/en-us/research/publication/datasheets-for-datasets/" />
		<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<monogr>
		<ptr target="https://docs.microsoft.com/en-us/azure/machine-learning/concept-responsible-ml" />
		<title level="m">What is responsible machine learning?</title>
				<imprint>
			<date type="published" when="2020">2020. 2020-11-12</date>
		</imprint>
	</monogr>
	<note>preview</note>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Transparency and explanation in deep reinforcement learning neural networks</title>
		<author>
			<persName><forename type="first">R</forename><surname>Iyer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Lewis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Sundar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Sycara</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society</title>
				<meeting>the 2018 AAAI/ACM Conference on AI, Ethics, and Society</meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="144" to="150" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<monogr>
		<author>
			<persName><forename type="first">U</forename><surname>Ehsan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><forename type="middle">V</forename><surname>Liao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Muller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">O</forename><surname>Riedl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">D</forename><surname>Weisz</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2101.04719</idno>
		<title level="m">Expanding explainability: Towards social transparency in ai systems</title>
				<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Human factors in model interpretability: Industry practices, challenges, and needs</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">R</forename><surname>Hong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Hullman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Bertini</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the ACM on Human-Computer Interaction</title>
				<meeting>the ACM on Human-Computer Interaction</meeting>
		<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="volume">4</biblScope>
			<biblScope unit="page" from="1" to="26" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<monogr>
		<author>
			<persName><forename type="first">H</forename><surname>Nori</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Jenkins</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Koch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Caruana</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1909.09223</idno>
		<title level="m">Interpretml: A unified framework for machine learning interpretability</title>
				<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">The what-if tool: Interactive probing of machine learning models</title>
		<author>
			<persName><forename type="first">J</forename><surname>Wexler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Pushkarna</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Bolukbasi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Wattenberg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Viégas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Wilson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE transactions on visualization and computer graphics</title>
		<imprint>
			<biblScope unit="volume">26</biblScope>
			<biblScope unit="page" from="56" to="65" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">The mythos of model interpretability</title>
		<author>
			<persName><forename type="first">Z</forename><forename type="middle">C</forename><surname>Lipton</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Queue</title>
		<imprint>
			<biblScope unit="volume">16</biblScope>
			<biblScope unit="page" from="31" to="57" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">A survey of methods for explaining black box models</title>
		<author>
			<persName><forename type="first">R</forename><surname>Guidotti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Monreale</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ruggieri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Turini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Giannotti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Pedreschi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM computing surveys (CSUR)</title>
		<imprint>
			<biblScope unit="volume">51</biblScope>
			<biblScope unit="page" from="1" to="42" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Interpreting interpretability: Understanding data scientists&apos; use of interpretability tools for machine learning</title>
		<author>
			<persName><forename type="first">H</forename><surname>Kaur</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Nori</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Jenkins</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Caruana</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Wallach</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">Wortman</forename><surname>Vaughan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems</title>
				<meeting>the 2020 CHI Conference on Human Factors in Computing Systems</meeting>
		<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="1" to="14" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">Designing explanation interfaces for transparency and beyond</title>
		<author>
			<persName><forename type="first">C.-H</forename><surname>Tsai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Brusilovsky</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IUI Workshops</title>
				<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">Gender in end-user software engineering</title>
		<author>
			<persName><forename type="first">M</forename><surname>Burnett</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Wiedenbeck</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Grigoreanu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Subrahmaniyan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Beckwith</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Kissinger</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 4th international workshop on Enduser software engineering</title>
				<meeting>the 4th international workshop on Enduser software engineering</meeting>
		<imprint>
			<date type="published" when="2008">2008</date>
			<biblScope unit="page" from="21" to="24" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><surname>Meyers-Levy</surname></persName>
		</author>
		<title level="m">Gender differences in information processing: A selectivity interpretation</title>
				<imprint>
			<date type="published" when="1986">1986</date>
		</imprint>
		<respStmt>
			<orgName>Northwestern University</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Ph.D. thesis</note>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">Tinkering and gender in end-user programmers&apos; debugging</title>
		<author>
			<persName><forename type="first">L</forename><surname>Beckwith</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Kissinger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Burnett</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Wiedenbeck</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Lawrance</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Blackwell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Cook</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the SIGCHI conference on Human Factors in computing systems</title>
				<meeting>the SIGCHI conference on Human Factors in computing systems</meeting>
		<imprint>
			<date type="published" when="2006">2006</date>
			<biblScope unit="page" from="231" to="240" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b30">
	<analytic>
		<title level="a" type="main">Transparent to whom? no algorithmic accountability without a critical audience</title>
		<author>
			<persName><forename type="first">J</forename><surname>Kemper</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Kolkman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Information, Communication &amp; Society</title>
		<imprint>
			<biblScope unit="volume">22</biblScope>
			<biblScope unit="page" from="2081" to="2096" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b31">
	<analytic>
		<title level="a" type="main">&quot;Why should I trust you?&quot; Explaining the predictions of any classifier</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">T</forename><surname>Ribeiro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Singh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Guestrin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining</title>
				<meeting>the 22nd ACM SIGKDD international conference on knowledge discovery and data mining</meeting>
		<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="1135" to="1144" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b32">
	<analytic>
		<title level="a" type="main">SIG: Gender-inclusive software: What we know about building it</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">M</forename><surname>Burnett</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">F</forename><surname>Churchill</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">J</forename><surname>Lee</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems</title>
				<meeting>the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems</meeting>
		<imprint>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="857" to="860" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b33">
	<analytic>
		<title level="a" type="main">Gender-inclusive HCI research and design: A conceptual review</title>
		<author>
			<persName><forename type="first">S</forename><surname>Stumpf</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Peters</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Bardzell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Burnett</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Busse</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Cauchard</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Churchill</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Foundations and Trends in Human-Computer Interaction</title>
		<imprint>
			<biblScope unit="volume">13</biblScope>
			<biblScope unit="page" from="1" to="69" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b34">
	<monogr>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">U</forename><surname>Noble</surname></persName>
		</author>
		<title level="m">Algorithms of oppression: How search engines reinforce racism</title>
				<imprint>
			<publisher>NYU Press</publisher>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b35">
	<analytic>
		<title level="a" type="main">Judgment call the game: Using value sensitive design and design fiction to surface ethical concerns related to technology</title>
		<author>
			<persName><forename type="first">S</forename><surname>Ballard</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">M</forename><surname>Chappell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Kennedy</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2019 on Designing Interactive Systems Conference</title>
				<meeting>the 2019 on Designing Interactive Systems Conference</meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="421" to="433" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b36">
	<analytic>
		<title level="a" type="main">Human-centered study of data science work practices</title>
		<author>
			<persName><forename type="first">M</forename><surname>Muller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Feinberg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>George</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">J</forename><surname>Jackson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">E</forename><surname>John</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">B</forename><surname>Kery</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Passi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems</title>
				<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="1" to="8" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b37">
	<analytic>
		<title level="a" type="main">Transparency and reproducibility in artificial intelligence</title>
		<author>
			<persName><forename type="first">B</forename><surname>Haibe-Kains</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">A</forename><surname>Adam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Hosny</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Khodakarami</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Waldron</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Mcintosh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Goldenberg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Kundaje</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">S</forename><surname>Greene</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Nature</title>
		<imprint>
			<biblScope unit="volume">586</biblScope>
			<biblScope unit="page" from="E14" to="E16" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b38">
	<analytic>
		<title level="a" type="main">Understanding modern transparency</title>
		<author>
			<persName><forename type="first">A</forename><surname>Meijer</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Review of Administrative Sciences</title>
		<imprint>
			<biblScope unit="volume">75</biblScope>
			<biblScope unit="page" from="255" to="269" />
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b39">
	<analytic>
		<title level="a" type="main">Understanding user beliefs about algorithmic curation in the facebook news feed</title>
		<author>
			<persName><forename type="first">E</forename><surname>Rader</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Gray</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 33rd annual ACM conference on human factors in computing systems</title>
				<meeting>the 33rd annual ACM conference on human factors in computing systems</meeting>
		<imprint>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="173" to="182" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b40">
	<analytic>
		<title level="a" type="main">Explaining the news feed algorithm: An analysis of the &quot;News Feed FYI&quot; blog</title>
		<author>
			<persName><forename type="first">K</forename><surname>Cotter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Cho</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Rader</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2017 CHI conference extended abstracts on human factors in computing systems</title>
				<meeting>the 2017 CHI conference extended abstracts on human factors in computing systems</meeting>
		<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="1553" to="1560" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b41">
	<analytic>
		<title level="a" type="main">A unified approach to interpreting model predictions</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">M</forename><surname>Lundberg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S.-I</forename><surname>Lee</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Advances in neural information processing systems</title>
				<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="4765" to="4774" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b42">
	<analytic>
		<title level="a" type="main">Invent to learn: Making, tinkering, and engineering in the classroom</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">V</forename><surname>Loertscher</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Teacher Librarian</title>
		<imprint>
			<biblScope unit="volume">41</biblScope>
			<biblScope unit="page">45</biblScope>
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b43">
	<analytic>
		<title level="a" type="main">Are females disinclined to tinker in computer science?</title>
		<author>
			<persName><forename type="first">S</forename><surname>Krieger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Allen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Rawn</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 46th ACM Technical Symposium on Computer Science Education</title>
				<meeting>the 46th ACM Technical Symposium on Computer Science Education</meeting>
		<imprint>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="102" to="107" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b44">
	<analytic>
		<title level="a" type="main">Tool time: Gender and students&apos; use of tools, control, and authority</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">G</forename><surname>Jones</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Brader-Araje</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">W</forename><surname>Carboni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Carter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">J</forename><surname>Rua</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Banilower</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Hatch</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Research in Science Teaching: The Official Journal of the National Association for Research in Science Teaching</title>
		<imprint>
			<biblScope unit="volume">37</biblScope>
			<biblScope unit="page" from="760" to="783" />
			<date type="published" when="2000">2000</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b45">
	<analytic>
		<title level="a" type="main">Defining tinkering behavior in open-ended block-based programming assignments</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Dong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Marwan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Catete</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Price</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Barnes</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 50th ACM Technical Symposium on Computer Science Education</title>
				<meeting>the 50th ACM Technical Symposium on Computer Science Education</meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="1204" to="1210" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b46">
	<analytic>
		<title level="a" type="main">Computational thinking and tinkering: Exploration of an early childhood robotics curriculum</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">U</forename><surname>Bers</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Flannery</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">R</forename><surname>Kazakoff</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Sullivan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Computers &amp; Education</title>
		<imprint>
			<biblScope unit="volume">72</biblScope>
			<biblScope unit="page" from="145" to="157" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b47">
	<analytic>
		<title level="a" type="main">Tinkering in scientific education</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">H</forename><surname>Lamers</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">J</forename><surname>Verbeek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">W</forename><surname>Van Der Putten</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on Advances in Computer Entertainment Technology</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2013">2013</date>
			<biblScope unit="page" from="568" to="571" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b48">
	<analytic>
		<title level="a" type="main">GenderMag: A method for evaluating software&apos;s gender inclusiveness</title>
		<author>
			<persName><forename type="first">M</forename><surname>Burnett</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Stumpf</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Macbeth</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Makri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Beckwith</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Kwan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Peters</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Jernigan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Interacting with Computers</title>
		<imprint>
			<biblScope unit="volume">28</biblScope>
			<biblScope unit="page" from="760" to="787" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b49">
	<analytic>
		<title level="a" type="main">Identifying gender differences in information processing style, self-efficacy, and tinkering for robot tele-operation</title>
		<author>
			<persName><forename type="first">D</forename><surname>Showkat</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Grimm</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2018 15th International Conference on Ubiquitous Robots (UR)</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="443" to="448" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b50">
	<analytic>
		<title level="a" type="main">Doing inclusive design: From GenderMag in the trenches to InclusiveMag in the research lab</title>
		<author>
			<persName><forename type="first">M</forename><surname>Burnett</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the International Conference on Advanced Visual Interfaces</title>
				<meeting>the International Conference on Advanced Visual Interfaces</meeting>
		<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="1" to="6" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b51">
	<monogr>
		<ptr target="https://idrc.ocadu.ca/" />
		<title level="m">IDRC: Inclusive Design Research Centre</title>
				<imprint>
			<date type="published" when="1975">1975</date>
		</imprint>
	</monogr>
	<note>Accessed: 2021-10-02</note>
</biblStruct>

<biblStruct xml:id="b52">
	<analytic>
		<title level="a" type="main">How people form folk theories of social media feeds and what it means for how we study selfpresentation</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Devito</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Birnholtz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">T</forename><surname>Hancock</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>French</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Liu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2018 CHI conference on human factors in computing systems</title>
				<meeting>the 2018 CHI conference on human factors in computing systems</meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="1" to="12" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b53">
	<monogr>
		<title level="m" type="main">Outliers: More than numbers?</title>
		<author>
			<persName><forename type="first">D</forename><surname>Showkat</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">P S</forename><surname>Baumer</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
