=Paper=
{{Paper
|id=Vol-2903/IUI21WS-HUMANIZE-5
|storemode=property
|title=Tinkering: A Way Towards Designing Transparent Algorithmic User Interfaces
|pdfUrl=https://ceur-ws.org/Vol-2903/IUI21WS-HUMANIZE-5.pdf
|volume=Vol-2903
|authors=Dilruba Showkat
|dblpUrl=https://dblp.org/rec/conf/iui/Showkat21
}}
==Tinkering: A Way Towards Designing Transparent Algorithmic User Interfaces==
Dilruba Showkat
Lehigh University, Bethlehem, PA, USA
Abstract
With the widespread use of algorithms in interactive systems, it becomes inevitable for users to apply these algorithms with caution. Algorithms are used to make decisions in healthcare, hiring, the criminal justice system, and social media news feeds, among others. Thus, algorithmic systems impact human lives and society in significant ways. As a consequence, the focus has now shifted toward designing transparent algorithmic user interfaces (UIs) that make the algorithmic aspects more explicit. Designing transparent algorithmic user interfaces requires the designer to bring algorithmic control to the UI level without causing information overload. This research investigates this gap by proposing tinkering, or playful experimentation, as a means of designing transparent algorithmic UIs. Tinkering is a cognitive style related to problem-solving and decision making that enables exploration of an interactive system. The proposed approach of combining tinkering with transparent UIs serves two potential purposes: first, the exploratory nature of tinkering can make the algorithmic aspects transparent without hurting user experience (UX), while providing flexibility and sufficient control in the personalized interactive experience; second, it enables the designer to detect software inclusiveness issues before they become part of the final software, by allowing us to measure how much algorithmic transparency is desired across different user groups.
Keywords
tinkering, exploration, transparent algorithmic user interface, inclusive design, algorithmic transparency
1. Research Problem and Motivation

Algorithms are rapidly being adopted in most of the interactive applications we use today. Well-known algorithmic systems include YouTube video recommendation, the COMPAS risk assessment tool [1], and the Facebook News Feed [2], among others. Research shows that, in many cases, the predictions these systems generate can suffer from biases [1, 3, 4], causing accountability and safety issues [5] due to a lack of clarity about the underlying models and algorithms; at worst, they might lead users to make a wrong decision. Users' trust is violated when these algorithmic systems produce an outcome that is harmful, biased, or unethical. As a consequence, users sometimes stop using such products or services [6, 7]. Thus, the design of transparent algorithmic user interfaces is getting more and more attention in the research community [8] as a way to make the algorithmic aspects explicit and more transparent.

Previous research has advocated for transparent recommendation systems in various domains [9, 10], transparent statistical research practices [11], transparent debugging [12, 13], and transparent journalism [14, 15]. Others have examined and emphasized the importance of a transparent data collection process [16, 17]; Microsoft's datasheets for datasets present one example [18, 17] of achieving transparency and accountability during the Machine Learning (ML) lifecycle for both dataset creators and dataset consumers.

Similarly, various techniques are available for making the underlying algorithmic assumptions more open, interpretable, and easy to understand; explanation is one of them [2, 19]. Recently, researchers have also explored the potential of socio-technically inspired perspectives such as Social Transparency (ST) [20], perhaps due to the social nature of interpretability [21]. Explanation tools, or explainers (also known as interpretability tools), are available as open-source Python packages that describe both white-box and black-box models [18, 22, 23, 24]. They provide an accessible interpretation of a model's mechanism and outcome in a trustworthy, transparent, and safer manner [25]. Applying these explainers requires the user to call pre-defined functions and to integrate them into complex workflows [21], an approach that is often "critiqued for its techno-centric view" [20]; applying them also requires programming. Furthermore, because these tools are publicly available and free to use, research shows that even expert data scientists overuse explainer (InterpretML [22]) predictions by trusting them too much, and sometimes use them without proper understanding [26].

Even though various design guidelines [8, 27, 7] and principles [13] exist for designing transparent algorithmic UIs, including explanatory prototypes [13, 12], design approaches that consider users' personality, cognitive style, and problem-solving strategies remain unexplored. Research in education, psychology, marketing, and other domains indicates that there are significant differences in the way different users use and process information [28, 29, 30]. We do not know how these different cognitive styles or mental processes will play out in the design of a transparent algorithmic system. Likewise, how much transparency is enough, or desired, across different critical audiences [31] is unknown. We also do not know how to measure a particular user group's transparency needs. To bridge these gaps, we propose a playful exploration approach called "tinkering" [32, 30] as a way of designing transparent algorithmic UIs, examining the Facebook News Feed as a case. Our proposed approach offers two potential benefits: first, the exploratory nature of the algorithm features (metrics) may avoid overwhelming the user while providing sufficient algorithmic control in the personalized interactive news feed experience; second, by enabling measurement, at the interface level, of how much transparency each group desires, we open up the possibility of transparent interface designs that are inclusive (e.g., of gender [33, 34]). We do not intend to modify or suggest a new Facebook ranking algorithm; instead, our main objective is to encourage a different perspective on the design of transparent algorithmic user interfaces. We end our discussion by suggesting potential future research directions.

Joint Proceedings of the ACM IUI 2021 Workshops, April 13-17, 2021, College Station, USA
dilrubashowkat@gmail.com (D. Showkat)
© 2021 Copyright for this paper by its author. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CEUR Workshop Proceedings (CEUR-WS.org), ISSN 1613-0073.

2. Related Work

2.1. Domain Applications and Algorithmic Transparency

Algorithmic systems are everywhere; they range from search engines [35] and social media news feeds to video, music, and product recommendation systems. These systems have the ability to impact and influence the way we perceive, interact with, and experience the world around us. Much of the blessing associated with these systems is not free from peril [36]; in many cases, the algorithms are not fair [6]. For example, research showed that the Google search algorithm displayed biased and racist content when queried for certain keywords such as "black girls" [35]. The absence of context associated with the search results makes algorithmic interpretation even more difficult [16,
35]. Researchers have also discovered biases in image annotation [4] in computer vision across facets such as race, gender, and weight [3]. As a consequence, researchers have advocated for making data's economic value transparent [19], because users will likely stop using a technology due to a lack of clarity about how the data they generate is actually used. Algorithmic transparency has also received a considerable amount of attention in data science work practices [37] and medical AI applications [38], among many others. A lack of transparency causes mistrust [6] and dissatisfaction with these systems. With the advancement of digital media and computer technology, there is a growing opportunity to establish trust through transparency [39]. For simplicity, in this paper we examine algorithmic transparency in the case of the Facebook news feed, because it is a well-studied socio-technical interactive system [2, 40, 41]. Also, very little is known about how the news feed curation works [41]; thus, we wanted to propose an early-stage transparent algorithmic news feed prototype, to imagine what a transparent news feed might look like.

2.2. Explanation, Interpretability, and Algorithmic Transparency

Previous research has shown the significance of explanation (e.g., how, what, why) for achieving transparency in algorithmic systems such as Facebook news feed curation [19, 2]. Explanation enables the user to become more aware and to judge the correctness of the output and mechanism; it also supports interpretability and the accountability of algorithmic decision making [2]. Transparency and interpretability are related through explanation; these relationships are shown in Figure 2. The way existing interpretability tools work is also through explanation [18]. Explanation tools, or explainers, such as white-box and black-box APIs/packages [18], are available to describe a wide range of ML models. White-box explainers (glass-box or generalized additive models, GAMs [26]) work directly on data to explain models that are easy to understand, such as linear regression, while black-box explainers (post-hoc explanations) require the input and the ML model's output to explain models that are harder to interpret, such as neural networks [25]. The explainers are open-source and available in Microsoft's Azure ML packages (e.g., SHAP [42], LIME [32], eli5 [22]), in the Google Cloud API (e.g., the What-If Tool [23]), and in Python scikit-learn libraries. Figure 1 shows the output of calling the SHAP summary plot visualization function. These explainers operate on tabular, text, and image data [18, 25].

While these tools and visualizations have helped data scientists understand a model's output in some cases, much depends on the explainer used. Research showed that, due to the free availability of these tools, data scientists misuse them by trusting them too much [26, 5]. These tools are mainly used by data scientists and ML practitioners, who nonetheless face numerous challenges, such as model instability (e.g., LIME, SHAP), tools that do not scale to large datasets, and difficulty integrating the tools into their workflows [21]; thus, these tools may not be accessible to technical non-experts (e.g., legal professionals) [26, 21], because applying them requires more than basic programming skill and experience (e.g., knowledge of ML models and built-in methods; see Fig. 1). Moreover, while we might be able to build transparent data science tools using these explainers, we cannot apply them to designing a transparent news feed or socio-technical system (e.g., a transparent Twitter), because most of these news feeds use proprietary algorithms [40]. Research also showed that explanation may enhance users' positive attitude towards a system, but not necessarily trust [7].

Figure 1: The left image shows Explainer SHAP [18] depicting a feature importance plot, produced when the shap.summary_plot() function is called on the model outcome. The right image shows the output of SHAP interaction values, which calls shap.TreeExplainer(model).shap_interaction_values() on tree models to display the interactions among various demographic variables.
These limitations encouraged us to look for a distinct way of designing algorithmic transparency at the interface level. Thus, in this study, we propose a cognitive-style-based approach that incorporates tinkering ability into the design.

2.3. Tinkering

Tinkering is a cognitive style, or a "mindset", for approaching problem-solving through "experimentation and discovery" [43]; it is associated with exploratory behavior, trial-and-error methods, and deviation from instructions when learning [44]. Tinkering is an act of playful experimentation that enhances motivation, influences learning and innovation [45, 44], and impacts task completion and performance [46, 47, 30]. Even though tinkering is often associated with making activities under playful conditions [48], tinkering behavior has been shown to improve learning and to yield educational benefits in domains such as engineering, robotics, and programming (e.g., debugging, block-based) [48, 30, 46]. However, tinkering on its own may [44, 48, 45] or may not be a beneficial strategy for problem-solving [30]; for example, Beckwith et al. [30] proposed that effective tinkering happens when it is associated with pause and reflection about software features. We applied tinkering to the design of a transparent algorithmic system in the hope that its exploratory nature would make the overall algorithmic transparency experience less overwhelming. Informed by several existing studies [28, 30, 49, 50], the association of gender with tinkering cannot be overlooked; it is discussed below.

2.3.1. Tinkering and Gender

Previous research has identified gender differences in tinkering attitude [30, 44, 28] and confirmed that females tend to tinker with, or explore, new software features (e.g., in a spreadsheet) less than males do in problem-solving software [30], and also in Computer Science education (e.g., programming assignments) [44]. Numerous studies have shown that tinkering is a mental or psychological trait that distinguishes how different genders (males and females) approach a given task (e.g., making an Arduino project) [44, 47, 28, 30, 48]; tinkering is also one of the facets of gender-inclusive design [49]. Gender-inclusive design [49] does not suggest building a different version of the same software for a different group of users [28]; rather, it advocates for designs that support different gender groups equally [51]. Gender inclusivity relies on five facets of gender difference that can impact the use of problem-solving software: motivation, computer self-efficacy, tinkering, information processing style, and risk aversion. While "Inclusive Design considers the full range of human diversity with respect to ability, language, culture, gender, age, and other forms of human difference." [52], gender is one aspect of inclusive design. Informed and inspired by previous research [51, 34, 30, 28], in this work we focus only on gender inclusiveness.

In this study, we discuss tinkering as a means of designing transparent algorithmic UIs, because its inherently exploratory nature might be less overwhelming [13] to the user while providing a personalized user experience; tinkering also adds to the design the ability to detect gender differences. Detecting gender issues early in the design process improves the usability of the software for everyone, including marginalized users [28, 30].

3. Designing Transparent Algorithmic User Interface (UI) Through Tinkering

A transparent algorithmic system will work differently for different interactive domain applications, not only at the user interface level but also at the algorithmic level. For simplicity of design and illustration, we focused on a single interactive domain, the Facebook news feed. We first provide the rationale behind designing a tinkering approach to transparent UI design, followed by a feature description and a brief discussion of the complete transparent news feed prototype, Glass News Feed. Finally, we show how a tinkering-based approach can be applied to determine how much algorithmic transparency is desired across different user groups, which in turn helps in detecting gender differences in the interface design.

3.1. Tinkering and Transparent Algorithmic User Interfaces

A transparent algorithmic UI's primary objective is to reveal how it works by explaining the mechanism it uses to produce an outcome [2]. Even though previous research suggests design guidelines for such UIs, it did not address cognitive aspects as a design element in the UI design [8, 27, 13]. We decided to investigate this gap by proposing a tinkering-based transparent algorithmic UI (see Figure 2). Tinkering is related not only to problem-solving but also to decision making [28] with regard to software feature (e.g., new or existing) exploration; thus, it gives us the ability to measure algorithmic transparency needs (how much) across a diverse population at the UI level. Following Beckwith et al. [30], we use the term tinkering for users' exploratory behavioral actions and practices with software, here the news feed features. Allowing the user to "playfully experiment" with the transparent algorithmic features serves two potential purposes: first, the design provides algorithmic information in a manner that is not overwhelming to the user while providing a personalized interactive experience; second, a tinkering-based design can be tested for gender inclusiveness issues.
Figure 2: The upper portion of the figure illustrates the relationship of Explanation with Transparency and Interpretability. The association of the tinkering-based approach with algorithmic transparency is depicted below.
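To make this interface-level measurement concrete, the sketch below is a minimal illustration of our own (the class and function names are hypothetical, not from the paper): it logs feature on/off toggles per session and derives the tinkering measures defined in Section 3.2.3, namely frequency (toggle count), episode (a fixed task time), and rate (frequency divided by episode length), aggregated per user group.

```python
# Illustrative sketch (our own naming, not the paper's): log each feature
# on/off toggle during a session and derive per-group tinkering measures,
# following Sect. 3.2.3: frequency (toggle count), episode (fixed task time),
# and rate (frequency / episode).
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class TinkeringSession:
    user: str
    group: str                 # e.g., a self-reported gender group
    episode_minutes: float     # fixed amount of time given for the task
    toggles: list = field(default_factory=list)

    def toggle(self, feature: str) -> None:
        """Record one on/off switch of a news feed feature."""
        self.toggles.append(feature)

    @property
    def frequency(self) -> int:
        # i) tinkering frequency: number of features turned on and off
        return len(self.toggles)

    @property
    def rate(self) -> float:
        # iii) tinkering rate: ratio of frequency to episode length
        return self.frequency / self.episode_minutes

def mean_rate_by_group(sessions):
    """Aggregate tinkering rates per user group for later statistical analysis."""
    rates = defaultdict(list)
    for s in sessions:
        rates[s.group].append(s.rate)
    return {g: sum(r) / len(r) for g, r in rates.items()}

# Example: two users, each given a 10-minute episode
a = TinkeringSession("u1", "group A", 10)
b = TinkeringSession("u2", "group B", 10)
for f in ["like_count", "friends_list", "liked_pages"]:
    a.toggle(f)
b.toggle("like_count")
print(mean_rate_by_group([a, b]))  # {'group A': 0.3, 'group B': 0.1}
```

The per-group means produced this way could then be handed to whatever statistical or ML analysis the study design calls for.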
3.2. Case Study: Facebook Glass News Feed

Facebook is one of the most widely used socio-technical systems. Undoubtedly, Facebook has opened numerous opportunities for work, business, collaboration, and communication by connecting people worldwide; nonetheless, it has also caused various problems, ranging from privacy threats, mental illness, and addiction to violations of users' trust. The Facebook news feed has been well studied in the literature regarding users' perception and understanding of news feed transparency [2, 40, 41]. The Facebook news feed works by allowing users to share and consume content through an automated selection and ranking algorithm. The news feed provides content that is relevant, interesting, informative, and of high quality [41]. Users are usually unaware of how the underlying algorithmic curation works [2, 53, 41]. Therefore, as a case study, we turned our attention to designing a transparent algorithmic news feed using a tinkering-based approach.

3.2.1. Glass News Feed Features

Glass News Feed Algorithm Features: The Facebook news feed algorithm uses data on users' past actions and behavior to provide content. Even though the existing Facebook news feed provides a certain amount of very high-level control over what the user sees and why (e.g., sort, hide, block, follow/unfollow, limited profile) [40], a transparent news feed requires other, non-trivial controls that revolve around answering the "how" question in addition to "what" and "why"; the Facebook news feed blog does not explain that very clearly [41]. For simplicity, we turned users' actions in the news feed into transparent algorithmic interface features (see Figure 3); for example, i) counts, such as like and reaction counts (e.g., happy, love) on photos, status updates, and videos; ii) lists, such as friends, family, and acquaintance lists; and iii) other features, such as liked pages (e.g., product, business) and public groups the user follows [41]; descriptive features such as notes and tags can also be used. A real future transparent application might apply a different set of features in various categories. We also enabled the ability for users to create their own feature set and
Figure 3: Transparent Glass News Feed feature information represented to support user exploration. For simplicity, a feature can be in any of the following states: selected (✔), indicating on [30]; unselected (empty), indicating off; and a feature explanation "How" (?) option [40, 30]. A click on the drop-down menu activates these options for selection.
Figure 4: Transparent Glass News Feed prototype applying Facebook user activities, such as like counts, groups, and emoji counts, as features to provide a personalized interactive experience (left). When the "Refresh" button is clicked, the personalized news feed is displayed (right).
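The per-feature states shown in Figure 3 can be sketched as a small state model. This is our own illustration under assumed names (`FeatureState`, `FeedFeature`); it is not code from the prototype.

```python
# Sketch of the per-feature states from Figure 3 (our naming): a feature is
# selected (on), unselected (off), or showing its "How" (?) explanation.
from enum import Enum

class FeatureState(Enum):
    SELECTED = "✔"       # on: included in the current exploration
    UNSELECTED = ""      # off: not currently under exploration
    EXPLAIN = "?"        # "How" option: show feature-related explanation

class FeedFeature:
    def __init__(self, name: str, how_text: str):
        self.name = name
        self.how_text = how_text           # metadata shown for the "?" option
        self.state = FeatureState.UNSELECTED
        self.toggle_count = 0              # tinkering count for this feature

    def toggle(self) -> None:
        """Flip between on and off; each flip counts toward tinkering."""
        self.state = (FeatureState.UNSELECTED
                      if self.state is FeatureState.SELECTED
                      else FeatureState.SELECTED)
        self.toggle_count += 1

    def explain(self) -> str:
        """Activate the 'How' (?) option and return the explanation text."""
        self.state = FeatureState.EXPLAIN
        return self.how_text

like_count = FeedFeature("like count",
                         "The ranking algorithm uses your like data to score stories.")
like_count.toggle()                        # select the feature for exploration
print(like_count.state.value)              # ✔
print(like_count.explain())                # shows the "How" explanation text
```

Counting `toggle_count` across features during a session is one simple way to operationalize the tinkering count discussed in the text.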
explore the news feed outcome. We showed only some of these features in the proposed prototype.

Tinkering Capability: We enabled tinkering capability in the design by incorporating Facebook data as interface features (see Figure 3); these features can be turned on and off frequently by the user for exploration, as described in [30, 28]. Each feature can be in one of the following states: i) checked, marked by ✔, meaning the feature is selected for the current exploration; ii) un-checked, indicated by an empty box, meaning the feature is not currently under exploration; and iii) a question mark (?) that provides a feature-related explanation [40]. These capabilities are hidden under the drop-down menu; the button is activated when clicked and otherwise remains inactive, to ensure that these extra abilities do not overwhelm the user. The tinkering count for any particular user can be measured by simply counting the number of features that were turned on and off during a session.

Subtle Explanation: Transparency cannot be implemented without providing some kind of explanation. Thus, inspired by Rader et al. [2], we subtly added a "How" explanation, which "[i]nforms participants that the ranking algorithm uses data collected about users and their behaviors to calculate a score for each story". The "How" explanation is indicated by a question mark (?) and is activated when clicked to reveal more information (see Fig. 3), showing additional metadata about the queried feature. This explanation feature becomes essential when users create their own "user-defined" feature set for experimentation with the news feed. The ability to define a user-defined feature set ensures enough flexibility for exploration without overwhelming the user with all possible tinkering options.

3.2.2. Transparent Glass News Feed

The complete first prototype of the Glass news feed is presented in Figure 4. The tinkering, or feature set exploration, window is depicted on the left, and the corresponding outcome is shown on the right. For simplicity, we assumed that the features would appear on the news feed itself, though it is possible to design the exploration window in many different ways for different applications.

The design of the tinkering-enabled transparent algorithmic UI was inspired by design techniques suggested in the problem-solving domain [30]. We added tinkering capabilities to the Glass news feed design for feature set exploration and experimentation with the corresponding news feed output. The Glass news feed feature sets were derived from relevant research [41] and were kept to a minimum number to avoid causing information overload. We incorporated the ability to add a "user-defined" feature set to provide some flexibility. The entire interactive experience is built on the concept of "playful experimentation", giving users enough control without hurting their interface experience [13].

The resulting news feed is displayed with confidence or accuracy information (top right corner in Figure 4). The algorithmic outcome (the news feed after refresh) intentionally provides minimal information, such as a confidence or accuracy score, because we do not know what specific selection or ranking algorithm Facebook actually uses for news feed curation [41]. For a similar reason, we did not apply visualization, although it is a possibility [18, 5, 13]. This is our very first attempt at a tinkering approach to achieving algorithmic transparency in interactive systems.

3.2.3. Measuring Gender Differences in Glass News Feed

Though our main motivation for applying a tinkering approach to the design of transparent algorithmic systems was to let the exploratory nature of tinkering unfold in the interface design, we suspect that this playful cognitive style might also reduce cognitive load in transparent systems [20, 15]. Another direct outcome of applying the tinkering approach is that it allows the designer to check for gender issues in the design.
The way our proposal can detect and measure gender differences is by measuring (counting) "how much" tinkering (on/off) a user engaged in during an episode, consistent with prior study of tinkering in the problem-solving domain [30]. Similarly, whether or not our design suffers from gender issues can be measured by collecting users' tinkering frequency, tinkering episodes, and tinkering rate. For any particular task (in a user study): i) tinkering frequency is the number of features a user has turned on and off; ii) a tinkering episode can be defined as a fixed amount of time for task completion; iii) tinkering rate is the ratio of the previous two measures (i and ii). Depending on the number of user groups taking part in the study, the tinkering measures for each user group can be passed to statistical or ML models for quantitative analysis. We did not compute these measures in this study; rather, they are among the potential areas for future exploration. Transparency is critical when designing an interactive social media news feed for trust building and system acceptance. However, too much openness may make the system vulnerable to various kinds of exploitation, harm, and misuse. How to balance these competing yet necessary aspects of a transparent news feed requires further inquiry.

4. Limitations and Future Work

In this study, we proposed a tinkering-based approach to designing transparent algorithmic user interfaces. Several limitations of this study are worth mentioning. First, the Glass news feed design was inspired by relevant research on the Facebook news feed [41] and on tinkering [30, 28]. While the background research related to tinkering was broad and detailed, Facebook news feed research was limited, because the Facebook news feed uses a proprietary algorithm [40]. A useful workaround suggested in [2], content analysis of blog posts or related sources, can be beneficial for designing other socio-technical transparent systems. Second, we proposed a transparent algorithmic prototype for a social media news feed only; there are other algorithmic domain applications, such as recommender systems, data science tools, and data journalism tools, that can be designed and tested using a strategy similar to the one suggested in this study. Our design was also very limited in features and capabilities. Future work might take our design concept, expand it (features/metrics), and test it with various users to see how tinkering plays out in achieving transparency. Third, we applied the tinkering approach to the design of a transparent algorithmic system; however, there are other facets of cognitive style, such as risk aversion and information processing style (e.g., [49, 34]), that might influence the use of transparent systems (especially for females); we did not address these complex relationships while designing our proposal. Thus, future work should examine other cognitive styles of problem solving and their influence on tinkering when designing transparent algorithmic systems. Additionally, most previous studies investigated the influence of genders (males and females) in design research; thus, we need to expand our understanding by including marginalized LGBTQ+ communities in our design process. Finally, gender is one dimension in the broad spectrum of inclusive design [52]; thus, future studies should investigate other diversity dimensions (e.g., race, class, language) while designing transparent systems.

5. Conclusion

The demand for transparent algorithmic user interfaces is on the rise. Previous research
applied explanations associated with text and visualization techniques to improve the interpretability of ML models. These specialized tools are mainly used by technical experts such as data scientists and cannot easily be adapted for developing other transparent domain applications such as socio-technical systems. Furthermore, while sample transparent UI prototypes exist in diverse domains, we do not know how to design a transparent interactive Facebook news feed that does not hurt the UX. Also, how much transparency is even desired across a diverse population, and how to measure it, is unknown. Thus, in this study, we proposed a first tinkering-based transparent algorithmic Glass News Feed UI prototype with the potential to navigate these multiple scenarios. This proposal can be expanded and adapted to design transparent algorithmic systems in various domain applications (e.g., transparent algorithmic tools for journalists [54]), which will require further examination with various groups of users to understand its technical feasibility and its ethical and societal implications (e.g., benefits, harms).

6. Acknowledgments

I would like to thank the anonymous reviewers for their valuable comments and feedback.

References

[1] J. Angwin, J. Larson, S. Mattu, L. Kirchner, Machine bias, 2016. URL: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing, accessed: 2020-07-08.
[2] E. Rader, K. Cotter, J. Cho, Explanations as mechanisms for supporting algorithmic transparency, in: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 2018, pp. 1–13.
[3] K. Wiggers, Researchers show that computer vision algorithms pretrained on ImageNet exhibit multiple, distressing biases, 2020. URL: https://venturebeat.com/2020/11/03/researchers-show-that-computer-vision-algorithms-pretrained-on, accessed: 2020-04-11.
[4] M. Miceli, M. Schuessler, T. Yang, Between subjectivity and imposition: Power dynamics in data annotation for computer vision, Proceedings of the ACM on Human-Computer Interaction 4 (2020) 1–25.
[5] H. Shen, H. Jin, Á. A. Cabrera, A. Perer, H. Zhu, J. I. Hong, Designing alternative representations of confusion matrices to support non-expert public understanding of algorithm performance, Proceedings of the ACM on Human-Computer Interaction 4 (2020) 1–22.
[6] A. Woodruff, S. E. Fox, S. Rousso-Schindler, J. Warshaw, A qualitative exploration of perceptions of algorithmic fairness, in: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 2018, pp. 1–14.
[7] R. F. Kizilcec, How much information? Effects of transparency on trust in an algorithmic interface, in: Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, 2016, pp. 2390–2395.
[8] M. Eiband, H. Schneider, M. Bilandzic, J. Fazekas-Con, M. Haug, H. Hussmann, Bringing transparency design into practice, in: 23rd International Conference on Intelligent User Interfaces, 2018, pp. 211–223.
[9] R. Sinha, K. Swearingen, The role of transparency in recommender systems, in: CHI'02 Extended Abstracts on Human Factors in Computing Systems, 2002, pp. 830–831.
[10] K. Balog, F. Radlinski, S. Arakelyan, Transparent, scrutable and explainable user models for personalized recommendation, in: Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, 2019, pp. 265–274.
[11] M. Kay, S. Haroz, S. Guha, P. Dragicevic, C. Wacharamanotham, Moving transparent statistics forward at CHI, in: Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems, 2017, pp. 534–541.
[12] T. Kulesza, S. Stumpf, M. Burnett, W.-K. Wong, Y. Riche, T. Moore, I. Oberst, A. Shinsel, K. McIntosh, Explanatory debugging: Supporting end-user debugging of machine-learned programs, in: 2010 IEEE Symposium on Visual Languages and Human-Centric Computing, IEEE, 2010, pp. 41–48.
[13] T. Kulesza, M. Burnett, W.-K. Wong, S. Stumpf, Principles of explanatory debugging to personalize interactive machine learning, in: Proceedings of the 20th International Conference on Intelligent User Interfaces, 2015, pp. 126–137.
[14] B. Kovach, T. Rosenstiel, The elements of journalism: What newspeople should know and the public should expect, Three Rivers Press (CA), 2014.
[15] N. Diakopoulos, M. Koliska, Algorithmic transparency in the news media, Digital Journalism 5 (2017) 809–828.
[18] … URL: https://docs.microsoft.com/en-us/azure/machine-learning/concept-responsible-ml, accessed: 2020-11-12.
[19] R. Iyer, Y. Li, H. Li, M. Lewis, R. Sundar, K. Sycara, Transparency and explanation in deep reinforcement learning neural networks, in: Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, 2018, pp. 144–150.
[20] U. Ehsan, Q. V. Liao, M. Muller, M. O. Riedl, J. D. Weisz, Expanding explainability: Towards social transparency in AI systems, arXiv preprint arXiv:2101.04719 (2021).
[21] S. R. Hong, J. Hullman, E. Bertini, Human factors in model interpretability: Industry practices, challenges, and needs, Proceedings of the ACM on Human-Computer Interaction 4 (2020) 1–26.
[22] H. Nori, S. Jenkins, P. Koch, R. Caruana, InterpretML: A unified framework for machine learning interpretability, arXiv preprint arXiv:1909.09223 (2019).
[23] J. Wexler, M. Pushkarna, T. Bolukbasi, M. Wattenberg, F. Viégas, J. Wilson, The What-If Tool: Interactive probing of machine learning models, IEEE Transactions on Visualization and Computer Graphics 26 (2019) 56–65.
[24] Z. C. Lipton, The mythos of model interpretability, Queue 16 (2018) 31–57.
[25] R. Guidotti, A. Monreale, S. Ruggieri, F. Turini, F. Giannotti, D. Pedreschi, A
[16] C. D’Ignazio, L. F. Klein, Data feminism, survey of methods for explaining black
MIT Press, 2020. box models, ACM computing surveys
[17] T. Gebru, J. Morgenstern, B. Vecchione, (CSUR) 51 (2018) 1–42.
J. Wortman Vaughan, H. Wallach, [26] H. Kaur, H. Nori, S. Jenkins, R. Caruana,
H. Daumé III, K. Crawford, Datasheets H. Wallach, J. Wortman Vaughan, In-
for datasets, 2018. URL: https: terpreting interpretability: Understand-
//www.microsoft.com/en-us/research/ ing data scientists’ use of interpretabil-
publication/datasheets-for-datasets/. ity tools for machine learning, in: Pro-
[18] M. 2020, What is responsible ma- ceedings of the 2020 CHI Conference on
chine learning? (preview), 2020. Human Factors in Computing Systems,
2020, pp. 1–14. and Trends in Human–Computer Inter-
[27] C.-H. Tsai, P. Brusilovsky, Designing action 13 (2020) 1–69.
explanation interfaces for transparency [35] S. U. Noble, Algorithms of oppression:
and beyond., in: IUI Workshops, 2019. How search engines reinforce racism,
[28] M. Burnett, S. Wiedenbeck, V. Grigore- nyu Press, 2018.
anu, N. Subrahmaniyan, L. Beckwith, [36] S. Ballard, K. M. Chappell, K. Kennedy,
C. Kissinger, Gender in end-user soft- Judgment call the game: Using value
ware engineering, in: Proceedings of sensitive design and design fiction to
the 4th international workshop on End- surface ethical concerns related to tech-
user software engineering, 2008, pp. nology, in: Proceedings of the 2019 on
21–24. Designing Interactive Systems Confer-
[29] J. Meyers-Levy, Gender differences in ence, 2019, pp. 421–433.
information processing: A selectivity [37] M. Muller, M. Feinberg, T. George, S. J.
interpretation, Ph.D. thesis, Northwest- Jackson, B. E. John, M. B. Kery, S. Passi,
ern University, 1986. Human-centered study of data science
[30] L. Beckwith, C. Kissinger, M. Burnett, work practices, in: Extended Abstracts
S. Wiedenbeck, J. Lawrance, A. Black- of the 2019 CHI Conference on Human
well, C. Cook, Tinkering and gender in Factors in Computing Systems, 2019,
end-user programmers’ debugging, in: pp. 1–8.
Proceedings of the SIGCHI conference [38] B. Haibe-Kains, G. A. Adam, A. Hosny,
on Human Factors in computing sys- F. Khodakarami, L. Waldron, B. Wang,
tems, 2006, pp. 231–240. C. McIntosh, A. Goldenberg, A. Kun-
[31] J. Kemper, D. Kolkman, Transparent daje, C. S. Greene, et al., Transparency
to whom? no algorithmic accountabil- and reproducibility in artificial intelli-
ity without a critical audience, Infor- gence, Nature 586 (2020) E14–E16.
mation, Communication & Society 22 [39] A. Meijer, Understanding modern
(2019) 2081–2096. transparency, International Review of
[32] M. T. Ribeiro, S. Singh, C. Guestrin, Administrative Sciences 75 (2009) 255–
"why should i trust you?" explaining 269.
the predictions of any classifier, in: [40] E. Rader, R. Gray, Understanding user
Proceedings of the 22nd ACM SIGKDD beliefs about algorithmic curation in the
international conference on knowledge facebook news feed, in: Proceedings
discovery and data mining, 2016, pp. of the 33rd annual ACM conference on
1135–1144. human factors in computing systems,
[33] M. M. Burnett, E. F. Churchill, M. J. Lee, 2015, pp. 173–182.
Sig: gender-inclusive software: What [41] K. Cotter, J. Cho, E. Rader, Explaining
we know about building it, in: Pro- the news feed algorithm: An analysis of
ceedings of the 33rd Annual ACM Con- the" news feed fyi" blog, in: Proceedings
ference Extended Abstracts on Human of the 2017 CHI conference extended
Factors in Computing Systems, 2015, abstracts on human factors in comput-
pp. 857–860. ing systems, 2017, pp. 1553–1560.
[34] S. Stumpf, A. Peters, S. Bardzell, M. Bur- [42] S. M. Lundberg, S.-I. Lee, A unified
nett, D. Busse, J. Cauchard, E. Churchill, approach to interpreting model predic-
Gender-inclusive hci research and de- tions, in: Advances in neural infor-
sign: A conceptual review, Foundations mation processing systems, 2017, pp.
4765–4774. ing style, self-efficacy, and tinkering for
[43] D. V. Loertscher, Invent to learn: Mak- robot tele-operation, in: 2018 15th In-
ing, tinkering, and engineering in the ternational Conference on Ubiquitous
classroom, Teacher Librarian 41 (2013) Robots (UR), IEEE, 2018, pp. 443–448.
45. [51] M. Burnett, Doing inclusive design:
[44] S. Krieger, M. Allen, C. Rawn, Are fe- From gendermag in the trenches to in-
males disinclined to tinker in computer clusive mag in the research lab, in:
science?, in: Proceedings of the 46th Proceedings of the International Con-
ACM Technical Symposium on Com- ference on Advanced Visual Interfaces,
puter Science Education, 2015, pp. 102– 2020, pp. 1–6.
107. [52] idrc, Inclusive design research center,
[45] M. G. Jones, L. Brader-Araje, L. W. Car- 1975. URL: https://idrc.ocadu.ca/, ac-
boni, G. Carter, M. J. Rua, E. Banilower, cessed: 2021-10-02.
H. Hatch, Tool time: Gender and stu- [53] M. A. DeVito, J. Birnholtz, J. T. Hancock,
dents’ use of tools, control, and au- M. French, S. Liu, How people form
thority, Journal of Research in Sci- folk theories of social media feeds and
ence Teaching: The Official Journal of what it means for how we study self-
the National Association for Research presentation, in: Proceedings of the
in Science Teaching 37 (2000) 760–783. 2018 CHI conference on human factors
[46] Y. Dong, S. Marwan, V. Catete, T. Price, in computing systems, 2018, pp. 1–12.
T. Barnes, Defining tinkering behavior [54] D. Showkat, E. P. S. Baumer, Outliers:
in open-ended block-based program- More than numbers? (2020).
ming assignments, in: Proceedings of
the 50th ACM Technical Symposium on
Computer Science Education, 2019, pp.
1204–1210.
[47] M. U. Bers, L. Flannery, E. R. Kazakoff,
A. Sullivan, Computational thinking
and tinkering: Exploration of an early
childhood robotics curriculum, Com-
puters & Education 72 (2014) 145–157.
[48] M. H. Lamers, F. J. Verbeek, P. W.
van der Putten, Tinkering in scientific
education, in: International Conference
on Advances in Computer Entertain-
ment Technology, Springer, 2013, pp.
568–571.
[49] M. Burnett, S. Stumpf, J. Macbeth,
S. Makri, L. Beckwith, I. Kwan, A. Pe-
ters, W. Jernigan, Gendermag: A
method for evaluating software’s gen-
der inclusiveness, Interacting with
Computers 28 (2016) 760–787.
[50] D. Showkat, C. Grimm, Identifying gen-
der differences in information process-