=Paper=
{{Paper
|id=Vol-2903/IUI21WS-HAIGEN-2
|storemode=property
|title=Nine Potential Pitfalls when Designing Human-AI Co-Creative Systems
|pdfUrl=https://ceur-ws.org/Vol-2903/IUI21WS-HAIGEN-2.pdf
|volume=Vol-2903
|authors=Daniel Buschek,Lukas Mecke,Florian Lehmann,Hai Dang
|dblpUrl=https://dblp.org/rec/conf/iui/BuschekMLD21
}}
==Nine Potential Pitfalls when Designing Human-AI Co-Creative Systems==
Daniel Buschek (a), Lukas Mecke (b, c), Florian Lehmann (a), Hai Dang (a)
(a) Research Group HCI + AI, Department of Computer Science, University of Bayreuth, Bayreuth, Germany
(b) Bundeswehr University Munich, Munich, Germany
(c) LMU Munich, Munich, Germany
Abstract
This position paper examines potential pitfalls on the way towards achieving human-AI co-creation with
generative models in a way that is beneficial to the users’ interests. In particular, we collected a set of
nine potential pitfalls, based on the literature and our own experiences as researchers working at the
intersection of HCI and AI. We illustrate each pitfall with examples and suggest ideas for addressing it.
Reflecting on all pitfalls, we discuss and conclude with implications for future research directions. With
this collection, we hope to contribute to a critical and constructive discussion on the roles of humans and
AI in co-creative interactions, with an eye on related assumptions and potential side-effects for creative
practices and beyond.
Keywords
HCI, Artificial Intelligence, Co-Creation, Design
Joint Proceedings of the ACM IUI 2021 Workshops, April 13-17, 2021, College Station, USA
daniel.buschek@uni-bayreuth.de (D. Buschek); lukas.meckek@unibw.de (L. Mecke); florian.lehmann@uni-bayreuth.de (F. Lehmann); hai.dang@uni-bayreuth.de (H. Dang)
© 2021 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
CEUR Workshop Proceedings (CEUR-WS.org), ISSN 1613-0073

1. Introduction

Ongoing advances in generative AI systems have sparked great interest in using them interactively in creative contexts and for digital content creation and manipulation: Some examples include (1) generating or modifying images with generative adversarial networks (GANs) [1, 2, 3], (2) generating controllable movements for virtual characters with recurrent neural networks, deep reinforcement learning and physics simulations [4], and (3) controllable machine capabilities for generating or summarizing when working with text [5, 6]. Such computational methods have also entered specifically artistic domains, including visual art [7], creative writing and poetry [8, 9]. More examples can be found in a curated “ML x Art” list (https://mlart.co/, last accessed 17.12.2020).

A common vision, also present in the call for this workshop, paints a picture of creative human use of such AI as tools. In this view, these new interactive systems are hoped to realise key ideas from creativity support tools (CST, [10]) by leveraging AI capabilities. More specifically, this support could cast humans and AI in many different roles (for a recent overview see [11]). This includes, for example, using AI as a divergent or convergent agent, as described by Hoffmann [12], that is, to generate or evaluate (human) ideas. Relatedly, Kantosalo and Toivonen [13] highlight alternating co-creation, with the AI “pleasing” and “provoking” the user. Moreover, Negrete-Yankelevich and Morales-Zaragoza [14] describe a related set of roles, including AI as an “apprentice”,
whose work is judged and selectively chosen by humans, or a leader-like role, which only leaves final configurations to the user.

Within this range of roles, the workshop call emphasises the generative capabilities of AI. In this paper, we thus focus on the role of AI as a generator, and the underlying goal of freeing its users to focus on a larger creative vision, while the AI takes care of more tedious steps.

With this goal in mind, this paper examines potential pitfalls on the way towards achieving it in practice. Our research approach is related to work on dark patterns in UI/UX design [15], which also examines – sometimes speculatively [16], sometimes empirically [17] – what “could go wrong”, in order to ultimately inspire directions for interaction design that are beneficial to the users’ interests. In doing so, we thus hope to contribute to a critical and constructive discussion on the roles of humans and AI in co-creative interactions, with an eye on related assumptions and potential side-effects for creative practices and beyond.

2. Research Approach

Our interest in collecting pitfalls is inspired by work on dark patterns [16, 17, 15]: Both pitfalls and dark patterns identify issues with user interfaces and interactions that result in experiences or outcomes which might not be in the user’s best interest. However, in contrast to what is often assumed in dark patterns, pitfalls do not imply bad intention, rather oversight or lack of information (see https://www.merriam-webster.com/dictionary/pitfall, last accessed 17.12.2020).

Concretely, related work collected speculative dark patterns for explainability, transparency and control in intelligent interactive systems [16] by transferring dark patterns previously described for UI/UX design [15]. Other work collected dark UI/UX patterns empirically by reviewing a large set of existing mobile applications [17]. Both approaches seem challenging to directly transfer to collecting pitfalls in the context of co-creative generative AI, since there are no previously defined pitfalls and no easily accessible collections (or “app stores”) of many usable applications for review.

Therefore, we followed a qualitative, speculative approach and brainstormed on potential pitfalls, or “what could go wrong” (cf. [18]), in interactions with co-creative AI. Here we are loosely inspired by aspects of speculative design [19], although that area typically aims to address broader issues than what we focus on here. Further inspiring “speculative futures” for human-AI co-creative systems, along with a conceptual framework, can be found in the work by Bown and Brown [20]. We particularly explore issues grounded in today’s interactions and UIs, which can be reasonably well imagined to potentially occur with the current state of the art of generative AI models. In particular, our brainstorming started from three prompts: (1) issues arising from currently limited capabilities of AI, (2) issues from exploring what might happen with too much AI involvement, plus (3) thinking beyond use and usage situations. Considering this approach, we see the pitfalls presented here not as a comprehensive and “definitive” list but rather as a stimulus for discussion in the research community – at the workshop and beyond.

3. Nine Potential Pitfalls

Table 1 shows the pitfalls we collected. In particular, we present nine pitfalls, three for each of our starting prompts, that is, for limited AI (pitfalls 1-3), too much AI involvement (pitfalls 4-6), and for aspects beyond use (pitfalls 7-9).
Limited AI

Pitfall 1: Invisible AI boundaries
Affected aspects: model, creativity, exploration
Problem: A (generative) AI component imposes unknown restrictions on creativity and exploration.
Example: An AI face image editor cannot make faces bald without also turning them male-looking.
How it might have happened: Model with limited generalisability beyond training data, and entangled or nonsensical (latent) dimensions w.r.t. human understanding.
How it might be addressed: UI: show boundaries, e.g. via uncertainty, samples, precision/recall [21]. AI: improve generalisability, disentanglement; consider narrowing scope.

Pitfall 2: Lack of expressive interaction
Affected aspects: usability, creativity, exploration
Problem: The UI imposes a “bottleneck” on creative use of the AI.
Example: An image generator is controlled with sliders (i.e. many 1D inputs for a high-dimensional latent space) [22] – vs. rich image editor tools like brushes.
How it might have happened: Fine-grained AI control is difficult. “Conservative” UI design focused on ensuring input stays in the (training) data distribution.
How it might be addressed: Human-centred design with the target group, e.g. to inform preferable tradeoffs of UI expressiveness and model “breaking points”.

Pitfall 3: False sense of proficiency
Affected aspects: trust, reliability
Problem: AI suggests answers or completions that the user cannot verify or that generate a false sense of proficiency.
Example: When prompted to complete a sentence about the population of a large city, the AI delivers a reasonable number that could be correct – but might not be.
How it might have happened: Language models are capable of memorizing excerpts of text and reproducing them when prompted with a similar context.
How it might be addressed: Learn an additional model that can attribute generated content to an explicit source to allow for verifying correctness.

Too much AI

Pitfall 4: Conflicts of territory
Affected aspects: usability, UX, control
Problem: AI overwrites what the user has manually created/edited.
Example: In a co-creative text editor, the user replaces terms in generated text. Later, the AI (partly) reverts these changes.
How it might have happened: Language model optimised for word probability, and the user’s term was less likely.
How it might be addressed: Keep track of user edits to protect them, ask for confirmation before changes, or integrate this info into inference.

Pitfall 5: Agony of choice
Affected aspects: usability, UX, productivity
Problem: AI provides an overwhelming amount/detail of content that distracts or creates agony of choice.
Example: An AI photo editor displays an excessive number of suggested variants. The resulting small previews make it hard to discern and decide.
How it might have happened: UI design process was focused on showing AI capabilities instead of user needs.
How it might be addressed: Clarifying use cases and support needs, responsive/malleable UI concepts, changeable user settings.

Pitfall 6: Time waster
Affected aspects: usability, UX, productivity
Problem: AI interrupts the user or draws attention away from the creative task itself.
Example: A co-creative music composition tool continuously shows melody completions, which keep the user busy with exploring or understanding the system instead of their ideas.
How it might have happened: Same as above. Also: timing of the AI’s involvement not tested with users, or varying preferences between users.
How it might be addressed: Same as above. Attention-aware UI (e.g. AI waits to not disrupt the user’s focused work, or stops suggestions if the user has explored it for a while).

Beyond use

Pitfall 7: AI bias
Affected aspects: accountability, fairness, transparency
Problem: AI suggestions are biased in a certain unwanted way, w.r.t. human meaning and values.
Example: An AI story generator writes gender-stereotypical protagonists (e.g. w.r.t. roles/occupations).
How it might have happened: AI picked up biases in the training data or created bias through its learning method. Development process unaware of biases.
How it might be addressed: Design for easy human revision/rejection. Addressing AI bias (e.g. see [23, 24]). Learning from user feedback/actions.

Pitfall 8: Conflict of creation & responsibility
Affected aspects: creativity, responsibility, ownership
Problem: A system and a user collaborate to create an output. Ownership and responsibility are unclear.
Example: In a co-creative text editor, the AI suggests formulations that appear verbatim in the training data. Who is the owner of the resulting text?
How it might have happened: Co-creative systems operate on a continuum between user and system creation, challenging attributions of ownership.
How it might be addressed: Should we attribute an AI and training data providers as contributors? Do we need systems to check for (accidental) plagiarism?

Pitfall 9: User and data privacy
Affected aspects: privacy, responsibility
Problem: Private data may be exposed through the AI system or its training data.
Example: 1) A user A works with a cloud-based AI text creator and their data is transmitted unencrypted. 2) The AI reveals (private parts of) another user B’s data to A (e.g. [25, 26]).
How it might have happened: AI models are trained on a large corpus of data and can sometimes default to replicating this data when prompted.
How it might be addressed: Remove private information from training sets and work with AI either encrypted or locally.

Table 1: Overview of the collected pitfalls. Additionally, Figure 2 visualises one example for each of the categories “Limited AI”, “Too much AI” and “Beyond use”.
The table characterises each pitfall with a name, affected aspects (categories), a description of the problem, and a concise pitfall “vignette”: This includes an example scenario describing a system in which this issue arises, along with an illustrating diagnosis of how this might have happened in the design and development of said system, plus corresponding ideas for potential solutions or open questions. For each category of pitfalls (“limited AI”, “too much AI”, and “beyond use”) we picked one example for further illustration in Figure 2.

As an additional overview, Figure 1 locates these pitfalls within an interaction loop in human-AI co-creative systems; the loop is taken from a framework by Guzdial and Riedl [27]. In this figure we illustrate our underlying mental model of human-AI interaction. It consists of the user and the AI as potential actors collaborating on a shared artifact. The AI can get involved in the creation process in one of two ways: It can either be prompted to contribute through the user interface (e.g. using a predefined function to achieve an image manipulation) or it can act without a (user) prompt, e.g. to suggest edits or flag errors. We further include the training data in this model, as it provides the basis for the AI’s actions and decisions. While we located the pitfalls within this model, these locations are by no means the only possible ones. They represent our interpretations of which point in the interaction loop is most likely affected by each pitfall. As an example, a lack of expressive interaction may not only be rooted in the user interface, but can also be caused by insufficient training data to support more meaningful options.

4. Discussion

4.1. What are the Consequences of these Pitfalls?

While Table 1 lists concrete example problems, here we reflect more broadly on the consequences of such pitfalls for co-creative generative systems. In particular, we see two broad directions – overt and covert consequences.

First, users might be annoyed, distracted, or otherwise put off by bad user experiences due to these pitfalls. For example, cases where the AI directly overwrites the user (pitfall 4), or distracts the user from their productive task (pitfall 6) might be particularly harmful in this regard. Observing AI failures might lead to algorithm aversion, as described by Dietvorst et al. [29]. In these cases, users might avoid future use of such systems.

In contrast, users might also be affected negatively without noticing it. For example, this might be the case if the AI implies invisible boundaries (pitfall 1) that hinder creative exploration. Similarly, “silent” issues might result from the generative AI introducing incorrect information (pitfall 3), distractions (pitfall 6), or biases and legal issues (pitfalls 7-9). Users might only (much later) stumble across issues in downstream processes, evaluations or reflections. If such issues then affect evaluation of the user’s creative work (e.g. due to false information, pitfall 3), this might result in algorithm anxiety, described by Jhaver et al. [30].

Overall, the pitfalls might thus result in a range of possible consequences, from bad user experiences, negative impacts on creative work, abandonment of tools, to broader issues, including privacy-related and legal ones.
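To make the overt case concrete: one mitigation listed in Table 1 for pitfall 4 (Conflicts of territory) is to keep track of user edits and ask for confirmation before the AI changes them. The following minimal sketch illustrates that idea; the class and method names are our own hypothetical choices, not part of any system described in this paper:

```python
class SharedDocument:
    """Sketch of a co-creative text buffer that protects user edits (pitfall 4):
    word positions the user touched are recorded, and AI rewrites of those
    positions are held back for explicit confirmation instead of applied silently."""

    def __init__(self, words):
        self.words = list(words)
        self.user_owned = set()   # indices of words edited by the user
        self.pending = []         # AI changes awaiting user confirmation

    def user_edit(self, index, word):
        self.words[index] = word
        self.user_owned.add(index)   # remember: this word is now the user's

    def ai_edit(self, index, word):
        if index in self.user_owned:
            # do not overwrite the user's territory; queue for confirmation
            self.pending.append((index, word))
        else:
            self.words[index] = word

    def confirm_pending(self, accept):
        # apply (or drop) all queued AI changes after the user decided
        for index, word in self.pending:
            if accept:
                self.words[index] = word
        self.pending.clear()


doc = SharedDocument(["the", "large", "city"])
doc.user_edit(1, "sprawling")   # user replaces a generated word
doc.ai_edit(1, "big")           # AI tries to revert it
doc.ai_edit(0, "a")             # untouched words may still be edited freely
assert doc.words == ["a", "sprawling", "city"]   # user's word survives
assert doc.pending == [(1, "big")]               # AI change waits for consent
```

A real editor would track character spans and undo history rather than word indices, but the core design choice is the same: the system records authorship and makes AI overwrites an explicit, consented action.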
[Figure 1 is a diagram: the User and the AI act on a shared Artefact; the user issues prompts through the User Interface, and the AI contributes prompted or unprompted output, drawing on its training data. The nine pitfalls from Table 1, grouped as Limited AI, Too Much AI, and Beyond Use, are placed at the elements of this loop, e.g. Lack of expressive interaction and Agony of choice at the user interface, Invisible AI boundaries and False sense of proficiency at the prompted output, and User and data privacy near the training data.]

Figure 1: Visualisation of our underlying mental model of the interaction loop in human-AI co-creative systems. We place our identified pitfalls (see Table 1) in this loop based on the position where they most likely occur.
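For the unprompted-output path of this loop, Table 1 suggests an attention-aware UI as a remedy for pitfall 6 (Time waster): the AI waits rather than disrupting focused work, and backs off when the user keeps dismissing suggestions. A rough sketch of such gating logic follows; the names and thresholds are hypothetical illustrations, not from the paper:

```python
class SuggestionGate:
    """Sketch of an attention-aware trigger for unprompted AI suggestions
    (pitfall 6): only suggest after the user has paused for `idle_after`
    seconds, and mute suggestions after repeated dismissals."""

    def __init__(self, idle_after=2.0, max_dismissals=3):
        self.idle_after = idle_after
        self.max_dismissals = max_dismissals
        self.last_keystroke = 0.0
        self.dismissals = 0

    def on_keystroke(self, t):
        self.last_keystroke = t          # user is actively working

    def on_dismissal(self):
        self.dismissals += 1             # user rejected a suggestion

    def should_suggest(self, t):
        if self.dismissals >= self.max_dismissals:
            return False                 # user has signalled disinterest
        return (t - self.last_keystroke) >= self.idle_after


gate = SuggestionGate()
gate.on_keystroke(t=10.0)
assert not gate.should_suggest(t=10.5)   # user is still typing: stay quiet
assert gate.should_suggest(t=13.0)       # user paused: acceptable to suggest
for _ in range(3):
    gate.on_dismissal()
assert not gate.should_suggest(t=20.0)   # repeated dismissals mute the AI
```

The two signals (idle time and dismissal count) stand in for the user testing that the table calls for; in practice both thresholds would be tuned per user, since the table notes that preferences vary between users.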
4.2. How can the Pitfalls Inform Research and Design of Co-Creative Generative Systems?

Put briefly, this position paper describes what could go wrong in order to stimulate discussions of how to get it right. More concretely, here we describe three potential uses.

4.2.1. Raising Awareness of Design Considerations

The described pitfalls can help researchers and designers to think about a wide range of concrete aspects of interaction and UI design for co-creative generative systems (e.g. temporal and spatial integration of AI actions in UIs). In this way, they may raise awareness for making design choices explicit that might have otherwise not been prominently considered. These design choices could then also be considered in light of relevant frameworks, such as Horvitz’ mixed initiative principles [31] or the co-creative framework described by Guzdial and Riedl [27] (cf. Figure 1).

4.2.2. Informing Comparisons and Baselines

Moreover, the problematic systems described in the pitfalls in Table 1 might inspire informative baseline systems for comparison with (hopefully) better solutions. For example, a typical HCI user study on an AI photo editor might compare an AI vs non-AI version. However, as illustrated with the example for pitfall 5 (Agony of Choice), another insightful evaluation might further use a baseline that involves AI “even more” than the intended design solution to be evaluated.

4.2.3. Making the Criteria for Successful Design Explicit

Evaluating technical systems for creative use is challenging [32], for example, since creativity and quality criteria are often hard to operationalise, and may require interdisciplinary knowledge. Additionally involving AI can be expected to complicate evaluations further. Here, our pitfalls and examples may provide helpful concrete starting points, as a thinking prompt towards developing evaluations that satisfy both HCI and AI inter-
[Figure 2 shows three mock interfaces. (a): an “AI Photo Editor” whose “Edit Suggestions” panel is filled with many small preview variants. (b): a text field in which a user has typed “My social security number is 078” and the panel “AI Suggestions:” completes it with “-05-1120”. (c): a text editor in which the command “@ai.draft: First article of an article series about the largest cities in the world. In this article we will start with the cities Tokyo, Mexico City, and Istanbul.” yields a paragraph marked “Drafted by AI”: “In our article series, we are going to visit the largest cities in the world. This time we will focus Tokyo, Mexico City, and Istanbul, three very different cultural centers. We start right away with the largest city of these: Tokyo. The Japanese city is considered the largest city in the world in terms of its population. 37.468.000 people are living there.”]

(a) An AI photo editor displays an excessive number of suggestions. Due to the number of options and the small previews (making it hard to see what each option achieves) the user is left in an agony of choice.

(b) Example for an AI leaking sensitive information from the training dataset (based on [25]), either as a suggestion or as a response to a primer (enabling active attacks). Such an attack has been demonstrated by Carlini et al. [26].

(c) A text editing tool could offer intelligent features, e.g. drafting paragraphs or completing a sentence. Yet, the AI might not have the capability to refer to sources – to the human it remains unclear if the claims in a text are true. This leads to a false sense of proficiency. Here, the AI drafted a paragraph with claims about Tokyo’s approximate population (bold). However, it refers to the metropolitan area, not the city proper. The interface in the figure is inspired by Yang et al. [28].

Figure 2: Collection of visual examples for the pitfalls shown in Table 1. Here we show potential interfaces and situations in which selected pitfalls may occur, leading to (a) agony of choice, (b) a breach of privacy or (c) a false sense of proficiency.
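The leak in panel (b), like the plagiarism question raised for pitfall 8, stems from the model emitting long verbatim spans of its training data [25, 26]. A crude, illustrative guard (our own sketch, not a mechanism proposed in this paper; function and parameter names are hypothetical) is to flag long n-gram overlaps between generated text and the training corpus before showing output to the user:

```python
def verbatim_overlaps(generated, corpus, n=6):
    """Flag any n-word span of `generated` that appears verbatim in any
    training document. Sketch only: a deployed check would need scalable
    indexing (e.g. hashed n-grams or a suffix index) over the full corpus."""
    corpus_ngrams = set()
    for doc in corpus:
        words = doc.split()
        for i in range(len(words) - n + 1):
            corpus_ngrams.add(tuple(words[i:i + n]))

    words = generated.split()
    flagged = []
    for i in range(len(words) - n + 1):
        span = tuple(words[i:i + n])
        if span in corpus_ngrams:
            flagged.append(" ".join(span))   # verbatim training-data span
    return flagged


corpus = ["my social security number is 078 05 1120 please keep it private"]
safe = verbatim_overlaps("the largest city in the world is Tokyo", corpus)
leak = verbatim_overlaps("note my social security number is 078 05 1120 thanks", corpus)
assert safe == []
assert len(leak) > 0   # memorized spans are caught before being shown
```

Such a filter addresses only exact reproduction; paraphrased leaks and the ownership questions of pitfall 8 would still require the broader measures discussed in Table 1, such as removing private data from training sets or attributing sources.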
ests. For instance, readers and workshop participants (with different backgrounds) could think about how they would improve the design – and evaluate it – for a concrete problematic example system in Table 1; and in particular how they might then make explicit and formulate their criteria in these cases.

4.3. Will the Pitfalls Vanish with Better AI?

One may ask if the illustrated issues might simply vanish in future systems that can build on better AI capabilities. Based on our considerations here, we do not expect this to be the case: Co-creative systems involving both human and AI actions are not only limited by AI capabilities. We also have to expect problems arising from interaction and UI design as well as from integration into creative human practices. For example, a lack of expressiveness in interactions (pitfall 2) can still cause problems for creative human use, even in a system with a powerful, “perfect” generative model under the hood.

In summary, the pitfalls highlight that human-AI co-creative systems sit at the intersection of HCI and AI, and that successful
designs need to consider human-centred aspects in the process. Our pitfalls reflect this in their mix of issues relating to interaction, UI and AI. We thus aim to motivate interdisciplinary work on such systems, also regarding research and design methodology.

5. Conclusion

One vision of interactive use of AI tools in co-creative settings focuses on the role of the AI as a generator that augments what people can achieve in creative tasks. This paper examined potential pitfalls on the way towards achieving this vision in practice, starting from three speculation prompts: Issues arising from (1) limited AI, (2) too much AI involvement, and (3) thinking beyond use and usage situations.

Concretely, we collected a set of nine potential pitfalls (Table 1) and discussed possible consequences and takeaways for researchers and designers along with illustrating examples. With this collection, we hope to contribute to a critical and constructive discussion on the roles of humans and AI in co-creative interactions, with an eye on related assumptions and potential side-effects for creative practices and beyond.

Acknowledgments

This project is funded by the Bavarian State Ministry of Science and the Arts and coordinated by the Bavarian Research Institute for Digital Transformation (bidt).

References

[1] D. Bau, J.-Y. Zhu, H. Strobelt, A. Lapedriza, B. Zhou, A. Torralba, Understanding the role of individual units in a deep neural network, Proceedings of the National Academy of Sciences (2020). URL: https://www.pnas.org/content/early/2020/08/31/1907375117. doi:10.1073/pnas.1907375117.
[2] E. Härkönen, A. Hertzmann, J. Lehtinen, S. Paris, GANSpace: Discovering Interpretable GAN Controls (2020). URL: https://arxiv.org/abs/2004.02546v1.
[3] T. Karras, S. Laine, M. Aittala, J. Hellsten, J. Lehtinen, T. Aila, Analyzing and improving the image quality of stylegan, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
[4] S. Park, H. Ryu, S. Lee, S. Lee, J. Lee, Learning predict-and-simulate policies from unorganized human motion data, ACM Trans. Graph. 38 (2019). URL: https://doi.org/10.1145/3355089.3356501. doi:10.1145/3355089.3356501.
[5] S. Dathathri, A. Madotto, J. Lan, J. Hung, E. Frank, P. Molino, J. Yosinski, R. Liu, Plug and Play Language Models: A Simple Approach to Controlled Text Generation, arXiv:1912.02164 [cs] (2020). URL: http://arxiv.org/abs/1912.02164.
[6] S. Gehrmann, H. Strobelt, R. Krüger, H. Pfister, A. M. Rush, Visual interaction with deep learning models through collaborative semantic inference, IEEE Transactions on Visualization and Computer Graphics 26 (2020) 884–894. doi:10.1109/TVCG.2019.2934595.
[7] M. Akten, R. Fiebrink, M. Grierson, Learning to see: You are what you see, in: ACM SIGGRAPH 2019 Art Gallery, SIGGRAPH ’19, Association for Computing Machinery, New York, NY, USA, 2019. URL: https://doi.org/10.1145/3306211.3320143. doi:10.1145/3306211.3320143.
[8] K. I. Gero, L. B. Chilton, Metaphoria: An algorithmic companion for metaphor creation, in: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI ’19, Association for Computing Machinery, New York, NY, USA, 2019, p. 1–12. URL: https://doi.org/10.1145/3290605.3300526. doi:10.1145/3290605.3300526.
[9] M. Ghazvininejad, X. Shi, J. Priyadarshi, K. Knight, Hafez: an Interactive Poetry Generation System, in: Proceedings of ACL 2017, System Demonstrations, Association for Computational Linguistics, Vancouver, Canada, 2017, pp. 43–48. URL: https://www.aclweb.org/anthology/P17-4008.
[10] J. Frich, L. MacDonald Vermeulen, C. Remy, M. M. Biskjaer, P. Dalsgaard, Mapping the landscape of creativity support tools in hci, in: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI ’19, Association for Computing Machinery, New York, NY, USA, 2019, p. 1–18. URL: https://doi.org/10.1145/3290605.3300619. doi:10.1145/3290605.3300619.
[11] A. Kantosalo, A. Jordanous, Role-Based Perceptions of Computer Participants in Human-Computer Co-Creativity, in: 7th Computational Creativity Symposium at AISB 2020, 2020. URL: https://research.aalto.fi/en/publications/role-based-perceptions-of-computer-participants-in-human-computer.
[12] O. Hoffmann, On modeling human-computer co-creativity, in: S. Kunifuji, G. A. Papadopoulos, A. M. Skulimowski, J. Kacprzyk (Eds.), Knowledge, Information and Creativity Support Systems, Springer International Publishing, Cham, 2016, pp. 37–48.
[13] A. Kantosalo, H. Toivonen, Modes for Creative Human-Computer Collaboration: Alternating and Task-Divided Co-Creativity, in: Proceedings of the Seventh International Conference on Computational Creativity (ICCC 2016), Sony CSL, 2016, pp. 77–84. URL: https://researchportal.helsinki.fi/en/publications/modes-for-creative-human-computer-collaboration-alternating-and-t.
[14] S. Negrete-Yankelevich, N. Morales-Zaragoza, The apprentice framework: planning and assessing creativity, Fifth International Conference on Computational Creativity, 2014.
[15] C. M. Gray, Y. Kou, B. Battles, J. Hoggatt, A. L. Toombs, The dark (patterns) side of ux design, in: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI ’18, Association for Computing Machinery, New York, NY, USA, 2018, p. 1–14. URL: https://doi.org/10.1145/3173574.3174108. doi:10.1145/3173574.3174108.
[16] M. Chromik, M. Eiband, S. T. Völkel, D. Buschek, Dark patterns of explainability, transparency, and user control for intelligent systems, in: C. Trattner, D. Parra, N. Riche (Eds.), Joint Proceedings of the ACM IUI 2019 Workshops co-located with the 24th ACM Conference on Intelligent User Interfaces (ACM IUI 2019), Los Angeles, USA, March 20, 2019, volume 2327 of CEUR Workshop Proceedings, CEUR-WS.org, 2019. URL: http://ceur-ws.org/Vol-2327/IUI19WS-ExSS2019-7.pdf.
[17] L. Di Geronimo, L. Braz, E. Fregnan, F. Palomba, A. Bacchelli, Ui dark patterns and where to find them: A study on mobile applications and user perception, in: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, CHI ’20, Association for Computing Machinery, New York, NY, USA, 2020, p. 1–14. URL: https://doi.org/10.1145/3313831.3376600. doi:10.1145/3313831.3376600.
[18] L. Colusso, C. L. Bennett, P. Gabriel, D. K. Rosner, Design and diversity? speculations on what could go wrong, in: Proceedings of the 2019 on Designing Interactive Systems Conference, DIS ’19, Association for Computing Machinery, New York, NY, USA, 2019, p. 1405–1413. URL: https://doi.org/10.1145/3322276.3323690. doi:10.1145/3322276.3323690.
[19] A. Dunne, F. Raby, Speculative Everything: Design, Fiction, and Social Dreaming, MIT Press, 2013.
[20] O. Bown, A. R. Brown, Interaction Design for Metacreative Systems, Springer International Publishing, Cham, 2018, pp. 67–87. URL: https://doi.org/10.1007/978-3-319-73356-2_5. doi:10.1007/978-3-319-73356-2_5.
[21] T. Kynkäänniemi, T. Karras, S. Laine, J. Lehtinen, T. Aila, Improved precision and recall metric for assessing generative models, in: Advances in Neural Information Processing Systems, 2019, pp. 3927–3936.
[22] E. Härkönen, A. Hertzmann, J. Lehtinen, S. Paris, Ganspace: Discovering interpretable gan controls, arXiv preprint arXiv:2004.02546 (2020).
[23] J. Buolamwini, T. Gebru, Gender shades: Intersectional accuracy disparities in commercial gender classification, in: S. A. Friedler, C. Wilson (Eds.), Proceedings of the 1st Conference on Fairness, Accountability and Transparency, volume 81 of Proceedings of Machine Learning Research, PMLR, New York, NY, USA, 2018, pp. 77–91. URL: http://proceedings.mlr.press/v81/buolamwini18a.html.
[24] D. S. Shah, H. A. Schwartz, D. Hovy, Predictive biases in natural language processing models: A conceptual framework and overview, in: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics, Online, 2020, pp. 5248–5264. URL: https://www.aclweb.org/anthology/2020.acl-main.468. doi:10.18653/v1/2020.acl-main.468.
[25] N. Carlini, C. Liu, Ú. Erlingsson, J. Kos, D. Song, The secret sharer: Evaluating and testing unintended memorization in neural networks, in: 28th USENIX Security Symposium (USENIX Security 19), USENIX Association, Santa Clara, CA, 2019, pp. 267–284. URL: https://www.usenix.org/conference/usenixsecurity19/presentation/carlini.
[26] N. Carlini, F. Tramer, E. Wallace, M. Jagielski, A. Herbert-Voss, K. Lee, A. Roberts, T. Brown, D. Song, U. Erlingsson, A. Oprea, C. Raffel, Extracting training data from large language models, 2020. arXiv:2012.07805.
[27] M. Guzdial, M. Riedl, An Interaction Framework for Studying Co-Creative AI, arXiv:1903.09709 [cs] (2019). URL: http://arxiv.org/abs/1903.09709.
[28] Q. Yang, J. Cranshaw, S. Amershi, S. T. Iqbal, J. Teevan, Sketching nlp: A case study of exploring the right things to design with language intelligence, CHI ’19, Association for Computing Machinery, New York, NY, USA, 2019, p. 1–12. URL: https://doi.org/10.1145/3290605.3300415. doi:10.1145/3290605.3300415.
[29] B. J. Dietvorst, J. P. Simmons, C. Massey, Algorithm aversion: people erroneously avoid algorithms after seeing them err, Journal of Experimental Psychology. General 144 (2015) 114–126. doi:10.1037/xge0000033.
[30] S. Jhaver, Y. Karpfen, J. Antin, Algorithmic anxiety and coping strategies of airbnb hosts, in: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI ’18, Association for Computing Machinery, New York, NY, USA, 2018, p. 1–12. URL: https://doi.org/10.1145/3173574.3173995. doi:10.1145/3173574.3173995.
[31] E. Horvitz, Principles of mixed-initiative user interfaces, in: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’99, Association for Computing Machinery, New York, NY, USA, 1999, p. 159–166. URL: https://doi.org/10.1145/302979.303030. doi:10.1145/302979.303030.
[32] C. Lamb, D. G. Brown, C. L. A. Clarke, Evaluating computational creativity: An interdisciplinary tutorial, ACM Comput. Surv. 51 (2018). URL: https://doi.org/10.1145/3167476. doi:10.1145/3167476.