<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Nine Potential Pitfalls when Designing Human-AI Co-Creative Systems</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Daniel Buschek</string-name>
          <email>daniel.buschek@uni-bayreuth.de</email>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff4">4</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Lukas Mecke</string-name>
<email>lukas.mecke@unibw.de</email>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Florian Lehmann</string-name>
<email>florian.lehmann@uni-bayreuth.de</email>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff4">4</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Hai Dang</string-name>
          <email>hai.dang@uni-bayreuth.de</email>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff4">4</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>AI, Department of Computer Science, University of Bayreuth</institution>
          ,
          <addr-line>Bayreuth</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Bundeswehr University Munich</institution>
          ,
          <addr-line>Munich</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Joint Proceedings of the ACM IUI 2021 Workshops</institution>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>LMU Munich</institution>
          ,
          <addr-line>Munich</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff4">
          <label>4</label>
          <institution>Research Group HCI</institution>
        </aff>
      </contrib-group>
      <abstract>
<p>This position paper examines potential pitfalls on the way towards achieving human-AI co-creation with generative models in a way that is beneficial to the users' interests. In particular, we collected a set of nine potential pitfalls, based on the literature and our own experiences as researchers working at the intersection of HCI and AI. We illustrate each pitfall with examples and suggest ideas for addressing it. Reflecting on all pitfalls, we discuss and conclude with implications for future research directions. With this collection, we hope to contribute to a critical and constructive discussion on the roles of humans and AI in co-creative interactions, with an eye on related assumptions and potential side-effects for creative practices and beyond.</p>
      </abstract>
      <kwd-group>
<kwd>HCI</kwd>
        <kwd>Artificial Intelligence</kwd>
        <kwd>Co-Creation</kwd>
        <kwd>Design</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Ongoing advances in generative AI systems have sparked great interest in using them interactively in creative contexts and for digital content creation and manipulation. Some examples include (
        <xref ref-type="bibr" rid="ref1">1</xref>
        ) generating or modifying images with generative adversarial networks (GANs) [1, 2, 3], (
        <xref ref-type="bibr" rid="ref2">2</xref>
        ) generating controllable movements for virtual characters with recurrent neural networks, deep reinforcement learning and physics simulations [4], and (
        <xref ref-type="bibr" rid="ref3">3</xref>
        ) controllable machine capabilities for generating or summarizing when working with text [5, 6]. Such computational methods have also entered specifically artistic domains, including visual art [7], creative writing and poetry [8, 9]. More examples can be found in a curated “ML x Art” list (https://mlart.co/, last accessed 17.12.2020).
      </p>
      <p>© 2021 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). Published in the Joint Proceedings of the ACM IUI 2021 Workshops, CEUR Workshop Proceedings (CEUR-WS.org, ISSN 1613-0073).</p>
      <p>A common vision, also present in the call
for this workshop, paints a picture of
creative human use of such AI as tools. In
this view, these new interactive systems are
hoped to realise key ideas from creativity
support tools (CST, [10]) by leveraging AI
capabilities. More specifically, this support
could cast humans and AI in many
different roles (for a recent overview see [11]).</p>
      <p>This includes, for example, using AI as a divergent or convergent agent, as described by Hofmann [12], that is, to generate or evaluate (human) ideas. Related, Kantosalo and Toivonen [13] highlight alternating co-creation, with the AI “pleasing” and “provoking” the user. Moreover, Negrete-Yankelevich and Morales-Zaragoza [14] describe a related set of roles, including AI as an “apprentice”, whose work is judged and selectively chosen by humans, or a leader-like role, which only leaves final configurations to the user.</p>
      <p>Within this range of roles, the workshop call emphasises the generative capabilities of AI. In this paper, we thus focus on the role of AI as a generator, and the underlying goal of freeing its users to focus on a larger creative vision, while the AI takes care of more tedious steps.</p>
      <p>With this goal in mind, this paper examines potential pitfalls on the way towards achieving it in practice. Our research approach is related to work on dark patterns in UI/UX design [15], which also examines – sometimes speculatively [16], sometimes empirically [17] – what “could go wrong”, in order to ultimately inspire directions for interaction design that are beneficial to the users’ interests. In doing so, we thus hope to contribute to a critical and constructive discussion on the roles of humans and AI in co-creative interactions, with an eye on related assumptions and potential side-effects for creative practices and beyond.</p>
      <sec id="sec-1-2">
        <title>2. Research Approach</title>
        <p>Our interest in collecting pitfalls is inspired by work on dark patterns [16, 17, 15]: Both pitfalls and dark patterns identify issues with user interfaces and interactions that result in experiences or outcomes which might not be in the user’s best interest. However, in contrast to what is often assumed in dark patterns, pitfalls do not imply bad intention, but rather oversight or lack of information (see “pitfall”, https://www.merriam-webster.com/dictionary/pitfall, last accessed 17.12.2020).</p>
        <p>Concretely, related work collected speculative dark patterns for explainability, transparency and control in intelligent interactive systems [16] by transferring dark patterns previously described for UI/UX design [15]. Other work collected dark UI/UX patterns empirically by reviewing a large set of existing mobile applications [17]. Both approaches seem challenging to directly transfer to collecting pitfalls in the context of co-creative generative AI, since there are no previously defined pitfalls and no easily accessible collections (or “app stores”) of many usable applications for review.</p>
        <p>
          Therefore, we followed a qualitative, speculative approach and brainstormed on potential pitfalls, or “what could go wrong” (cf. [18]), in interactions with co-creative AI. Here we are loosely inspired by aspects of speculative design [19], although that area typically aims to address broader issues than what we focus on here. Further inspiring “speculative futures” for human-AI co-creative systems, along with a conceptual framework, can be found in the work by Bown and Brown [20]. We particularly explore issues grounded in today’s interactions and UIs, which can be reasonably well imagined to potentially occur with the current state of the art of generative AI models. In particular, our brainstorming started from three prompts: (
          <xref ref-type="bibr" rid="ref1">1</xref>
          ) issues arising from currently limited capabilities of AI, and (
          <xref ref-type="bibr" rid="ref2">2</xref>
          ) from exploring what might happen with too much AI involvement; plus (
          <xref ref-type="bibr" rid="ref3">3</xref>
          ) thinking beyond use and usage situations.
        </p>
        <p>Considering this approach, we see the pitfalls presented here not as a comprehensive and “definitive” list but rather as a stimulus for discussion in the research community – at the workshop and beyond.</p>
      </sec>
      <sec id="sec-1-1">
        <title>3. Nine Potential Pitfalls</title>
        <p>Table 1 shows the pitfalls we collected. In particular, we present nine pitfalls, three for each of our starting prompts, that is, for limited AI (pitfalls 1-3), too much AI involvement (pitfalls 4-6), and for aspects beyond use (pitfalls 7-9).</p>
        <table-wrap id="tab1">
          <label>Table 1</label>
          <caption>
            <p>Nine potential pitfalls: three for limited AI (1-3), three for too much AI involvement (4-6), and three for aspects beyond use (7-9). Each pitfall is characterised by affected aspects, a description of the problem, an example scenario, a diagnosis of how the issue might have arisen, and ideas for potential solutions or open questions.</p>
          </caption>
          <table>
            <thead>
              <tr>
                <th>#</th>
                <th>Pitfall</th>
                <th>Affected aspects</th>
                <th>Description</th>
                <th>Example</th>
                <th>Diagnosis</th>
                <th>Ideas</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td colspan="7"><bold>Limited AI</bold></td>
              </tr>
              <tr>
                <td>1</td>
                <td>Invisible AI boundaries</td>
                <td/>
                <td/>
                <td/>
                <td/>
                <td/>
              </tr>
              <tr>
                <td>2</td>
                <td>Lack of expressive interaction</td>
                <td>usability, creativity, exploration</td>
                <td>The UI imposes a “bottleneck” on creative use of the AI.</td>
                <td>An image generator is controlled with many 1D inputs for a high-D latent space [22], in contrast to rich image editor tools like brushes.</td>
                <td>Fine-grained AI control is difficult. “Conservative” UI design focused on ensuring input stays in the (training) data distribution.</td>
                <td>Human-centred design with the target group, e.g. to inform preferable tradeoffs of UI expressiveness and model “breaking points”.</td>
              </tr>
              <tr>
                <td>3</td>
                <td>False sense of proficiency</td>
                <td>trust, reliability</td>
                <td>AI suggests answers or completions that the user cannot verify or that generate a false sense of proficiency.</td>
                <td>When prompted to complete a sentence about the population of a large city, the AI delivers a reasonable number that could be correct – but might not be.</td>
                <td>Language models are capable of memorizing excerpts of text and reproducing them when prompted with a similar context.</td>
                <td>Learn an additional model that can attribute generated content to an explicit source to allow for verifying correctness.</td>
              </tr>
              <tr>
                <td colspan="7"><bold>Too much AI</bold></td>
              </tr>
              <tr>
                <td>4</td>
                <td>Conflicts of territory</td>
                <td>usability, UX, control</td>
                <td>AI overwrites what the user has manually created/edited.</td>
                <td>In a co-creative text editor, the user replaces terms in generated text. Later, the AI (partly) reverts these changes.</td>
                <td>Language model optimised for word probability, and the user’s term was less likely.</td>
                <td>Keep track of user edits to protect them, ask for confirmation before changes, or integrate this information into inference.</td>
              </tr>
              <tr>
                <td>5</td>
                <td>Agony of choice</td>
                <td>usability, UX, productivity</td>
                <td>AI provides an overwhelming amount/detail of content that distracts or creates agony of choice.</td>
                <td>An AI photo editor displays an excessive number of suggested variants. The resulting small previews make it hard to discern and decide.</td>
                <td>UI design process was focused on showing AI capabilities instead of user needs.</td>
                <td>Clarifying use cases and support needs, responsive/malleable UI concepts, changeable user settings.</td>
              </tr>
              <tr>
                <td>6</td>
                <td>Time waster</td>
                <td>usability, UX, productivity</td>
                <td>AI interrupts the user or draws attention away from the creative task itself.</td>
                <td>A co-creative music composition tool continuously shows melody completions, which keep the user busy with exploring or understanding the system instead of their ideas.</td>
                <td>Same as above. Also: timing of the AI’s involvement not tested with users, or varying preferences between users.</td>
                <td>Same as above. Also: attention-aware UI (e.g. AI waits to not disrupt the user’s focused work, or stops suggestions if the user has explored them for a while).</td>
              </tr>
              <tr>
                <td colspan="7"><bold>Beyond use</bold></td>
              </tr>
              <tr>
                <td>7</td>
                <td>AI bias</td>
                <td>accountability, fairness, transparency</td>
                <td>AI suggestions are biased in a certain unwanted way, w.r.t. human meaning and values.</td>
                <td>An AI story generator writes gender-stereotypical protagonists (e.g. w.r.t. roles/occupations).</td>
                <td>AI picked up biases in the training data or created bias through its learning method. Development process unaware of biases.</td>
                <td>Design for easy human revision/rejection. Addressing AI bias (e.g. see [23, 24]). Learning from user feedback/actions.</td>
              </tr>
              <tr>
                <td>8</td>
                <td>Conflict of creation &amp; responsibility</td>
                <td>creativity, responsibility, ownership</td>
                <td>A system and a user collaborate to create an output. Ownership and responsibility are unclear.</td>
                <td>In a co-creative text editor, the AI suggests formulations that appear verbatim in the training data. Who is the owner of the resulting text?</td>
                <td>Co-creative systems operate on a continuum between user and system creation, challenging attributions of ownership.</td>
                <td>Should we attribute an AI and training data providers as contributors? Do we need systems to check for (accidental) plagiarism?</td>
              </tr>
              <tr>
                <td>9</td>
                <td>User and data privacy</td>
                <td>privacy, responsibility</td>
                <td>Private data may be exposed through the AI system or its training data.</td>
                <td>1) User A works with a cloud-based AI text creator and their data is transmitted unencrypted. 2) The AI reveals (private parts of) another user B’s data to A (e.g. [25, 26]).</td>
                <td>AI models are trained on a large corpus of data and can sometimes default to replicating this data when prompted.</td>
                <td>Remove private information from training sets, and work with AI either encrypted or locally.</td>
              </tr>
            </tbody>
          </table>
        </table-wrap>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>4. Discussion</title>
      <p>The table characterises each pitfall with a name, affected aspects (categories), a description of the problem, and a concise pitfall “vignette”: This includes an example scenario describing a system in which this issue arises, along with an illustrating diagnosis of how this might have happened in the design and development of said system, plus corresponding ideas for potential solutions or open questions. For each category of pitfalls (“limited AI”, “too much AI”, and “beyond use”) we picked one example for further illustration in Figure 2.</p>
      <sec id="sec-2-1">
        <title>4.1. What are the Consequences of these Pitfalls?</title>
        <p>As an additional overview, Figure 1 locates these pitfalls within an interaction loop in human-AI co-creative systems; the loop is taken from a framework by Guzdial and Riedl [27]. In this figure we illustrate our underlying mental model of human-AI interaction. It consists of the user and the AI as potential actors collaborating on a shared artifact. The AI can get involved in the creation process in one of two ways: It can either be prompted to contribute through the user interface (e.g. using a predefined function to achieve an image manipulation) or it can act without a (user) prompt, e.g. to suggest edits or flag errors. We further include the training data in this model, as it provides the basis for the AI’s actions and decisions. While we located the pitfalls within this model, these locations are by no means the only possible ones. They represent our interpretations of which point in the interaction loop is most likely affected by each pitfall. As an example, a lack of expressive interaction may not only be rooted in the user interface, but can also be caused by insufficient training data to support more meaningful options.</p>
        <p>Figure 1: The pitfalls located within the interaction loop of user, user interface, AI, and training data (pitfalls shown include AI bias, invisible AI boundaries, lack of expressive interaction, and agony of choice).</p>
      </sec>
      <sec id="sec-2-2">
        <title>4.2. How can the Pitfalls Inform Research and Design of Co-Creative Generative Systems?</title>
        <p>Put briefly, this position paper describes what could go wrong in order to stimulate discussions of how to get it right. More concretely, here we describe three potential uses.</p>
        <sec id="sec-2-2-1">
          <title>4.2.1. Raising Awareness of Design Considerations</title>
          <p>For instance, readers and workshop participants (with different backgrounds) could think about how they would improve the design – and evaluate it – for a concrete problematic example system in Table 1; and in particular how they might then make explicit and formulate their criteria in these cases.</p>
          <p>Moreover, the problematic systems described in the pitfalls in Table 1 might inspire informative baseline systems for comparison with (hopefully) better solutions. For example, a typical HCI user study on an AI photo editor might compare an AI vs a non-AI version. However, as illustrated with the example for pitfall 5 (Agony of Choice), another insightful evaluation might further use a baseline that involves AI “even more” than the intended design solution to be evaluated.</p>
          <p>Figure 2: (a) An AI photo editor displays an excessive number of suggestions. Due to the number of options and the small previews (making it hard to see what each option achieves), the user is left in an agony of choice. (b) Example of an AI leaking sensitive information from the training dataset (based on [25]), either as a suggestion or as a response to a primer (enabling active attacks); such an attack has been demonstrated by Carlini et al. [26]. (c) A text editing tool could offer intelligent features, e.g. drafting paragraphs or completing a sentence. Yet, the AI might not have the capability to refer to sources – to the human it remains unclear if the claims in a text are true. This leads to a false sense of proficiency. Here, the AI drafted a paragraph with claims about Tokyo’s approximate population; however, it refers to the metropolitan area, not the city proper. The interface in the figure is inspired by Yang et al. [28].</p>
        </sec>
      </sec>
      <sec id="sec-2-3">
        <title>4.3. Will the Pitfalls Vanish with Better AI?</title>
        <p>One may ask if the illustrated issues might simply vanish in future systems that can build on better AI capabilities. Based on our considerations here, we do not expect this to be the case: Co-creative systems involving both human and AI actions are not only limited by AI capabilities. We also have to expect problems arising from interaction and UI design as well as from integration into creative human practices. For example, a lack of expressiveness in interactions (pitfall 2) can still cause problems for creative human use, even in a system with a powerful, “perfect” generative model under the hood.</p>
        <p>In summary, the pitfalls highlight that human-AI co-creative systems sit at the intersection of HCI and AI, and that successful designs need to consider human-centred aspects in the process. Our pitfalls reflect this in their mix of issues relating to interaction, UI and AI. We thus aim to motivate interdisciplinary work on such systems, also regarding research and design methodology.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>5. Conclusion</title>
      <p>
        One vision of interactive use of AI tools in co-creative settings focuses on the role of the AI as a generator that augments what people can achieve in creative tasks. This paper examined potential pitfalls on the way towards achieving this vision in practice, starting from three speculation prompts: Issues arising from (
        <xref ref-type="bibr" rid="ref1">1</xref>
        ) limited AI, (
        <xref ref-type="bibr" rid="ref2">2</xref>
        ) too much AI involvement, and (
        <xref ref-type="bibr" rid="ref3">3</xref>
        ) thinking beyond use and usage situations.
      </p>
      <p>Concretely, we collected a set of nine potential pitfalls (Table 1) and discussed possible consequences and takeaways for researchers and designers, along with illustrating examples. With this collection, we hope to contribute to a critical and constructive discussion on the roles of humans and AI in co-creative interactions, with an eye on related assumptions and potential side-effects for creative practices and beyond.</p>
    </sec>
    <sec id="sec-4">
      <title>Acknowledgments</title>
      <p>This project is funded by the Bavarian State Ministry of Science and the Arts and coordinated by the Bavarian Research Institute for Digital Transformation (bidt).</p>
    </sec>
    <sec id="sec-5">
      <title>References</title>
      <p>[8] K. I. Gero, L. B. Chilton, Metaphoria: An algorithmic companion for metaphor creation, in: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI ’19, Association for Computing Machinery, New York, NY, USA, 2019, p. 1–12. URL: https://doi.org/10.1145/3290605.3300526. doi:10.1145/3290605.3300526.</p>
      <p>[9] M. Ghazvininejad, X. Shi, J. Priyadarshi, K. Knight, Hafez: an Interactive Poetry Generation System, in: Proceedings of ACL 2017, System Demonstrations, Association for Computational Linguistics, Vancouver, Canada, 2017, pp. 43–48. URL: https://www.aclweb.org/anthology/P17-4008.</p>
      <p>[10] J. Frich, L. MacDonald Vermeulen, C. Remy, M. M. Biskjaer, P. Dalsgaard, Mapping the landscape of creativity support tools in hci, in: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI ’19, Association for Computing Machinery, New York, NY, USA, 2019, p. 1–18. URL: https://doi.org/10.1145/3290605.3300619. doi:10.1145/3290605.3300619.</p>
      <p>[11] A. Kantosalo, A. Jordanous, Role-Based Perceptions of Computer Participants in Human-Computer Co-Creativity, in: 7th Computational Creativity Symposium at AISB 2020, 2020. URL: https://research.aalto.fi/en/publications/role-based-perceptions-of-computer-participants-in-human-computer.</p>
      <p>[12] O. Hofmann, On modeling human-computer co-creativity, in: S. Kunifuji, G. A. Papadopoulos, A. M. Skulimowski, J. Kacprzyk (Eds.), Knowledge, Information and Creativity Support Systems, Springer International Publishing, Cham, 2016, pp. 37–48.</p>
      <p>[13] A. Kantosalo, H. Toivonen, Modes for Creative Human-Computer Collaboration: Alternating and Task-Divided Co-Creativity, in: Proceedings of the Seventh International Conference on Computational Creativity (ICCC 2016), Sony CSL, 2016, pp. 77–84. URL: https://researchportal.helsinki.fi/en/publications/modes-for-creative-human-computer-collaboration-alternating-and-t.</p>
      <p>[14] S. Negrete-Yankelevich, N. Morales-Zaragoza, The apprentice framework: planning and assessing creativity, Fifth International Conference on Computational Creativity, 2014.</p>
      <p>[15] C. M. Gray, Y. Kou, B. Battles, J. Hoggatt, A. L. Toombs, The dark (patterns) side of ux design, in: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI ’18, Association for Computing Machinery, New York, NY, USA, 2018, p. 1–14. URL: https://doi.org/10.1145/3173574.3174108. doi:10.1145/3173574.3174108.</p>
      <p>[16] M. Chromik, M. Eiband, S. T. Völkel, D. Buschek, Dark patterns of explainability, transparency, and user control for intelligent systems, in: C. Trattner, D. Parra, N. Riche (Eds.), Joint Proceedings of the ACM IUI 2019 Workshops co-located with the 24th ACM Conference on Intelligent User Interfaces (ACM IUI 2019), Los Angeles, USA, March 20, 2019, volume 2327 of CEUR Workshop Proceedings, CEUR-WS.org, 2019. URL: http://ceur-ws.org/Vol-2327/IUI19WS-ExSS2019-7.pdf.</p>
      <p>[17] L. Di Geronimo, L. Braz, E. Fregnan, F. Palomba, A. Bacchelli, Ui dark patterns and where to find them: A study on mobile applications and user perception, in: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, CHI ’20, Association for Computing Machinery, New York, NY, USA, 2020, p. 1–14. URL: https://doi.org/10.1145/3313831.3376600. doi:10.1145/3313831.3376600.</p>
      <p>[18] L. Colusso, C. L. Bennett, P. Gabriel, D. K. Rosner, Design and diversity? speculations on what could go wrong, in: Proceedings of the 2019 on Designing Interactive Systems Conference, DIS ’19, Association for Computing Machinery, New York, NY, USA, 2019, p. 1405–1413. URL: https://doi.org/10.1145/3322276.3323690. doi:10.1145/3322276.3323690.</p>
      <p>[19] A. Dunne, F. Raby, Speculative Everything: Design, Fiction, and Social Dreaming, MIT Press, 2013. Google-Books-ID: 9gQyAgAAQBAJ.</p>
      <p>[20] O. Bown, A. R. Brown, Interaction Design for Metacreative Systems, Springer International Publishing, Cham, 2018, pp. 67–87. URL: https://doi.org/10.1007/978-3-319-73356-2_5. doi:10.1007/978-3-319-73356-2_5.</p>
      <p>[21] T. Kynkäänniemi, T. Karras, S. Laine, J. Lehtinen, T. Aila, Improved precision and recall metric for assessing generative models, in: Advances in Neural Information Processing Systems, 2019, pp. 3927–3936.</p>
      <p>[22] E. Härkönen, A. Hertzmann, J. Lehtinen, S. Paris, Ganspace: Discovering interpretable gan controls, arXiv preprint arXiv:2004.02546 (2020).</p>
      <p>[23] J. Buolamwini, T. Gebru, Gender shades: Intersectional accuracy disparities in commercial gender classification, in: S. A. Friedler, C. Wilson (Eds.), Proceedings of the 1st Conference on Fairness, Accountability and Transparency, volume 81 of Proceedings of Machine Learning Research, PMLR, New York, NY, USA, 2018, pp. 77–91. URL: http://proceedings.mlr.press/v81/buolamwini18a.html.</p>
      <p>[24] D. S. Shah, H. A. Schwartz, D. Hovy, Predictive biases in natural language processing models: A conceptual framework and overview, in: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics, Online, 2020, pp. 5248–5264. URL: https://www.aclweb.org/anthology/2020.acl-main.468. doi:10.18653/v1/2020.acl-main.468.</p>
      <p>[25] N. Carlini, C. Liu, Ú. Erlingsson, J. Kos, D. Song, The secret sharer: Evaluating and testing unintended memorization in neural networks, in: 28th USENIX Security Symposium (USENIX Security 19), USENIX Association, Santa Clara, CA, 2019, pp. 267–284. URL: https://www.usenix.org/conference/usenixsecurity19/presentation/carlini.</p>
      <p>[26] N. Carlini, F. Tramer, E. Wallace, M. Jagielski, A. Herbert-Voss, K. Lee, A. Roberts, T. Brown, D. Song, U. Erlingsson, A. Oprea, C. Raffel, Extracting training data from large language models, 2020. arXiv:2012.07805.</p>
      <p>[27] M. Guzdial, M. Riedl, An Interaction Framework for Studying Co-Creative AI, arXiv:1903.09709 [cs] (2019). URL: http://arxiv.org/abs/1903.09709. arXiv:1903.09709.</p>
      <p>[28] Q. Yang, J. Cranshaw, S. Amershi, S. T. Iqbal, J. Teevan, Sketching nlp: A case study of exploring the right things to design with language intelligence, CHI ’19, Association for Computing Machinery, New York, NY, USA, 2019, p. 1–12. URL: https://doi.org/10.1145/3290605.3300415. doi:10.1145/3290605.3300415.</p>
      <p>[29] B. J. Dietvorst, J. P. Simmons, C. Massey, Algorithm aversion: people erroneously avoid algorithms after seeing them err, Journal of Experimental Psychology. General 144 (2015) 114–126. doi:10.1037/xge0000033.</p>
      <p>[30] S. Jhaver, Y. Karpfen, J. Antin, Algorithmic anxiety and coping strategies of airbnb hosts, in: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI ’18, Association for Computing Machinery, New York, NY, USA, 2018, p. 1–12. URL: https://doi.org/10.1145/3173574.3173995. doi:10.1145/3173574.3173995.</p>
      <p>[31] E. Horvitz, Principles of mixed-initiative user interfaces, in: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’99, Association for Computing Machinery, New York, NY, USA, 1999, p. 159–166. URL: https://doi.org/10.1145/302979.303030. doi:10.1145/302979.303030.</p>
      <p>[32] C. Lamb, D. G. Brown, C. L. A. Clarke, Evaluating computational creativity: An interdisciplinary tutorial, ACM Comput. Surv. 51 (2018). URL: https://doi.org/10.1145/3167476. doi:10.1145/3167476.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>D.</given-names>
            <surname>Bau</surname>
          </string-name>
          , J.-Y. Zhu,
          <string-name>
            <given-names>H.</given-names>
            <surname>Strobelt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Lapedriza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Torralba</surname>
          </string-name>
          ,
          <article-title>Understanding the role of individual units in a deep neural network</article-title>
          ,
          <source>Proceedings of the National Academy of Sciences</source>
          (
          <year>2020</year>
          ). URL: https://www.pnas.org/content/early/2020/08/31/1907375117. doi:10.1073/pnas.1907375117.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>E.</given-names>
            <surname>Härkönen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Hertzmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Lehtinen</surname>
          </string-name>
          , S. Paris, GANSpace: Discovering Interpretable GAN Controls (
          <year>2020</year>
          . URL: https://arxiv.org/abs/2004.02546v1.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>T.</given-names>
            <surname>Karras</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Laine</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Aittala</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Hellsten</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Lehtinen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Aila</surname>
          </string-name>
          ,
          <article-title>Analyzing and improving the image quality of StyleGAN</article-title>
          ,
          <source>in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)</source>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>S.</given-names>
            <surname>Park</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Ryu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <article-title>Learning predict-and-simulate policies from unorganized human motion data</article-title>
          ,
          <source>ACM Trans. Graph.</source>
          <volume>38</volume>
          (
          <year>2019</year>
          ). URL: https://doi.org/10.1145/3355089.3356501. doi:10.1145/3355089.3356501.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>S.</given-names>
            <surname>Dathathri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Madotto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Lan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Hung</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Frank</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Molino</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Yosinski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <article-title>Plug and Play Language Models: A Simple Approach to Controlled Text Generation</article-title>
          , arXiv:1912.02164 [cs] (
          <year>2020</year>
          ). URL: http://arxiv.org/abs/1912.02164.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>S.</given-names>
            <surname>Gehrmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Strobelt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Krüger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Pfister</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. M.</given-names>
            <surname>Rush</surname>
          </string-name>
          ,
          <article-title>Visual interaction with deep learning models through collaborative semantic inference</article-title>
          ,
          <source>IEEE Transactions on Visualization and Computer Graphics</source>
          <volume>26</volume>
          (
          <year>2020</year>
          )
          <fpage>884</fpage>
          -
          <lpage>894</lpage>
          . doi:10.1109/TVCG.2019.2934595.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>M.</given-names>
            <surname>Akten</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Fiebrink</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Grierson</surname>
          </string-name>
          ,
          <article-title>Learning to see: You are what you see</article-title>
          ,
          <source>in: ACM SIGGRAPH 2019 Art Gallery</source>
          , SIGGRAPH '19, Association for Computing Machinery, New York, NY, USA,
          <year>2019</year>
          . URL: https://doi.org/10.1145/3306211.3320143. doi:10.1145/3306211.3320143.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>