<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <article-meta>
      <title-group>
        <article-title>Toward General Design Principles for Generative AI Applications</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Justin D. Weisz</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Michael Muller</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jessica He</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Stephanie Houde</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>IBM Research AI</institution>
          ,
          <addr-line>Cambridge, MA</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>IBM Research AI</institution>
          ,
          <addr-line>Seattle, WA</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>IBM Research AI</institution>
          ,
          <addr-line>Yorktown Heights, NY</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <volume>33</volume>
      <fpage>0000</fpage>
      <lpage>0003</lpage>
      <abstract>
<p>Generative AI technologies are growing in power, utility, and use. As generative technologies are being incorporated into mainstream applications, there is a need for guidance on how to design those applications to foster productive and safe use. Based on recent research on human-AI co-creation within the HCI and AI communities, we present a set of seven principles for the design of generative AI applications. These principles are grounded in an environment of generative variability. Six principles are focused on designing for characteristics of generative AI: multiple outcomes &amp; imperfection; exploration &amp; control; and mental models &amp; explanations. In addition, we urge designers to design against potential harms that may be caused by a generative model's hazardous output, misuse, or potential for human displacement. We anticipate that these principles will usefully inform design decisions made in the creation of novel human-AI applications, and we invite the community to apply, revise, and extend these principles to their own work.</p>
      </abstract>
      <kwd-group>
<kwd>generative AI</kwd>
        <kwd>design principles</kwd>
        <kwd>human-centered AI</kwd>
        <kwd>foundation models</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        As generative AI technologies continue to grow in power and utility, their use is becoming more mainstream. Generative models, including LLM-based foundation models [<xref ref-type="bibr" rid="ref1">1</xref>], are being used for applications such as general Q&amp;A (e.g. ChatGPT<sup>1</sup>), software engineering assistance (e.g. Copilot<sup>2</sup>), task automation (e.g. Adept<sup>3</sup>), copywriting (e.g. Jasper.ai<sup>4</sup>), and the creation of high-fidelity artwork (e.g. DALL-E 2 [<xref ref-type="bibr" rid="ref2">2</xref>], Stable Diffusion [<xref ref-type="bibr" rid="ref3">3</xref>], Midjourney<sup>5</sup>). Given the explosion in popularity of these new kinds of generative applications, there is a need for guidance on how to design those applications to foster productive and safe use, in line with human-centered AI values [<xref ref-type="bibr" rid="ref4">4</xref>].
      </p>
      <p>
        Fostering productive use is a challenge, as revealed in a recent literature survey by Campero et al. [<xref ref-type="bibr" rid="ref5">5</xref>]. They found that many human-AI collaborative systems failed to achieve positive synergy – the notion that a human-AI team is able to accomplish superior outcomes above either party working alone. In fact, some studies have found the opposite effect: that human-AI teams produced inferior results to either a human or an AI working alone [<xref ref-type="bibr" rid="ref6 ref7 ref8 ref9">6, 7, 8, 9</xref>].
      </p>
      <p>
        Fostering safe use is a challenge because of the potential risks and harms that stem from generative AI, either because of how the model was trained (e.g. [<xref ref-type="bibr" rid="ref10">10</xref>]) or because of how it is applied (e.g. [<xref ref-type="bibr" rid="ref11 ref12">11, 12</xref>]).
      </p>
      <p>
        In order to address these issues, we propose a set of design principles to aid the designers of generative AI systems. These principles are grounded in an environment of generative variability, indicating the two properties of generative AI systems that are inherently different from traditional discriminative<sup>6</sup> AI systems: generative, because the aim of generative AI applications is to produce artifacts as outputs, rather than determine decision boundaries as discriminative AI systems do; and variability, indicating the fact that, for a given input, a generative system may produce a variety of possible outputs, many of which may be valid. In the discriminative case, it is expected that the output of a model does not vary for a given input.
      </p>
      <p>
        We note that our principles are meant to apply generally to generative AI applications. Other sets of design principles exist for specific kinds of generative AI applications, including Liu and Chilton’s guidelines for engineering prompts for text-to-image models [<xref ref-type="bibr" rid="ref13">13</xref>], and advice about one-shot prompts for the generation of texts of different kinds [<xref ref-type="bibr" rid="ref14 ref15 ref16">14, 15, 16</xref>]. There are also more general AI-related design guidelines [<xref ref-type="bibr" rid="ref17 ref18 ref19 ref20 ref21">17, 18, 19, 20, 21</xref>].
      </p>
      <p>
        Six of our principles are presented as “design for...” statements, indicating the characteristics that designers should keep in mind when making important design decisions. One is presented as a “design against...” statement, directing designers to design against potential harms that may arise from hazardous model outputs, misuse, potential for human displacement, or other harms we have not yet considered. The principles interact with each other in complex ways, schematically represented via overlapping circles in Figure 1. For example, the characteristic denoted in one principle (e.g. multiple outputs) can sometimes be leveraged as a strategy for addressing another principle (e.g. exploration). Principles are also connected by a user’s aims, such as producing a singular artifact, seeking inspiration or creative ideas, or learning about a domain. They are also connected by design features or attributes of a generative AI application, such as the support for versioning, curation, or sandbox environments.
      </p>
      <p>
        Our aim for these principles is threefold: (1) to provide the designers of generative AI applications with the language to discuss issues unique to generative AI; (2) to provide strategies and guidance to help designers make important design decisions around how end users will interact with a generative AI application; and (3) to sensitize designers to the idea that generative AI applications may cause a variety of harms (likely inadvertently, but possibly intentionally). We hope these principles provide the human-AI co-creation community with a reasoned way to think through the design of novel generative AI applications.
      </p>
      <p>
        <sup>6</sup>Our use of the term discriminative is to indicate that the task conducted by the AI algorithm is one of determining to which class or group a data instance belongs; classification and clustering algorithms are examples of discriminative AI. Although our use of the term discriminative may evoke imagery of human discrimination (e.g. on racial, religious, or gender identity lines), our use follows the scientific convention established in the machine learning community (see, e.g., https://en.wikipedia.org/wiki/Discriminative_model).
      </p>
      <sec id="sec-2">
        <title>2. Design Principles for Generative AI Applications</title>
        <p>
          We developed seven design principles for generative AI applications based on recent research in the HCI and AI communities, specifically around human-AI co-creative processes. We conducted a literature review of research studies, guidelines, and analytic frameworks from these communities [17, 18, 19, 22, 23, 20, 21, 24, 25, 26, 27, 28, 29, 30], which included experiments in human-AI co-creation [<xref ref-type="bibr" rid="ref31 ref32 ref33 ref34 ref35 ref36 ref37">31, 32, 33, 34, 35, 36, 37</xref>], examinations of representative generative applications [38, 39, 40, 34, 41, 2, 3, 42], and a review of publications in recent workshops [<xref ref-type="bibr" rid="ref43 ref44 ref45 ref46">43, 44, 45, 46</xref>].
        </p>
        <sec id="sec-2-1">
          <title>2.1. The Environment: Generative Variability</title>
          <p>
            Generative AI technologies present unique challenges for designers of AI systems compared to discriminative AI systems. First, generative AI is generative in nature, which means its purpose is to produce artifacts as output, rather than decisions, labels, classifications, and/or decision boundaries. These artifacts may be comprised of different types of media, such as text, images, audio, animations, or videos. Second, the outputs of a generative AI model are variable in nature. Whereas discriminative AI aims for deterministic outcomes, generative AI systems may not produce the same output for a given input each time. In fact, by design, they can produce multiple and divergent outputs for a given input, some or all of which may be satisfactory to the user. Thus, it may be difficult for users to achieve replicable results when working with a generative AI application.
          </p>
          <p>
            Although the very nature of generative applications violates the common HCI principle that a system should respond consistently to a user’s input (for critiques of this position, see [<xref ref-type="bibr" rid="ref12 ref47 ref48 ref49 ref50 ref51">47, 48, 49, 50, 51, 12</xref>]), we take the position that this environment in which generative applications operate – generative variability – is a core strength. Generative applications enable users to explore or populate a “space” of possible outcomes to their query. Sometimes, this exploration is explicit, as in the case of systems that enable latent space manipulations of an artifact. Other times, exploration of a space occurs when a generative model produces multiple candidate outputs for a given input, such as multiple distinct images for a given prompt [<xref ref-type="bibr" rid="ref2 ref3">2, 3</xref>] or multiple implementations of a source code program [<xref ref-type="bibr" rid="ref36 ref37">36, 37</xref>]. Recent studies also show how users may improve their knowledge of a domain by working with a generative model and its variable outputs [<xref ref-type="bibr" rid="ref36 ref42">36, 42</xref>].
          </p>
          <p>
            This concept of generative variability is crucially important for designers of generative AI applications to communicate to users. Users who approach a generative AI system without understanding its probabilistic nature and its capacity to produce varied outputs will struggle to interact with it in productive ways. The design principles we outline in the following sections – designing for multiple outcomes &amp; imperfection, for exploration &amp; human control, and for mental models &amp; explanations – are all rooted in the notion that generative AI systems are distinct and unique because they operate in an environment of generative variability.
          </p>
        </sec>
      </sec>
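<p>To make the notion of generative variability concrete, the toy sketch below (not from the paper; the function name and hashing trick are illustrative assumptions) shows how one input can map to many distinct outputs, while fixing the sampling seed makes a particular output reproducible:</p>

```python
import hashlib

def generate(prompt: str, seed: int) -> str:
    """Toy stand-in for a generative model: the output depends on both
    the prompt and the sampling seed, so a single prompt maps to many
    possible outputs (variability), while a fixed seed replays the
    same output (replicability)."""
    digest = hashlib.sha256(f"{prompt}|{seed}".encode()).hexdigest()[:8]
    return f"artifact-{digest}"

# One input, several divergent outputs (one per seed):
candidates = [generate("a bunny with a carrot", seed) for seed in range(3)]
assert len(set(candidates)) == 3                              # distinct outputs
assert generate("a bunny with a carrot", 0) == candidates[0]  # replayable
```

A real application would expose the seed (or hide it behind a "regenerate" button), which is exactly the design tension the principles below address.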
    <sec id="sec-1-2">
      <title>2.2. Design for Multiple Outputs</title>
      <p>
        Generative AI technologies such as encoder-decoder models [<xref ref-type="bibr" rid="ref52 ref53">52, 53</xref>], generative adversarial networks [<xref ref-type="bibr" rid="ref54">54</xref>], and transformer models [<xref ref-type="bibr" rid="ref55">55</xref>] are probabilistic in nature and thus are capable of producing multiple, distinct outputs for a user’s input. Designers therefore need to understand the extent to which these multiple outputs should be visible to users. Do users need the ability to annotate or curate? Do they need the ability to compare or contrast? How many outputs does a user need?
      </p>
      <p>
        Understanding the user’s task can help answer these questions. If the user’s task is one of production, in which the ultimate goal is to produce a single, satisfying artifact, then designs that help the user filter and visualize differences may be preferable. For example, a software engineer’s goal is often to implement a method that performs a specific behavior. Tools such as Copilot take a user’s input, such as a method signature or documentation, and provide a singular output. Contrarily, if the user’s task is one of exploration, then designs that help the user curate, annotate, and mutate may be preferable. For example, a software engineer may wish to explore a space of possible test cases for a code module. Or, an artist may wish to explore different compositions or styles to see a broad range of possibilities. Below we discuss a set of strategies for helping design for multiple outputs.
      </p>
      <sec id="sec-1-2-1">
        <title>2.2.1. Versioning</title>
        <p>
          Because of the randomness involved in the generative process, as well as other user-configurable parameters (e.g. a random seed, a temperature, or other types of user controls), it may be difficult for a user to produce exactly the same outcome twice. As a user interacts with a generative AI application and creates a set of outputs, they may find that they prefer earlier outputs to later ones. How can they recover or reset the state of the system to generate such earlier outputs? One strategy is to keep track of all of these outputs, as well as the parameters that produced them, by versioning them. Such versioning can happen manually (e.g. the user clicks a button to “save” their current working state) or automatically.
        </p>
      </sec>
      <sec id="sec-1-2-2">
        <title>2.2.2. Curation</title>
        <p>
          When a generative model is capable of producing multiple outputs, users may need tools to curate those outputs. Curation may include collecting, filtering, sorting, selecting, or organizing outputs (possibly from the versioned queue) into meaningful subsets or groups, or creating prioritized lists or hierarchies of outputs according to some subjective or objective criteria. For example, CogMol<sup>7</sup> generates novel molecular compounds, which can be sorted by various properties, such as their molecular weight, toxicity, or water solubility [<xref ref-type="bibr" rid="ref56 ref57">56, 57</xref>]. In addition, the confidence of the model in each output it produced may be a useful way to sort or rank outputs, although in some cases, model confidence scores may not be indicative of the quality of the model’s output [<xref ref-type="bibr" rid="ref32">32</xref>].
        </p>
        <p><sup>7</sup>http://covid19-mol.mybluemix.net</p>
      </sec>
      <sec id="sec-1-2-3">
        <title>2.2.3. Annotation</title>
        <p>
          When a generative model has produced a large number of outputs, users may desire to add marks, decorators, or annotations to outputs of interest. These annotations may be applied to the output itself (e.g. “I like this”) or to a portion or subset of the output (e.g. flagging lines of source code that look problematic and need review).
        </p>
      </sec>
      <sec id="sec-1-2-4">
        <title>2.2.4. Visualizing Differences</title>
        <p>
          In some cases, a generative model may produce a diverse set of distinct outputs, such as images of cats that look strikingly different from each other. In other cases, a generative model may produce a set of outputs for which it is difficult to discern differences, such as a source code translation from one language to another. In this case, tools that aid users in visualizing the similarities and differences between multiple outputs can be useful. Depending on the users’ goals, they may seek to find the invariant aspects across outcomes, such as identifying which parts of a source code translation were the same across multiple translations, indicating a confidence in its correctness. Or, users may prioritize the variant aspects for greater creativity and inspiration. For example, Sentient Sketchbook [<xref ref-type="bibr" rid="ref58">58</xref>] is a video game co-creation system that displays a number of different metrics of the maps it generates, enabling users to compare newly-generated maps with their current map to understand how they differ.
        </p>
      </sec>
    </sec>
    <sec id="sec-1-3">
      <title>2.3. Design for Imperfection</title>
      <p>
        It is highly important for users to understand that the quality of a generative model’s outputs will vary. Users who expect a generative AI application to produce exactly the artifact they desire will experience frustration when they work with the system and find that it often produces imperfect artifacts. By “imperfect,” we mean that the artifact itself may have imperfections, such as visual misrepresentations in an image, bugs or errors in source code, missing desired elements (e.g. “an illustration of a bunny with a carrot” fails to include a carrot), violations of constraints specified in the input prompt (e.g. “write a 10 word sentence” produces a much longer or shorter sentence), or even untruthful or misleading answers (e.g. a summary of a scientific topic that includes non-existent references [<xref ref-type="bibr" rid="ref59">59</xref>]). But, “imperfect” can also mean “doesn’t satisfy the user’s desire,” such as when the user prompts a model and doesn’t get back any satisfying outputs (e.g. the user didn’t like any of the illustrations of a bunny with a carrot). Below we discuss a set of strategies for helping design for imperfection.
      </p>
      <sec id="sec-1-3-1">
        <title>2.3.3. Co-Creation</title>
        <p>
          User experiences that allow for co-creation, in which both the user and the AI can edit a candidate artifact, will be more effective than user experiences that assume or aim for the generative model to produce a perfect output. Allowing users to edit a model’s outputs provides them with the opportunity to find and fix imperfections, and ultimately achieve a satisfactory artifact. One example of this idea is GitHub Copilot [<xref ref-type="bibr" rid="ref62">62</xref>], which is embedded in the VSCode IDE. In the case when Copilot produces an imperfect block of source code, developers are able to edit it right in context without any friction. By contrast, tools like Midjourney or Stable Diffusion only produce a gallery of images to choose from; editing those images requires the user to shift to a different environment (e.g. Photoshop).
        </p>
      </sec>
    </sec>
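<p>The versioning and curation strategies above can be sketched together in code. This is a minimal illustrative design, not any cited system’s API: the OutputStore class and its method names are our own assumptions. Each output is recorded with the parameters that produced it, so earlier states can be recalled, and the versioned queue can be ranked by a user-chosen metric:</p>

```python
from dataclasses import dataclass, field

@dataclass
class Version:
    output: str
    params: dict  # e.g. {"prompt": ..., "seed": ..., "temperature": ...}

@dataclass
class OutputStore:
    """Versions every generated output together with its generation
    parameters (2.2.1), and supports simple curation such as ranking
    by a subjective or objective metric (2.2.2)."""
    versions: list = field(default_factory=list)

    def record(self, output: str, **params) -> int:
        # Could be called automatically on every generation, or
        # manually from a "save" button.
        self.versions.append(Version(output, params))
        return len(self.versions) - 1          # version id

    def recall(self, version_id: int) -> Version:
        # Recover an earlier output and the parameters that made it.
        return self.versions[version_id]

    def ranked(self, score) -> list:
        # Curation: order the versioned outputs by a chosen metric.
        return sorted(self.versions, key=score, reverse=True)

store = OutputStore()
store.record("molecule-A", prompt="low toxicity", seed=1, weight=180.2)
store.record("molecule-B", prompt="low toxicity", seed=2, weight=302.5)
best = store.ranked(lambda v: v.params["weight"])[0]
assert best.output == "molecule-B"             # heaviest compound first
assert store.recall(0).params["seed"] == 1     # earlier state recoverable
```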
    <sec id="sec-1-4">
      <title>2.3.1. Multiple Outputs</title>
      <p>
        Our previous design principle is also a strategy for handling imperfect outputs. If a generative model is allowed to produce multiple outputs, the likelihood that one of those outputs is satisfying to the user is increased. One example of this effect is in how code translation models are evaluated, via a metric called pass@k [<xref ref-type="bibr" rid="ref60 ref61">60, 61</xref>]. The idea is that the model is allowed to produce k code translations for a given input, and if any of them pass a set of unit tests, then the model is said to have produced a correct translation. In this way, generating multiple outputs serves to mitigate the fact that the model’s most-likely output may be imperfect. However, it is left up to the user to review the set of outputs and identify the one that is satisfactory; with multiple outputs that are very similar to each other, this task may be difficult [<xref ref-type="bibr" rid="ref37">37</xref>], implying the need for a way to easily visualize differences.
      </p>
    </sec>
    <sec id="sec-2-3-2">
      <title>2.3.2. Evaluation &amp; Identification</title>
      <p>
        Given that generative models may not produce perfect (or perfectly satisfying) outputs, they may still be able to provide users with a signal about the quality of their output, or indicate parts that require human review. As previously discussed, a model’s per-output confidence scores may be used (with care) to indicate the quality of a model’s output. Or, domain-specific metrics (e.g. molecular toxicity, compiler errors) may be useful indicators to evaluate whether an artifact achieved a desirable level of quality. Thus, evaluating the quality of generated artifacts and identifying which portions of those artifacts may contain imperfections (and thus require human review, discussed further in Weisz et al. [<xref ref-type="bibr" rid="ref36">36</xref>]) can be an effective way for handling imperfection.
      </p>
    </sec>
    <sec id="sec-2-3-4">
      <title>2.3.4. Sandbox / Playground Environment</title>
      <p>
        A sandbox or playground environment ensures that when a user interacts with a generated artifact, their interactions (such as edits, manipulations, or annotations) do not impact the larger context or environment in which they are working. Returning to the example of GitHub Copilot, since it is situated inside a developer’s IDE, code it produces is directly inserted into the working code file. Although this design choice enables co-creation, it also poses a risk that imperfect code is injected into a production code base. A sandbox environment that requires users to explicitly copy and paste code in order to commit it to the current working file may guard against the accidental inclusion of imperfect outputs in a larger environment or product.
      </p>
    </sec>
    <sec id="sec-2-4">
      <title>2.4. Design for Human Control</title>
      <p>
        Keeping humans in control of AI systems is a core tenet of human-centered AI [<xref ref-type="bibr" rid="ref4 ref63 ref64">63, 64, 4</xref>]. Providing users with controls in generative applications can improve their experience by increasing their efficiency, comprehension, and ownership of generated outcomes [<xref ref-type="bibr" rid="ref34">34</xref>]. But, in co-creative contexts, there are multiple ways to interpret what kinds of “control” people need. We identify three kinds of controls applicable to generative AI applications.
      </p>
    </sec>
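<p>The pass@k idea can be made precise with the unbiased estimator commonly used in code-generation evaluations; the exact formula below is our assumption, since the text only states the pass-if-any-of-k idea. Given n generated samples of which c pass the unit tests, the probability that a random draw of k samples contains at least one passing sample is 1 - C(n-c, k)/C(n, k):</p>

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Probability that at least one of k samples, drawn without
    replacement from n generated candidates, passes the unit tests,
    given that c of the n candidates pass."""
    if n - c < k:
        return 1.0  # every size-k draw must contain a passing sample
    return 1.0 - comb(n - c, k) / comb(n, k)

assert pass_at_k(n=10, c=0, k=5) == 0.0   # nothing passes
assert pass_at_k(n=10, c=10, k=1) == 1.0  # everything passes
assert pass_at_k(n=2, c=1, k=1) == 0.5    # half of the single draws pass
```

Note how the metric formalizes the section’s point: even when the single most-likely output is imperfect (low pass@1), allowing more samples (larger k) raises the chance that some output is satisfactory.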
    <sec id="sec-1-5">
      <title>2.4.1. Generic Controls</title>
      <p>
        One aspect of control relates to the exploration of a design space or range of possible outcomes (as discussed in Section 2.5). Users need appropriate controls in order to drive their explorations, such as control over the number of outputs produced from an input or the amount of variability present in the outputs. We refer to these kinds of controls as generic controls, as they are applicable to any particular generative technology or domain. As an example, some generative projects may involve a “lifecycle” pattern in which users benefit from seeing a great diversity of outputs in early stages of the process in order to search for ideas, inspirations, or directions. Later stages of the project may focus on a smaller number of outputs (or a singular output), requiring controls that specifically operate on that output. Many generative algorithms include a user-controllable parameter called temperature. A low temperature setting produces outcomes that are very similar to each other; conversely, a high temperature setting produces outcomes that are very dissimilar to each other. In the “lifecycle” model, users may first set a high temperature for increased diversity, and then reduce it when they wish to focus on a particular area of interest in the output space. This effect was observed in a study of a music co-creation tool, in which novice users dragged temperature control sliders to the extreme ends to explore the limits of what the AI could generate [<xref ref-type="bibr" rid="ref34">34</xref>].
      </p>
    </sec>
    <sec id="sec-2-5">
      <title>2.5. Design for Exploration</title>
      <p>
        Because users are working in an environment of generative variability, they will need some way to “explore” or “navigate” the space of potential outputs in order to identify one (or more) that satisfies their needs. Below we discuss a set of strategies for helping design for exploration.
      </p>
    </sec>
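<p>The effect of the temperature control can be sketched with a standard temperature-scaled softmax. This is a simplification for illustration (real systems apply the scaling to token logits during sampling): dividing the logits by a low temperature concentrates probability on the top choice (similar outputs), while a high temperature flattens the distribution (dissimilar outputs):</p>

```python
from math import exp

def softmax_with_temperature(logits, temperature):
    """Rescale logits by 1/temperature, then normalize.
    Low T sharpens the distribution; high T flattens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract max for stability
    exps = [exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
focused = softmax_with_temperature(logits, temperature=0.2)
diverse = softmax_with_temperature(logits, temperature=5.0)
assert max(focused) > max(diverse)  # low T concentrates probability mass
assert max(diverse) < 0.5           # high T approaches uniform (1/3 each)
```

A temperature slider in a UI is therefore a generic control over output diversity, matching the “lifecycle” pattern of starting diverse and narrowing down.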
    <sec id="sec-1-6">
      <title>2.5.1. Multiple Outputs</title>
      <p>
        The ability for a generative model to produce multiple outputs (Section 2.2) is an enabler of exploration. Returning to the bunny and carrot example, an artist may wish to explore different illustrative styles and prompt (and re-prompt) the model for additional candidates of “a bunny with a carrot” in various kinds of styles or configurations. Or, a developer can explore different ways to implement an algorithm by prompting (and re-prompting) a model to produce implementations that possess different attributes (e.g. “implement this using recursion,” “implement this using iteration,” or “implement this using memoization”). In this way, a user can get a sense of the different possibilities the model is capable of producing.
      </p>
    </sec>
    <sec id="sec-2-4-2">
      <title>2.4.2. Technology-specific Controls</title>
      <p>
        Other types of controls will depend on the particular generative technology being employed. Encoder-decoder models, for example, often allow users to perform latent space manipulations of an artifact in order to control semantically-meaningful attributes. For example, Liu and Chilton [<xref ref-type="bibr" rid="ref65">65</xref>] demonstrate how semantic sliders can be used to control attributes of 3D models of animals, such as the animal’s torso length, neck length, and neck rotation. Transformer models use a temperature parameter to control the amount of randomness in the generation process [<xref ref-type="bibr" rid="ref66">66</xref>]. Natural language prompting, and the emerging discipline of prompt engineering [<xref ref-type="bibr" rid="ref13">13</xref>], provide additional ways to tune or tweak the outputs of large language models. We refer to these kinds of controls as technology-specific controls, as the controls exposed to a user in a user interface will depend upon the particular generative AI technology used in the application.
      </p>
    </sec>
    <sec id="sec-2-5-2">
      <title>2.5.2. Control</title>
      <p>
        Depending on the specific technical architecture used by the generative application, there may be different ways for users to control it (Section 2.4). No matter the specific mechanisms of control, providing controls to a user provides them with the ability to interactively work with the model to explore the space of possible outputs for their given input.
      </p>
    </sec>
    <sec id="sec-2-4-3">
      <title>2.4.3. Domain-specific Controls</title>
      <p>
        Some types of user controls will be domain-specific, dependent on the type of artifact being produced. For example, generative models that produce molecules as output might be controlled by having the user specify desired properties such as molecular weight or water solubility; these types of constraints might be propagated to the model itself (e.g. expressed as a constraint in the encoder phase), or they may simply act as a filter on the model’s output (e.g. hide anything from the user that doesn’t satisfy the constraints). In either case, the control itself is dependent on the fact that the model is producing a specific kind of artifact, such as a molecule, and would not logically make sense for other kinds of artifacts in other domains (e.g. how would you control the water solubility for a text-to-image model?). Thus, we refer to these types of controls, independent of how they are implemented, as domain-specific controls. Other examples of domain-specific controls include the reading level of a text, the color palette or artistic style of an image, or the run time or memory efficiency of source code.
      </p>
    </sec>
    <sec id="sec-2-5-3">
      <title>2.5.3. Sandbox / Playground Environment</title>
      <p>
        A sandbox or playground environment can enable exploration by providing a separate place in which new candidates can be explored, without interfering with a user’s main working environment. For example, in a project using Copilot, Cheng et al. [<xref ref-type="bibr" rid="ref67">67</xref>] suggest providing “a sandbox mechanism to allow users to play with the prompt in the context of their own project.”
      </p>
    </sec>
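<p>The sandbox pattern can be sketched as follows; the Sandbox class and its explicit commit step are hypothetical illustrations of the copy-to-commit design discussed above, not an actual Copilot mechanism:</p>

```python
class Sandbox:
    """Keeps AI-generated candidates separate from the user's working
    file; nothing reaches the real document until an explicit commit,
    guarding against accidental inclusion of imperfect outputs."""
    def __init__(self, working_file: str):
        self.working_file = working_file   # the "real" environment
        self.draft = ""                    # isolated playground content

    def propose(self, generated: str) -> None:
        self.draft = generated             # edits stay in the sandbox

    def commit(self) -> str:
        # Explicit user action copies sandbox content into the file.
        self.working_file += self.draft
        return self.working_file

sb = Sandbox(working_file="def main():\n")
sb.propose("    print('hello')\n")         # generated code stays isolated
assert "print" not in sb.working_file      # real file untouched so far
sb.commit()
assert sb.working_file.endswith("print('hello')\n")
```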
    <sec id="sec-2-5-4">
      <title>2.5.4. Visualization</title>
      <p>One way to help users understand the space in which they are exploring is to explicitly visualize it for them.</p>
      <p>
        Kreminski et al. [<xref ref-type="bibr" rid="ref33">33</xref>] introduce the idea of expressive range coverage analysis (ERCA), in which a user is shown a visualization of the “range” of possible generated artifacts across a variety of metrics. Then, as users interact with the system and produce specific artifact instances, those instances are included in the visualization to show how much of the “range” or “space” was explored by the user.
      </p>
    </sec>
    <sec id="sec-2-6">
      <title>2.6. Design for Mental Models</title>
      <p>
        Users form mental models when they work with technological systems [<xref ref-type="bibr" rid="ref68 ref69 ref70">68, 69, 70</xref>]. These models represent the user’s understanding of how the system works and how to work with it effectively to produce the outcomes they desire. Due to the environment of generative variability, generative AI applications will pose new challenges to users because these applications may violate existing mental models of how computing systems behave (i.e. in a deterministic fashion). Therefore, we recommend designing to support users in creating accurate mental models of generative AI applications in the following ways.
      </p>
      <sec id="sec-2-6-1">
        <title>2.6.1. Orientation to Generative Variability</title>
        <p>
          Users may need a general introduction to the concept of generative AI. They should understand that the system may produce multiple outputs for their query (Section 2.2), that those outputs may contain flaws or imperfections (Section 2.3), and that their effort may be required to collaborate with the system in order to produce desired artifacts via various kinds of controls (Section 2.4).
        </p>
      </sec>
      <sec id="sec-2-6-2">
        <title>2.6.2. Role of the AI</title>
        <p>
          Does the AI act proactively or does it just respond to the user? Does it make changes to an artifact directly or does it simply make recommendations for the user?
        </p>
      </sec>
    </sec>
    <sec id="sec-1-7">
      <title>2.7. Design for Explanations</title>
      <p>
        Generative AI applications will be unfamiliar and possibly unusual to many users. They will want to know what the application can (and cannot) do, how well it works, and how to work with it effectively. Some users may even wish to understand the technical details of how the underlying generative AI algorithms work, although these details may not be necessary to work effectively with the model (as discussed in [<xref ref-type="bibr" rid="ref36">36</xref>]).
      </p>
      <p>
        In recent years, the explainable AI (XAI) community has made tremendous progress at developing techniques for explaining how AI systems work [<xref ref-type="bibr" rid="ref21 ref76 ref77 ref78 ref79">76, 77, 21, 78, 79</xref>]. Much of the work in XAI has focused on discriminative algorithms: how they generally make decisions (e.g. via interpretable models [80, Chapter 5] or feature importance [80, Section 8.5]), and why they make a decision in a specific instance (e.g. via counterfactual explanations [80, Section 9.3]). Recent work in human-centered XAI (HCXAI) has emphasized designing explanations that cater to human knowledge and human needs [<xref ref-type="bibr" rid="ref77">77</xref>]. This work grew out of a general shift toward human-centered data science [<xref ref-type="bibr" rid="ref47">47</xref>], in which the import of explanations is not for a technical user (data scientist), but for an end user who might be impacted by a machine learning model.
      </p>
      <p>
        In the case of generative AI, recent work has begun to explore the needs for explainability. Sun et al. [<xref ref-type="bibr" rid="ref35">35</xref>] explored explainability needs of software engineers working with a generative AI model for various types of use cases, such as code translation and autocompletion. They identified a number of types of questions that software engineers had about the generative AI, its capabilities, and its limitations, indicating that explainability is an important feature for generative AI applications. They also
identified several gaps in existing explainability
frameResearch in human-AI interaction suggests that users works stemming from the generative nature of the AI
may view an AI application as filling a role such as an system, indicating that existing XAI techniques may not
assistant, coach, or teammate [
          <xref ref-type="bibr" rid="ref29">29</xref>
          ]. In a study of video be suficient for generative AI applications. Thus, we
game co-creation, Guzdial et al. [
          <xref ref-type="bibr" rid="ref71">71</xref>
          ] found participants make the following recommendations for how to design
to ascribe roles of friend, collaborator, student, and man- for explanations.
ager to the AI system. Recent work by Ross et al. [
          <xref ref-type="bibr" rid="ref42">42</xref>
          ]
examined software engineers’ role orientations toward 2.7.1. Calibrate Trust by Communicating
a programming assistant and found that people viewed Capabilities and Limitations
the assistant with a tool orientation, but interacted with
it as if it were a social agent. Clearly establishing the role Because of the inherent imperfection of generative AI
of a generative AI application in a user’s workflow, as outputs, users would be well-served if they understood
well as its level of autonomy (e.g. [
          <xref ref-type="bibr" rid="ref72 ref73 ref74 ref75">72, 73, 74, 75</xref>
          ]), will the limitations of these systems [
          <xref ref-type="bibr" rid="ref81 ref82">81, 82</xref>
          ], allowing them
help users better understand how to interact efectively to calibrate their trust in terms of what the application
with it. Designers can reason about the role of their ap- can and cannot do [
          <xref ref-type="bibr" rid="ref83">83</xref>
          ]. When these kinds of
imperfecplication by answering questions such as, is it a tool or tions (Section 2.3) are not signaled, users of co-creative
tools may mistakenly blame themselves for shortcomings mans, and cultures. Even with our focus on the design
of generated artifacts in co-creative applications [
          <xref ref-type="bibr" rid="ref34">34</xref>
          ], of generative applications, an analysis of harms that is
and users in Q &amp; A use cases can be shown deceptive limited to design concepts may blur into
technosolutionmisconceptions and harmful falsehoods as objective an- ism [
          <xref ref-type="bibr" rid="ref89 ref90 ref91">89, 90, 91</xref>
          ].
swers [
          <xref ref-type="bibr" rid="ref84">84</xref>
          ]. One way to communicate the capabilities of We do posit that human-centered approaches to
genera generative AI application is to show examples of what ative AI design are a useful first step, but must be part
it can do. For example, Midjourney provides a public dis- of a larger strategy to understand who are the direct and
cussion space to orient new users and show them what indirect stakeholders of a generative application [
          <xref ref-type="bibr" rid="ref92 ref93">92, 93</xref>
          ],
other users have produced with the model. This space and to work directly with those stakeholders to identify
not only shows the outputs of the model (e.g. images), harms, understand what are their difering priorities and
but the textual prompts that produced the images. In this value tensions [94], and negotiate issues of culture, policy,
way, users can more quickly come to understand how and (yes) technology to meet these diverse challenges
diferent prompts influence the application’s output. To (e.g., [95, 96, 97]).
communicate limitations, systems like ChatGPT contain
modal screens to inform users of the system’s limitations. 2.8.1. Hazardous Model Outputs
        </p>
      </sec>
      <sec id="sec-1-8">
        <title>Generative AI applications may produce artifacts that</title>
        <p>2.7.2. Use Explanations to Create and Reinforce cause harm. In an integrative survey paper, Weidinger</p>
        <p>
          Accurate Mental Models et al. [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ] list six types of potential harms of large
lanWeisz et al. [
          <xref ref-type="bibr" rid="ref36">36</xref>
          ] explored how a generative model’s con- guage models, three of which regard the harms that may
ifdence could be surfaced in a user interface. Working be caused by the model’s output:
with a transformer model on a code translation task, • Discrimination, Exclusion, and Toxicity.
Generthey developed a prototype UI that highlighted tokens ative models may produce outputs that promote
disin the translation that the model was not confident in. crimination against certain groups, exclude certain
In their user study, they found that those highlights also groups from representation, or produce toxic content.
served as explanations for how the model worked: users Examples include text-to-image models that fail to
procame to understand that each source code token was cho- duce ethnically diverse outputs for a given input (e.g.
sen probabilistically, and that the model had considered a request for images of doctors produces images of
other alternatives. This design transformed an algorith- male, white doctors [98] or language models that
promic weakness (imperfect output) into a resource for users duce inappropriate language such as swear words, hate
to understand how the algorithm worked, and ultimately, speech, or ofensive content [
          <xref ref-type="bibr" rid="ref17 ref20">17, 20</xref>
          ].
to control its output (by showing users where they might
need to make changes).
        </p>
        <sec id="sec-1-8-1">
          <title>2.8. Design Against Harms</title>
          <p>
            The use of AI systems – including generative AI
applications – may unfortunately lead to diverse forms of harms,
especially for people in vulnerable situations. Much work
in AI ethics communities has identified how
discriminative AI systems may perpetuate harms such as the denial
of personhood or identity [
            <xref ref-type="bibr" rid="ref49 ref85 ref86">49, 85, 86</xref>
            ]; the deprivation of
liberty or children [
            <xref ref-type="bibr" rid="ref87 ref88">87, 88</xref>
            ], and the erasure of persons,
cultures, or nations through data silences [
            <xref ref-type="bibr" rid="ref81">81</xref>
            ]. We
identify four types of potential harms, some of which are
unique to the generative domain, and others which
represent existing risks of AI applications that may manifest
in new ways.
          </p>
          <p>Our aim in this section is to sensitize designers to the
potential risks and harms that generative AI systems
may pose. We do not prescribe solutions to address these
risks, in part because it is an active area of research to
understand how these kinds of risks could be mitigated.</p>
          <p>
            Risk identification, assessment, and mitigation is a
sociotechnical problem involving computing resources,
hu• Information Hazards. Generative models may
inadvertently leak private or sensitive information from
their training data. For example, Carlini et al. [99]
found that strategically prompting GPT-2 revealed an
individual’s full name, work address, phone number,
email, and fax number. Additionally, larger models
may be more vulnerable to these types of attacks [99,
100].
• Misinformation Harms. Generative models may
produce inaccurate misinformation in response to a user’s
query. Lin et al. [
            <xref ref-type="bibr" rid="ref84">84</xref>
            ] found that GPT-3 can provide false
answers that mimic human falsehoods and
misconceptions, such as “coughing can help stop a heart attack” or
“[cold weather] tells us that global warming is a hoax”.
          </p>
          <p>Singhal et al. [101] caution against the tendency of
LLMs to hallucinate references, especially if consulted
for medical decisions. Albrecht et al. [102] claim that
LLMs have few defenses against adversarial attacks
while advising about ethical questions. The Galactica
model was found to hallucinate non-existent scientific
references [103], and Stack Overflow has banned
responses sourced from ChatGPT due to their high rate
of incorrect, yet plausible, responses [104].</p>
          <p>In addition to those harms, a generative model’s outputs may be hazardous in other ways as well.</p>
          <p>• Deceit, Impersonation, and Manipulation. Generative algorithms can be used to create false records or “deep fakes” (e.g., [<xref ref-type="bibr" rid="ref11">11, 105</xref>]), to impersonate others (e.g. [106]), or to distort information into politically-altered content [107]. In addition, they may manipulate users who believe that they are chatting with another human rather than with an algorithm, as in the case of an unreviewed ChatGPT “experiment” in which at least 4,000 people seeking mental health support were connected to a chatbot rather than a human counselor [108].</p>
          <p>• Copyright, Licenses, and Intellectual Property. Generative models may have been trained on data protected by regulations such as the GDPR, which prohibits the re-use of data beyond the purposes for which it was collected. In addition, large language models have been referred to as “stochastic parrots” due to their ability to reproduce data that was used during their training [109]. One consequence of this effect is that the model may produce outputs that incorporate or remix materials that are subject to copyright or intellectual property protections [110, 111, 112]. For example, the Codex model, which produces source code as output, may (re-)produce source code that is copyrighted or subject to a software license, or that was openly shared under a creative commons license that prohibits commercial re-use (e.g., in a pay-to-access LLM). Thus, the use of a model’s outputs in a project may cause that project to violate copyright protections, or subject that project to a restrictive license (e.g. GPL). As of this writing, there is a lawsuit against GitHub, Microsoft, and OpenAI on alleged copyright violations in the training of Codex [113].</p>
        </sec>
        <sec id="sec-1-10-1">
          <title>2.8.2. Misuse</title>
          <p>Weidinger et al. [<xref ref-type="bibr" rid="ref10">10</xref>] describe how generative AI applications may be misused in ways unanticipated by the creators of those systems. Examples include making disinformation cheaper and more effective, facilitating fraud and scams, assisting code generation for cyberattacks, or conducting illegitimate surveillance and censorship. In addition to these misuses, Houde et al. [<xref ref-type="bibr" rid="ref11">11</xref>] also identify business misuses of generative AI applications, such as facilitating insurance fraud and fabricating evidence of a crime. Although designers may not be able to prevent users from intentionally misusing their generative AI applications, there may be preventative measures that make sense for a given application domain. For example, output images may be watermarked to indicate they were generated by a particular model, blocklists may be used to disallow undesirable words in a textual prompt, or multiple people may be required to review or approve a model’s outputs before they can be used.</p>
        </sec>
        <sec id="sec-1-10-2">
          <title>2.8.3. Human Displacement</title>
          <p>One consequence of the large-scale deployment of generative AI technologies is that they may come to replace, rather than augment, human workers. Such concerns have been raised in related areas, such as the use of automated AI technologies in data science (Wang et al. [114, 115]). Weidinger et al. [<xref ref-type="bibr" rid="ref10">10</xref>] specifically discuss the potential economic harms and inequalities that may arise as a consequence of widespread adoption of generative AI. If a generative model is capable of producing high-fidelity outputs that rival (or even surpass) what can be created by human effort, are the humans necessary anymore? Contemporary fears of human displacement by generative technologies are beginning to manifest in mainstream media, such as in the case of illustrators’ concerns that text-to-image models such as Stable Diffusion and Midjourney will put them out of a job [116]. We urge designers to find ways to design generative AI applications that enhance or augment human abilities, rather than applications that aim to replace human workers. Copilot serves as one example of a tool that clearly enhances the abilities of a software engineer: it operates on the low-level details of a source code implementation, freeing software engineers to focus more of their attention on higher-level architectural and system design issues.</p>
        </sec>
      </sec>
      <sec id="sec-1-9">
        <title>3. Discussion</title>
        <sec id="sec-1-9-1">
          <title>3.1. Designing for User Aims</title>
          <p>Users of generative AI applications may have varied aims or goals in using those systems. Some users may be in pursuit of perfecting a singular artifact, such as a method implementation in a software program. Other users may be in pursuit of inspiration or creative ideas, such as when exploring a visual design space. As a consequence of working with a generative AI application, users may also enhance their own learning or understanding of the domain in which they are operating, such as when a software engineer learns something new about a programming language from the model’s output. Each of these aims can be supported by our design principles, which can also help designers determine the appropriate strategy for addressing the challenges posed by each principle.</p>
          <p>To support artifact production, designers ought to carefully consider how to manage a model’s multiple, imperfect outputs. Interfaces ought to support users in curating, annotating, and mutating artifacts to help users refine a singular artifact. The ability to version artifacts, or show a history of artifact edits, may also be useful to enable users to revisit discarded options or undo undesirable modifications. For cases in which users seek to produce one “ideal” artifact that satisfies some criteria, controls that enable them to co-create with the generative tool can help them achieve their goal more efficiently, and explanations that signal or identify imperfections can tell them how close or far they are from the mark.</p>
          <p>To support inspiration and creativity, designers also ought to provide adequate controls that enable users to explore a design space of possibilities [<xref ref-type="bibr" rid="ref33">33, 117</xref>]. Visualizations that represent the design space can also be helpful, as they can show which parts the user has vs. has not explored, enabling them to explore the novel parts of that space. Tools that help users manage, curate, and filter the different outputs created during their explorations can be extremely helpful, such as a digital mood board for capturing inspiring model outputs.</p>
          <p>Finally, to support learning how to effectively interact with a generative AI application, designers ought to help users create accurate mental models [118] through explanations [<xref ref-type="bibr" rid="ref21 ref76 ref77 ref78 ref79">76, 77, 21, 78, 79</xref>]. Explanations can help answer general questions, such as what a generative AI application is capable or not capable of generating, how the model’s controls impact its output, and how the model was trained and the provenance of its training data. They can also answer questions about a specific model output, such as how confident the model was in that output, which portions of that output might need human review or revision, how to adjust or modify the input or prompt to adjust properties of the output, or what other options or alternatives exist for that output.</p>
        </sec>
        <sec id="sec-1-9-2">
          <title>3.2. The Importance of Value-Sensitive Design in Mitigating Potential Harms</title>
          <p>Designers need to be sensitive to the potential harms that may be caused by the rapid maturation and widespread adoption of generative AI technologies. Although sociotechnical means for mitigating these harms have yet to be developed, we recommend that designers use a Value Sensitive Design approach [<xref ref-type="bibr" rid="ref92 ref93">92, 93</xref>] when reasoning about how to design generative AI applications. By clearly identifying the different stakeholders and impacted parties of a generative AI application, and explicitly enumerating their values, designers can make more reasoned judgments about how those stakeholders might be impacted by hazardous model outputs, model misuse, and issues of human displacement.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>4. Limitations and Future Work</title>
      <p>Generative AI applications are still in their infancy, and new kinds of co-creative user experiences are emerging at a rapid pace. Thus, we consider these principles to be in their infancy as well, and it is possible that other important design principles, strategies, and/or user aims have been overlooked. In addition, although these principles can provide helpful guidance to designers in making specific design decisions, they need to be validated in real-world settings to ensure their clarity and utility.</p>
    </sec>
    <sec id="sec-3">
      <title>5. Conclusion</title>
      <p>We present a set of seven design principles for generative AI applications. These principles are grounded in an environment of generative variability, the key characteristics of which are that a generative AI application will generate artifacts as outputs, and that those outputs may be varied in nature (e.g. of varied quality or character). The principles focus on designing for multiple outputs and the imperfection of those outputs, designing for exploration of a space or range of possible outputs and maintaining human control over that exploration, and designing to establish accurate mental models of the generative AI application via explanations. We also urge designers to design against the potential harms that may be caused by hazardous model output (e.g. the production of inappropriate language or imagery, the reinforcement of existing stereotypes, or a failure to inclusively represent different groups), by misuse of the model (e.g. by creating disinformation or fabricating evidence), or by displacing human workers (e.g. by designing for the replacement rather than the augmentation of human workers). We envision these principles helping designers make reasoned choices as they create novel generative AI applications.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>R.</given-names>
            <surname>Bommasani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. A.</given-names>
            <surname>Hudson</surname>
          </string-name>
          , E. Adeli,
          <string-name>
            <given-names>R.</given-names>
            <surname>Altman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Arora</surname>
          </string-name>
          , S. von Arx,
          <string-name>
            <given-names>M. S.</given-names>
            <surname>Bernstein</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Bohg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bosselut</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Brunskill</surname>
          </string-name>
          , et al.,
          <article-title>On the opportunities and risks of foundation models</article-title>
          ,
          <source>arXiv preprint arXiv:2108.07258</source>
          (
          <year>2021</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>A.</given-names>
            <surname>Ramesh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Dhariwal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Nichol</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Chu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <article-title>Hierarchical text-conditional image generation with clip latents</article-title>
          ,
          <source>arXiv preprint arXiv:2204.06125</source>
          (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>R.</given-names>
            <surname>Rombach</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Blattmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Lorenz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Esser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Ommer</surname>
          </string-name>
          ,
          <article-title>High-resolution image synthesis with latent diffusion models</article-title>
          ,
          <source>in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>10684</fpage>
          -
          <lpage>10695</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>B.</given-names>
            <surname>Shneiderman</surname>
          </string-name>
          ,
          <source>Human-Centered AI</source>
          , Oxford University Press,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>A.</given-names>
            <surname>Campero</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Vaccaro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Song</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Wen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Almaatouq</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. W.</given-names>
            <surname>Malone</surname>
          </string-name>
          ,
          <article-title>A test for evaluating performance in human-computer systems</article-title>
          ,
          <source>arXiv preprint arXiv:2206.12390</source>
          (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>E.</given-names>
            <surname>Clark</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. S.</given-names>
            <surname>Ross</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Tan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Ji</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. A.</given-names>
            <surname>Smith</surname>
          </string-name>
          ,
          <article-title>Creative writing with a machine in the loop: Case studies on slogans and stories</article-title>
          ,
          <source>in: 23rd International Conference on Intelligent User Interfaces</source>
          ,
          <year>2018</year>
          , pp.
          <fpage>329</fpage>
          -
          <lpage>340</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Buçinca</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. B.</given-names>
            <surname>Malaya</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. Z.</given-names>
            <surname>Gajos</surname>
          </string-name>
          ,
          <article-title>To trust or to think: cognitive forcing functions can reduce overreliance on ai in ai-assisted decision-making</article-title>
          ,
          <source>Proceedings of the ACM on Human-Computer Interaction</source>
          <volume>5</volume>
          (
          <year>2021</year>
          )
          <fpage>1</fpage>
          -
          <lpage>21</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>M.</given-names>
            <surname>Jacobs</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. F.</given-names>
            <surname>Pradier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. H.</given-names>
            <surname>McCoy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. H.</given-names>
            <surname>Perlis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Doshi-Velez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. Z.</given-names>
            <surname>Gajos</surname>
          </string-name>
          ,
          <article-title>How machine-learning recommendations influence clinician treatment selections: the example of antidepressant selection</article-title>
          ,
          <source>Translational psychiatry 11</source>
          (
          <year>2021</year>
          )
          <fpage>1</fpage>
          -
          <lpage>9</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>B.</given-names>
            <surname>Kleinberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Verschuere</surname>
          </string-name>
          ,
          <article-title>How humans impair automated deception detection performance</article-title>
          ,
          <source>Acta Psychologica</source>
          <volume>213</volume>
          (
          <year>2021</year>
          )
          <fpage>103250</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>L.</given-names>
            <surname>Weidinger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Mellor</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Rauh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Griffin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Uesato</surname>
          </string-name>
          , P.-S. Huang, M. Cheng, M. Glaese,
          <string-name>
            <given-names>B.</given-names>
            <surname>Balle</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kasirzadeh</surname>
          </string-name>
          , et al.,
          <article-title>Ethical and social risks of harm from language models</article-title>
          ,
          <source>arXiv preprint arXiv:2112.04359</source>
          (
          <year>2021</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>S.</given-names>
            <surname>Houde</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Liao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Martino</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Muller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Piorkowski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Richards</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. D.</given-names>
            <surname>Weisz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <article-title>Business (mis)use cases of generative AI</article-title>
          ,
          <source>in: Joint Proceedings of the Workshops on Human-AI Co-Creation with Generative Models and User-Aware Conversational Agents co-located with 25th International Conference on Intelligent User Interfaces (IUI 2020)</source>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>M.</given-names>
            <surname>Muller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. I.</given-names>
            <surname>Ross</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Houde</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Agarwal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Martinez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. T.</given-names>
            <surname>Richards</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Talamadupula</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. D.</given-names>
            <surname>Weisz</surname>
          </string-name>
          ,
          <article-title>Drinking chai with your (AI) programming partner: A design fiction about generative AI for software engineering</article-title>
          , in:
          <string-name>
            <given-names>A.</given-names>
            <surname>Smith-Renner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Amir</surname>
          </string-name>
          (Eds.),
          <source>Joint Proceedings of the IUI 2022 Workshops: APEx-UI, HAI-GEN, HEALTHI, HUMANIZE, TExSS, SOCIALIZE co-located with the ACM International Conference on Intelligent User Interfaces (IUI 2022), Virtual Event, Helsinki, Finland, March 21-22, 2022</source>
          , volume
          <volume>3124</volume>
          of CEUR Workshop Proceedings, CEUR-WS.org,
          <year>2022</year>
          , pp.
          <fpage>107</fpage>
          -
          <lpage>122</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>V.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. B.</given-names>
            <surname>Chilton</surname>
          </string-name>
          ,
          <article-title>Design guidelines for prompt engineering text-to-image generative models</article-title>
          ,
          <source>in: CHI Conference on Human Factors in Computing Systems</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>23</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>C.</given-names>
            <surname>Greyling</surname>
          </string-name>
          ,
          <article-title>Prompt engineering, text generation and large language models</article-title>
          ,
          <year>2022</year>
          . URL: https://cobusgreyling.medium.com/promptengineering-text-generation-large-languagemodels-3d90c527c6d5.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>L.</given-names>
            <surname>Reynolds</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>McDonell</surname>
          </string-name>
          ,
          <article-title>Prompt programming for large language models: Beyond the few-shot paradigm</article-title>
          ,
          <source>in: Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>7</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>P.</given-names>
            <surname>Denny</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Kumar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Giacaman</surname>
          </string-name>
          ,
          <article-title>Conversing with Copilot: Exploring prompt engineering for solving CS1 problems using natural language</article-title>
          ,
          <year>2022</year>
          . URL: https://arxiv.org/abs/2210.15157.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <surname>ACM</surname>
          </string-name>
          ,
          <article-title>Words matter: Alternatives for charged terminology in the computing profession</article-title>
          ,
          <year>2023</year>
          . URL: https://www.acm.org/diversity-inclusion/words-matter.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>S.</given-names>
            <surname>Amershi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Weld</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Vorvoreanu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Fourney</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Nushi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Collisson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Suh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Iqbal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. N.</given-names>
            <surname>Bennett</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Inkpen</surname>
          </string-name>
          , et al.,
          <article-title>Guidelines for human-AI interaction</article-title>
          ,
          <source>in: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>13</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <surname>Apple Computer</surname>
          </string-name>
          , Human interface guidelines,
          <year>2022</year>
          . URL: https://developer.apple.com/design/human-interface-guidelines/guidelines/overview.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <surname>IBM</surname>
          </string-name>
          , Racial equity in design,
          <year>2023</year>
          . URL: https://www.ibm.com/design/racial-equity-in-design/.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>Q. V.</given-names>
            <surname>Liao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Gruen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Miller</surname>
          </string-name>
          ,
          <article-title>Questioning the AI: Informing design practices for explainable AI user experiences</article-title>
          ,
          <source>in: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>15</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>S.</given-names>
            <surname>Deterding</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Hook</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Fiebrink</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Gillies</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Gow</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Akten</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Smith</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Liapis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Compton</surname>
          </string-name>
          ,
          <article-title>Mixed-initiative creative interfaces</article-title>
          ,
          <source>in: Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems</source>
          ,
          <year>2017</year>
          , pp.
          <fpage>628</fpage>
          -
          <lpage>635</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>I.</given-names>
            <surname>Grabe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>González-Duque</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Risi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <article-title>Towards a framework for human-AI interaction patterns in co-creative GAN applications</article-title>
          ,
          <source>Joint Proceedings of the ACM IUI Workshops</source>
          , Helsinki, Finland, March
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>M. L.</given-names>
            <surname>Maher</surname>
          </string-name>
          ,
          <article-title>Computational and collective creativity: Who's being creative?</article-title>
          ,
          <source>in: ICCC</source>
          , Citeseer,
          <year>2012</year>
          , pp.
          <fpage>67</fpage>
          -
          <lpage>71</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>M. L.</given-names>
            <surname>Maher</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Magerko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Ventura</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Fisher</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Cardona-Rivera</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Fulda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Gooth</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Wilson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Kaufman</surname>
          </string-name>
          , et al.,
          <article-title>A research plan for integrating generative and cognitive AI for human-centered, explainable co-creative AI</article-title>
          ,
          <source>in: ACM CHI Conference on Human Factors in Computing Systems</source>
          ,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>M.</given-names>
            <surname>Muller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. D.</given-names>
            <surname>Weisz</surname>
          </string-name>
          , W. Geyer,
          <article-title>Mixed-initiative generative AI interfaces: An analytic framework for generative AI applications</article-title>
          ,
          <source>ICCC 2020 Workshop, The Future of Co-Creative Systems</source>
          ,
          <year>2020</year>
          . URL: https://computationalcreativity.net/workshops/cocreative-iccc20/papers/Future_of_co-creative_systems_185.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>M.</given-names>
            <surname>Muller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Weisz</surname>
          </string-name>
          ,
          <article-title>Extending a human-AI collaboration framework with dynamism and sociality</article-title>
          ,
          <source>in: 2022 Symposium on Human-Computer Interaction for Work</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>12</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>T.</given-names>
            <surname>Lubart</surname>
          </string-name>
          ,
          <article-title>How can computers be partners in the creative process: classification and commentary on the special issue</article-title>
          ,
          <source>International Journal of Human-Computer Studies</source>
          <volume>63</volume>
          (
          <year>2005</year>
          )
          <fpage>365</fpage>
          -
          <lpage>369</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [29]
          <string-name>
            <given-names>I.</given-names>
            <surname>Seeber</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Bittner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. O.</given-names>
            <surname>Briggs</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>De Vreede</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.-J.</given-names>
            <surname>De Vreede</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Elkins</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Maier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. B.</given-names>
            <surname>Merz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Oeste-Reiß</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Randrup</surname>
          </string-name>
          , et al.,
          <article-title>Machines as teammates: A research agenda on AI in team collaboration</article-title>
          ,
          <source>Information &amp; Management</source>
          <volume>57</volume>
          (
          <year>2020</year>
          )
          <fpage>103174</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [30]
          <string-name>
            <given-names>A.</given-names>
            <surname>Spoto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Oleynik</surname>
          </string-name>
          ,
          <source>Library of mixed-initiative creative interfaces</source>
          ,
          <year>2017</year>
          . URL: http://mici.codingconduct.cc/.
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [31]
          <string-name>
            <given-names>M.</given-names>
            <surname>Agarwal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. J.</given-names>
            <surname>Barroso</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Chakraborti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. M.</given-names>
            <surname>Dow</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Fadnis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Godoy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Pallan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Talamadupula</surname>
          </string-name>
          ,
          <article-title>Project CLAI: Instrumenting the command line as a new environment for AI agents</article-title>
          ,
          <source>arXiv preprint arXiv:2002.00762</source>
          (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          [32]
          <string-name>
            <given-names>M.</given-names>
            <surname>Agarwal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Talamadupula</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Houde</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Martinez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Muller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Richards</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ross</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. D.</given-names>
            <surname>Weisz</surname>
          </string-name>
          ,
          <article-title>Quality estimation &amp; interpretability for code translation</article-title>
          ,
          <source>in: Proceedings of the NeurIPS 2020 Workshop on Computer-Assisted Programming (NeurIPS 2020)</source>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          [33]
          <string-name>
            <given-names>M.</given-names>
            <surname>Kreminski</surname>
          </string-name>
          , I. Karth,
          <string-name>
            <given-names>M.</given-names>
            <surname>Mateas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Wardrip-Fruin</surname>
          </string-name>
          ,
          <article-title>Evaluating mixed-initiative creative interfaces via expressive range coverage analysis</article-title>
          ,
          <source>in: IUI Workshops</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>34</fpage>
          -
          <lpage>45</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          [34]
          <string-name>
            <given-names>R.</given-names>
            <surname>Louie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Coenen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. Z.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Terry</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. J.</given-names>
            <surname>Cai</surname>
          </string-name>
          ,
          <article-title>Novice-AI music co-creation via AI-steering tools for deep generative models</article-title>
          ,
          <source>in: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>13</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          [35]
          <string-name>
            <given-names>J.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q. V.</given-names>
            <surname>Liao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Muller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Agarwal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Houde</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Talamadupula</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. D.</given-names>
            <surname>Weisz</surname>
          </string-name>
          ,
          <article-title>Investigating explainability of generative AI for code through scenario-based design</article-title>
          ,
          <source>in: 27th International Conference on Intelligent User Interfaces</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>212</fpage>
          -
          <lpage>228</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          [36]
          <string-name>
            <given-names>J. D.</given-names>
            <surname>Weisz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Muller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Houde</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Richards</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. I.</given-names>
            <surname>Ross</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Martinez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Agarwal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Talamadupula</surname>
          </string-name>
          ,
          <article-title>Perfection not required? Human-AI partnerships in code translation</article-title>
          ,
          <source>in: 26th International Conference on Intelligent User Interfaces</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>402</fpage>
          -
          <lpage>412</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>
          [37]
          <string-name>
            <given-names>J. D.</given-names>
            <surname>Weisz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Muller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. I.</given-names>
            <surname>Ross</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Martinez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Houde</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Agarwal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Talamadupula</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. T.</given-names>
            <surname>Richards</surname>
          </string-name>
          ,
          <article-title>Better together? An evaluation of AI-supported code translation</article-title>
          ,
          <source>in: 27th International Conference on Intelligent User Interfaces</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>369</fpage>
          -
          <lpage>391</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref38">
        <mixed-citation>
          [38]
          <string-name>
            <given-names>T.</given-names>
            <surname>Brown</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Mann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Ryder</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Subbiah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. D.</given-names>
            <surname>Kaplan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Dhariwal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Neelakantan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Shyam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Sastry</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Askell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Agarwal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Herbert-Voss</surname>
          </string-name>
          , G. Krueger,
          <string-name>
            <given-names>T.</given-names>
            <surname>Henighan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Child</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ramesh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Ziegler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Winter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Hesse</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Chen</surname>
          </string-name>
          , E. Sigler,
          <string-name>
            <given-names>M.</given-names>
            <surname>Litwin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Gray</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Chess</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Clark</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Berner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>McCandlish</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Radford</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Sutskever</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Amodei</surname>
          </string-name>
          ,
          <article-title>Language models are few-shot learners</article-title>
          , in:
          <string-name>
            <given-names>H.</given-names>
            <surname>Larochelle</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Ranzato</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Hadsell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Balcan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Lin</surname>
          </string-name>
          (Eds.),
          <source>Advances in Neural Information Processing Systems</source>
          , volume
          <volume>33</volume>
          ,
          Curran Associates, Inc.,
          <year>2020</year>
          , pp.
          <fpage>1877</fpage>
          -
          <lpage>1901</lpage>
          . URL: https://proceedings.neurips.cc/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref39">
        <mixed-citation>
          [39]
          <string-name>
            <given-names>T. E.</given-names>
            <surname>Johnson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. L.</given-names>
            <surname>O'Connor</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. K.</given-names>
            <surname>Khalil</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <article-title>Measuring sharedness of team-related knowledge: Design and validation of a shared mental model instrument</article-title>
          ,
          <source>Human Resource Development International</source>
          <volume>10</volume>
          (
          <year>2007</year>
          )
          <fpage>437</fpage>
          -
          <lpage>454</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref40">
        <mixed-citation>
          [40]
          <string-name>
            <given-names>B.</given-names>
            <surname>Kaiser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Csiszar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Verl</surname>
          </string-name>
          ,
          <article-title>Generative models for direct generation of cnc toolpaths</article-title>
          ,
          <source>in: 2018 25th International Conference on Mechatronics and Machine Vision in Practice (M2VIP)</source>
          , IEEE,
          <year>2018</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>6</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref41">
        <mixed-citation>
          [41]
          <string-name>
            <given-names>C.</given-names>
            <surname>Metz</surname>
          </string-name>
          ,
          <article-title>Meet GPT-3. It has learned to code (and blog and argue)</article-title>
          ,
          <source>The New York Times</source>
          (published
          <year>2020</year>
          ),
          <year>2022</year>
          . URL: https://www.nytimes.com/2020/11/24/science/artificial-intelligence-ai-gpt3.html.
        </mixed-citation>
      </ref>
      <ref id="ref42">
        <mixed-citation>
          [42]
          <string-name>
            <given-names>S. I.</given-names>
            <surname>Ross</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Martinez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Houde</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Muller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. D.</given-names>
            <surname>Weisz</surname>
          </string-name>
          ,
          <article-title>The programmer's assistant: Conversational interaction with a large language model for software development</article-title>
          ,
          <source>in: 28th International Conference on Intelligent User Interfaces</source>
          ,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref43">
        <mixed-citation>
          [43]
          <string-name>
            <given-names>W.</given-names>
            <surname>Geyer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. B.</given-names>
            <surname>Chilton</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. D.</given-names>
            <surname>Weisz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. L.</given-names>
            <surname>Maher</surname>
          </string-name>
          ,
          <article-title>Hai-gen 2021: 2nd workshop on human-ai co-creation with generative models</article-title>
          ,
          <source>in: 26th International Conference on Intelligent User Interfaces Companion</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>15</fpage>
          -
          <lpage>17</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref44">
        <mixed-citation>
          [44]
          <string-name>
            <given-names>M.</given-names>
            <surname>Muller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. B.</given-names>
            <surname>Chilton</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kantosalo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. P.</given-names>
            <surname>Martin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Walsh</surname>
          </string-name>
          ,
          <article-title>Genaichi: Generative ai and hci</article-title>
          ,
          <source>in: CHI Conference on Human Factors in Computing Systems Extended Abstracts</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>7</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref45">
        <mixed-citation>
          [45]
          <string-name>
            <given-names>M.</given-names>
            <surname>Muller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Angelov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Daume</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q. V.</given-names>
            <surname>Liao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Oliver</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Piorkowski</surname>
          </string-name>
          , et al.,
          <article-title>Hcai@neurips 2022, human centered ai</article-title>
          ,
          <source>in: Annual Conference on Neural Information Processing Systems</source>
          ,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref46">
        <mixed-citation>
          [46]
          <string-name>
            <given-names>J. D.</given-names>
            <surname>Weisz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. L.</given-names>
            <surname>Maher</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Strobelt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. B.</given-names>
            <surname>Chilton</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Bau</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Geyer</surname>
          </string-name>
          ,
          <article-title>Hai-gen 2022: 3rd workshop on human-ai co-creation with generative models</article-title>
          ,
          <source>in: 27th International Conference on Intelligent User Interfaces</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>4</fpage>
          -
          <lpage>6</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref47">
        <mixed-citation>
          [47]
          <string-name>
            <given-names>C.</given-names>
            <surname>Aragon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Guha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kogan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Muller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Neff</surname>
          </string-name>
          ,
          <article-title>Human-Centered Data Science: An Introduction</article-title>
          , MIT Press,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref48">
        <mixed-citation>
          [48]
          <string-name>
            <given-names>D.</given-names>
            <surname>Boyd</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Crawford</surname>
          </string-name>
          ,
          <article-title>Critical questions for big data: Provocations for a cultural, technological, and scholarly phenomenon</article-title>
          ,
          <source>Information, Communication &amp; Society</source>
          <volume>15</volume>
          (
          <year>2012</year>
          )
          <fpage>662</fpage>
          -
          <lpage>679</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref49">
        <mixed-citation>
          [49]
          <string-name>
            <given-names>S.</given-names>
            <surname>Costanza-Chock</surname>
          </string-name>
          ,
          <article-title>Design justice: Community-led practices to build the worlds we need</article-title>
          , The MIT Press,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref50">
        <mixed-citation>
          [50]
          <string-name>
            <given-names>C.</given-names>
            <surname>D'Ignazio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. F.</given-names>
            <surname>Klein</surname>
          </string-name>
          ,
          <article-title>Data feminism</article-title>
          , MIT press,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref51">
        <mixed-citation>
          [51]
          <string-name>
            <given-names>L.</given-names>
            <surname>Gitelman</surname>
          </string-name>
          ,
          <source>Raw Data is an Oxymoron</source>
          , MIT Press,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref52">
        <mixed-citation>
          [52]
          <string-name>
            <given-names>I.</given-names>
            <surname>Sutskever</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Vinyals</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q. V.</given-names>
            <surname>Le</surname>
          </string-name>
          ,
          <article-title>Sequence to sequence learning with neural networks</article-title>
          ,
          <source>Advances in neural information processing systems</source>
          <volume>27</volume>
          (
          <year>2014</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref53">
        <mixed-citation>
          [53]
          <string-name>
            <given-names>K.</given-names>
            <surname>Cho</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. Van</given-names>
            <surname>Merriënboer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Gulcehre</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Bahdanau</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Bougares</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Schwenk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Bengio</surname>
          </string-name>
          ,
          <article-title>Learning phrase representations using rnn encoder-decoder for statistical machine translation</article-title>
          ,
          <source>arXiv preprint arXiv:1406.1078</source>
          (
          <year>2014</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref54">
        <mixed-citation>
          [54]
          <string-name>
            <given-names>I.</given-names>
            <surname>Goodfellow</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Pouget-Abadie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Mirza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Warde-Farley</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ozair</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Courville</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Bengio</surname>
          </string-name>
          ,
          <article-title>Generative adversarial networks</article-title>
          ,
          <source>Communications of the ACM</source>
          <volume>63</volume>
          (
          <year>2020</year>
          )
          <fpage>139</fpage>
          -
          <lpage>144</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref55">
        <mixed-citation>
          [55]
          <string-name>
            <given-names>A.</given-names>
            <surname>Vaswani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Shazeer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Parmar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Uszkoreit</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Jones</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. N.</given-names>
            <surname>Gomez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Ł.</given-names>
            <surname>Kaiser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Polosukhin</surname>
          </string-name>
          ,
          <article-title>Attention is all you need</article-title>
          ,
          <source>Advances in neural information processing systems</source>
          <volume>30</volume>
          (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref56">
        <mixed-citation>
          [56]
          <string-name>
            <given-names>V.</given-names>
            <surname>Chenthamarakshan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Das</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. C.</given-names>
            <surname>Hofman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Strobelt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Padhi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. W.</given-names>
            <surname>Lim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Hoover</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Manica</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Born</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Laino</surname>
          </string-name>
          , et al.,
          <article-title>Cogmol: target-specific and selective drug design for covid-19 using deep generative models</article-title>
          ,
          <source>arXiv preprint arXiv:2004.01215</source>
          (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref57">
        <mixed-citation>
          [57]
          <string-name>
            <given-names>V.</given-names>
            <surname>Chenthamarakshan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Das</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Padhi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Strobelt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. W.</given-names>
            <surname>Lim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Hoover</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. C.</given-names>
            <surname>Hofman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Mojsilovic</surname>
          </string-name>
          ,
          <article-title>Target-specific and selective drug design for covid-19 using deep generative models</article-title>
          ,
          <year>2020</year>
          . arXiv:2004.01215.
        </mixed-citation>
      </ref>
      <ref id="ref58">
        <mixed-citation>
          [58]
          <string-name>
            <given-names>A.</given-names>
            <surname>Liapis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. N.</given-names>
            <surname>Yannakakis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Togelius</surname>
          </string-name>
          , et al.,
          <article-title>Sentient sketchbook: Computer-aided game level authoring</article-title>
          ,
          <source>in: FDG</source>
          ,
          <year>2013</year>
          , pp.
          <fpage>213</fpage>
          -
          <lpage>220</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref59">
        <mixed-citation>
          [59]
          <string-name>
            <given-names>J.</given-names>
            <surname>Rose</surname>
          </string-name>
          ,
          <article-title>Facebook pulls its new 'ai for science' because it's broken and terrible</article-title>
          ,
          <source>Vice</source>
          (
          <year>2022</year>
          ). URL: https://www.vice.com/en/article/3adyw9/facebook-pulls-its-new-ai-for-science-because-its-broken-and-terrible.
        </mixed-citation>
      </ref>
      <ref id="ref60">
        <mixed-citation>
          [60]
          <string-name>
            <given-names>B.</given-names>
            <surname>Roziere</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.-A.</given-names>
            <surname>Lachaux</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Chanussot</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Lample</surname>
          </string-name>
          ,
          <article-title>Unsupervised translation of programming languages</article-title>
          ,
          <source>in: NeurIPS</source>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref61">
        <mixed-citation>
          [61]
          <string-name>
            <given-names>S.</given-names>
            <surname>Kulal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Pasupat</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Chandra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Padon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Aiken</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. S.</given-names>
            <surname>Liang</surname>
          </string-name>
          ,
          <article-title>Spoc: Search-based pseudocode to code</article-title>
          ,
          <source>Advances in Neural Information Processing Systems</source>
          <volume>32</volume>
          (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref62">
        <mixed-citation>
          [62]
          <string-name>
            <surname>Github</surname>
          </string-name>
          , Copilot,
          <year>2021</year>
          . URL: https://copilot.github.com.
        </mixed-citation>
      </ref>
      <ref id="ref63">
        <mixed-citation>
          [63]
          <string-name>
            <given-names>B.</given-names>
            <surname>Shneiderman</surname>
          </string-name>
          ,
          <article-title>Human-centered artificial intelligence: Reliable, safe &amp; trustworthy</article-title>
          ,
          <source>International Journal of Human-Computer Interaction</source>
          <volume>36</volume>
          (
          <year>2020</year>
          )
          <fpage>495</fpage>
          -
          <lpage>504</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref64">
        <mixed-citation>
          [64]
          <string-name>
            <given-names>B.</given-names>
            <surname>Shneiderman</surname>
          </string-name>
          ,
          <article-title>Human-centered ai</article-title>
          ,
          <source>Issues in Science and Technology</source>
          <volume>37</volume>
          (
          <year>2021</year>
          )
          <fpage>56</fpage>
          -
          <lpage>61</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref65">
        <mixed-citation>
          [65]
          <string-name>
            <given-names>V.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. B.</given-names>
            <surname>Chilton</surname>
          </string-name>
          ,
          <article-title>Neurosymbolic generation of 3d animal shapes through semantic controls</article-title>
          ,
          <source>in: IUI Workshops</source>
          ,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref66">
        <mixed-citation>
          [66]
          <string-name>
            <given-names>P.</given-names>
            <surname>von Platen</surname>
          </string-name>
          ,
          <article-title>How to generate text: using different decoding methods for language generation with transformers</article-title>
          ,
          <source>Hugging Face Blog</source>
          (
          <year>2020</year>
          ). URL: https://huggingface.co/blog/how-to-generate.
        </mixed-citation>
      </ref>
      <ref id="ref67">
        <mixed-citation>
          [67]
          <string-name>
            <given-names>R.</given-names>
            <surname>Cheng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Zimmermann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Ford</surname>
          </string-name>
          ,
          <article-title>"it would work for me too": How online communities shape software developers' trust in ai-powered code generation tools</article-title>
          ,
          <source>arXiv preprint arXiv:2212.03491</source>
          (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref68">
        <mixed-citation>
          [68]
          <string-name>
            <given-names>M.</given-names>
            <surname>Scheutz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. A.</given-names>
            <surname>DeLoach</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. A.</given-names>
            <surname>Adams</surname>
          </string-name>
          ,
          <article-title>A framework for developing and using shared mental models in human-agent teams</article-title>
          ,
          <source>Journal of Cognitive Engineering and Decision Making</source>
          <volume>11</volume>
          (
          <year>2017</year>
          )
          <fpage>203</fpage>
          -
          <lpage>224</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref69">
        <mixed-citation>
          [69]
          <string-name>
            <given-names>S. M.</given-names>
            <surname>Fiore</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Salas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. A.</given-names>
            <surname>Cannon-Bowers</surname>
          </string-name>
          ,
          <article-title>Group dynamics and shared mental model development</article-title>
          ,
          <source>How people evaluate others in organizations</source>
          (
          <year>2001</year>
          )
          <fpage>234</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref70">
        <mixed-citation>
          [70]
          <string-name>
            <given-names>J. E.</given-names>
            <surname>Mathieu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. S.</given-names>
            <surname>Hefner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. F.</given-names>
            <surname>Goodwin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Salas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. A.</given-names>
            <surname>Cannon-Bowers</surname>
          </string-name>
          ,
          <article-title>The influence of shared mental models on team process and performance</article-title>
          ,
          <source>Journal of applied psychology</source>
          <volume>85</volume>
          (
          <year>2000</year>
          )
          <fpage>273</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref71">
        <mixed-citation>
          [71]
          <string-name>
            <given-names>M.</given-names>
            <surname>Guzdial</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Liao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Chen</surname>
          </string-name>
          , S.-Y. Chen,
          <string-name>
            <given-names>S.</given-names>
            <surname>Shah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Shah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Reno</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Smith</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. O.</given-names>
            <surname>Riedl</surname>
          </string-name>
          ,
          <article-title>Friend, collaborator, student, manager: How design of an ai-driven game level editor affects creators</article-title>
          ,
          <source>in: Proceedings of the 2019 CHI conference on human factors in computing systems</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>13</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref72">
        <mixed-citation>
          [72]
          <string-name>
            <given-names>P. M.</given-names>
            <surname>Fitts</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Viteles</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Barr</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Brimhall</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Finch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Gardner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Grether</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Kellum</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Stevens</surname>
          </string-name>
          ,
          <article-title>Human engineering for an effective air-navigation and traffic-control system, and appendixes 1 thru 3</article-title>
          ,
          <source>Technical Report, Ohio State Univ Research Foundation Columbus</source>
          ,
          <year>1951</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref73">
        <mixed-citation>
          [73]
          <string-name>
            <given-names>T. B.</given-names>
            <surname>Sheridan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W. L.</given-names>
            <surname>Verplank</surname>
          </string-name>
          ,
          <article-title>Human and computer control of undersea teleoperators</article-title>
          ,
          <source>Technical Report, Massachusetts Inst of Tech Cambridge Man-Machine Systems Lab</source>
          ,
          <year>1978</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref74">
        <mixed-citation>
          [74]
          <string-name>
            <given-names>R.</given-names>
            <surname>Parasuraman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. B.</given-names>
            <surname>Sheridan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. D.</given-names>
            <surname>Wickens</surname>
          </string-name>
          ,
          <article-title>A model for types and levels of human interaction with automation</article-title>
          ,
          <source>IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans</source>
          <volume>30</volume>
          (
          <year>2000</year>
          )
          <fpage>286</fpage>
          -
          <lpage>297</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref75">
        <mixed-citation>
          [75]
          <string-name>
            <given-names>E.</given-names>
            <surname>Horvitz</surname>
          </string-name>
          ,
          <article-title>Principles of mixed-initiative user interfaces</article-title>
          ,
          <source>in: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '99</source>
          ,
          Association for Computing Machinery, New York, NY, USA,
          <year>1999</year>
          , pp.
          <fpage>159</fpage>
          -
          <lpage>166</lpage>
          . URL: https://doi.org/10.1145/302979.303030. doi:10.1145/302979.303030.
        </mixed-citation>
      </ref>
      <ref id="ref76">
        <mixed-citation>
          [76]
          <string-name>
            <given-names>V.</given-names>
            <surname>Arya</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. K.</given-names>
            <surname>Bellamy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.-Y.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Dhurandhar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Hind</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. C.</given-names>
            <surname>Hoffman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Houde</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q. V.</given-names>
            <surname>Liao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Luss</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Mojsilovic</surname>
          </string-name>
          , et al.,
          <article-title>Ai explainability 360: An extensible toolkit for understanding data and machine learning models</article-title>
          ,
          <source>J. Mach. Learn. Res</source>
          .
          <volume>21</volume>
          (
          <year>2020</year>
          )
          <fpage>1</fpage>
          -
          <lpage>6</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref77">
        <mixed-citation>
          [77]
          <string-name>
            <given-names>U.</given-names>
            <surname>Ehsan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Wintersberger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q. V.</given-names>
            <surname>Liao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. A.</given-names>
            <surname>Watkins</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Manger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Daumé III</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Riener</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. O.</given-names>
            <surname>Riedl</surname>
          </string-name>
          ,
          <article-title>Human-centered explainable ai (hcxai): beyond opening the black-box of ai</article-title>
          ,
          <source>in: CHI Conference on Human Factors in Computing Systems Extended Abstracts</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>7</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref78">
        <mixed-citation>
          [78]
          <string-name>
            <given-names>Q. V.</given-names>
            <surname>Liao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Bellamy</surname>
          </string-name>
          ,
          <article-title>Introduction to explainable ai</article-title>
          ,
          <source>in: Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>3</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref79">
        <mixed-citation>
          [79]
          <string-name>
            <given-names>A.</given-names>
            <surname>Simkute</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Surana</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Luger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Evans</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Jones</surname>
          </string-name>
          ,
          <article-title>Xai for learning: Narrowing down the digital divide between “new” and “old” experts</article-title>
          ,
          <source>in: Adjunct Proceedings of the 2022 Nordic Human-Computer Interaction Conference</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>6</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref80">
        <mixed-citation>
          [80]
          <string-name>
            <given-names>C.</given-names>
            <surname>Molnar</surname>
          </string-name>
          ,
          <article-title>Interpretable machine learning</article-title>
          ,
          <source>Lulu.com</source>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref81">
        <mixed-citation>
          [81]
          <string-name>
            <given-names>M.</given-names>
            <surname>Muller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Strohmayer</surname>
          </string-name>
          ,
          <article-title>Forgetting practices in the data sciences</article-title>
          ,
          <source>in: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems</source>
          ,
          <year>2022</year>
          . In press.
        </mixed-citation>
      </ref>
      <ref id="ref82">
        <mixed-citation>
          [82]
          <string-name>
            <given-names>C.</given-names>
            <surname>Pinhanez</surname>
          </string-name>
          ,
          <article-title>Expose uncertainty, instill distrust, avoid explanations: Towards ethical guidelines for ai</article-title>
          ,
          <source>in: HCAI@NeurIPS 2021 Workshop</source>
          ,
          <year>2021</year>
          . URL: https://arxiv.org/abs/2112.01281.
        </mixed-citation>
      </ref>
      <ref id="ref83">
        <mixed-citation>
          [83]
          <string-name>
            <given-names>C.</given-names>
            <surname>Pinhanez</surname>
          </string-name>
          ,
          <article-title>Breakdowns, language use, and weird errors: Past, present, and future of research on conversational agents at brl</article-title>
          , in: IBM Research Cambridge Lab guest speaker series,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref84">
        <mixed-citation>
          [84]
          <string-name>
            <given-names>S.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Hilton</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Evans</surname>
          </string-name>
          ,
          <article-title>Truthfulqa: Measuring how models mimic human falsehoods</article-title>
          ,
          <source>arXiv preprint arXiv:2109.07958</source>
          (
          <year>2021</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref85">
        <mixed-citation>
          [85]
          <string-name>
            <given-names>S.</given-names>
            <surname>Kantayya</surname>
          </string-name>
          ,
          <article-title>Coded bias</article-title>
          ,
          <year>2020</year>
          . URL: https://www.pbs.org/independentlens/documentaries/coded-bias/.
        </mixed-citation>
      </ref>
      <ref id="ref86">
        <mixed-citation>
          [86]
          <string-name>
            <given-names>K.</given-names>
            <surname>Spiel</surname>
          </string-name>
          ,
          <article-title>“Why are they all obsessed with gender?” - (non)binary navigations through technological infrastructures</article-title>
          ,
          <source>in: Designing Interactive Systems Conference 2021</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>478</fpage>
          -
          <lpage>494</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref87">
        <mixed-citation>
          [87]
          <string-name>
            <given-names>A.</given-names>
            <surname>Lyn</surname>
          </string-name>
          ,
          <article-title>Risky business: Artificial intelligence and risk assessments in sentencing and bail procedures in the United States</article-title>
          ,
          <source>Available at SSRN</source>
          <volume>3831441</volume>
          (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref88">
        <mixed-citation>
          [88]
          <string-name>
            <given-names>D.</given-names>
            <surname>Saxena</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Badillo-Urquiola</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. J.</given-names>
            <surname>Wisniewski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Guha</surname>
          </string-name>
          ,
          <article-title>A framework of high-stakes algorithmic decision-making for the public sector developed through a case study of child-welfare</article-title>
          ,
          <source>Proceedings of the ACM on Human-Computer Interaction</source>
          <volume>5</volume>
          (
          <year>2021</year>
          )
          <fpage>1</fpage>
          -
          <lpage>41</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref89">
        <mixed-citation>
          [89]
          <string-name>
            <given-names>S.</given-names>
            <surname>Lindtner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Bardzell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Bardzell</surname>
          </string-name>
          ,
          <article-title>Reconstituting the utopian vision of making: Hci after technosolutionism</article-title>
          ,
          <source>in: Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems</source>
          ,
          <year>2016</year>
          , pp.
          <fpage>1390</fpage>
          -
          <lpage>1402</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref90">
        <mixed-citation>
          [90]
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Madaio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Stark</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. Wortman</given-names>
            <surname>Vaughan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Wallach</surname>
          </string-name>
          ,
          <article-title>Co-designing checklists to understand organizational challenges and opportunities around fairness in ai</article-title>
          ,
          <source>in: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>14</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref91">
        <mixed-citation>
          [91]
          <string-name>
            <given-names>A.</given-names>
            <surname>Resseguier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Rodrigues</surname>
          </string-name>
          ,
          <article-title>Ethics as attention to context: recommendations for the ethics of artificial intelligence</article-title>
          ,
          <source>Open Research Europe</source>
          <volume>1</volume>
          (
          <year>2021</year>
          )
          <fpage>27</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref92">
        <mixed-citation>
          [92]
          <string-name>
            <given-names>B.</given-names>
            <surname>Friedman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. G.</given-names>
            <surname>Hendry</surname>
          </string-name>
          ,
          <article-title>Value sensitive design: Shaping technology with moral imagination</article-title>
          , MIT Press,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref93">
        <mixed-citation>
          [93]
          <string-name>
            <given-names>D. G.</given-names>
            <surname>Hendry</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Friedman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ballard</surname>
          </string-name>
          ,
          <article-title>Value sensitive design as a formative framework</article-title>
          ,
          <source>Ethics and Information Technology</source>
          <volume>23</volume>
          (
          <year>2021</year>
          )
          <fpage>39</fpage>
          -
          <lpage>44</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>