<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Proceedings of the 1st International Workshop on Designing and Building Hybrid Human-AI Systems (SYNERGY 2024), Arenzano (Genoa), Italy, June 2024</journal-title>
      </journal-title-group>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Music Composition as a Lens for Understanding Human-AI Collaboration</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Eric Tron Gianet</string-name>
          <email>erictrngnt@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Luigi Di Caro</string-name>
          <email>luigi.dicaro@unito.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Amon Rapp</string-name>
          <email>amon.rapp@unito.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <kwd-group>
          <kwd>Human-AI Collaboration</kwd>
          <kwd>Music Composition</kwd>
          <kwd>Generative AI</kwd>
          <kwd>Human-AI Co-creation</kwd>
        </kwd-group>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Turin, Computer Science Department</institution>
          ,
          <addr-line>Torino</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <volume>03</volume>
      <issue>2024</issue>
      <abstract>
        <p>As generative artificial intelligence (GenAI) systems gain human-like capabilities in creative tasks, they seem to blur the line between machines and users, prompting questions about how to design systems where AI and humans collaborate. Music composition with AI may offer a lens to explore the nuances of human-AI collaboration. We review recent literature on music generation with AI, highlighting key challenges like the need for user control and context awareness, and noting a potential shift in the user's role towards curation or co-production when using AI tools. However, much of the existing research evaluates the impact of current AI tools rather than engaging in fieldwork to investigate music composition “in practice” within specific socio-cultural contexts. We then propose an ethnographic study to understand music composition as a situated practice, considering composers' personal motivations, artistic sensibilities, and the broader socio-cultural context. Preliminary findings highlight the importance of creative intentionality and meaning-making in driving compositional choices. Furthermore, music creation often involves collaboration between various human actors, raising questions about whether AI should facilitate this already present collaboration or disrupt existing dynamics.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-2">
      <title>1. Introduction</title>
      <p>The rapid evolution of Artificial Intelligence (AI), and particularly GenAI, is reshaping how
humans interact with technology. We are moving beyond more traditional interaction models
towards scenarios of collaboration between humans and AI, which raises critical questions about
how we ought to design the interaction with systems that promise to leverage the strengths
of both. GenAI systems can not only perform classification tasks but also create artifacts like
music, images, and text, blurring the line between traditional tools and active collaborators
with creative abilities.</p>
      <p>
        In the broad research area of Human-Centered AI (HCAI) [
        <xref ref-type="bibr" rid="ref1 ref2 ref3">1, 2, 3</xref>
        ], a perspective has emerged
that seeks to leverage the strengths of both humans and AI, creating synergistic systems that
surpass their individual capabilities [
        <xref ref-type="bibr" rid="ref3 ref4 ref5">3, 4, 5</xref>
        ]. This concept of human-AI collaboration arguably
represents a more human-centered approach compared to human-in-the-loop models, as it
makes the “best use of both human and AI capabilities, rather than the human simply being called
upon to do what the AI cannot yet manage in an AI led project” [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. However, this emphasis on AI
agency, which Sarkar [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] terms an “agentistic turn”, necessitates critical consideration. While
it aligns with Bruno Latour’s notion of agency [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], which extends agency across a network of
human and non-human actors, showing how intricate the relationships between humans and
their surroundings are, it could obscure the vast amount of “ghost work” [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] that powers AI,
frequently conducted in the Global South at low wages (e.g., by data labelers) [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. The concept of
“collaboration” in human-AI interaction is certainly multifaceted, but, while some interactions
with non-generative AI might exhibit collaborative aspects already [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], the emergence of GenAI
presents unique challenges. These systems, capable of human-like outputs, blur the lines
between tools and collaborators. This necessitates a deeper understanding of how human
roles are redefined and how decision-making, creative processes, and information handling are
reshaped.
      </p>
      <p>To explore this further, we focus on a specific domain – music composition with AI – as
an illustrative case study, to understand how collaboration and co-creation occur in practice,
taking into account not only the user’s objectives, needs, and motivations, but also the social
and cultural context within which humans and GenAI systems may collaborate to achieve
situated goals. The collaborative process of composing music raises intriguing questions about
ownership, control, and intentions, all of which are central themes in the broader discussion of
human-AI collaboration.</p>
      <p>
        We will begin by reviewing recent literature on human-AI music composition and then
propose a study investigating the situated practices [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] of music composition.
      </p>
    </sec>
    <sec id="sec-3">
      <title>2. AI in Music Composition</title>
      <p>
        The interest in computer-based music composition has steadily increased since the 1980s, with
various techniques of Algorithmic Composition like Markov Models, Generative Grammars,
and Genetic Algorithms. Later, as Neural Networks became more prominent, the advancements
in Deep Learning led to the adoption of established architectures from fields like Computer
Vision and Natural Language Processing in music generation as well [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ].
      </p>
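As an aside, the Markov-model technique mentioned above can be illustrated with a minimal sketch (not from the reviewed literature; the training melody and note names are invented for illustration): count note-to-note transitions in an existing melody, then random-walk the resulting table to generate a new one.

```python
import random

def train_markov(melody):
    """Count note-to-note transitions in a sequence of note names."""
    transitions = {}
    for current, nxt in zip(melody, melody[1:]):
        transitions.setdefault(current, []).append(nxt)
    return transitions

def generate(transitions, start, length, seed=0):
    """Random-walk the transition table to produce a new melody."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        options = transitions.get(melody[-1])
        if not options:  # dead end: no observed continuation
            break
        melody.append(rng.choice(options))
    return melody

# Toy corpus, invented for illustration.
corpus = ["C4", "D4", "E4", "C4", "E4", "G4", "E4", "D4", "C4"]
table = train_markov(corpus)
print(generate(table, "C4", 8))
```

Higher-order chains, generative grammars, and genetic algorithms refine this same generate-from-learned-statistics idea, which Deep Learning architectures later scaled up dramatically.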
      <p>
        While research on interactive musical systems had already highlighted the importance of
applying user-centered approaches to support composers’ processes of creation, exploration,
and learning [
        <xref ref-type="bibr" rid="ref12 ref13 ref14">12, 13, 14</xref>
        ], more recent studies have delved into the specific challenges and
strategies of co-creating music with AI.
      </p>
      <p>
        Newman et al. [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] investigated through interviews how current AI tools concretely influence
musical creativity. They then proposed a new model for developing ethical and productive
collaborative AI tools for music. This model emphasizes the importance of clearly defined
roles for AI and pays attention to how control is distributed. Their research reveals that users
perceive AI use cases as positive when they can maintain control, agency, intention, and
choice throughout iterative cycles of generation and evaluation [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ].
      </p>
      <p>
        Huang et al. [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] studied the challenges and strategies of co-creating music with AI through
a survey conducted during an AI song contest. The results highlight the importance of context
awareness and user control, advising that future AI systems be designed to adapt to composers'
existing practices rather than forcing new AI-determined workflows.
      </p>
      <p>
        In another study investigating the use of “steering tools” for collaborative music creation,
Louie et al. [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] identified two challenges in using GenAI: information overload and
nondeterministic output. Their findings suggest that steering tools can enhance the user’s sense of
control, trust, and understanding of the AI system, resulting in a greater feeling of involvement
in the creative process. The authors also comment that users often have pre-existing mental
models about music composition and use them for tackling problems. These preconceptions
should be considered so that both the AI models and the interfaces can be designed to be more
intuitive, requiring less cognitive effort, and ultimately increasing the user's sense of agency.
Louie et al. [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] also argue that the AI’s role should adapt to the user’s needs and the creative
context: while ceding control is welcomed during exploratory phases, where the user is in
search of unexpected inspiration, maintaining control over specific details becomes critical
during production. Context thus plays a crucial role in shaping the dynamics of human-AI
collaboration, influencing how control and agency are perceived.
      </p>
      <p>
        Suh et al. [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ] explored how AI systems can act as a “social glue” to support human-human
collaboration in compositional practices. Their findings suggest that AI can facilitate the
exchange of ideas and group cohesion, potentially reducing tensions typically associated with
collaboration. They advocate for an intentional design of AI to further strengthen social
collaboration. However, while AI can act as a support system, they also observed a potential
shift in roles: participants reported feeling more like curators or co-producers, focusing on
evaluating AI-generated material rather than actively developing ideas, leading to a weaker
sense of creative involvement. This observation aligns with the findings of Civit et al. [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ], who
noted that “the composer became more of an arranger of different melodies”, comparing their role
to that of a producer managing misbehaving musicians. While they instead viewed this shift as a “very
creative, fruitful process”, it still highlights the need for future AI systems to be
adaptive to the creative context, user needs, and the composer’s specific intentions.
      </p>
      <p>
        This said, despite the growing interest in human-AI collaboration for music composition,
current research seems to have limited scope: much of the focus is on evaluating the impact of
existing generative systems, exploring strategies for composers to navigate their challenges,
and integrating steering tools into current interfaces. Efforts to bridge the gap between AI
music generation and the social and cultural complexities of music composition often lack
engagement with field studies on compositional practices. Unlike humans, who are influenced
by personal and social motivations and cultural context, AI systems currently operate solely
on algorithms and predictive models, potentially limiting their effectiveness and failing to
fully capture the nuances of human creativity [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ]. The existing research gap highlights the
importance of investigating current practices adopted by music composers, as well as their
personal motivations, artistic sensibilities, the broader cultural and social context that influences
their work, and the specific characteristics of various musical genres. This knowledge will be
crucial for designing human-AI systems and creative workflows that effectively complement
human strengths and intentions.
      </p>
    </sec>
    <sec id="sec-4">
      <title>3. Music Composition as a Situated Practice</title>
      <p>
        The existing literature emphasizes the need to better understand the complexities of music
composition in order to design human-AI collaborative music systems that account for both
the individual and the socio-cultural context. While efforts like that of Hernandez-Olivan and
Beltrán [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] to create generalized models of music composition are valuable for highlighting
core principles, such rigid frameworks can hardly capture the dynamic and diverse nature of
music creation, which is constantly evolving, shaped by genre, stylistic trends, personal choices,
improvisation, and the unique socio-cultural setting where musicians operate.
      </p>
      <p>To address this complexity, we propose an ethnographic approach that foregrounds the
situated nature of music composition. This approach will delve into both the composers’
personal motivations (e.g., creative aspirations and career goals) and the socio-cultural context
that influences their work. Ethnography allows us to explore these aspects of music creation and
the lived experiences of composers, and has already been applied and discussed as a method for
AI research [21, 22, 23, 24, 25]. This feeds into a broader discussion about the need to integrate
the social sciences into AI research: an exclusive reliance on quantitative methods lacks a
socio-technical perspective, and their uncritical, positivist use often ignores the context and
causes that lead to a certain outcome and the ways in which it occurs [26, 21, 27]. By employing
ethnography, we look at users not simply as passive
recipients of technology, but as active agents who shape its context, meanings, and consequences
[28]. This approach emphasizes the context-dependent nature of music composition, laying the
groundwork for designing synergistic human-AI systems that are sensitive to the nuances of
human creativity and to the specific settings where music is created.</p>
      <p>Our research aims to answer the following provisional questions:
• Can the use of Generative AI systems actually be considered a collaboration?
• How does the specific context of music creation influence decision-making, workflows,
and creative choices when working with AI?
• How might collaboration with Generative AI systems redefine the roles of composers in
the creative process of composition?
• How do composers and AI negotiate creative control and authorship within this
collaboration?</p>
      <p>To answer these questions, we propose an ethnographic approach combining semi-structured
interviews and participant observation. We will interview 18 musicians with experience in
composing for diverse genres, exploring topics like:
• Motivations, aspirations, creative sensibilities and personal workflow
• The role of context in their music creation process
• Experiences with AI tools and their perception of AI’s role
• How users perceive their own role evolving in collaboration with AI</p>
      <p>Following the interviews, we will conduct at least 60 hours of participant observation to
immerse ourselves in the composers' practice.</p>
      <p>In summary, we aim to shed light on the situated practice of composing music and on how
the broader context influences the collaborative process. We do this to inform the design of
human-AI collaborative systems that support and empower, not replace, musicians in their
creative practices. Additionally, this study can offer insights into human-AI collaboration
beyond music, contributing to uncovering design patterns for systems that are synergistic with
human capabilities and responsive to the specific context in which they are used.</p>
      <sec id="sec-5">
        <title>3.1. Preliminary findings</title>
        <p>Our analysis of initial interviews reveals some preliminary findings:</p>
        <p>(a) Intentionality Shapes and Gives Meaning to Music: Composers' creative intentions
significantly impact their approach to composition. For example, whether they start with a
melody, harmony, or specific sound (timbre) often depends on what they want to convey. While
the existing literature acknowledges the link between intention, control, and creative agency,
our study highlights the specific link between intention and a deeper meaning-making process,
where composers strive to construct a “coherent discourse” through their music. How could we
design human-AI collaborative systems that support and adapt to user intentions?</p>
        <p>(b) Music is already collaborative: Music composition and production are often already
collaborative processes. From bandmates to collaborators, clients, and sound engineers, various
stakeholders contribute to and have an interest in the final product. This raises the question of
whether AI systems should be designed to enhance these existing human-human interactions,
or whether they themselves should become an additional collaborator within a system in which
creative control is already dynamically distributed and negotiated.</p>
        <p>More findings will be shared during the workshop.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] W. Xu, Toward human-centered AI: A perspective from human-computer interaction, Interactions 26 (2019) 42–46. doi:10.1145/3328485.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] B. Shneiderman, Human-Centered AI, Oxford University Press, Oxford, 2022.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] T. Capel, M. Brereton, What is Human-Centered about Human-Centered AI? A Map of the Research Landscape, in: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, CHI '23, Association for Computing Machinery, Hamburg, Germany, 2023, p. 23. doi:10.1145/3544548.3580959.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] L. G. Terveen, Overview of human-computer collaboration, Knowledge-Based Systems 8 (1995) 67–81. doi:10.1016/0950-7051(95)98369-H.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] Z. Wu, D. Ji, K. Yu, X. Zeng, D. Wu, M. Shidujaman, AI Creativity and the Human-AI Co-creation Model, in: M. Kurosu (Ed.), Human-Computer Interaction. Theory, Methods and Tools, volume 12762 of Lecture Notes in Computer Science, Springer International Publishing, Cham, 2021, pp. 171–190. doi:10.1007/978-3-030-78462-1_13.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[6] A. Sarkar, Enough With “Human-AI Collaboration”, in: Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems, CHI EA '23, Association for Computing Machinery, Hamburg, Germany, 2023, p. 8. doi:10.1145/3544549.3582735.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[7] B. Latour, Reassembling the Social: An Introduction to Actor-Network-Theory, Clarendon Lectures in Management Studies, Oxford University Press, Oxford, 2005.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[8] M. L. Gray, S. Suri, Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass, Houghton Mifflin Harcourt, Boston, 2019.</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[9] A. Rapp, A. Boldi, L. Curti, A. Perrucci, R. Simeoni, Collaborating with a Text-Based Chatbot: An Exploration of Real-World Collaboration Strategies Enacted during Human-Chatbot Interactions, in: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, CHI '23, Association for Computing Machinery, New York, NY, USA, 2023, pp. 1–17. doi:10.1145/3544548.3580995.</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>[10] L. A. Suchman, Plans and Situated Actions: The Problem of Human-Machine Communication, Cambridge University Press, USA, 1987.</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>[11] C. Hernandez-Olivan, J. R. Beltrán, Music Composition with Deep Learning: A Review, in: A. Biswas, E. Wennekes, A. Wieczorkowska, R. H. Laskar (Eds.), Advances in Speech and Music Technology: Computational Aspects and Applications, Signals and Communication Technology, Springer International Publishing, Cham, 2023, pp. 25–50. doi:10.1007/978-3-031-18444-4_2.</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>[12] R. Fiebrink, D. Trueman, C. Britt, M. Nagai, K. Kaczmarek, M. Early, M. R. Daniel, A. Hege, P. Cook, Toward Understanding Human-Computer Interaction In Composing The Instrument, in: Proceedings of the International Computer Music Conference, International Computer Music Association, New York, 2010, pp. 135–142.</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>[13] Y. Wu, N. Bryan-Kinns, Supporting Non-Musicians' Creative Engagement with Musical Interfaces, in: Proceedings of the 2017 ACM SIGCHI Conference on Creativity and Cognition, C&amp;C '17, Association for Computing Machinery, New York, NY, USA, 2017, pp. 275–286. doi:10.1145/3059454.3059457.</mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>H.</given-names>
            <surname>Scurto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Bevilacqua</surname>
          </string-name>
          ,
          <article-title>Appropriating Music Computing Practices Through Human-AI Collaboration</article-title>
          ,
          <source>in: Journées d'Informatique Musicale (JIM 2018)</source>
          , Amiens, France,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>M.</given-names>
            <surname>Newman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Morris</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. H.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <article-title>Human-AI Music Creation: Understanding the Perceptions and Experiences of Music Creators for Ethical and Productive Collaboration</article-title>
          ,
          <source>in: Proc. of the 24th Int. Society for Music Information Retrieval Conf</source>
          .,
          <source>International Society for Music Information Retrieval</source>
          , Milan, Italy,
          <year>2023</year>
          , pp.
          <fpage>80</fpage>
          -
          <lpage>88</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>C.-Z. A.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. V.</given-names>
            <surname>Koops</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Newton-Rex</surname>
          </string-name>
          ,
          <article-title>AI Song Contest: Human-AI Co-creation in Songwriting</article-title>
          ,
          <source>in: Proc. of the 21st Int. Society for Music Information Retrieval Conf</source>
          .,
          <source>International Society for Music Information Retrieval</source>
          , Montréal, Canada,
          <year>2020</year>
          , pp.
          <fpage>708</fpage>
          -
          <lpage>716</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>R.</given-names>
            <surname>Louie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Coenen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. Z.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Terry</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. J.</given-names>
            <surname>Cai</surname>
          </string-name>
          ,
          <article-title>Novice-AI Music Co-Creation via AI-Steering Tools for Deep Generative Models</article-title>
          ,
          <source>in: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, CHI '20</source>
          ,
          Association for Computing Machinery, Honolulu, HI, USA,
          <year>2020</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>13</lpage>
          . doi:10.1145/3313831.3376739.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>M. M.</given-names>
            <surname>Suh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Youngblom</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Terry</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. J.</given-names>
            <surname>Cai</surname>
          </string-name>
          ,
          <article-title>AI as Social Glue: Uncovering the Roles of Deep Generative AI during Social Music Composition</article-title>
          ,
          <source>in: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, CHI '21</source>
          ,
          ACM
          , Yokohama, Japan,
          <year>2021</year>
          , p.
          <fpage>11</fpage>
          . doi:10.1145/3411764.3445219.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>M.</given-names>
            <surname>Civit</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Civit-Masot</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Cuadrado</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. J.</given-names>
            <surname>Escalona</surname>
          </string-name>
          ,
          <article-title>A systematic review of artificial intelligence-based music generation: Scope, applications, and future trends</article-title>
          ,
          <source>Expert Systems with Applications</source>
          <volume>209</volume>
          (
          <year>2022</year>
          )
          <elocation-id>118190</elocation-id>
          . doi:10.1016/j.eswa.2022.118190.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>O.</given-names>
            <surname>Bown</surname>
          </string-name>
          ,
          <article-title>Sociocultural and Design Perspectives on AI-Based Music Production: Why Do We Make Music and What Changes if AI Makes It for Us?</article-title>
          , in: E. R. Miranda (Ed.),
          <source>Handbook of Artificial Intelligence for Music</source>
          , Springer, Cham,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>