<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Learning AI with Music: A Sound-based Dissemination Activity for High School Students</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Guido Vallarino</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Giorgio Delzanno</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Giovanna Guerrini</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Lorenzo Luciano Morelato</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Matteo Moro</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Università degli Studi di Genova</institution>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <abstract>
<p>We present a dissemination activity, developed as an interactive laboratory for high school students, aimed at introducing the basic principles of Artificial Intelligence and Machine Learning using music as an entry point. The activity is designed to be engaging, leveraging students’ familiarity with musical instruments to explain how machines learn from data. Participants are invited to record sounds of real instruments, which are then used to train a classifier in real time, showing how a machine can learn to recognize different types of input data. We then briefly introduce current music creation tools based on Machine Learning, such as splitters that can separate individual instruments from a mixed track. A second part of the activity focuses on Generative AI, introducing its core ideas through the parallels between text and audio signals and through a musical Turing test: students are asked to distinguish between songs played by human musicians and others generated by AI models. We report the results collected from the hundreds of students who have already participated, along with the educational objectives of the activity, which include stimulating curiosity and critical thinking about the field and its possible future impacts.</p>
      </abstract>
      <kwd-group>
        <kwd>Artificial Intelligence</kwd>
        <kwd>Machine Learning</kwd>
        <kwd>Generative AI</kwd>
        <kwd>Creativity</kwd>
        <kwd>Music Production</kwd>
        <kwd>Education</kwd>
        <kwd>Critical Thinking</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Background The adoption of tools based on Artificial Intelligence (AI) has become a concrete reality
within the music industry, a field undergoing constant transformation. Beyond the well-known
recommendation systems used by streaming platforms, which significantly influence how music is
consumed by listeners, recent advancements in Machine Learning (ML) have introduced new support
systems for musicians and producers and brought significant ethical and economic implications for
the music production industry, see [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ]. AI systems such as OpenAI’s MuseNet, Google’s Magenta,
and Amper Music can analyze vast datasets of existing music to learn complex patterns and styles,
resulting in the ability to generate music that can mimic human creativity. These tools are capable of
assisting in various tasks related to music production that were previously far more labor-intensive and
time-consuming. They have unlocked opportunities that were previously unimaginable, given the
limitations of earlier technologies, which were either inaccurate or ineffective. A notable example is the
development of stem splitters, see e.g. Spleeter, based on TensorFlow [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], which can isolate individual
instruments or vocals from a mixed audio track, something that was extremely difficult to
achieve using traditional methods. Another important innovation is the evolution of auto-tuners and
tempo-correction tools used in recording studios: today, these tools are significantly faster and
more accurate at recognizing melodies and rhythms, allowing producers to correct imperfections in a
singer’s or musician’s performance with greater efficiency than ever before.
      </p>
      <p>
        Perhaps with an even greater impact on large-scale music production, Generative AI enables the
creation of new compositions with minimal input. Similarly to GPT models that break down text into
tokens, music sheets or wave files can be broken down into discrete units that AI algorithms can process
[
        <xref ref-type="bibr" rid="ref4 ref5 ref6">4, 5, 6</xref>
        ]. The AI algorithms can then predict the next likely token in a sequence, similar to how the
next word in a sentence is predicted for text generation. Furthermore, tools based on Generative AI,
such as Suno, ElevenMusic, Soundraw, and many others, now enable the creation of original music
directly from text prompts, making music composition accessible to everyone. AI-based tools have also given
rise to major controversies over the intellectual property rights of generated compositions, see e.g. [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]
and the 2023 lawsuit against Anthropic by several music publishers [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
      <p>Motivations and Goals Given the rich and complex scenario outlined above, creating music seems
a highly relevant and timely way to disseminate AI methods and tools. The main objective of our project
is to use these innovative tools alongside targeted educational resources. The idea is to offer students
a general introduction to Artificial Intelligence through music, a topic that naturally sparks their interest.
This stimulates curiosity and encourages critical thinking, ideally equipping participants with the
foundational skills needed to analyze and forecast the potential impacts of the current AI revolution and,
at the same time, to reason about ethical and legal issues. For this purpose, we use the concrete and engaging
example of the artistic field of music production. In the rest of the paper we describe the format of our
dissemination lab and present preliminary results obtained via questionnaires given to participants
of different editions, proposed as an orientation activity for the Computer Science degree courses of our
University. The lab proposal has also been accepted at the 2025 edition of Pisa’s Internet Festival
and at Genoa’s Science Festival, which will take place between October and November 2025.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Activity overview</title>
      <p>The general structure and plan of the lab are summarized in Table 1. In the rest of the section we
describe each part in detail.</p>
      <p>First part: AI basics explanation The activity is designed to be presented as a live show. Visitors
are welcomed while a small group of two or three musicians performs a song (for instance, using two
guitars and an electric bass) over a backing track (which might include drums and a lead vocal).</p>
      <p>Only at the end of the performance, once all visitors have taken their seats, will the surprise be
revealed: the song they just heard was actually “composed” using generative AI. From there, the session
will continue with an exploration of the principles behind AI-based and generative AI tools, as well as
practical demonstrations showing how these technologies can be useful to musicians in their creative
and production processes.</p>
      <p>To begin, visitors are invited to reflect on how they themselves learned to recognize different things,
focusing specifically on the example of musical instruments. Most likely, they first heard the sound of
an instrument and, only afterwards, someone told them its name. After hearing just a few examples,
they were able to identify that instrument among others.</p>
      <p>At this point, the facilitator (the person guiding and explaining the activity) plays sounds from
well-known and easily recognizable instruments, asking the audience to guess their names (for example,
an acoustic guitar or a drum kit). After a series of such examples, for which the audience can easily
provide the correct answer, the facilitator suddenly plays the sound of a less familiar instrument, such
as an oboe or a bassoon. At this moment, the facilitator (or, if a classical musician happens to be in the audience,
they might step in) reveals the instrument’s name. The audience is then given a couple more listening
examples of this “new” instrument, while the facilitator points out that they are now more likely to
recognize it, having just learned to distinguish it from others.
Footnotes: Suno: https://suno.com/home; ElevenLabs: https://elevenlabs.io/; Soundraw: https://soundraw.io/</p>
      <p>[Table 1: activity plan (partially recovered). Inference (hearing accuracy and precision): participants test the classifier and discuss the outcomes of the experiment (scores, errors, etc.). Generative AI: example tool, a music STEM splitter based on ML; introduction to the basic concepts of Generative AI, its difference from non-generative AI, and the features of its output; Turing test: the audience tries to distinguish a recorded song from others generated with AI; discussion of the results and of the impact of AI and Gen-AI in our lives.]</p>
      <p>This exercise is designed to mirror the process of supervised learning, in which a machine, much like humans, acquires
knowledge through examples. It then serves as a bridge to introduce a simplified, accessible explanation
of how computers use supervised machine learning to identify patterns and make predictions: there is
a problem to solve (for example: given a sound, which instrument is producing it) and the machine
attempts to provide an answer based on the features it has learned from previous examples of this same
problem. During the training phase, it is “fed” with annotated data samples (in our case, recordings of
diferent instruments labeled with their correct names) so that it can learn the patterns that distinguish
each instrument’s sound.</p>
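      <p>The training-then-inference loop just described (annotated samples in, learned patterns out, then prediction) can be sketched in a few lines of code. This is a deliberately minimal illustration, not the lab’s actual software: the two feature values per recording and the instrument names are invented for the example.</p>

```python
import numpy as np

# Training phase: annotated samples. Each recording is reduced to two made-up
# features (say, brightness and attack sharpness), labeled with its instrument.
training_data = {
    "guitar": np.array([[0.20, 0.80], [0.25, 0.75], [0.30, 0.70]]),
    "drums":  np.array([[0.90, 0.10], [0.85, 0.20], [0.80, 0.15]]),
}

# "Training" for this toy model is simply averaging each class's features,
# i.e. learning one prototype pattern per instrument.
centroids = {name: samples.mean(axis=0) for name, samples in training_data.items()}

def classify(sound_features):
    """Inference: return the instrument whose learned pattern is closest."""
    return min(centroids, key=lambda name: np.linalg.norm(centroids[name] - sound_features))

print(classify(np.array([0.22, 0.78])))  # a new, guitar-like sound -> "guitar"
```

      <p>A real classifier learns far richer features from audio, but the structure is the same one the participants experience: labeled examples, a training phase, and an inference phase.</p>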
      <p>
        Next, a hands-on example of training a simple model is provided following a more human-centered
approach for a more empirical interactive explanation of ML [
        <xref ref-type="bibr" rid="ref8 ref9">8, 9</xref>
        ] by inviting volunteers from the
audience. They can choose either to play one of the instruments introduced at the start
of the activity, or to try out a selection of other instruments meant to spark curiosity, such as small percussion
instruments, unique ethnic instruments (e.g. Tibetan singing bowls or maracas), or even DIY instruments
built using Makey Makey [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. The sounds they produce are recorded and used to train a basic algorithm
via Google’s Teachable Machine [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], chosen for its intuitive graphical interface, which makes every
step of both the training and inference process easy to understand. During the brief technical pause
required for training, the facilitator explains what the neural network is doing: namely, identifying
which features are most likely to correspond to one instrument or another. Once the inference phase
begins, the system demonstrates how, for each input sample, it estimates the probability that the
sound belongs to one of the trained instrument categories. The audience is also shown what happens
when the model encounters an instrument it has never “seen” before, causing it to make incorrect or
uncertain predictions. This provides a perfect opportunity to explain the challenges of creating large
and comprehensive datasets, even for relatively simple classification problems, and how this issue
becomes even more significant for complex, real-world scenarios, where training can require substantial
computational resources.
      </p>
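      <p>The per-class probabilities shown during the inference phase, and the uncertain predictions on an instrument the model has never “seen”, can be mimicked with a small sketch. The class names and raw scores below are invented for illustration; they are not taken from Teachable Machine’s internals.</p>

```python
import numpy as np

classes = ["guitar", "drums", "maracas"]

def softmax(logits):
    # Convert raw model scores into probabilities that sum to 1.
    e = np.exp(logits - np.max(logits))  # shift for numerical stability
    return e / e.sum()

confident = softmax(np.array([4.0, 0.5, 0.2]))  # input close to the training data
uncertain = softmax(np.array([1.1, 1.0, 0.9]))  # unseen instrument: flat scores

print(dict(zip(classes, confident.round(2))))  # one class clearly dominates
print(dict(zip(classes, uncertain.round(2))))  # no class stands out
```

      <p>The second output is the situation the audience observes live: when no learned pattern matches, the probability mass spreads almost evenly across the trained categories.</p>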
      <p>Second part: tools based on AI and Generative AI In the second half of the activity, after providing
a general explanation of what all supervised machine learning systems have in common (particularly in
the context of classification), we focus on showcasing some useful and visually engaging applications.</p>
      <p>
        We begin by presenting a STEM track separator: given the audio file of a pop or light music track, it
splits the song into separate components (drums, bass, vocals, and the rest of the arrangement). The
audience learns that these tools work by analyzing the song’s spectrogram and applying an algorithm
trained on a dataset containing pre-separated tracks. This essentially allows the AI to recognize the
typical frequency ranges associated with each instrument [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. A famous song is first played (e.g. Hit
the Road Jack), and then the algorithm is used to separate it into its individual stems. At this point,
to keep the session entertaining and performance-oriented, the theoretical explanation is paused: the
musicians who performed at the start of the event take the stage again, this time playing live along with
the AI-separated tracks. For example, if the musicians play bass and guitar, the drums and vocals come
from the original track, isolated by the algorithm, while the live instruments are performed in real time.
      </p>
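      <p>The spectrogram analysis mentioned above can be illustrated with a short, self-contained sketch: slice the signal into windows, Fourier-transform each one, and read off which frequency bands dominate. Real separators such as Spleeter learn masks over such spectrograms; this example only computes the time-frequency picture itself, using two synthetic tones standing in for “bass” and “vocals”.</p>

```python
import numpy as np

def spectrogram(signal, window=256, hop=128):
    # Short-time Fourier transform: windowed frames, magnitude of the FFT.
    frames = [signal[i:i + window] * np.hanning(window)
              for i in range(0, len(signal) - window + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1))  # rows = time, cols = frequency

sr = 8000
t = np.arange(sr) / sr
# A "bass" tone at 125 Hz mixed with a quieter "vocal" tone at 1000 Hz.
mix = np.sin(2 * np.pi * 125 * t) + 0.8 * np.sin(2 * np.pi * 1000 * t)

spec = spectrogram(mix)
freqs = np.fft.rfftfreq(256, 1 / sr)
peaks = freqs[spec.mean(axis=0).argsort()[-2:]]  # two dominant frequency bins
print(sorted(peaks))  # recovers the two components' bands: [125.0, 1000.0]
```

      <p>A learned separator goes one step further: instead of merely locating these bands, it predicts, for every time-frequency cell, how much of the energy belongs to each stem.</p>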
      <p>The theoretical segment wraps up with a brief introduction to Generative AI. Using the same
explanatory framework applied earlier to define AI, we clarify that generative AI learns patterns from
its training data and then produces new examples that follow those patterns. Continuing with the
musical analogy, if the AI is trained on a dataset of blues songs, it can generate entirely new blues
tracks that stylistically match the originals.</p>
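      <p>This learn-patterns-then-generate loop can be made concrete with the simplest possible generative model: a first-order Markov chain over note names. The two short “blues riffs” used as training data are invented for this illustration; real systems use neural networks over far richer token vocabularies, but the idea of sampling a likely next token is the same.</p>

```python
import random
from collections import defaultdict

training_riffs = [
    ["E", "G", "A", "E", "G", "A", "B", "A"],
    ["E", "G", "A", "B", "A", "G", "E", "E"],
]

# "Training": record which note tends to follow which in the examples.
transitions = defaultdict(list)
for riff in training_riffs:
    for current, nxt in zip(riff, riff[1:]):
        transitions[current].append(nxt)

def generate(start="E", length=8, seed=0):
    """Generation: repeatedly sample a likely next token, GPT-style."""
    rng = random.Random(seed)
    riff = [start]
    for _ in range(length - 1):
        riff.append(rng.choice(transitions[riff[-1]]))
    return riff

print(generate())  # a new riff that follows the learned note-to-note patterns
```

      <p>The generated riff is new, yet every note transition in it was observed in the training examples, which is exactly the stylistic-matching behavior described above.</p>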
      <p>
        Finally, we return to a possible definition of AI [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], describing it, paraphrasing Alan Turing’s thoughts
in [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], as an artificial system capable of exhibiting intelligent behavior (that is, behavior
comparable to human actions). At this point, the audience is introduced to the Turing Test, which
evaluates a system’s ability to appear human.
      </p>
      <p>An interactive version of this test is then carried out with the participants. They are asked to listen to
four short instrumental blues tracks (A,B,C,D): one (B) is an original recording performed by us, while
the other three were generated by AI. Using a platform such as Wooclap, the audience votes on which
track they believe was the genuine human performance.</p>
      <p>Epilogue As discussed above, the Turing test is usually failed by the
majority of the audience. At this point, to conclude the activity, the audience is asked: What does this
all mean? The discussion then reveals that AI has already reached capabilities comparable to the human
level in certain tasks. This means that our focus should not be on what might happen in the future
but rather on what is already happening now. These tools are designed to serve humans, and it is our
responsibility to understand them so that we can decide how best to use them.</p>
      <p>In fact, during our trials with professional musicians (about ten attempts in total), they were
consistently able to identify which track was played by humans. Almost all of them pointed out that what
is missing from AI-generated music is the subtle “human touch”: interpretation, nuance, and even
imperfections. This leads to our final invitation to the audience: use these tools to better understand
what your own “added touch” is, the unique quality you can bring to your work.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Educational goals and preliminary results</title>
      <p>The main educational goals of the activity are to provide participants with a foundational understanding
of AI, including basic definitions. Participants are also guided to develop an intuitive grasp of core
Machine Learning concepts such as models, training, inference, and supervised learning. The hands-on
experiment using Google’s Teachable Machine introduces the principles of feature extraction, labeling,
and model evaluation through real-time testing. The use of pre-trained models for music STEM
separation serves to illustrate practical applications of deep learning in music processing. Finally,
the exploration of generative models for the creation of musical content fosters critical reflection
on creativity, randomness, and pattern learning in AI, encouraging participants to evaluate both the
potential and the limitations of these technologies. In general, the lab promotes curiosity-driven learning
through experimentation and provides an opportunity to engage with real-world AI tools that integrate
concepts from multiple disciplines.</p>
      <p>
        The activity was carried out during several editions of the laboratory, involving high school classes
from various areas of the Liguria and Tuscany regions. As suggested in [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ], throughout the sessions
students were asked to share their thoughts and emotions related to recent news about Artificial
Intelligence. Their responses were anonymously collected through the Wooclap platform and visualized
as a word cloud. A preliminary analysis revealed that the most frequent keywords were curiosity (which
consistently emerged as the dominant term in all editions), interest, but also concern, fear, and anxiety.
These results show the importance of outreach and educational initiatives such as this one, which
can help students better understand and critically approach the ongoing digital revolution. The other
interesting statistics derived from the Wooclap questionnaires concern the results of the interactive
Turing test. In all editions, the majority of participants were unable to correctly identify the track
performed by humans among those generated by AI. In the first three editions the correct answer (song B)
received on average 35% of the responses, with song C being the most voted answer (40%).
The percentage of wrong answers (in particular, for songs C and D) increased in the later editions of
the lab. In some editions we also introduced an additional test using three AI-generated trap songs and one
song by a minor artist. The real song turned out to be indistinguishable from the generated ones (it received
only around 5% of the votes), thus showing the possible impact of generative AI in music production.
      </p>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusion</title>
      <p>
        Generative algorithms (aside from ongoing debates over copyright on the music they are trained on)
can be useful for artists. However, the true artistic value lies in what a human wants to communicate
to another human: the primary goal of these algorithms is essentially to “pass” the Turing test [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ].
Yet the cognitive process that leads to something produced by generative AI has nothing in
common with the human reasoning behind a creative work. Their effectiveness is evaluated solely on
the outcome, that is, how convincingly human-like the result appears, rather than on the process
that produces it. This is why it is important to develop a solid understanding of these systems: to be
aware of their limitations and, consequently, to identify the contexts in which they can be considered
reliable tools, and those where they cannot.
      </p>
      <p>
        In future editions of the activity we are considering introducing an activity entirely dedicated to
AI-based signal processing, using tools such as ML-machine1 and Vitta Science AI2. Furthermore, we
plan to introduce a more explanatory activity on generative AI using Google’s MusicVAE [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] for a
better understanding of how generative models work. Moreover, we plan to develop an
initial and a final questionnaire to gather information on the activity’s outcomes, evaluating both the
content learned and the audience’s emotional response. This will allow us to perform a meaningful
analysis after several editions of the activity, once we have a larger statistical sample.
Acknowledgment This research has received funding from the EU program NextGenerationEU and
the Ministry of University and Research, National Recovery and Resilience Plan, Mission 4, Component
2, Investment 1.5, project RAISE (ECS00000035).
      </p>
    </sec>
    <sec id="sec-5">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the authors used ChatGPT and DeepL in order to paraphrase,
reword, and translate text. After using these tools, the authors reviewed and edited the content as
needed and take full responsibility for the publication’s content.
1: https://ml-machine.org/
2: https://it.vittascience.com/ai</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>S.</given-names>
            <surname>Gera</surname>
          </string-name>
          ,
          <article-title>The impact of Artificial Intelligence on music production: Creative potential, ethical dilemmas, and the future of the industry</article-title>
          ,
          <year>2025</year>
          . https://nhsjs.com/
          <year>2025</year>
          /.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>G.</given-names>
            <surname>Chou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Lang</surname>
          </string-name>
          ,
          <article-title>The sound shift: How Generative AI is redefining the music industry's business model</article-title>
          ,
          <year>2024</year>
          . https://www.artefact.com/blog/.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>R.</given-names>
            <surname>Hennequin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Khlif</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Voituret</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Moussallam</surname>
          </string-name>
          ,
          <article-title>Spleeter: a fast and efficient music source separation tool with pre-trained models</article-title>
          ,
          <source>Journal of Open Source Software</source>
          <volume>5</volume>
          (
          <year>2020</year>
          )
          <fpage>2154</fpage>
          . URL: https://doi.org/10.21105/joss.02154. doi:10.21105/joss.02154. Deezer Research.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>A.</given-names>
            <surname>Agostinelli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. I.</given-names>
            <surname>Denk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Borsos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Engel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Verzetti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Caillon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Jansen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Roberts</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Tagliasacchi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Sharifi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Zeghidour</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Frank</surname>
          </string-name>
          , Musiclm: Generating music from text,
          <year>2023</year>
          . URL: https://arxiv.org/abs/2301.11325. arXiv:2301.11325.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>C. A.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Vaswani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Uszkoreit</surname>
          </string-name>
          , I. Simon,
          <string-name>
            <given-names>C.</given-names>
            <surname>Hawthorne</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Shazeer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. M.</given-names>
            <surname>Dai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. D.</given-names>
            <surname>Hofman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Dinculescu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Eck</surname>
          </string-name>
          ,
          <article-title>Music transformer: Generating music with long-term structure</article-title>
          ,
          <source>in: 7th International Conference on Learning Representations, ICLR</source>
          <year>2019</year>
          ,
          <article-title>New Orleans</article-title>
          , LA, USA, May 6-
          <issue>9</issue>
          ,
          <year>2019</year>
          , OpenReview.net,
          <year>2019</year>
          . URL: https://openreview.net/forum?id=rJe4ShAcF7.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>J.</given-names>
            <surname>Copet</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Kreuk</surname>
          </string-name>
          , I. Gat,
          <string-name>
            <given-names>T.</given-names>
            <surname>Remez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Kant</surname>
          </string-name>
          , G. Synnaeve,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Adi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Défossez</surname>
          </string-name>
          ,
          <article-title>Simple and controllable music generation</article-title>
          , in: A.
          <string-name>
            <surname>Oh</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          <string-name>
            <surname>Naumann</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Globerson</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          <string-name>
            <surname>Saenko</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <string-name>
            <surname>Hardt</surname>
          </string-name>
          , S. Levine (Eds.),
          <source>Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems</source>
          <year>2023</year>
          , NeurIPS
          <year>2023</year>
          , New Orleans, LA, USA, December
          <volume>10</volume>
          -
          <issue>16</issue>
          ,
          <year>2023</year>
          ,
          <year>2023</year>
          . URL: http://papers.nips.cc/paper_files/paper/2023/hash/94b472a1842cd7c56dcb125fb2765fbd-Abstract-Conference.html.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>W.</given-names>
            <surname>Tan</surname>
          </string-name>
          ,
          <article-title>Defining authorship for the copyright of AI-Generated music</article-title>
          ,
          <year>2024</year>
          . https://www.artefact.com/blog/the-sound-shift-how-generative-ai-is-redefining-the-music-industrys-business-model/.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>J. A.</given-names>
            <surname>Fails</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. R.</given-names>
            <surname>Olsen</surname>
          </string-name>
          <string-name>
            <surname>Jr,</surname>
          </string-name>
          <article-title>Interactive machine learning</article-title>
          ,
          <source>in: Proceedings of the 8th international conference on Intelligent user interfaces</source>
          ,
          <year>2003</year>
          , pp.
          <fpage>39</fpage>
          -
          <lpage>45</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>M.</given-names>
            <surname>Gillies</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Fiebrink</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Tanaka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Garcia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Bevilacqua</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Heloir</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Nunnari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Mackay</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Amershi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Lee</surname>
          </string-name>
          , et al.,
          <article-title>Human-centred machine learning</article-title>
          ,
          <source>in: Proceedings of the 2016 CHI conference extended abstracts on human factors in computing systems</source>
          ,
          <year>2016</year>
          , pp.
          <fpage>3558</fpage>
          -
          <lpage>3565</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>B. M. Collective</string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Shaw</surname>
          </string-name>
          ,
          <article-title>Makey makey: improvising tangible and nature-based user interfaces</article-title>
          ,
          <source>in: Proceedings of the sixth international conference on tangible, embedded and embodied interaction</source>
          ,
          <year>2012</year>
          , pp.
          <fpage>367</fpage>
          -
          <lpage>370</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>M.</given-names>
            <surname>Carney</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Webster</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Alvarado</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Phillips</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Howell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Griffith</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Jongejan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Pitaru</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <article-title>Teachable machine: Approachable web-based tool for exploring machine learning classification</article-title>
          ,
          <source>in: Extended abstracts of the 2020 CHI conference on human factors in computing systems</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>8</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Rafii</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Liutkus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.-R.</given-names>
            <surname>Stöter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. I.</given-names>
            <surname>Mimilakis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>FitzGerald</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Pardo</surname>
          </string-name>
          ,
          <article-title>An overview of lead and accompaniment separation in music</article-title>
          ,
          <source>IEEE/ACM Transactions on Audio, Speech, and Language Processing</source>
          <volume>26</volume>
          (
          <year>2018</year>
          )
          <fpage>1307</fpage>
          -
          <lpage>1335</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>S.</given-names>
            <surname>Russell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Norvig</surname>
          </string-name>
          ,
          <source>Artificial Intelligence: A Modern Approach</source>
          , 3rd ed., Prentice Hall Press, USA,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>A. M.</given-names>
            <surname>Turing</surname>
          </string-name>
          ,
          <article-title>Computing machinery and intelligence</article-title>
          ,
          <source>in: Parsing the Turing test: Philosophical and methodological issues in the quest for the thinking computer</source>
          , Springer,
          <year>2007</year>
          , pp.
          <fpage>23</fpage>
          -
          <lpage>65</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>L.</given-names>
            <surname>Cesaro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Dodero</surname>
          </string-name>
          ,
          <article-title>Generazione digitale, ma non consapevole: giovani e IA fra percezioni e pratiche</article-title>
          ,
          <source>in: Atti del Convegno Italiano sulla Didattica dell'Informatica (ITADINFO 2025)</source>
          ,
          <year>2025</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>L.</given-names>
            <surname>Floridi</surname>
          </string-name>
          ,
          <source>The ethics of artificial intelligence: Principles, challenges, and opportunities</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>A.</given-names>
            <surname>Roberts</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Engel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Raffel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Hawthorne</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Eck</surname>
          </string-name>
          ,
          <article-title>A hierarchical latent vector model for learning long-term structure in music</article-title>
          ,
          <source>in: International conference on machine learning, PMLR</source>
          ,
          <year>2018</year>
          , pp.
          <fpage>4364</fpage>
          -
          <lpage>4373</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>