<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Prototype of Accessible Digital Musical Instruments for Musical Inclusion</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Vanessa Faschi</string-name>
          <email>vanessa.faschi@unimi.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <role>Ph.D. Student</role>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Laboratory of Music Informatics (LIM), Dept. of Computer Science, University of Milan</institution>
          ,
          <addr-line>via G. Celoria 18, Milan</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <fpage>6</fpage>
      <lpage>10</lpage>
      <abstract>
        <p>Accessible digital musical instruments (ADMIs) are a promising approach to music-making, demonstrating that technology can play a transformative role in promoting inclusivity in musical practice. In this paper, the author's industrial Ph.D. project is presented: a prototype of an ADMI, developed together with the company Audio Modeling and the founder of Musica Senza Confini. The prototype was tested in live ensemble performances involving both users with disabilities and professional musicians, yielding encouraging and insightful results regarding its practical applicability and musical integration. This paper contributes to the broader conversation on inclusion in digital music-making and offers a concrete step toward the development of tools that accommodate diversity without compromise.</p>
      </abstract>
      <kwd-group>
        <kwd>accessibility</kwd>
        <kwd>usability</kwd>
        <kwd>digital music instruments</kwd>
        <kwd>accessible digital musical instruments</kwd>
        <kwd>human-computer interaction</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        In the field of Human-Computer Interaction (HCI), music plays an important role in the form of
digital instruments, with or without a dedicated interface. Music also plays a crucial role in the
interaction between humans and computers when this union intentionally generates sound. Adding the
aspects of accessibility and inclusivity leads directly to accessible digital musical instruments
(ADMIs), intended as accessible musical control interfaces used in electronic music, inclusive music
practice, and music therapy settings [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. The intersection of HCI and ADMIs presents a
critical frontier for inclusive design and creative expression. As digital tools increasingly mediate musical
experiences, ensuring accessibility for people with disabilities has become both a technological and an
ethical imperative. Traditional music interfaces often assume visual, auditory, or motor capabilities that
exclude many potential users from full participation in music creation, education, and appreciation.
Recent advances in HCI offer promising avenues for addressing these challenges by reimagining how
people interact with music technologies. From gesture-based controllers and adaptive user interfaces to
haptic feedback and brain-computer interfaces, novel interaction paradigms are enabling more inclusive
access to musical expression. However, designing effective and accessible systems requires a deep
understanding of both the diverse needs of users and the advantages of emerging technologies. This
paper presents the author’s industrial Ph.D. project, a prototype of an ADMI developed together with
the company Audio Modeling and the founder of Musica Senza Confini. Called in its latest version
UniMIDIHub, it acts as a multilayer ecosystem designed to facilitate MIDI communication across multiple
platforms and devices. The remainder of the paper is structured as follows. Section 2 presents an
analysis of the scientific literature on the subject, uncovering prevailing trends and recent
innovations. Section 3 provides an in-depth look at UniMIDIHub, specifying the design principles of
the interface and the technical solutions implemented. Section 4 presents the observational studies
undertaken, detailing the research question, experimental protocol, testing procedures, and outcomes
derived from empirical data. Section 5 brings together the findings of the author and outlines the main
takeaways from the study.</p>
    </sec>
    <sec id="sec-2">
      <title>2. State of the Art</title>
      <p>
        The development of ADMIs has become a central concern in the intersection of music technology
and HCI. These systems aim to broaden access to musical expression for individuals with physical
or cognitive impairments, and have found applications in inclusive education, music therapy, and
performance contexts[
        <xref ref-type="bibr" rid="ref2">2</xref>
        ][
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Early innovations, such as the bioelectric controller by Knapp and Lusted[
        <xref ref-type="bibr" rid="ref4">4</xref>
        ],
laid the foundation for using alternative input modalities in musical control. More recent work has
proposed formal frameworks for evaluating ADMIs, such as the Dimension Space model[
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], and has
emphasized participatory and inclusive design practices[
        <xref ref-type="bibr" rid="ref5">5</xref>
        ][
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. Tangible User Interfaces (TUIs) represent
a widely explored category within ADMIs. These systems offer embodied interaction through physical
artifacts and have been used to support creativity, education, and rehabilitation. Examples include
magnetic-tag interfaces[
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], block-based systems like Block Jam[
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], and commercial platforms such
as the Reactable[
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. Gaze-based interaction has emerged as a powerful alternative for users with
severe motor impairments. Systems such as the EyeHarp[
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] and Kiroll[
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] enable musical control
through eye movements. However, these often depend on specialized hardware, posing accessibility
challenges in everyday contexts[
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. Additionally, the accessibility of mainstream music software
has gained attention. Studies on digital audio workstations (DAWs) have highlighted both progress
and ongoing limitations in screen reader support and UI design[
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. Research efforts increasingly
recognize the importance of inclusive design, not only for assistive tools but across the broader digital
music ecosystem[
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. In summary, ADMIs and related accessible technologies continue to evolve, with
research focusing on usability, adaptability, and creative empowerment across diverse user populations.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. An Inclusive and Accessible Music Software Instrument</title>
      <p>The project prototype presented in this paper, UniMIDIHub, the focus of the author’s research, is a flexible
software application developed to enable MIDI communication across multiple platforms. The concept
originated from the founder of Musica Senza Confini (a musical initiative aimed at children and adults
with psychophysical disabilities, focused on inclusive music making as part of its work on accessible
ensemble performance), who was tasked with creating performance strategies for a user with extremely
limited mobility, able to move only their fingers, toes, and eyes.</p>
      <p>Based on these assumptions, the research question underlying the research and subsequent
developments concerns whether the software project could be aimed not at individuals with a specific
disability or ability, but rather at everyone, regardless of their abilities. In fact, ADMIs
dedicated to a specific disability are more common than this approach. To follow this path, the research
framework planned to test the software on people with different types of disability, one at a time, and
then to conduct an initial observational study involving people with unique and different characteristics,
together. To understand how the software works, consider the scenario shown in Figure 1: the user plays a song
with UniMIDIHub, reproducing musical chords and sequences through the movement of their eyes, fingers,
and toes. The system leveraged an eye tracker, routinely used by the user for their daily activities, to let
them select colored pads on the screen, while micromuscular sensors triggered control actions mapped to
notes, chords, short MIDI sequences, or other sounds. Thanks to its configurable design, UniMIDIHub
can be adapted to different user needs and offers a broader selection of pieces to play.</p>
      <p>The software runs on a central computer, receives input from a variety of devices, and sends MIDI
messages to a digital audio workstation (DAW) hosted on a separate system. Within the DAW,
a multitrack project is loaded and each track can be assigned to a different input device, enabling
collaborative and inclusive musical performances.</p>
      <p>UniMIDIHub is developed in C++ with the JUCE framework and is compatible with both macOS
and Windows. It consists of a standalone user interface with configurable colored pads, ranging from 2
to 12 per screen (Figure 2), that can be activated through different types of input devices, facilitating
accessibility for diverse user needs.</p>
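      <p>As an illustrative sketch (not UniMIDIHub's actual code), the following C++ fragment shows how the MIDI routing described above could look in JUCE: the available MIDI outputs are enumerated, one is opened, and a note-on message is sent toward the DAW. The helper names are hypothetical.</p>
      <preformat>
// Minimal JUCE sketch of the routing described above: open a MIDI output
// and forward a note-on message toward a DAW. Illustrative only; device
// selection and error handling are simplified, and all names are hypothetical.
#include &lt;juce_audio_devices/juce_audio_devices.h&gt;
#include &lt;memory&gt;

std::unique_ptr&lt;juce::MidiOutput&gt; openFirstMidiOutput()
{
    auto devices = juce::MidiOutput::getAvailableDevices(); // enumerate MIDI outs
    if (devices.isEmpty())
        return nullptr;                                     // no output available
    return juce::MidiOutput::openDevice (devices[0].identifier);
}

void sendNoteOn (juce::MidiOutput&amp; out, int channel, int note, juce::uint8 velocity)
{
    // A pad activation ultimately results in a message like this one, which the
    // DAW receives on the track assigned to that input device.
    out.sendMessageNow (juce::MidiMessage::noteOn (channel, note, velocity));
}
</preformat>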
      <p>Through the Settings menu, each colored pad can be fully customized with a range of parameters
(a sketch of these parameters as a data structure follows the list):
• Input Type: Trigger, which activates the sound on mouse hover; Hold, which activates on hover and
deactivates when the cursor exits the area; and Latch, which toggles the sound, activating on hover and
deactivating on the next hover;
• Generated Action: assignable to a MIDI Note, MIDI Control Change, MIDI Chord, MIDI Sequence,
a sample, or a navigation command (e.g., page change);
• MIDI settings: channel, note, velocity, release-velocity values, chord properties, and adjustable
delay in milliseconds for both MIDI Note-On and Note-Off messages;
• Visuals: custom pad color;
• Input controls: two main keyboard shortcuts and an optional shortcut to repeat the last triggered
note;
• Output for the event: MIDI Out, which sends the message to a DAW to be played, or Sampler, an
additional view to set various types of music samples.</p>
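      <p>To make the parameter set concrete, the following C++ sketch models one pad's configuration as a plain data structure. The type and field names are hypothetical, chosen to mirror the Settings parameters above rather than UniMIDIHub's internal types.</p>
      <preformat>
// Hypothetical data model for a single pad, mirroring the Settings parameters.
#include &lt;juce_gui_basics/juce_gui_basics.h&gt;

enum class InputType  { Trigger, Hold, Latch };                  // hover behaviour
enum class ActionType { Note, ControlChange, Chord, Sequence, Sample, Navigation };
enum class OutputTarget { MidiOut, Sampler };                    // DAW or internal sampler

struct PadConfig
{
    InputType      inputType       = InputType::Trigger;
    ActionType     action          = ActionType::Note;
    int            midiChannel     = 1;                          // 1-16
    int            noteNumber      = 60;                         // e.g. middle C
    int            velocity        = 100;                        // note-on velocity
    int            releaseVelocity = 0;                          // note-off velocity
    int            noteOnDelayMs   = 0;                          // adjustable delays (ms)
    int            noteOffDelayMs  = 0;
    juce::Colour   colour          = juce::Colours::orange;      // custom pad color
    juce::KeyPress shortcut1, shortcut2, repeatShortcut;         // keyboard shortcuts
    OutputTarget   output          = OutputTarget::MidiOut;
};
</preformat>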
      <p>In the Settings menu it is also possible to switch between “Main Pads” (the colored ones) and “Aux
Pads”, additional configurable pads that function identically to the main ones but are hidden
from the main interface. These pads are triggered exclusively via keyboard shortcuts. In this way,
any external device that can be mapped to keyboard keys, such as video game controllers, extendable
keyboards, and accessible devices, can “play” UniMIDIHub (a sketch of this dispatch mechanism closes
this section). For every setup, it is possible to save, load, and reset the configuration, and numerous
setups can be saved and then loaded at performance time.</p>
      <p>There are two possible screen views. The “Eye Tracker” Mode (see Figure 2) provides a rectangular
area with rounded corners in the center of the screen, serving as a rest area for the gaze and as a
transit area from one pad to another non-adjacent one. The “Touch Screen” Mode instead spreads the pads
across the entire screen and allows easier performance with touch devices.</p>
      <p>One of the standout features of UniMIDIHub is its integration with the MIDI protocol. As
a universal standard for digital music communication, MIDI allows the software to interface with a
wide array of tools, including DAWs, plugin hosts, virtual instruments, and external MIDI hardware,
enabling maximum workflow flexibility and compatibility. Due to its cross-platform compatibility,
configurable interface, and comprehensive MIDI functionality, UniMIDIHub offers a flexible toolset for
music interaction. It supports a range of use cases, including sample triggering, virtual instrument
performance, and effect manipulation, thereby facilitating diverse approaches to music production and
control. The developments achieved so far have led to the release of a first marketable version on Audio
Modeling channels. This step is not considered a point of arrival, but rather a starting point for
distribution to the public. Research and development will continue beyond the author’s Ph.D.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Observational Studies and Results</title>
      <p>
        Two observational studies, along with informal use sessions and musical performances, were conducted with UniMIDIHub.
The software developments achieved so far have taken into account all the feedback received. The
observational studies aimed at evaluating the usability and accessibility of the software prototype. The
first empirical investigation[
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] was designed to assess whether users with different physical and sensory
abilities, each with varying levels of musical knowledge and technological familiarity, could successfully
interact with the system. The study was grounded in a central research question, as anticipated in
section 3: Can UniMIDIHub be understood and used by individuals regardless of their physical abilities?
The underlying hypothesis suggested that the system’s flexible design would accommodate a broad
spectrum of users by allowing them to configure the software with the input devices they already
know and use in their daily lives. Three individuals participated in the study, each bringing a unique
profile that enriched the evaluation process. They were first introduced to the purpose of the software,
without being shown how to use it, allowing the study to observe how intuitively the interface could be understood
and used. During the test, participants were asked to complete a series of progressively complex
tasks using UniMIDIHub. These tasks tested basic interactions, such as triggering a pad to play a sound, as well as more
advanced operations like configuring pads and navigating interface pages. Observations and recordings
were used to document each user’s approach, errors, and success rate. After the test, a post-test
questionnaire was administered using both closed (Likert-scale) and open-ended questions to evaluate
the user experience from the participants’ perspectives. The first observational study revealed that
participants were generally able to complete the assigned tasks, though users with visual impairments
encountered notable accessibility challenges, particularly due to limited screen reader support and
interface feedback. Post-test feedback reflected a range of user experiences, from high satisfaction
among users with motor impairments to moderate frustration among blind participants. The findings
highlight both the adaptability of UniMIDIHub and the need for improvements in visual accessibility,
offering guidance for future development toward more inclusive musical interfaces.
      </p>
      <p>A second observational study was conducted to evaluate UniMIDIHub following the integration of
improvements based on feedback from the initial user study. This iteration involved seven participants,
all of whom were students enrolled in a Sound and Music Computing program. The study aimed to
investigate the following research question: Can students with a background in music technology
understand and operate UniMIDIHub independently, and would they propose any additions or
modifications to its design? As in the previous study, participants were introduced to the context and purpose
of the software but received no prior demonstration or detailed instructions on its use. They were
then asked to complete a series of tasks with increasing levels of complexity. The results revealed
two primary trends. On one hand, several participants expressed a desire for a more streamlined
interface or the inclusion of contextual guidance to enhance usability. On the other hand, despite
an initial period of uncertainty, all participants were ultimately able to explore and interact with the
software independently. These findings suggest that the application is, in principle, accessible to users
with relevant domain knowledge and that its interface can support self-directed exploration, though
further refinements may improve the overall user experience. The feedback gathered from this second
study has also informed the ongoing development of UniMIDIHub. The current version, incorporating
these refinements, has subsequently been tested in live ensemble performances involving both users
with disabilities and professional musicians, yielding encouraging and insightful results regarding its
practical applicability and musical integration.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion</title>
      <p>In the field of HCI, accessibility remains one of the primary challenges. For software to be considered
truly usable and inclusive, it must be accessible to individuals with a wide range of abilities and
disabilities. This concern is particularly significant in the context of digital musical instruments, which,
through their transition from analog to digital, have already contributed to lowering several barriers to
musical expression. The growing field of ADMIs offers further promise, demonstrating that technology
can play a transformative role in promoting inclusivity in musical practice. It is within this framework
that the present research project is situated. The aim of UniMIDIHub is to broaden the possibilities for
musical interaction without fundamentally altering the daily practices of its users. By enabling control
through devices and tools that users are already familiar with, the system seeks to support creative
expression while minimizing the cognitive and technical load often associated with new technologies.
The observational studies carried out, first with individuals with diverse impairments, and later with
students in the field of music technology, have provided valuable feedback that continues to shape the
development of the system. The results confirm that, despite initial uncertainties, users can navigate and
operate the software independently. Furthermore, live ensemble sessions involving both professional
musicians and users with disabilities have offered promising insights into the practical effectiveness
and musical potential of the system in real-world settings. Looking ahead, this work contributes to
the broader conversation on inclusion in digital music-making and offers a concrete step toward the
development of tools that accommodate diversity without compromise.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>The author wishes to express sincere gratitude to Audio Modeling for their collaboration in the
co-development of UniMIDIHub and for their valuable technical support throughout the project. Special
thanks are extended to Manuele Maestri, founder of Musica Senza Confini, for his insightful feedback
and for enabling the organization of live performance sessions involving both professional musicians
and users with disabilities. The author also gratefully acknowledges the Laboratory of Music Informatics
(LIM) at the University of Milan for their continued support and essential contributions to the research
and development process.</p>
    </sec>
    <sec id="sec-7">
      <title>Declaration on Generative AI</title>
      <p>The author(s) have not employed any Generative AI tools.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>E.</given-names>
            <surname>Frid</surname>
          </string-name>
          ,
          <article-title>Accessible digital musical instruments-a survey of inclusive instruments</article-title>
          ,
          <source>in: Proceedings of the international computer music conference, ICMC Daegu</source>
          ,
          <year>2018</year>
          , pp.
          <fpage>53</fpage>
          -
          <lpage>59</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>E.</given-names>
            <surname>Frid</surname>
          </string-name>
          ,
          <article-title>Accessible digital musical instruments-a review of musical interfaces in inclusive music practice</article-title>
          ,
          <source>Multimodal Technologies and Interaction</source>
          <volume>3</volume>
          (
          <year>2019</year>
          )
          <fpage>57</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>N.</given-names>
            <surname>Davanzo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Avanzini</surname>
          </string-name>
          , et al.,
          <article-title>A dimension space for the evaluation of accessible digital musical instruments</article-title>
          ,
          <source>in: Proceedings of the International Conference on New Interfaces for Musical Expression</source>
          , Birmingham City University,
          <year>2020</year>
          , pp.
          <fpage>214</fpage>
          -
          <lpage>220</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>R. B.</given-names>
            <surname>Knapp</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. S.</given-names>
            <surname>Lusted</surname>
          </string-name>
          ,
          <article-title>A bioelectric controller for computer music applications</article-title>
          ,
          <source>Computer Music Journal</source>
          <volume>14</volume>
          (
          <year>1990</year>
          )
          <fpage>42</fpage>
          -
          <lpage>47</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>A.</given-names>
            <surname>Ward</surname>
          </string-name>
          ,
          <article-title>The development of a modular accessible musical instrument technology toolkit using action research</article-title>
          ,
          <source>Frontiers in Computer Science</source>
          <volume>5</volume>
          (
          <year>2023</year>
          )
          <fpage>1113078</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>A.</given-names>
            <surname>Förster</surname>
          </string-name>
          ,
          <article-title>Accessible digital musical instruments in special educational needs schools-design considerations based on 16 qualitative interviews with music teachers</article-title>
          ,
          <source>International Journal of Human-Computer Interaction</source>
          <volume>39</volume>
          (
          <year>2023</year>
          )
          <fpage>863</fpage>
          -
          <lpage>873</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>J. A.</given-names>
            <surname>Paradiso</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.-y.</given-names>
            <surname>Hsiao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Benbasat</surname>
          </string-name>
          ,
          <article-title>Tangible music interfaces using passive magnetic tags</article-title>
          , arXiv preprint arXiv:2010.01575 (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>H.</given-names>
            <surname>Newton-Dunn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Nakano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Gibson</surname>
          </string-name>
          ,
          <article-title>Block jam: a tangible interface for interactive music</article-title>
          ,
          <source>Journal of New Music Research</source>
          <volume>32</volume>
          (
          <year>2003</year>
          )
          <fpage>383</fpage>
          -
          <lpage>393</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>S.</given-names>
            <surname>Jordà</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Geiger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Alonso</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kaltenbrunner</surname>
          </string-name>
          ,
          <article-title>The reactable: exploring the synergy between live music performance and tabletop tangible interfaces</article-title>
          ,
          <source>in: Proceedings of the 1st international conference on Tangible and embedded interaction</source>
          ,
          <year>2007</year>
          , pp.
          <fpage>139</fpage>
          -
          <lpage>146</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Vamvakousis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Ramirez</surname>
          </string-name>
          ,
          <article-title>The eyeharp: A gaze-controlled digital musical instrument</article-title>
          ,
          <source>Frontiers in Psychology</source>
          <volume>7</volume>
          (
          <year>2016</year>
          )
          <fpage>906</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>N.</given-names>
            <surname>Davanzo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Valente</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. A.</given-names>
            <surname>Ludovico</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Avanzini</surname>
          </string-name>
          ,
          <article-title>Kiroll: A gaze-based instrument for quadriplegic musicians based on the context-switching paradigm</article-title>
          ,
          <source>in: Proceedings of the 18th International Audio Mostly Conference</source>
          ,
          <year>2023</year>
          , pp.
          <fpage>59</fpage>
          -
          <lpage>62</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>D.</given-names>
            <surname>Kandpal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. R.</given-names>
            <surname>Kantan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Serafin</surname>
          </string-name>
          ,
          <article-title>A gaze-driven digital interface for musical expression based on real-time physical modelling synthesis</article-title>
          ,
          <source>in: 19th Sound and Music Computing Conference, SMC 2022</source>
          , Sound and Music Computing Network,
          <year>2022</year>
          , pp.
          <fpage>461</fpage>
          -
          <lpage>468</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>G.</given-names>
            <surname>Pedrini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. A.</given-names>
            <surname>Ludovico</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Presti</surname>
          </string-name>
          , et al.,
          <article-title>Evaluating the accessibility of digital audio workstations for blind or visually impaired people</article-title>
          ,
          <source>in: Proceedings of the International Conference on Computer-Human Interaction Research and Applications (CHIRA 2020)</source>
          ,
          <source>Science and Technology Publications (SCITEPRESS)</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>225</fpage>
          -
          <lpage>232</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>E.</given-names>
            <surname>Frid</surname>
          </string-name>
          ,
          <article-title>Diverse sounds: Enabling inclusive sonic interaction</article-title>
          ,
          <source>Ph.D. thesis</source>
          , KTH Royal Institute of Technology,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>V.</given-names>
            <surname>Faschi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. A.</given-names>
            <surname>Ludovico</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Avanzini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Parravicini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Maestri</surname>
          </string-name>
          , et al.,
          <article-title>An accessible software interface for collaborative music performance</article-title>
          ,
          <source>in: Sound and Music Computing Conference (SMC)</source>
          ,
          <source>SMC</source>
          ,
          <year>2024</year>
          , pp.
          <fpage>150</fpage>
          -
          <lpage>157</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>