<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Possibilities Emerging on the Trajectory from IoT to IoMusT: Enabling Ubiquitous Musical Interactions for Wellbeing</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Azeema Yaseen</string-name>
          <email>azeema.yaseen.2020@mumail.ie</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Joseph Timoney</string-name>
          <email>joseph.timoney@mu.ie</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Maynooth University</institution>
          ,
          <addr-line>Co. Kildare</addr-line>
          ,
          <country country="IE">Ireland</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>The Internet of Musical Things (IoMusT) and ubiquitous music (ubimus) are interrelated research fields concerned with the design and development of advanced technology to support novel musical and artistic experiences. IoMusT primarily motivates the deployment of embedded computing devices and platforms for musically oriented individual or collaborative activities. Ubimus schemes are musically driven, yet the participating agents and entities (such as tablets or computers), interfaces (hardware or software), interaction metaphors (visual, tactile, or gestural), and instruments (traditional or novel) need not be complex. Ubimus also encourages the design of new musical interactions for participants with little know-how of the musical domain. Within the context of initiatives embracing the internet of things (IoT) for healthcare, alongside remodeling IoT devices for IoMusT interactions, this paper considers the area of ubiquitous musical interaction design for music therapies, offering an example application. This convergence, with the appropriate technologies, could enable new opportunities for the promotion of wellbeing, both individually and collectively, in person and remotely.</p>
      </abstract>
      <kwd-group>
        <kwd>Internet of Things (IoT)</kwd>
        <kwd>Internet of Musical Things (IoMusT)</kwd>
        <kwd>Ubiquitous music (ubimus)</kwd>
        <kwd>Wellbeing</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-2">
      <title>1. Introduction</title>
      <p>
        The Internet of Musical Things is a confluence of cross-disciplinary fields, including music
technology, the internet of things, human–computer interaction, and artificial intelligence
applied to musical contexts. From a technological perspective, the IoMusT ecosystem is composed
of three core components: i) musical things, ii) connectivity and iii) applications and services.
In the IoMusT network, musical things are computing devices of any form such as wearables,
computers, and tablets dedicated to the production and/or reception of musical content. Musical
information is data detected and processed by a musical thing and is sent to a human or
another musical thing across a network [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. IoMusT scenarios offer possibilities to support
remote musical experiences and interactions between users involved in a musical activity.
These interactions are also supported by the direct interconnection of interoperable musical
instruments. Smart musical instruments (SMIs) are one class of IoMusT application. SMIs are
embedded with sensors and actuators, can perform intelligent operations, and can connect
to both local networks and the web. In parallel, the growth of IoT has
prompted many proposals for healthcare technologies intended to improve quality of life. The
main purpose of this paper is to discuss the integration of IoMusT, ubimus, and IoT paradigms,
to present a framework, and to discuss a simple application. The potential of this area lies in
applications that could quickly succeed in supporting new creative therapies, helping
the musically experienced and inexperienced alike. We commence with a
brief review of music technology enabled therapies for wellbeing and then mention some of the
established contributions by ubimus regarding everyday devices for music making. IoMusT
would be the glue that joins them together.
      </p>
    </sec>
    <sec id="sec-3">
      <title>2. Related Work for Motivation</title>
      <sec id="sec-3-1">
        <title>2.1. Wellbeing and its Relationship to Music Therapy</title>
        <p>
          Wellbeing is broader than the mere absence of disease in the human body; it encompasses psychological,
social, and spiritual health along with physical health. Music has been widely used for therapies
improving resilience and building confidence to deal with various scenarios of anxiety and
stress. Recently, the authors of [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ] used beat-making technologies for improvisation during music therapy.
An Ableton Push 2, a Roland TR-8 drum machine, and an audio interface were used for the
sessions. These MIDI-linked devices were set up to play and control presets at a synchronized
tempo. Improvisation is a particularly free form of music therapy in which users are deeply involved
in spontaneous music creation using different instruments, tools, or technologies.
        </p>
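<p>The tempo-synchronized setup described above can be illustrated with a small scheduling sketch. The device names and trigger patterns below are hypothetical; the cited study does not publish its session configuration.</p>

```python
# Sketch: scheduling preset triggers from several MIDI-linked devices on a
# shared tempo grid, so that all events stay synchronized. The pattern data
# is hypothetical, for illustration only.

def schedule_pattern(bpm, patterns):
    """Return a time-sorted list of (seconds, device, preset) events.

    patterns maps a device name to a list of (beat, preset) pairs; every
    device is quantized to the same beat clock, which is what keeps
    independently programmed devices in sync.
    """
    seconds_per_beat = 60.0 / bpm
    events = []
    for device, triggers in patterns.items():
        for beat, preset in triggers:
            events.append((beat * seconds_per_beat, device, preset))
    return sorted(events)

if __name__ == "__main__":
    # A 120 BPM session: drum machine on every beat, pad chords on beats 0 and 2.
    timeline = schedule_pattern(120, {
        "TR-8": [(0, "kick"), (1, "snare"), (2, "kick"), (3, "snare")],
        "Push2": [(0, "chord_a"), (2, "chord_b")],
    })
    for t, device, preset in timeline:
        print(f"{t:4.2f}s  {device}: {preset}")
```

<p>Because every device is quantized to the same beat clock, independently programmed patterns remain aligned; this is essentially what MIDI clock synchronization provides in hardware.</p>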
      </sec>
      <sec id="sec-3-2">
        <title>2.2. Ubimus and its Implementation</title>
        <p>
          Ubimus holds that music is universal and that everyone has the potential to create sounds that
are musical. The field emerged in 2007 and affords new metaphors for creative actions (e.g., soundsphere
[
          <xref ref-type="bibr" rid="ref3">3</xref>
          ]), interaction modalities (gesture, touch, visualization), and everyday devices for creating musical
content. Ubimus research involves performing participants and material resources, and extends
musical activities by means of creativity-support systems. An example is Playsound.space, a
web-based tool that searches the sounds of Freesound.org, which can then be dragged and dropped
into a mixing window for free improvisation and experimental music production. In [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ], a
hand metaphor (Handy hear and Handy see) was developed to enable camera-based touchless
sonic interactions, mapped through gesture recognition, for modifying pitch, amplitude, and/or
duration. Using these metaphors, even a layperson can interact with the system. Ryan Monro designed
Bloomish [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ] as a web-based interface implemented in JavaScript to generate tones and melodies
by simply tapping the screen. The pitch rises from the bottom to the top of the given space. It allows
free improvisation, or presets can be played based on the user's selection.
        </p>
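<p>The bottom-to-top pitch mapping used by Bloomish-style interfaces can be sketched as follows. The pentatonic quantization and the parameter values are assumptions made for illustration; they are not taken from the actual JavaScript implementation.</p>

```python
# Sketch of a Bloomish-style mapping from a screen tap to a pitch: higher
# taps produce higher notes. Quantizing to a pentatonic scale is an
# assumption made here so that any tap sounds consonant.

PENTATONIC = [0, 2, 4, 7, 9]  # semitone offsets within one octave

def tap_to_frequency(y, height, base_midi=48, octaves=3):
    """Map a tap at vertical pixel y (0 = top) to a frequency in Hz."""
    # Invert y so that the top of the screen gives the highest note.
    position = 1.0 - (y / height)
    steps = int(position * (octaves * len(PENTATONIC) - 1))
    octave, degree = divmod(steps, len(PENTATONIC))
    midi_note = base_midi + 12 * octave + PENTATONIC[degree]
    return 440.0 * 2 ** ((midi_note - 69) / 12)  # standard MIDI-to-Hz

if __name__ == "__main__":
    height = 800
    for y in (780, 400, 20):  # bottom, middle, and top taps
        print(f"tap at y={y}: {tap_to_frequency(y, height):.1f} Hz")
```

<p>Quantizing to a scale means any tap produces a consonant note, which is one way such interfaces stay approachable for laypeople while still permitting improvisation.</p>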
      </sec>
      <sec id="sec-3-3">
        <title>2.3. Relationship to IoMusT</title>
        <p>
          IoMusT uses Wireless Sensor Networks (WSNs), IoT, and Tactile Internet (TI) as background
communication technologies. These technologies enable the development of SMIs. SMIs cross
boundaries of traditional music settings and venues such as studios and stages due to embedded
technologies for sound production and processing [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ]. The smart guitar and smart mandolin
are two examples of SMIs for ubimus remote interactions. The embedded instruments give
flexibility to deliver sonic interaction using hardware (the instrument) and software. The real benefit
of IoMusT is the built-in assumption of networked interaction. Such systems can be connected
via local area networks (LANs), wide area networks (WANs), or WSNs. This gives freedom in the choice
of collaborations as well as experiences. A known example of a networked music performance
(NMP) system is Reactable [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ]. It consists of a touchable table interface which allows users to
control musical output when objects are moved on the surface. Musicians can participate from
different locations to collaborate with each other.
        </p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>3. Ubimus-Driven IoT and IoMusT: An Application for Active Music Therapies</title>
      <p>Figure 1 highlights where IoT and IoMusT can share the same devices and computing resources
for healthcare and music making together. To let cross-modal devices and sensors communicate
for such applications, protocols and communication standards for wireless data collection,
processing, and transmission are required as a common language. For example, an electromyography
(EMG) sensor measures the small electrical signals generated by human muscles as they move,
such as when lifting an arm, clenching a fist, or making gestures like moving a finger.</p>
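<p>As an illustration of how such a sensor stream might become discrete interaction events, the following sketch rectifies and smooths EMG samples and applies a threshold. The window size and threshold are illustrative assumptions, not calibrated values.</p>

```python
# Sketch: turning a raw EMG sample stream into discrete gesture events by
# smoothing the rectified signal and applying a threshold. Window size and
# threshold are illustrative assumptions.

def detect_activations(samples, window=4, threshold=0.5):
    """Return the indices at which a muscle activation begins."""
    events = []
    active = False
    for i in range(len(samples)):
        # Moving average of the rectified signal (a simple envelope).
        chunk = samples[max(0, i - window + 1): i + 1]
        envelope = sum(abs(s) for s in chunk) / len(chunk)
        if envelope >= threshold and not active:
            events.append(i)      # rising edge: a new gesture starts
            active = True
        elif envelope < threshold:
            active = False        # signal relaxed; re-arm the detector
    return events

if __name__ == "__main__":
    # Quiet baseline, a burst (e.g., a fist clench), quiet, another burst.
    stream = [0.05, -0.02, 0.9, -1.0, 0.8, 0.1, 0.0, -0.03, 1.1, -0.9, 0.02]
    print(detect_activations(stream))
```

<p>In a real deployment this detection would run on the wearable itself, with only the resulting events transmitted over the network, keeping the common-language protocol layer lightweight.</p>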
      <p>
        With IoT-based monitoring devices, designed in the form of wristbands or headsets, observing
physiological data, the next stage for IoMusT is to give meaning to this data. Thus, the data is
analyzed to detect an artistic intention that reflects some emotion. This data is sent to a musical
interface that configures the smart instrument presets or music sound repositories based on
the user's requirements. The physiological data can be mapped to various musical parameters
such as pitch, timbre, or volume. In active music therapy, users create melodies or improvise,
and this process is continuous. Designing interfaces for such interactions is a challenging task.
The prediction of human musical activity can also inform the interface configuration to offer
more customized resources, but this is also difficult. To accommodate an “active” role for music,
the user would perform simpler gestures as the interaction begins. In the therapeutic
scenario, the configuration of movement-musical events would partially depend on the user but
the system would take responsibility for constraining the user to a selection of musical objects
on the interface that are specifically designed for IoMusT. If the initial interaction is based on
the triggering of musical events when two arms are moved, the user would have the control to
add more such mappings between their movements and the music objects as the interaction
evolves. It may be too optimistic to say so at this stage, but the use of artificial intelligence for
human-centered interfaces presented in [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] suggests a direction towards musical interfaces
that adapt to user scenarios. However, the designer must ultimately draw a line between how
much control users have and where the system takes over. Achieving the correct balance
is crucial for such applications.
      </p>
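<p>The mapping of physiological data to musical parameters described above can be sketched minimally as follows. The sensor ranges, the linear mapping, and the MIDI-style output ranges are assumptions for illustration only.</p>

```python
# Sketch of the mapping stage: physiological readings are normalized and
# linearly mapped onto musical parameters. The sensor ranges and the choice
# of a linear map are illustrative assumptions.

def linear_map(value, in_lo, in_hi, out_lo, out_hi):
    """Clamp value to [in_lo, in_hi] and rescale it to [out_lo, out_hi]."""
    value = max(in_lo, min(in_hi, value))
    t = (value - in_lo) / (in_hi - in_lo)
    return out_lo + t * (out_hi - out_lo)

def map_reading(heart_rate_bpm, skin_conductance_us):
    """Map a heart-rate/GSR reading to pitch (MIDI note) and volume (0-127)."""
    # Faster heart rate -> higher pitch; higher arousal (GSR) -> louder.
    pitch = round(linear_map(heart_rate_bpm, 50, 120, 48, 84))
    volume = round(linear_map(skin_conductance_us, 1.0, 20.0, 40, 127))
    return pitch, volume

if __name__ == "__main__":
    print(map_reading(60, 2.0))    # resting reading
    print(map_reading(110, 15.0))  # aroused reading
```

<p>Clamping the inputs keeps out-of-range sensor noise from producing extreme musical output, one small instance of the system constraining the user's material as discussed above.</p>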
      <sec id="sec-5-1">
        <title>3.1. Short Description about Work in Progress and Future Steps</title>
        <p>
          In our ongoing project we are working on modalities for musical interaction, including
color-based metaphors for amateurs and gesture-based musical interactions using computer
vision techniques. The aim is to design and develop low-cost, easy-to-use, and widely available
technologies for music making, and to define new applications based on these systems
within the field of IoMusT. Currently, sensing of the heart rate and galvanic skin response
as shown in figure 2 (b) are used for a low fidelity prototype design of a wrist band (see figure
2 (a)) that triggers music based on the Solfeggio note frequencies for body and mind health.
These Solfeggio tones are known as sound healing modalities for various mental, emotional, and
physical ailments [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ]. While investigating the multitude of soothing possibilities, we
experimented with a vibration motor to give users tactile feedback on the wrist as rhythmic
vibrations. In testing, the prototype was found to operate as intended, producing the
expected output responses for particular inputs. Aligning movements with
IoMusT-based musical interactions requires a mapping between the two domains; in the
current prototype the selected Solfeggio tones are independent of the user’s control, as it was
simply a rhythmic presentation of the sounds and vibrations that were delivered. Thus, this
prototype is an example of a passive approach to music therapy. The long-term goal however is
to extend it for active therapies by giving the users more control over the devices.
        </p>
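<p>The passive behaviour of the current prototype can be sketched as a simple selection rule. The frequencies below are the commonly cited Solfeggio set; the stress scale and vibration tempi are illustrative assumptions, not the wristband's calibrated values.</p>

```python
# Sketch of the prototype's passive behaviour: an estimated stress level
# selects one of the commonly cited Solfeggio frequencies and a vibration
# rhythm. The stress scale and tempo values are illustrative assumptions.

SOLFEGGIO_HZ = [396, 417, 528, 639, 741, 852]  # commonly cited set

def select_session(stress):
    """Pick a tone (Hz) and a vibration period (s) for stress in [0, 1]."""
    # Higher stress -> a lower tone from the set and a slower, calming pulse.
    index = min(int((1.0 - stress) * len(SOLFEGGIO_HZ)), len(SOLFEGGIO_HZ) - 1)
    vibration_period = 0.5 + stress * 1.0   # 0.5 s relaxed .. 1.5 s stressed
    return SOLFEGGIO_HZ[index], vibration_period
```

<p>An active-therapy extension would hand parts of this selection over to the user, in line with the long-term goal stated above.</p>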
      </sec>
    </sec>
    <sec id="sec-6">
      <title>4. Conclusion</title>
      <p>This short paper has introduced a music-based rhythmic vibration device that responds to a
user’s stress level. It embraces the concepts and ideas of using IoT (the sensors), ubimus (the
ease of use), and IoMusT (the musical responsiveness) together in the context of health and
wellbeing. The proposed framework is a work in progress; the mapping of movements to
music, the inclusion of interaction over the network, and user testing are the next tasks.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>L.</given-names>
            <surname>Turchet</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Fischione</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Essl</surname>
          </string-name>
          , D. Keller, M. Barthet,
          <article-title>Internet of musical things: Vision and challenges</article-title>
          ,
          <source>IEEE Access 6</source>
          (
          <year>2018</year>
          )
          <fpage>61994</fpage>
          -
          <lpage>62017</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>A. H. D.</given-names>
            <surname>Crooke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. S.</given-names>
            <surname>McFerran</surname>
          </string-name>
          ,
          <article-title>Improvising using beat making technologies in music therapy with young people</article-title>
          ,
          <source>Music Therapy Perspectives</source>
          <volume>37</volume>
          (
          <year>2019</year>
          )
          <fpage>55</fpage>
          -
          <lpage>64</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>W. R. B.</given-names>
            <surname>Bessa</surname>
          </string-name>
          , D. Keller, J. B. F. Da Silva, D. F. Da Costa,
          <article-title>A metáfora da esfera sonora desde a perspectiva wydiwyhe</article-title>
          ,
          <source>Journal of Digital Media &amp; Interaction</source>
          <volume>3</volume>
          (
          <year>2020</year>
          )
          <fpage>60</fpage>
          -
          <lpage>88</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>D.</given-names>
            <surname>Keller</surname>
          </string-name>
          , C. Gomes, L. Aliel,
          <article-title>The handy metaphor: Bimanual, touchless interaction for the internet of musical things</article-title>
          ,
          <source>Journal of New Music Research</source>
          <volume>48</volume>
          (
          <year>2019</year>
          )
          <fpage>385</fpage>
          -
          <lpage>396</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>R.</given-names>
            <surname>Monro</surname>
          </string-name>
          , Bloomish,
          <year>2019</year>
          . URL: https://www.ryanmonro.com/bloomish/.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>L.</given-names>
            <surname>Turchet</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Bouquet</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Molinari</surname>
          </string-name>
          , G. Fazekas,
          <article-title>The smart musical instruments ontology</article-title>
          ,
          <source>Journal of Web Semantics</source>
          <volume>72</volume>
          (
          <year>2022</year>
          )
          <fpage>100687</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>S.</given-names>
            <surname>Jordà</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kaltenbrunner</surname>
          </string-name>
          , G. Geiger,
          <string-name>
            <given-names>R.</given-names>
            <surname>Bencina</surname>
          </string-name>
          ,
          <article-title>The reactable</article-title>
          , in: ICMC, Citeseer,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>F.</given-names>
            <surname>Catania</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. D.</given-names>
            <surname>Luca</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Bombaci</surname>
          </string-name>
          , E. Colombo,
          <string-name>
            <given-names>P.</given-names>
            <surname>Crovari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Beccaluva</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Garzotto</surname>
          </string-name>
          ,
          <article-title>Musical and conversational artificial intelligence</article-title>
          ,
          <source>in: Proceedings of the 25th International Conference on Intelligent User Interfaces Companion</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>51</fpage>
          -
          <lpage>52</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>Y. N. E. H.</given-names>
            <surname>Baakek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. M. E. A.</given-names>
            <surname>Debbal</surname>
          </string-name>
          ,
          <article-title>Digital drugs (binaural beats): how can it affect the brain/their impact on the brain</article-title>
          ,
          <source>Journal of medical engineering &amp; technology</source>
          <volume>45</volume>
          (
          <year>2021</year>
          )
          <fpage>546</fpage>
          -
          <lpage>551</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>